NullPointerException org.nd4j.autodiff.samediff.internal.Variable.getOutputOfOp()

@agibsonccc Thanks. Please let me know if you have problems with the repository or if my instructions are unclear or incorrect.

Thanks

@adonnini I have your repo up and running. The variable name is empty. That would definitely be a reason for a null variable. The variable is the input to your loss function.
When I look at the loss function by scrolling up, I see 3 inputs:
"" <- your problem
createAndConfigureModel - 38854720/Training - 9139334/Training - 43677067/createAndConfigureModel - 7705483/sd_var
label

Try to debug how you’re defining your loss variables and ensure none of your variables are empty.
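As a concrete illustration of the advice above, here is a minimal sketch (all names — input, label, w, out — are hypothetical) of wiring up a SameDiff loss so that every variable carries an explicit, non-empty name; the meanSquaredError signature matches the one used later in this thread:

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.buffer.DataType;

public class NamedLossSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();

        // Give every variable an explicit, non-empty name: an unnamed
        // variable feeding the loss is exactly what shows up as "" above.
        SDVariable input = sd.placeHolder("input", DataType.FLOAT, -1, 10);
        SDVariable label = sd.placeHolder("label", DataType.FLOAT, -1, 1);
        SDVariable w     = sd.var("w", DataType.FLOAT, 10, 1);
        SDVariable out   = input.mmul("out", w);

        // Same meanSquaredError(name, label, predictions, weights) signature
        // as the lossMSE call later in the thread
        SDVariable lossMSE = sd.loss.meanSquaredError("lossMSE", label, out, null);
        sd.setLossVariables(lossMSE);

        System.out.println(sd.getLossVariables()); // should list "lossMSE"
    }
}
```

If any of the three inputs printed above had come out as "", this is the kind of construction code to inspect for a missing name argument.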

@agibsonccc Thanks.
Please keep in mind that after every execution of the code you will need to manually delete the UCI directory from the resources directory (I know it’s a nuisance. Sorry).
Thanks

@agibsonccc For some reason I had not seen your latest comment. I am sorry. I’ll take a look and follow your suggestions. I’ll update you. Thanks

@agibsonccc It looks like I was able to resolve the NullPointerException issue. However, another one cropped up (see below). I searched for help/information on the Internet; not surprisingly, I did not find anything I could use.

I updated the repository to the latest version of the code, which fails when run as described below. Below you will find the traceback log and a copy of my TrainingConfig.

When you get a chance, please let me know what you think.

Thanks

TRACEBACK LOG

Exception in thread "main" java.lang.RuntimeException: java.lang.IllegalStateException: No updater found for variable "outReduced - 3418601"
	at trajectorypredictiontransformer.LocationNextNeuralNetworkV7_04.fitAndEvaluateTestDataset(LocationNextNeuralNetworkV7_04.java:1539)
	at trajectorypredictiontransformer.LocationNextNeuralNetworkV7_04.sameDiff3(LocationNextNeuralNetworkV7_04.java:957)
	at trajectorypredictiontransformer.LocationNextNeuralNetworkV7_04.main(LocationNextNeuralNetworkV7_04.java:218)
Caused by: java.lang.IllegalStateException: No updater found for variable "outReduced - 3418601"
	at org.nd4j.common.base.Preconditions.throwStateEx(Preconditions.java:639)
	at org.nd4j.common.base.Preconditions.checkState(Preconditions.java:301)
	at org.nd4j.autodiff.samediff.internal.TrainingSession.getOutputs(TrainingSession.java:190)
	at org.nd4j.autodiff.samediff.internal.TrainingSession.getOutputs(TrainingSession.java:45)
	at org.nd4j.autodiff.samediff.internal.AbstractSession.output(AbstractSession.java:533)
	at org.nd4j.autodiff.samediff.internal.AbstractSession.output(AbstractSession.java:154)
	at org.nd4j.autodiff.samediff.internal.TrainingSession.trainingIteration(TrainingSession.java:129)
	at org.nd4j.autodiff.samediff.SameDiff.fitHelper(SameDiff.java:1936)
	at org.nd4j.autodiff.samediff.SameDiff.fit(SameDiff.java:1792)
	at org.nd4j.autodiff.samediff.SameDiff.fit(SameDiff.java:1660)
	at trajectorypredictiontransformer.LocationNextNeuralNetworkV7_04.fitAndEvaluateTestDataset(LocationNextNeuralNetworkV7_04.java:1536)
	... 2 more

TrainingConfig

        TrainingConfig config = new TrainingConfig.Builder()
//                .l2(1e-4)                               //L2 regularization
//                .l2(0.001)
//                .updater(new Adam(learningRate))        //Adam optimizer with specified learning rate
                .updater(new Nesterovs(0.01, 0.9))
//                .updater(new Nesterovs(0.0001, 0.9))
                .dataSetFeatureMapping("input")         //DataSet features array should be associated with variable "input"
                .dataSetLabelMapping("label")           //DataSet label array should be associated with variable "label"
//                .minimize("lossLOG")
//                .minimize(lossMSE.name())
//                .minimize("lossMSE")
//                .trainEvaluation(outReduced.name(), 0, evaluation)  // add a training evaluation
                .trainEvaluation(outReduced.name(), 0, evaluation)  // add a training evaluation
//                .trainEvaluation("outReduced"+" - "+mRandomNumericalId, 0, evaluation)  // add a training evaluation
//                .trainEvaluation("out", 0, evaluation)  // add a training evaluation
                .build();

@adonnini Are you adding this as a variable after it’s created? An updater should propagate to that just fine. Something seems off there.

@agibsonccc thanks. I am not sure what you mean by

You are referring to

right?

Do you mean do I perform an add operation? Sorry.

Thanks

Yes. Out of curiosity, could you show the code where you’re creating this? Either the updater is trying to update something it shouldn’t be (e.g. a constant) OR something’s not being added correctly.

@agibsonccc
Below you will find the part of createAndConfigureModel where outReduced - 581491 is defined. createAndConfigureModel runs without any errors. Immediately after it returns, fitAndEvaluateTestDataset() runs. The error java.lang.IllegalStateException: No updater found for variable "outReduced - 5814915" occurs when running

outSingle = sd.fit(tNext) where tNext is the dataset containing features and labels

Please let me know if you need any other information.

Thanks

-----------------------
        tf_model = new TransformerArchitectureModel.TFModel(sd, encoder_ip_size, decoder_ip_size, model_op_size, emb_size,
                num_heads, ff_hidden_size, n, dropout, weights, bias, batch_size, labelCount);

        getEncoderInputDecoderInputAndDecoderMasks();

        outArray = tf_model.forward(encInput, decInput, decSourceMask, decInputMask).getArr();

        out = sd.var("out"+" - "+mRandomNumericalId, outArray);

        INDArray outReducedArray = outArray.get(
                NDArrayIndex.interval(0, label.eval(placeholderData).shape()[0]),
                NDArrayIndex.interval(0, label.eval(placeholderData).shape()[1]),
                NDArrayIndex.interval(0, label.eval(placeholderData).shape()[2]));

        outReduced = sd.var("outReduced"+" - "+mRandomNumericalId, outReducedArray);

        /////////

        INDArray labelArray = label.eval(placeholderData);
        INDArray labelArrayResized = Nd4j.create(outReduced.eval().shape()[0], outReduced.eval().shape()[1], outReduced.eval().shape()[2]);
        INDArray labelArrayResizedPopulated = labelArrayResized.assign(labelArray);
        labelResizedPopulated = sd.var("labelResizedPopulated"+" - "+mRandomNumericalId, labelArrayResizedPopulated);

        lossMSE = sd.loss.meanSquaredError("lossMSE", labelResizedPopulated, outReduced, null);

        sd.setLossVariables(lossMSE);

        int iterations = 70;

//   creating configuration

        evaluation = new Evaluation();
        double learningRate = 1e-3;
        TrainingConfig config = new TrainingConfig.Builder()
                .updater(new Nesterovs(0.01, 0.9))
                .dataSetFeatureMapping("input")         //DataSet features array should be associated with variable "input"
                .dataSetLabelMapping("label")           //DataSet label array should be associated with variable "label"
                .trainEvaluation(outReduced.name(), 0, evaluation)  // add a training evaluation
                .build();

        sd.setTrainingConfig(config);

Hmmm… something doesn’t make sense here. You don’t normally create NDArrays and pass them into sd.var; those would be placeholders if you were doing that. That would be true for your labels, inputs, and anything else. Variables should be derived.

You can’t use the NDArray indexing API, nor ANY NDArray API, pass the result into var, and expect it to work. NDArrays are inherently static. You need to use the equivalent SDIndex API in order to do anything there. That API is meant to be a mirror of the NDArray index API.

You have to remember both TF and SameDiff are declarative; NDArray and NumPy libraries are not. Declarative means a graph with a BUNCH of underlying information is tracked in order to build the graph as you go.

You can’t do that if you just pass something in.
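A minimal sketch of the difference (variable names and shapes are hypothetical; SDIndex.interval mirrors NDArrayIndex.interval):

```java
import org.nd4j.autodiff.samediff.SDIndex;
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.buffer.DataType;

public class SdIndexSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        SDVariable out = sd.var("out", DataType.FLOAT, 8, 6, 4);

        // Anti-pattern from the thread: evaluating to an INDArray, slicing it
        // with NDArrayIndex, then re-wrapping with sd.var(...) — that produces
        // a static copy that is NOT connected to the graph.

        // Graph-aware slicing instead: SDIndex keeps the result a derived
        // variable that gradients and updaters can reach.
        SDVariable outReduced = out.get(
                SDIndex.interval(0, 4),   // dimension 0
                SDIndex.interval(0, 6),   // dimension 1
                SDIndex.interval(0, 4));  // dimension 2

        // outReduced can now feed a loss, e.g. sd.loss.meanSquaredError(...)
    }
}
```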

Thanks. This is very helpful. I will follow your suggestion.
Do you think it will help resolve the problem?
Thanks

@adonnini it somewhat should. Right now the variable isn’t really integrated with the graph at all.

@agibsonccc Some good news. I looked at my code again and noticed that, for a reason that is not clear to me, I was running createAndConfigureModel twice: a second time after running an initial fit.
To make a long story short, I commented out that part of the code. Now, execution runs to (apparently) successful completion.
I still want to follow your suggestion regarding the use of the SDIndex API.

Thanks for your help. I hope you don’t mind if I contact you again if I have any questions/issues as I move forward.

@agibsonccc It turns out that the model I am saving is larger than 2 GB, which means that saving it as a FlatBuffers file will not work.
I wonder if this is a sign that something is not working quite right in my model creation process.
What do you think?
Please let me know if there is any information I can send you to shed some light on this issue.
Thanks

@adonnini hmm what data type is everything? Is it float or double?

I double-checked. The data type is float.

The file limit is problematic… I remember it being 4 GB, not 2, though. Let me look into it.
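For reference, the FlatBuffers format documents a 2 GB buffer limit, since buffers are addressed with 32-bit offsets. A quick back-of-the-envelope sketch of what that caps for float32 parameters:

```java
public class FlatBufferBudget {
    // FlatBuffers buffers are limited to 2^31 - 1 bytes (32-bit offsets),
    // which bounds how many float32 values one serialized model can hold.
    static long maxFloatParams() {
        long maxBytes = Integer.MAX_VALUE; // 2147483647 bytes, ~2 GB
        return maxBytes / 4L;              // 4 bytes per float32
    }

    public static void main(String[] args) {
        System.out.println(maxFloatParams() + " float32 values max"); // ~536 million
    }
}
```

So a model whose serialized arrays exceed roughly 536 million float32 values cannot fit in a single FlatBuffers file, regardless of whether the model-building code is otherwise correct.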