passing the dataset to the fit method of the MultiLayerNetwork class

Hello everyone, I am building a dataset from rank-3 INDArrays, and the dataset is assembled without errors. But when I pass it to the fit method of the MultiLayerNetwork class, an error occurs: Rank is [3]; columns() call is not valid. What could be the problem?

@Arasaka can you post more context? You are probably using columns() somewhere. That shortcut is only valid for matrices (hence the name columns()). If you want the last dimension you can also use:
arr.size(-1)
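For anyone following along: the negative index is resolved against the array's rank, so size(-1) reads the last dimension. A minimal pure-Java sketch of that lookup semantics (the sizeOf helper and the example shape are illustrative stand-ins, not ND4J code):

```java
public class NegativeDimDemo {
    // Resolve a possibly-negative dimension index against the rank,
    // mirroring how a size(-1) call returns the last dimension's size.
    static long sizeOf(long[] shape, int dim) {
        int rank = shape.length;
        int resolved = dim < 0 ? dim + rank : dim;
        return shape[resolved];
    }

    public static void main(String[] args) {
        // e.g. a rank-3 array shaped [minibatch, seqLength, features]
        long[] shape = {32, 100, 300};
        System.out.println(sizeOf(shape, -1)); // last dimension: 300
        System.out.println(sizeOf(shape, 0));  // first dimension: 32
    }
}
```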

@agibsonccc thanks for the answer. The strange part is that I don’t call the columns() method anywhere myself. Is it possible to pass a rank-3 INDArray to the fit method? Here is my stack trace:
Exception in thread "main" java.lang.IllegalStateException: Rank is [3]; columns() call is not valid
at org.nd4j.linalg.api.ndarray.BaseNDArray.columns(BaseNDArray.java:4054)
at org.deeplearning4j.nn.layers.feedforward.embedding.EmbeddingLayer.preOutput(EmbeddingLayer.java:85)
at org.deeplearning4j.nn.layers.feedforward.embedding.EmbeddingLayer.activate(EmbeddingLayer.java:126)
at org.deeplearning4j.nn.layers.AbstractLayer.activate(AbstractLayer.java:262)
at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.ffToLayerActivationsInWs(MultiLayerNetwork.java:1147)
at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.computeGradientAndScore(MultiLayerNetwork.java:2798)
at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.computeGradientAndScore(MultiLayerNetwork.java:2756)
at org.deeplearning4j.optimize.solvers.BaseOptimizer.gradientAndScore(BaseOptimizer.java:174)
at org.deeplearning4j.optimize.solvers.StochasticGradientDescent.optimize(StochasticGradientDescent.java:61)
at org.deeplearning4j.optimize.Solver.optimize(Solver.java:52)
at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.fitHelper(MultiLayerNetwork.java:1767)
at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.fit(MultiLayerNetwork.java:1688)
at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.fit(MultiLayerNetwork.java:3614)
at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.fit(MultiLayerNetwork.java:3601)
at Runner.main(Runner.java:58)

@Arasaka ah I see the EmbeddingLayer is calling that. Can you post your network and ideally a reproducer?

It might not be preprocessing the data correctly. This is common when you set the inputs manually.

@agibsonccc
Would you like me to post the network build or the dataset assembly?

@agibsonccc my network build:
public static MultiLayerNetwork buildModel() {
    int vocabSize = 10000;
    int embeddingSize = 300;
    int numberHidden = 256;
    int numberClasses = 2;
    MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
            .seed(123)
            .updater(new Adam())
            .list()
            .layer(new EmbeddingLayer.Builder()
                    .nIn(vocabSize)
                    .nOut(embeddingSize)
                    .build())
            .layer(new LSTM.Builder()
                    .nIn(embeddingSize)
                    .nOut(numberHidden)
                    .activation(Activation.TANH)
                    .build())
            .layer(new DenseLayer.Builder()
                    .nIn(numberHidden)
                    .nOut(numberClasses)
                    .activation(Activation.SOFTMAX)
                    .build())
            .layer(new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                    .nIn(numberHidden)
                    .nOut(numberClasses)
                    .build())
            .build();
    MultiLayerNetwork network = new MultiLayerNetwork(conf);
    network.init();
    return network;
}

@Arasaka can you remove the manual nIns and use setInputType?
Something like:
.setInputType(InputType.recurrent(W2V_VECTOR_SIZE, 1000))

See more here:

@agibsonccc, I’ll try it and post the result here.

@agibsonccc
I did it like this, is it right?

public static MultiLayerNetwork buildModel() {
    int vocabSize = 10000;
    int embeddingSize = 300;
    int numberHidden = 256;
    int numberClasses = 2;
    MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
            .seed(123)
            .updater(new Adam())
            .list()
            .layer(new EmbeddingLayer.Builder()
                    .nIn(vocabSize)
                    .nOut(embeddingSize)
                    .build())
            .layer(new LSTM.Builder()
                    .nOut(numberHidden)
                    .activation(Activation.TANH)
                    .build())
            .layer(new DenseLayer.Builder()
                    .nOut(numberClasses)
                    .activation(Activation.SOFTMAX)
                    .build())
            .layer(new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                    .nOut(numberClasses)
                    .build())
            .setInputType(InputType.recurrent(embeddingSize))
            .build();
    MultiLayerNetwork network = new MultiLayerNetwork(conf);
    network.init();
    return network;
}

@Arasaka remove your other nIn declarations as well. setInputType goes through each layer, adds preprocessors, and sets the input sizes for you. Run that and let me know if you have any issues.

@agibsonccc so I should remove the nIn in the EmbeddingLayer as well?

@Arasaka yes. nIn isn’t needed on any layer once you use setInputType.

@agibsonccc understood, thank you. I’ll run it now and report back.

@agibsonccc now when I pass the dataset to the model for training, the following error occurs:
"Cannot do forward pass for embedding layer with input more than one column. Expected input shape: [numExamples,1] with each entry being an integer index (layer name: layer0, layer index: 0, layer type: EmbeddingLayer)". As I understand it, the error occurs because my input data has shape [numExamples, maxLength, vectorLength], while the EmbeddingLayer expects input of shape [numExamples, 1], where each element is the index of a word in the vocabulary.
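For reference, embedding layers consume integer word indices rather than pre-embedded dense vectors, so the feature matrix should hold one index per token. A minimal pure-Java sketch of that encoding step (the vocabulary, pad index, and toIndices helper are made up for illustration; in practice the resulting rows would be wrapped into an INDArray):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;

public class EmbeddingInputDemo {
    // Map each token to its vocabulary index, padding to a fixed length.
    // One such row per example gives an integer matrix of shape
    // [numExamples, seqLength] instead of [numExamples, seqLength, vectorLength].
    static int[] toIndices(List<String> tokens, Map<String, Integer> vocab,
                           int seqLength, int padIndex) {
        int[] indices = new int[seqLength];
        Arrays.fill(indices, padIndex);
        for (int i = 0; i < Math.min(tokens.size(), seqLength); i++) {
            indices[i] = vocab.getOrDefault(tokens.get(i), padIndex);
        }
        return indices;
    }

    public static void main(String[] args) {
        Map<String, Integer> vocab = Map.of("the", 1, "cat", 2, "sat", 3);
        int[] row = toIndices(List.of("the", "cat", "sat"), vocab, 5, 0);
        System.out.println(Arrays.toString(row)); // [1, 2, 3, 0, 0]
    }
}
```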

@Arasaka switch to using the EmbeddingSequenceLayer instead. Sorry I wasn’t thinking about that earlier. You can find the tests and the class on GitHub.

@agibsonccc I changed EmbeddingLayer to EmbeddingSequenceLayer. I’ll write back with the result.

@Arasaka since you’re using 3d the sequence layer expects 3d. That’s why we had the 2d columns assumption in there.

@agibsonccc I replaced it like this:
public static MultiLayerNetwork buildModel() {
    int vocabSize = 10000;
    int embeddingSize = 300;
    int numberHidden = 256;
    int numberClasses = 2;
    MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
            .seed(123)
            .updater(new Adam())
            .list()
            .layer(new EmbeddingSequenceLayer.Builder() // changed
                    .nOut(embeddingSize)
                    .build())
            .layer(new LSTM.Builder()
                    .nOut(numberHidden)
                    .activation(Activation.TANH)
                    .build())
            .layer(new DenseLayer.Builder()
                    .nOut(numberClasses)
                    .activation(Activation.SOFTMAX)
                    .build())
            .layer(new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                    .nOut(numberClasses)
                    .build())
            .setInputType(InputType.recurrent(embeddingSize))
            .build();
    MultiLayerNetwork network = new MultiLayerNetwork(conf);
    network.init();
    return network;
}

I hope I did it right

@Arasaka at first glance that looks good. Give that a shot and I’ll check on you tomorrow.

@agibsonccc hello, I’ll have to put this on hold for today and tomorrow. I hope that’s okay.