Error input size in multilayer conf

Hello, I’m a French computer science student and I’m starting to use DL4J, but one exception (among others, I guess) is giving me a hard time:
Exception in thread "main" org.deeplearning4j.exception.DL4JInvalidInputException: Input size (1 columns; shape = [15, 1]) is invalid: does not match layer input size (layer # inputs = 2) (layer name: layer0, layer index: 0, layer type: DenseLayer)

I have a CSV file with 2 columns (about 30,000 lines) that looks like this:
13264.620117,2
14.82,5
1505.619995,6
20.98,1
23.27,7
35.57,3
36.150002,9

The second column is the label; the first is an arbitrary value.

My model configuration is as follows:

Does anyone have any answers or advice?

There are a few things wrong here.

Those two statements together mean that you have one (=1) input and one output.

Yet you set up your model to use two inputs and two outputs:
[image]

Then you go on to configure your network rather strangely:
[image]
Here you tell your model that the first layer will receive numInputs inputs, and that the second layer (the output layer in this case) will also receive numInputs inputs while producing outputNum outputs.

Given that you say you have struggled with other problems, I expect that you probably had numInputs and numOutputs as different values originally, but at some point changed both to 2 so they are equal and things don't "break".

.nIn and .nOut specify, for each layer, how many values it expects as input and how many it should output. For example, suppose you want a network that takes an MNIST-sized input (28x28 pixels = 784 values) and then passes through several layers, each smaller than the last, down to the final 10 possible results: a 6-layer network that goes 784 → 512 → 256 → 128 → 64 → 32 → 10. You would start with the first layer having .nIn(784).nOut(512), the next layer having .nIn(512).nOut(256), the next .nIn(256).nOut(128), and so on, with the output layer having .nIn(32).nOut(10).
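As a sketch, that 784 → 512 → … → 10 chain would look roughly like this in DL4J (the seed and activation functions here are arbitrary choices, not something the thread prescribes):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class MnistChainSketch {
    public static MultiLayerConfiguration build() {
        return new NeuralNetConfiguration.Builder()
                .seed(42) // arbitrary
                .list()
                // each layer's nIn must equal the previous layer's nOut
                .layer(new DenseLayer.Builder().nIn(784).nOut(512).activation(Activation.RELU).build())
                .layer(new DenseLayer.Builder().nIn(512).nOut(256).activation(Activation.RELU).build())
                .layer(new DenseLayer.Builder().nIn(256).nOut(128).activation(Activation.RELU).build())
                .layer(new DenseLayer.Builder().nIn(128).nOut(64).activation(Activation.RELU).build())
                .layer(new DenseLayer.Builder().nIn(64).nOut(32).activation(Activation.RELU).build())
                .layer(new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                        .nIn(32).nOut(10) // 10 output classes
                        .activation(Activation.SOFTMAX)
                        .build())
                .build();
    }
}
```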

As you can see, that is a lot of redundancy that can be inferred automatically. So instead of setting .nIn on every layer manually, you can add .setInputType(InputType.feedForward(numInputs)) and it will calculate the correct .nIn value for you, even when you go on to create more complex networks. For the example above it would look like .setInputType(InputType.feedForward(784)), and then only setting .nOut would still be required.
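With .setInputType, the same chain only needs .nOut on each layer; the .nIn values are inferred. A sketch (again, the activation choices are mine, not from the thread):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class MnistInputTypeSketch {
    public static MultiLayerConfiguration build() {
        return new NeuralNetConfiguration.Builder()
                .list()
                // no .nIn anywhere: DL4J fills it in from the input type below
                .layer(new DenseLayer.Builder().nOut(512).activation(Activation.RELU).build())
                .layer(new DenseLayer.Builder().nOut(256).activation(Activation.RELU).build())
                .layer(new DenseLayer.Builder().nOut(128).activation(Activation.RELU).build())
                .layer(new DenseLayer.Builder().nOut(64).activation(Activation.RELU).build())
                .layer(new DenseLayer.Builder().nOut(32).activation(Activation.RELU).build())
                .layer(new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                        .nOut(10)
                        .activation(Activation.SOFTMAX)
                        .build())
                .setInputType(InputType.feedForward(784)) // 784 input values per example
                .build();
    }
}
```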

Thank you
So I changed the MultiLayer configuration as follows:


But the same exception comes back:

Exception in thread "main" org.deeplearning4j.exception.DL4JInvalidInputException: Input size (1 columns; shape = [15, 1]) is invalid: does not match layer input size (layer # inputs = 2) (layer name: layer0, layer index: 0, layer type: DenseLayer)

Perhaps my error is in my DataSetIterator…?

Or maybe I misunderstood your explanation, too.

Also (silly question), why does the error occur on this line:

for (int i = 0; i < 1000; i++) {
    model.fit(trainIter); // ← this line???
}

The .nIn calls are commented out.

Ideally, you would edit your posts instead of posting new ones. I’ve merged them now.

Your problem is that you still used numInputs = 2, which says that you have 2 input values, while you have said yourself:

So you actually only have a single input value and you have to reflect that in your numInputs variable!
That was the very first thing I told you in my first answer.

The training only starts at this point. This is where the data first meets the model. And the error you are getting is telling you exactly that: The data you are feeding it (15 examples with 1 value) doesn’t match what it expects (examples with 2 values).
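The check DL4J performs at that point can be illustrated with a toy sketch (plain Java, not the actual DL4J internals): the number of feature columns in the data must equal the first layer's nIn.

```java
// Toy illustration of the shape check that fails here (NOT DL4J code).
public class ShapeCheck {
    // Returns true when a feature matrix with `dataColumns` columns
    // can be fed to a layer that expects `layerNIn` inputs.
    static boolean matchesLayerInput(int dataColumns, int layerNIn) {
        return dataColumns == layerNIn;
    }

    public static void main(String[] args) {
        // One feature column (the first CSV column), numInputs = 1: fine.
        System.out.println(matchesLayerInput(1, 1)); // true
        // One feature column but numInputs = 2: the reported exception.
        System.out.println(matchesLayerInput(1, 2)); // false
    }
}
```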

Thanks, I’ll take a closer look.

As I'm a new user, I can't put 2 pictures in one post, so I made 2 posts.

And thanks again

Yeah, that was the mistake all along.
And then I had this mistake:
Exception in thread "main" java.lang.IllegalArgumentException: Labels and preOutput must have equal shapes: got shapes [15, 10] vs [15, 2]

So I set numOutputs = 10, and now it's rolling!
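Putting both fixes from the thread together (numInputs = 1 because there is a single feature column, numOutputs = 10 to match the label shape [15, 10]), the configuration would look roughly like this. The hidden layer size, updater, and activations are arbitrary placeholder choices, not the poster's actual values:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.learning.config.Adam;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class CorrectedConfSketch {
    public static MultiLayerConfiguration build() {
        return new NeuralNetConfiguration.Builder()
                .seed(123)            // arbitrary
                .updater(new Adam())  // arbitrary
                .list()
                .layer(new DenseLayer.Builder().nOut(32).activation(Activation.RELU).build())
                .layer(new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                        .nOut(10) // 10 classes, matching the label shape [15, 10]
                        .activation(Activation.SOFTMAX)
                        .build())
                .setInputType(InputType.feedForward(1)) // a single feature column
                .build();
    }
}
```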

All that’s left is to make a relevant and viable model.

Thanks again @treo !
PS: I didn't know about this forum, and it was hard to find docs for DL4J; the ones in French aren't worth much…


Can you tell me more about that? The docs and this forum are linked directly from deeplearning4j.org, and the forum is again linked from the documentation.
Is there anything we can do to make this any easier?

I expressed myself badly. What I meant is that searches very often land on the Gitter chat (whose continuous chat format is not very practical for my taste) and rarely on this forum. And I didn't search deeplearning4j.org enough to find it.

So there's not much to really change; I'd say I just need to search better.

I see. That is actually the reason we abandoned gitter in favor of this forum.