There are a few things wrong here.
Those two statements together mean that you have one (= 1) input and one output.
Yet you set up your model to use two inputs and outputs:
Then, you go on and configure your network very weirdly:
What you are doing here is telling your model that the first layer is going to get
`numInputs` inputs, and that your second (= output, in this case) layer is also going to get
`numInputs` inputs, while at the same time you tell it to use `numOutputs` outputs.
Given that you say that you have struggled with other problems, I expect that you probably had
`numInputs` and `numOutputs` as different values originally, but at some point changed both to
2 so they are equal and things don’t “break”.
`.nIn` and `.nOut` specify for each layer what it expects to get in and how much it should output. For example, if you wanted to configure a network that gets an MNIST-sized input (28×28 pixels = 784 values) and then goes through several layers, getting smaller until it reaches the final 10 possible results, i.e. a 6-layer network that goes 784 -> 512 -> 256 -> 128 -> 64 -> 32 -> 10, you would start with the first layer having
`.nIn(784).nOut(512)` and the next layer having
`.nIn(512).nOut(256)`, the next layer having
`.nIn(256).nOut(128)`, …, with the output layer then having `.nIn(32).nOut(10)`.
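Put together, that network could be sketched like this with DL4J’s builder API. This is a minimal sketch: the activation functions, loss function, and any updater/seed settings are assumptions I’ve filled in for illustration, not something from your code.

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

// Each layer's .nIn must match the previous layer's .nOut:
// 784 -> 512 -> 256 -> 128 -> 64 -> 32 -> 10
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .list()
        .layer(new DenseLayer.Builder().nIn(784).nOut(512).activation(Activation.RELU).build())
        .layer(new DenseLayer.Builder().nIn(512).nOut(256).activation(Activation.RELU).build())
        .layer(new DenseLayer.Builder().nIn(256).nOut(128).activation(Activation.RELU).build())
        .layer(new DenseLayer.Builder().nIn(128).nOut(64).activation(Activation.RELU).build())
        .layer(new DenseLayer.Builder().nIn(64).nOut(32).activation(Activation.RELU).build())
        // Output layer: takes the 32 values from the last hidden layer, emits 10 class scores
        .layer(new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                .nIn(32).nOut(10).activation(Activation.SOFTMAX).build())
        .build();
```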
As you can see, that is a lot of redundancy that can be automatically inferred. So instead of setting
`.nIn` on every layer manually, you can also add
`.setInputType(InputType.feedForward(numInputs))` and it will take care of calculating the correct
`.nIn` value for you, even when you go on to create more complex networks. For the example I’ve given above, you would use
`.setInputType(InputType.feedForward(784))`, and then setting only
`.nOut` on each layer would still be required.
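With `.setInputType`, the same network sketch from above loses all the `.nIn` calls. Again, the activations and loss function here are illustrative assumptions:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

// .nIn is inferred: from the input type for the first layer,
// and from the previous layer's .nOut for every layer after it.
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .list()
        .layer(new DenseLayer.Builder().nOut(512).activation(Activation.RELU).build())
        .layer(new DenseLayer.Builder().nOut(256).activation(Activation.RELU).build())
        .layer(new DenseLayer.Builder().nOut(128).activation(Activation.RELU).build())
        .layer(new DenseLayer.Builder().nOut(64).activation(Activation.RELU).build())
        .layer(new DenseLayer.Builder().nOut(32).activation(Activation.RELU).build())
        .layer(new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                .nOut(10).activation(Activation.SOFTMAX).build())
        .setInputType(InputType.feedForward(784))
        .build();
```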