How to shape input correctly for Convolution1D?

I’m trying to get a simple 1D convolution working, but I can’t find a data format that the network will accept. I’ve put a full gist here:

But the core of it is:

	double[][] features = new double[][] {
		{ 1, 1, 1, 1, 1, 1, 1, 1 },
		{ 1, 0, 1, 1, 1, 1, 1, 1 },
		{ 1, 1, 1, 1, 1, 1, 1, 1 },
		{ 1, 0, 1, 1, 1, 1, 1, 1 },
		{ 1, 1, 1, 1, 1, 1, 1, 1 },
		{ 1, 0, 1, 1, 1, 1, 1, 1 },
		{ 1, 1, 1, 1, 1, 1, 1, 1 },
		{ 1, 0, 1, 1, 1, 1, 1, 1 },
		{ 1, 1, 1, 1, 1, 1, 1, 1 },
		{ 1, 0, 1, 1, 1, 1, 1, 1 },
	};
	
	double[][] labels = new double[][] {
		{1, 0},
		{0, 1},
		{1, 0},
		{0, 1},
		{1, 0},
		{0, 1},
		{1, 0},
		{0, 1},
		{1, 0},
		{0, 1},
	};
	
	int data_len = features[0].length;
	
	MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
			.seed(0)
			.l2(0.0005)
			.weightInit(WeightInit.XAVIER)
			.updater(new Adam(1e-3))
			.list()
			.layer(new Convolution1D.Builder(5)
					// nIn and nOut specify depth: nIn is the number of input channels and nOut is the number of filters to apply
					.nIn(1)
					.stride(1)
					.nOut(10)
					.activation(Activation.IDENTITY)
					.build())
			.layer(new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
					.nOut(2)
					.activation(Activation.SOFTMAX)
					.build())
			.setInputType(InputType.convolutionalFlat(data_len, 1, 1))
			.build();

	
	DataSet dataset = new DataSet();
	dataset.setFeatures(Nd4j.create(features));
	dataset.setLabels(Nd4j.create(labels));
	
	MultiLayerNetwork model = new MultiLayerNetwork(conf);
	model.init();
	model.setListeners(new ScoreIterationListener(10));
	
	List<DataSet> datasets = new ArrayList<>();
	datasets.add(dataset);

	DataSetIterator trainIter = new ListDataSetIterator<>(datasets);
	for (int n = 0; n < 50; n++) {
		model.fit(trainIter);
	}

When I run this, it fails with:

Exception in thread "main" org.deeplearning4j.exception.DL4JInvalidInputException: Cannot do forward pass in Convolution layer (layer name = layer0, layer index = 0): input array channels does not match CNN layer configuration (data format = NCHW, data input channels = 8, [minibatch, channels, height, width]=[10, 8, 1, 1]; expected input channels = 1) (layer name: layer0, layer index: 0, layer type: Convolution1DLayer)

I’ve tried all sorts of reshapes of the data and variations on the InputType, but I can’t get anything to work.
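Reading the error, the `[minibatch, channels, height, width]=[10, 8, 1, 1]` part suggests my 8 features are being treated as 8 channels rather than as a sequence of length 8. Just to make the layout concrete, here’s a plain-Java sketch (no ND4J, purely illustrative, and only my guess at what the layer wants) of moving the data into a single-channel, rank-3 shape:

```java
public class ShapeSketch {
	// Turn rank-2 [minibatch][length] features into rank-3
	// [minibatch][channels=1][length], which is what the error's
	// "expected input channels = 1" seems to be asking for.
	static double[][][] addChannelDim(double[][] features) {
		double[][][] out = new double[features.length][1][];
		for (int i = 0; i < features.length; i++) {
			out[i][0] = features[i].clone();
		}
		return out;
	}

	public static void main(String[] args) {
		double[][] features = new double[10][8];
		double[][][] r = addChannelDim(features);
		System.out.println(r.length + " " + r[0].length + " " + r[0][0].length);
		// prints "10 1 8", i.e. [minibatch=10, channels=1, length=8]
	}
}
```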

I’m wondering if I am missing something basic?

Thanks!

Maybe you can try setting the input type like this:
.setInputType(InputType.recurrent(1, data_len))

Thanks! I gave that a try, but it failed even earlier in the process:

Exception in thread "main" java.lang.IllegalArgumentException: Invalid size: cannot get size of dimension 2 for rank 2 NDArray (array shape: [10, 8])
	at org.nd4j.linalg.api.ndarray.BaseNDArray.size(BaseNDArray.java:4510)
	at org.deeplearning4j.nn.layers.convolution.Convolution1DLayer.preOutput(Convolution1DLayer.java:178)

I feel like it’s on the right track, though, as there is definitely a mismatch in how the layers are talking to each other, i.e. what goes into one layer as height is being treated as depth by another, I think. I can see in the code that various reshaping operations happen depending on the input type and data format, but I can’t see how to match things up correctly.
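As a sanity check on where the shapes should end up if the input ever gets through: my understanding of the convolution arithmetic (kernel 5, stride 1, and assuming no padding by default, which is just my assumption about DL4J’s ConvolutionMode) is that a length-8 sequence should come out as length 4 with 10 channels:

```java
public class ConvLenSketch {
	// Standard output-length formula for a 1D convolution with
	// "valid"/truncate padding: outLen = (inLen + 2*pad - kernel) / stride + 1
	static int convOutLen(int inLen, int kernel, int stride, int pad) {
		return (inLen + 2 * pad - kernel) / stride + 1;
	}

	public static void main(String[] args) {
		// The layer in question: length-8 input, kernel size 5, stride 1, no padding.
		System.out.println(convOutLen(8, 5, 1, 0)); // prints 4
	}
}
```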

Any further thoughts or help would be much appreciated!