RnnToCnnPreProcessor

Hi,
I want to use a Convolution3D layer before an LSTM layer in a ComputationGraph.
The input to the network is a rank-3 RNN array with shape [100, 5, 71].
I used RnnToCnnPreProcessor to reshape the input array to fit the CNN,
but I got this error: "New shape length doesn't match original length: [2100] vs [350]. Original shape: [70, 5] New Shape: [70, 1, 10, 3]"
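In case it helps, here is my understanding of what the preprocessor does and the arithmetic behind the error message; please correct me if this is where I'm going wrong:

// As far as I understand, RnnToCnnPreProcessor(inputHeight, inputWidth, numChannels)
// reshapes RNN activations of shape [miniBatchSize, numChannels * inputHeight * inputWidth, timeSeriesLength]
// into CNN activations of shape [miniBatchSize * timeSeriesLength, numChannels, inputHeight, inputWidth].
int inputHeight = 10, inputWidth = 3, numChannels = 1;            // the values I pass below
int expectedPerTimeStep = numChannels * inputHeight * inputWidth; // = 30 values per time step
// The error is trying to reshape [70, 5] (70 * 5 = 350 values, i.e. 5 per row)
// into [70, 1, 10, 3] (70 * 30 = 2100 values), which is where the length mismatch comes from.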
Could you please tell me how to set up the preprocessors?
Here is the graph configuration:
ComputationGraphConfiguration conf = new NeuralNetConfiguration.Builder()
    .updater(new Adam(0.01))
    .seed(123)
    .miniBatch(false)
    .biasInit(0)
    .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
    .weightInit(WeightInit.XAVIER_UNIFORM)
    .graphBuilder()
    .addInputs("input") // can use any label for this
    .addLayer("L1", new Convolution3D.Builder().nIn(iter.inputColumns()).nOut(lstmLayerSize)
            .activation(Activation.RELU6).kernelSize(10, 3, iter.exampleLength).stride(2, 2, iter.exampleLength).build(), "input")
    .addLayer("L2", new LSTM.Builder().nIn(iter.inputColumns()).nOut(lstmLayerSize)
            .updater(new Adam(0.008)).activation(Activation.TANH).build(), "L1")
    .addLayer("L3", new DenseLayer.Builder().nIn(lstmLayerSize).nOut(lstmLayerSize)
            .updater(new Adam(0.01)).activation(Activation.TANH).build(), "L2")
    .addLayer("L4", new RnnOutputLayer.Builder().nIn(lstmLayerSize).nOut(iter.inputColumns())
            .updater(new Adam(0.01)).build(), "L3")
    .setOutputs("L4")
    .inputPreProcessor("L1", new RnnToCnnPreProcessor(10, 3, 1))
    .inputPreProcessor("L2", new CnnToRnnPreProcessor(10, 3, 1))
    .inputPreProcessor("L3", new RnnToFeedForwardPreProcessor())
    .inputPreProcessor("L4", new FeedForwardToRnnPreProcessor())
    .build();
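I also wondered whether I should skip the manual inputPreProcessor calls and instead declare the input type so DL4J works out the preprocessors itself, roughly like below (numFeatures and timeSeriesLength are placeholders for my data's dimensions, and I'm not sure this approach even works with Convolution3D):

// Hypothetical alternative: let DL4J infer the preprocessors from the declared input type,
// instead of setting them by hand as above.
.graphBuilder()
    .addInputs("input")
    .setInputTypes(InputType.recurrent(numFeatures, timeSeriesLength))
    // ... same layers as above ...

Would that be the recommended way, or do the preprocessors need to be set manually for this kind of architecture?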