Reshape/Flatten layer support?

I'm using a 1D convolution layer with multiple channels as the output layer, but the output layer treats the channels as extra mini-batch entries.

So it multiplies the mini-batch size by the channel size, and that causes an error during training.


The layer gets the mini-batch size from the graph, but that value is the inflated batch size of the output layer. `FeedForwardToRnnPreProcessor.preProcess()` then divides the shape by this over-sized mini-batch size, and the result becomes zero.

@pmingkr you can use the reshape preprocessor here, with the MultiLayerNetwork API (see the sketch below).
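
A minimal sketch of how an input preprocessor is wired into a MultiLayerNetwork configuration. The 2D `ConvolutionLayer`, `CnnToFeedForwardPreProcessor`, and all sizes below are placeholder assumptions standing in for whichever reshape preprocessor matches your actual layer layout and DL4J version; the point is the `.inputPreProcessor(...)` hook, which reshapes activations between layers so channels end up in the feature dimension rather than the mini-batch dimension.

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.conf.preprocessor.CnnToFeedForwardPreProcessor;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class ReshapePreprocessorSketch {
    public static void main(String[] args) {
        // Placeholder activation shape coming out of the conv layer;
        // these must match your real layer output at runtime.
        int height = 8, width = 8, channels = 4;

        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
            .list()
            .layer(0, new ConvolutionLayer.Builder(3, 3)
                    .nIn(1)             // placeholder: input channels
                    .nOut(channels)
                    .build())
            // The preprocessor flattens [miniBatch, channels, height, width]
            // into [miniBatch, channels * height * width], so the channel
            // dimension is folded into the features, not the mini-batch.
            .inputPreProcessor(1, new CnnToFeedForwardPreProcessor(height, width, channels))
            .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
                    .activation(Activation.IDENTITY)
                    .nIn(channels * height * width)
                    .nOut(10)           // placeholder: number of outputs
                    .build())
            .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();
    }
}
```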

For anything more complicated, consider using the ComputationGraph API, which has a ReshapeVertex (see the sketch below).
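
A minimal sketch of the ComputationGraph route, assuming placeholder channel and sequence-length values; the `ReshapeVertex` target shape is illustrative and must match your actual activation shape (the -1 lets ND4J's reshape infer the mini-batch dimension).

```java
import org.deeplearning4j.nn.conf.ComputationGraphConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.graph.ReshapeVertex;
import org.deeplearning4j.nn.conf.layers.Convolution1DLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class ReshapeVertexSketch {
    public static void main(String[] args) {
        int channels = 8, seqLength = 10;   // placeholder sizes

        ComputationGraphConfiguration conf = new NeuralNetConfiguration.Builder()
            .graphBuilder()
            .addInputs("in")
            // 1D convolution producing [miniBatch, channels, seqLength] activations
            .addLayer("conv", new Convolution1DLayer.Builder()
                    .kernelSize(3)
                    .nIn(3)               // placeholder: input channels
                    .nOut(channels)
                    .build(), "in")
            // Reshape to [miniBatch, channels * seqLength] so the channel
            // dimension is not merged into the mini-batch dimension
            .addVertex("reshape", new ReshapeVertex(-1, channels * seqLength), "conv")
            .addLayer("out", new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
                    .activation(Activation.IDENTITY)
                    .nIn(channels * seqLength)
                    .nOut(5)              // placeholder: number of outputs
                    .build(), "reshape")
            .setOutputs("out")
            .build();

        ComputationGraph net = new ComputationGraph(conf);
        net.init();
    }
}
```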