I tried to use a BatchNormalization layer after my 1D CNN, but it always gives me an error message saying either that nIn/nOut is not specified or that rank 3 input is not supported, so I can't use it directly.
My workaround is to reshape the output of the 1D CNN from 2D to 1D and specify nOut as a single int. This way I was able to add a BatchNormalization layer to my model. Is this the correct way to do it?
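For reference, here is a minimal sketch of the kind of configuration I mean (the layer sizes are made up, and I'm assuming the rank-3 to rank-2 reshape is done with an explicit RnnToFeedForwardPreProcessor, which folds the time steps into the minibatch so the BN layer sees 2D input and takes a single nOut):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.BatchNormalization;
import org.deeplearning4j.nn.conf.layers.Convolution1DLayer;
import org.deeplearning4j.nn.conf.layers.GlobalPoolingLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.conf.layers.PoolingType;
import org.deeplearning4j.nn.conf.preprocessor.FeedForwardToRnnPreProcessor;
import org.deeplearning4j.nn.conf.preprocessor.RnnToFeedForwardPreProcessor;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class Cnn1dBnSketch {
    public static void main(String[] args) {
        int nChannelsIn = 1;   // hypothetical number of input channels
        int nFilters = 64;     // hypothetical number of conv filters
        int nClasses = 10;     // hypothetical number of output classes

        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .list()
                // 1D conv output is rank 3: [minibatch, nFilters, length]
                .layer(0, new Convolution1DLayer.Builder(3)
                        .nIn(nChannelsIn).nOut(nFilters)
                        .activation(Activation.RELU).build())
                // Reshape rank 3 -> rank 2 ([minibatch * length, nFilters]) so
                // BatchNormalization sees feed-forward input and a single nOut
                .inputPreProcessor(1, new RnnToFeedForwardPreProcessor())
                .layer(1, new BatchNormalization.Builder()
                        .nIn(nFilters).nOut(nFilters).build())
                // Reshape back to rank 3, then pool over the sequence dimension
                .inputPreProcessor(2, new FeedForwardToRnnPreProcessor())
                .layer(2, new GlobalPoolingLayer.Builder(PoolingType.MAX).build())
                .layer(3, new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .nIn(nFilters).nOut(nClasses)
                        .activation(Activation.SOFTMAX).build())
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();
        System.out.println(net.summary());
    }
}
```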
The problem is that when I trained the model in Keras, the version with BN performed better than the version without BN. But with the above workaround in DL4J, the model with BN performs worse than the model without it. I am wondering what the reason is and how to fix it.