BatchNormalization layer only supports a single int as nIn and nOut?

I tried to use a BatchNormalization layer after my 1D CNN, but it always gives me an error message saying either that nIn/nOut is not specified or that rank 3 is not supported, so I can't use it.

My workaround is to reshape the output of the 1D CNN from 2D to 1D per sample, and specify nOut as a single int. This way I successfully added a BatchNormalization layer to my model. Is it correct to do it this way?
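Roughly what the reshape does, as a minimal ND4J sketch with made-up sizes (my real layer sizes differ):

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class FlattenWorkaround {
    public static void main(String[] args) {
        // Stand-in for the 1D CNN output: [minibatch, channels, length]
        INDArray cnnOut = Nd4j.rand(32, 16, 100);
        // The workaround: flatten each sample's 2D feature map to a 1D vector,
        // then use BatchNormalization with nOut = 16 * 100 = 1600
        INDArray flat = cnnOut.reshape(32, 16 * 100);
        System.out.println(java.util.Arrays.toString(flat.shape()));  // [32, 1600]
    }
}
```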

The problem is that when I trained the model in Keras, the version with BN performed better than the version without BN. But with the above workaround in DL4J, the model with BN performs worse than the model without it. I am wondering what the reason is and how to fix it.

Thanks.

@XJ8 did you try using keras model import? https://deeplearning4j.konduit.ai/keras-import/overview
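For a Sequential model that's roughly the following (the file name is a placeholder; functional-API models go through importKerasModelAndWeights and a ComputationGraph instead):

```java
import org.deeplearning4j.nn.modelimport.keras.KerasModelImport;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

public class ImportExample {
    public static void main(String[] args) throws Exception {
        // "model.h5" stands in for a Keras Sequential model
        // saved with model.save("model.h5") on the Keras side
        MultiLayerNetwork model =
                KerasModelImport.importKerasSequentialModelAndWeights("model.h5");
        System.out.println(model.summary());
    }
}
```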

@agibsonccc Thank you, Adam.

I did try importing a different Keras model before, but batch normalization gave me problems. I think I should try again: train from an imported model and see if there is any performance difference.

@agibsonccc It turns out my workaround is incorrect. I tried the same workaround logic in Keras, and the performance is indeed worse with this BN workaround than without it. I need to really understand BN to see the reason.
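My tentative understanding, if I read the BN paper correctly: for convolutional layers, BN should compute one mean/variance (and one gamma/beta) per channel, over the batch and all positions; the flatten workaround instead computes one per (channel, position) pair, over the batch only. A sketch of the difference, with the same made-up shapes as above:

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class BnStats {
    public static void main(String[] args) {
        INDArray x = Nd4j.rand(32, 16, 100);  // [minibatch, channels, length]

        // Conv-style BN: one mean per channel, computed over the batch
        // AND the length dimension -> 16 statistics
        INDArray perChannelMean = x.mean(0, 2);

        // Flatten workaround: one mean per (channel, position) pair,
        // computed over the batch only -> 1600 statistics, with the
        // learned gamma/beta tied to one fixed sequence length
        INDArray perPositionMean = x.reshape(32, 1600).mean(0);

        System.out.println(perChannelMean.length());   // 16
        System.out.println(perPositionMean.length());  // 1600
    }
}
```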

@XJ8 Could you post your architecture? Happy to help dive in a bit.

Thanks. I posted the model in the same Google Drive folder. Because BN in DL4J doesn't support rank 3, for the 1D CNN I have to reshape, batch normalize, and reshape back.

@XJ8 could you post your architecture? That doesn't sound right. If you're using a 1D CNN, it should just be a matter of inserting a 1 into the dimensions for batch norm; if needed, a preprocessor should take care of that.
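To illustrate what I mean, an ND4J-level sketch with made-up shapes (the actual preprocessors handle this internally):

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class SingletonDim {
    public static void main(String[] args) {
        // View the 1D feature map [minibatch, channels, length] as a 2D
        // feature map [minibatch, channels, 1, length]: batch norm's CNN
        // path then normalizes per channel, as intended
        INDArray cnn1dOut = Nd4j.rand(32, 16, 100);
        INDArray asCnn2d = cnn1dOut.reshape(32, 16, 1, 100);
        System.out.println(java.util.Arrays.toString(asCnn2d.shape()));  // [32, 16, 1, 100]
    }
}
```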