More batch normalization problems with 1D convolution

Hi

I am having more issues with batch normalization when trying to use a 1D CNN. Here are the first layers of my model:
.addLayer("stem-cnn1", new Convolution1DLayer.Builder(3, 2).nIn(483).nOut(32).build(), "input")
.addLayer("stem-batch1", new BatchNormalization.Builder(false).decay(0.995).eps(0.001).nIn(32).nOut(32).build(), "stem-cnn1")
.addLayer("stem-cnn2", new Convolution1DLayer.Builder(3).nIn(32).nOut(32).build(), "stem-batch1")
.addLayer("stem-batch2", new BatchNormalization.Builder(false).decay(0.995).eps(0.001).nIn(32).nOut(32).build(), "stem-cnn2")
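
For reference, here's a cut-down, self-contained sketch that reproduces it for me (the class name, the batch size of 57, and the sequence length of 500 are illustrative; the layer config is taken from the snippet above):

import org.deeplearning4j.nn.conf.ComputationGraphConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.BatchNormalization;
import org.deeplearning4j.nn.conf.layers.Convolution1DLayer;
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.nd4j.linalg.factory.Nd4j;

public class BnRank3Repro {
    public static void main(String[] args) {
        // Same stem as above, cut down to the first conv + BN pair
        ComputationGraphConfiguration conf = new NeuralNetConfiguration.Builder()
                .graphBuilder()
                .addInputs("input")
                .addLayer("stem-cnn1", new Convolution1DLayer.Builder(3, 2)
                        .nIn(483).nOut(32).build(), "input")
                .addLayer("stem-batch1", new BatchNormalization.Builder(false)
                        .decay(0.995).eps(0.001).nIn(32).nOut(32).build(), "stem-cnn1")
                .setOutputs("stem-batch1")
                .build();

        ComputationGraph net = new ComputationGraph(conf);
        net.init();

        // Rank-3 time-series input: [miniBatch, channels, timesteps]
        // (batch size and sequence length here are illustrative)
        net.output(Nd4j.rand(new int[]{57, 483, 500}));  // throws in BatchNormalization.preOutput
    }
}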

The 1D CNN uses rank-3 INDArrays, since Convolution1DLayer subclasses the 2D ConvolutionLayer:
/*
//TODO: We will eventually want to NOT subclass off of ConvolutionLayer.
//Currently, we just subclass off the ConvolutionLayer and hard code the "width" dimension to 1
* This approach treats a multivariate time series with L timesteps and
* P variables as an L x 1 x P image (L rows high, 1 column wide, P
* channels deep). The kernel should be H<L pixels high and W=1 pixels
* wide.
*/
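
If I'm reading that comment right, the shapes in my stem work out as follows. A small sketch (the [57, 32, 246] shape matches the second exception further down; the rank-4 "image" view is my interpretation of the comment):

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

// Rank-3 activations as Convolution1DLayer passes them between layers:
// [miniBatch, channels, convolvedLength] -- e.g. the output of stem-cnn1
INDArray act = Nd4j.zeros(new int[]{57, 32, 246});

// The "L x 1 x P image" view from the comment would be rank 4:
// [miniBatch, channels, height = length, width = 1]
INDArray asImage = act.reshape(57, 32, 246, 1);

System.out.println(act.rank());      // 3 -> what BatchNormalization receives and rejects
System.out.println(asImage.rank());  // 4 -> what the CNN BatchNormalization path expects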

but batch normalization apparently does not support rank 3, which I don't understand:

Exception in thread "main" java.lang.IllegalStateException: Batch normalization on activations of rank 3 not supported (layer name: stem-batch1, layer index: 0, layer type: BatchNormalization)
at org.deeplearning4j.nn.layers.normalization.BatchNormalization.preOutput(BatchNormalization.java:506)
at org.deeplearning4j.nn.layers.normalization.BatchNormalization.activate(BatchNormalization.java:393)
at org.deeplearning4j.nn.graph.vertex.impl.LayerVertex.doForward(LayerVertex.java:110)
at org.deeplearning4j.nn.graph.ComputationGraph.ffToLayerActivationsInWS(ComputationGraph.java:2134)
at org.deeplearning4j.nn.graph.ComputationGraph.computeGradientAndScore(ComputationGraph.java:1371)
at org.deeplearning4j.nn.graph.ComputationGraph.computeGradientAndScore(ComputationGraph.java:1340)
at org.deeplearning4j.optimize.solvers.BaseOptimizer.gradientAndScore(BaseOptimizer.java:174)
at org.deeplearning4j.optimize.solvers.StochasticGradientDescent.optimize(StochasticGradientDescent.java:61)
at org.deeplearning4j.optimize.Solver.optimize(Solver.java:52)
at org.deeplearning4j.nn.graph.ComputationGraph.fitHelper(ComputationGraph.java:1164)
at org.deeplearning4j.nn.graph.ComputationGraph.fit(ComputationGraph.java:1114)
at org.deeplearning4j.nn.graph.ComputationGraph.fit(ComputationGraph.java:1081)
at org.deeplearning4j.nn.graph.ComputationGraph.fit(ComputationGraph.java:1017)
at main.PatternDetect.run(PatternDetect.java:92)
at main.Start.main(Start.java:15)

This is on the latest snapshot build with the CUDA 11.2 backend.

edit: it seems I'm not the only one hitting this issue.

The error has changed with today's snapshot, but still no luck:

[main] WARN org.deeplearning4j.nn.layers.normalization.BatchNormalization - CuDNN BatchNormalization forward pass execution failed - falling back on built-in implementation
java.lang.IllegalArgumentException: Invalid size: cannot get size of dimension 3 for rank 3 NDArray (array shape: [57, 32, 246])
at org.nd4j.linalg.api.ndarray.BaseNDArray.size(BaseNDArray.java:4529)
at org.deeplearning4j.cuda.normalization.CudnnBatchNormalizationHelper.preOutput(CudnnBatchNormalizationHelper.java:265)
at org.deeplearning4j.nn.layers.normalization.BatchNormalization.preOutput(BatchNormalization.java:451)
at org.deeplearning4j.nn.layers.normalization.BatchNormalization.activate(BatchNormalization.java:393)
at org.deeplearning4j.nn.graph.vertex.impl.LayerVertex.doForward(LayerVertex.java:110)
at org.deeplearning4j.nn.graph.ComputationGraph.ffToLayerActivationsInWS(ComputationGraph.java:2134)
at org.deeplearning4j.nn.graph.ComputationGraph.computeGradientAndScore(ComputationGraph.java:1371)
at org.deeplearning4j.nn.graph.ComputationGraph.computeGradientAndScore(ComputationGraph.java:1340)
at org.deeplearning4j.optimize.solvers.BaseOptimizer.gradientAndScore(BaseOptimizer.java:174)
at org.deeplearning4j.optimize.solvers.StochasticGradientDescent.optimize(StochasticGradientDescent.java:61)
at org.deeplearning4j.optimize.Solver.optimize(Solver.java:52)
at org.deeplearning4j.nn.graph.ComputationGraph.fitHelper(ComputationGraph.java:1164)
at org.deeplearning4j.nn.graph.ComputationGraph.fit(ComputationGraph.java:1114)
at org.deeplearning4j.nn.graph.ComputationGraph.fit(ComputationGraph.java:1081)
at org.deeplearning4j.nn.graph.ComputationGraph.fit(ComputationGraph.java:1017)

I noticed that cuDNN only seems to support batch normalization on 4D and 5D tensors.

So does this mean that cuDNN-accelerated batch normalization is not possible for 1D CNN networks in Deeplearning4j?
If so, how does the fallback implementation handle rank 3, or is this another bug?
Keras can do batch normalization on 1D CNNs without constant reshaping, as the linked thread in my first post alludes to, so it must be possible.
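
For now I'm considering working around it by writing the stem with plain 2D convolutions and width-1 kernels, which is exactly the L x 1 x P trick the source comment above describes, so that activations stay rank 4 between layers and BatchNormalization takes its CNN path. A rough, untested sketch (the class name and the timesteps parameter are placeholders; input data would also need to be rank 4, i.e. [miniBatch, 483, L, 1]):

import org.deeplearning4j.nn.conf.ComputationGraphConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.BatchNormalization;
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;

public class Stem2DWorkaround {
    // timesteps is a placeholder for the real sequence length
    public static ComputationGraphConfiguration build(int timesteps) {
        return new NeuralNetConfiguration.Builder()
                .graphBuilder()
                .addInputs("input")
                // Treat the series explicitly as an L x 1 "image" with 483 channels
                .setInputTypes(InputType.convolutional(timesteps, 1, 483))
                .addLayer("stem-cnn1", new ConvolutionLayer.Builder()
                        .kernelSize(3, 1).stride(2, 1)   // same kernel/stride, height axis only
                        .nIn(483).nOut(32).build(), "input")
                .addLayer("stem-batch1", new BatchNormalization.Builder(false)
                        .decay(0.995).eps(0.001).nIn(32).nOut(32).build(), "stem-cnn1")
                .addLayer("stem-cnn2", new ConvolutionLayer.Builder()
                        .kernelSize(3, 1).stride(1, 1)
                        .nIn(32).nOut(32).build(), "stem-batch1")
                .addLayer("stem-batch2", new BatchNormalization.Builder(false)
                        .decay(0.995).eps(0.001).nIn(32).nOut(32).build(), "stem-cnn2")
                .setOutputs("stem-batch2")
                .build();
    }
}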