Cannot infer input type for reshape array

Hi!

I am trying to import a Keras model created in Python. I am working with 3D convolutional layers, which requires reshaping the dense layer output into a 5-dimensional shape; however, this does not seem to be allowed by the reshape preprocessor class?

The exception is:

Exception in thread "main" java.lang.UnsupportedOperationException: Cannot infer input type for reshape array [0, 60, 1, 3, 4]

The error is thrown in the following method in the ReshapePreprocessor class:

@Override
public InputType getOutputType(InputType inputType) throws InvalidInputTypeException {
    long[] shape = getShape(this.targetShape, 0);
    InputType ret;
    switch (shape.length) {
        case 2:
            ret = InputType.feedForward(shape[1]);
            break;
        case 3:
            RNNFormat format = RNNFormat.NCW;
            if (this.format != null && this.format instanceof RNNFormat)
                format = (RNNFormat) this.format;

            ret = InputType.recurrent(shape[2], shape[1], format);
            break;
        case 4:
            if (inputShape.length == 1 || inputType.getType() == InputType.Type.RNN) {
                ret = InputType.convolutional(shape[1], shape[2], shape[3]);
            } else {
                CNN2DFormat cnnFormat = CNN2DFormat.NCHW;
                if (this.format != null && this.format instanceof CNN2DFormat)
                    cnnFormat = (CNN2DFormat) this.format;

                if (cnnFormat == CNN2DFormat.NCHW) {
                    ret = InputType.convolutional(shape[2], shape[3], shape[1], cnnFormat);
                } else {
                    ret = InputType.convolutional(shape[1], shape[2], shape[3], cnnFormat);
                }
            }
            break;
        default:
            throw new UnsupportedOperationException(
                    "Cannot infer input type for reshape array " + Arrays.toString(shape));
    }
    return ret;
}
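To make the failure concrete, here is a minimal, self-contained stand-in for that rank dispatch (plain Java, no DL4J types; the string labels are just placeholders for the real InputType objects): the switch only covers ranks 2 through 4, so the 5-element target shape from a Conv3D model falls through to the default branch and throws.

```java
import java.util.Arrays;

public class ReshapeRankDemo {
    // Mimics the rank dispatch in ReshapePreprocessor.getOutputType:
    // only shape ranks 2-4 are handled.
    public static String inferOutputType(long[] shape) {
        switch (shape.length) {
            case 2:
                return "FF(" + shape[1] + ")";
            case 3:
                return "RNN(" + shape[2] + "," + shape[1] + ")";
            case 4:
                return "CNN(" + shape[2] + "," + shape[3] + "," + shape[1] + ")";
            default:
                // A 5-element shape (batch + depth/height/width/channels) lands here
                throw new UnsupportedOperationException(
                        "Cannot infer input type for reshape array " + Arrays.toString(shape));
        }
    }

    public static void main(String[] args) {
        System.out.println(inferOutputType(new long[]{0, 720}));   // rank 2 works
        try {
            inferOutputType(new long[]{0, 60, 1, 3, 4});           // rank 5 throws
        } catch (UnsupportedOperationException e) {
            System.out.println(e.getMessage());
        }
    }
}
```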

@Acander have you tried with snapshots? I believe this has been fixed. See: Snapshots - Deeplearning4j

Hi! I tried with snapshots, but the problem does not seem to have been fixed :confused: Please advise.

Can you provide us with a self-contained reproduction project?

If we don’t have to spend a lot of time to reproduce the exact problem you are running into, we can spend more time on fixing the bug :slight_smile:


Actually…yeah we don’t support rank 5 in the execution, but we do in the import.

@Acander I added an issue: Support CNN3D in ReshapePreprocessor · Issue #9288 · eclipse/deeplearning4j · GitHub
I won't be able to get to this until after Golden Week in Japan is over (Thursday). If you want to try to implement the necessary change yourself, most of the machinery should already be there.
Anything more than me approving a pull request or clicking "build" with some configuration will have to wait until after the holiday, though.


Okay, I understand. Thank you for your answer! I don't dare implement the change myself, but I think I can wait until you get around to it.

Again, thanks for everything!

@Acander Ag add cnn 3d input preprocessor by agibsonccc · Pull Request #9291 · eclipse/deeplearning4j · GitHub

Hi @agibsonccc! Thanks for updating the code. I do however get another error now (only including the part of the stack trace involving dl4j classes):

Exception in thread "main" java.lang.IllegalArgumentException: Illegal format found null
at org.deeplearning4j.nn.modelimport.keras.preprocessors.ReshapePreprocessor.getOutputType(ReshapePreprocessor.java:179)
at org.deeplearning4j.nn.modelimport.keras.layers.core.KerasReshape.getOutputType(KerasReshape.java:183)
at org.deeplearning4j.nn.modelimport.keras.KerasModel.inferOutputTypes(KerasModel.java:473)
at org.deeplearning4j.nn.modelimport.keras.KerasSequentialModel.&lt;init&gt;(KerasSequentialModel.java:148)
at org.deeplearning4j.nn.modelimport.keras.KerasSequentialModel.&lt;init&gt;(KerasSequentialModel.java:57)
at org.deeplearning4j.nn.modelimport.keras.utils.KerasModelBuilder.buildSequential(KerasModelBuilder.java:326)
at org.deeplearning4j.nn.modelimport.keras.KerasModelImport.importKerasSequentialModelAndWeights(KerasModelImport.java:296)

It seems to be related to the ReshapePreprocessor class; specifically, the DataFormat format variable seems to be set to null via the constructor in the KerasReshape class (line 182). Have you not accommodated the Convolution3D data format in the KerasReshape class? Or have I done something wrong?

@Acander actually we do here: deeplearning4j/KerasReshape.java at c715aea405980eb043420af86c671032bbb78ec6 · eclipse/deeplearning4j · GitHub

Could you DM me your model just to be sure?

{"class_name": "Sequential", "config": {"name": "sequential_1", "layers": [{"class_name": "InputLayer", "config": {"batch_input_shape": [null, 32], "dtype": "float32", "sparse": false, "ragged": false, "name": "input_2"}}, {"class_name": "Dense", "config": {"name": "dense_4", "trainable": true, "dtype": "float32", "units": 720, "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "GlorotUniform", "config": {"seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}}, {"class_name": "LeakyReLU", "config": {"name": "leaky_re_lu", "trainable": true, "dtype": "float32", "alpha": 0.10000000149011612}}, {"class_name": "BatchNormalization", "config": {"name": "batch_normalization_3", "trainable": true, "dtype": "float32", "axis": [1], "momentum": 0.99, "epsilon": 0.001, "center": true, "scale": true, "beta_initializer": {"class_name": "Zeros", "config": {}}, "gamma_initializer": {"class_name": "Ones", "config": {}}, "moving_mean_initializer": {"class_name": "Zeros", "config": {}}, "moving_variance_initializer": {"class_name": "Ones", "config": {}}, "beta_regularizer": null, "gamma_regularizer": null, "beta_constraint": null, "gamma_constraint": null}}, {"class_name": "Dropout", "config": {"name": "dropout_1", "trainable": true, "dtype": "float32", "rate": 0.2, "noise_shape": null, "seed": null}}, {"class_name": "Reshape", "config": {"name": "reshape", "trainable": true, "dtype": "float32", "target_shape": [60, 1, 3, 4]}}, {"class_name": "Conv3D", "config": {"name": "conv3d_4", "trainable": true, "dtype": "float32", "filters": 256, "kernel_size": [3, 3, 3], "strides": [1, 1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1, 1], "groups": 1, "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "GlorotUniform", "config": {"seed": null}}, 
"bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}}, {"class_name": "LeakyReLU", "config": {"name": "leaky_re_lu_1", "trainable": true, "dtype": "float32", "alpha": 0.10000000149011612}}, {"class_name": "UpSampling3D", "config": {"name": "up_sampling3d", "trainable": true, "dtype": "float32", "size": [2, 2, 2], "data_format": "channels_last"}}, {"class_name": "Conv3D", "config": {"name": "conv3d_5", "trainable": true, "dtype": "float32", "filters": 128, "kernel_size": [3, 3, 3], "strides": [1, 1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1, 1], "groups": 1, "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "GlorotUniform", "config": {"seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}}, {"class_name": "LeakyReLU", "config": {"name": "leaky_re_lu_2", "trainable": true, "dtype": "float32", "alpha": 0.10000000149011612}}, {"class_name": "UpSampling3D", "config": {"name": "up_sampling3d_1", "trainable": true, "dtype": "float32", "size": [2, 2, 2], "data_format": "channels_last"}}, {"class_name": "Conv3D", "config": {"name": "conv3d_6", "trainable": true, "dtype": "float32", "filters": 16, "kernel_size": [3, 3, 3], "strides": [1, 1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1, 1], "groups": 1, "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "GlorotUniform", "config": {"seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}}, {"class_name": "LeakyReLU", "config": {"name": "leaky_re_lu_3", 
"trainable": true, "dtype": "float32", "alpha": 0.10000000149011612}}, {"class_name": "UpSampling3D", "config": {"name": "up_sampling3d_2", "trainable": true, "dtype": "float32", "size": [2, 2, 2], "data_format": "channels_last"}}, {"class_name": "Conv3D", "config": {"name": "conv3d_7", "trainable": true, "dtype": "float32", "filters": 8, "kernel_size": [3, 3, 3], "strides": [1, 1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1, 1], "groups": 1, "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "GlorotUniform", "config": {"seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}}, {"class_name": "LeakyReLU", "config": {"name": "leaky_re_lu_4", "trainable": true, "dtype": "float32", "alpha": 0.10000000149011612}}, {"class_name": "Conv3D", "config": {"name": "conv3d_8", "trainable": true, "dtype": "float32", "filters": 1, "kernel_size": [3, 3, 3], "strides": [1, 1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1, 1], "groups": 1, "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "GlorotUniform", "config": {"seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}}]}, "keras_version": "2.4.0", "backend": "tensorflow"}

Could you put the content above in a JSON file and open it in a web browser? I think that representation is the clearest. I cannot send you the original JSON file since the DM does not allow it. I am trying to implement the model described in this paper: https://arxiv.org/pdf/1903.04144.pdf

@Acander fixed here: Add default value for cnn3d keras import channel layout by agibsonccc · Pull Request #9305 · eclipse/deeplearning4j · GitHub
Do you mind if I add the model to our test cases?


No, I don’t mind. You can do that!

I might be going off topic now, but I am currently trying to import the Keras model with KerasModelImport.importKerasSequentialModelAndWeights(model_config, model_weights), but I get the error below:

java.lang.NoSuchMethodException: org.deeplearning4j.nn.layers.mkldnn.MKLDNNBatchNormHelper.&lt;init&gt;(java.lang.Class, org.nd4j.linalg.api.buffer.DataType)
at java.base/java.lang.Class.getConstructor0(Class.java:3349)
at java.base/java.lang.Class.getDeclaredConstructor(Class.java:2553)
at org.deeplearning4j.common.config.DL4JClassLoading.createNewInstance(DL4JClassLoading.java:103)
at org.deeplearning4j.common.config.DL4JClassLoading.createNewInstance(DL4JClassLoading.java:89)
at org.deeplearning4j.common.config.DL4JClassLoading.createNewInstance(DL4JClassLoading.java:74)
at org.deeplearning4j.nn.layers.HelperUtils.createHelper(HelperUtils.java:93)
at org.deeplearning4j.nn.layers.normalization.BatchNormalization.initializeHelper(BatchNormalization.java:74)
at org.deeplearning4j.nn.layers.normalization.BatchNormalization.&lt;init&gt;(BatchNormalization.java:70)
at org.deeplearning4j.nn.conf.layers.BatchNormalization.instantiate(BatchNormalization.java:94)
at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.init(MultiLayerNetwork.java:714)
at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.init(MultiLayerNetwork.java:604)
at org.deeplearning4j.nn.modelimport.keras.KerasSequentialModel.getMultiLayerNetwork(KerasSequentialModel.java:260)
at org.deeplearning4j.nn.modelimport.keras.KerasSequentialModel.getMultiLayerNetwork(KerasSequentialModel.java:249)
at org.deeplearning4j.nn.modelimport.keras.KerasModelImport.importKerasSequentialModelAndWeights(KerasModelImport.java:297)
at MLSCM.d4jMavenTest.SCMGenerator.&lt;init&gt;(SCMGenerator.java:55)
at MLSCM.d4jMavenTest.SCMGenerator.main(SCMGenerator.java:89)
Exception in thread "main" java.lang.RuntimeException: java.lang.NoSuchMethodException: org.deeplearning4j.nn.layers.mkldnn.MKLDNNBatchNormHelper.&lt;init&gt;(java.lang.Class, org.nd4j.linalg.api.buffer.DataType)
at org.deeplearning4j.common.config.DL4JClassLoading.createNewInstance(DL4JClassLoading.java:108)
at org.deeplearning4j.common.config.DL4JClassLoading.createNewInstance(DL4JClassLoading.java:89)
at org.deeplearning4j.common.config.DL4JClassLoading.createNewInstance(DL4JClassLoading.java:74)
at org.deeplearning4j.nn.layers.HelperUtils.createHelper(HelperUtils.java:93)
at org.deeplearning4j.nn.layers.normalization.BatchNormalization.initializeHelper(BatchNormalization.java:74)
at org.deeplearning4j.nn.layers.normalization.BatchNormalization.&lt;init&gt;(BatchNormalization.java:70)
at org.deeplearning4j.nn.conf.layers.BatchNormalization.instantiate(BatchNormalization.java:94)
at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.init(MultiLayerNetwork.java:714)
at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.init(MultiLayerNetwork.java:604)
at org.deeplearning4j.nn.modelimport.keras.KerasSequentialModel.getMultiLayerNetwork(KerasSequentialModel.java:260)
at org.deeplearning4j.nn.modelimport.keras.KerasSequentialModel.getMultiLayerNetwork(KerasSequentialModel.java:249)
at org.deeplearning4j.nn.modelimport.keras.KerasModelImport.importKerasSequentialModelAndWeights(KerasModelImport.java:297)
at MLSCM.d4jMavenTest.SCMGenerator.&lt;init&gt;(SCMGenerator.java:55)
at MLSCM.d4jMavenTest.SCMGenerator.main(SCMGenerator.java:89)
Caused by: java.lang.NoSuchMethodException: org.deeplearning4j.nn.layers.mkldnn.MKLDNNBatchNormHelper.&lt;init&gt;(java.lang.Class, org.nd4j.linalg.api.buffer.DataType)
at java.base/java.lang.Class.getConstructor0(Class.java:3349)
at java.base/java.lang.Class.getDeclaredConstructor(Class.java:2553)
at org.deeplearning4j.common.config.DL4JClassLoading.createNewInstance(DL4JClassLoading.java:103)
… 13 more

Does it have to do with the following?

10:09:51.157 [main] WARN org.deeplearning4j.nn.modelimport.keras.layers.normalization.KerasBatchNormalization - Warning: batch normalization axis 1
DL4J currently picks batch norm dimensions for you, according to industry-standard conventions. If your results do not match, please file an issue.

Could you import the model?

@Acander model imported fine. It’s something related to using onednn, I’ll have to investigate. Should be an easy fix though.
It’s related to this:

If you want a workaround, go ahead and just run mvn clean install -DskipTests on deeplearning4j-nn and comment that out.
Just make sure the helper stays null. It just attempts to use onednn/mkldnn. What it should do is fall back if it fails to load. That specific error you're seeing is a reflection error, though. I'll audit all the helpers to make sure they have the normal methods set up.
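For what it's worth, the reflection error itself can be reproduced in a few lines of plain Java: Class.getDeclaredConstructor throws NoSuchMethodException whenever the exact parameter list asked for was never declared, which is what DL4JClassLoading runs into here. The Helper class below is a made-up stand-in, not the real MKLDNNBatchNormHelper.

```java
public class ReflectionDemo {
    // Stand-in helper: only a (String) constructor is declared.
    public static class Helper {
        public Helper(String dataType) { }
    }

    public static String tryLookup() {
        try {
            // Ask for a (Class, String) constructor that was never declared,
            // mirroring the (Class, DataType) lookup in the stack trace above.
            Helper.class.getDeclaredConstructor(Class.class, String.class);
            return "found";
        } catch (NoSuchMethodException e) {
            // Same failure mode as DL4JClassLoading.createNewInstance
            return "NoSuchMethodException: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(tryLookup());
    }
}
```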

I’ll also add a JVM argument that allows all helpers to be disabled via a system property.

I’ll implement a real fix tomorrow.

Hi, again!
I see the code imported through maven is slightly different from what you displayed above. Have you implemented your fix? I am unable to make any changes to the code and I still get the same error. Am I importing the model incorrectly? Could you show me your code for that?

    if ("CUDA".equalsIgnoreCase(backend)) {
        helper = DL4JClassLoading.createNewInstance(
                "org.deeplearning4j.cuda.normalization.CudnnBatchNormalizationHelper",
                BatchNormalizationHelper.class,
                dataType);
        log.debug("CudnnBatchNormalizationHelper successfully initialized");
    } else if ("CPU".equalsIgnoreCase(backend)){
        helper = new MKLDNNBatchNormHelper(dataType);
        log.trace("Created MKLDNNBatchNormHelper, layer {}", layerConf().getLayerName());
    }

    if (helper != null && !helper.checkSupported(layerConf().getEps(), layerConf().isLockGammaBeta())) {
        log.debug("Removed helper {} as not supported with epsilon {}, lockGammaBeta={}", helper.getClass(), layerConf().getEps(), layerConf().isLockGammaBeta());
        helper = null;
    }

The full output stack trace (I don't think I added it above):

21:24:06.237 [main] WARN org.deeplearning4j.nn.modelimport.keras.layers.normalization.KerasBatchNormalization - Warning: batch normalization axis 1
DL4J currently picks batch norm dimensions for you, according to industry-standard conventions. If your results do not match, please file an issue.
21:24:06.502 [main] INFO org.nd4j.linalg.factory.Nd4jBackend - Loaded [CpuBackend] backend
21:24:06.511 [main] ERROR org.nd4j.common.config.ND4JClassLoading - Cannot find class [org.nd4j.linalg.jblas.JblasBackend] of provided class-loader.
21:24:06.512 [main] ERROR org.nd4j.common.config.ND4JClassLoading - Cannot find class [org.canova.api.io.data.DoubleWritable] of provided class-loader.
21:24:06.514 [main] ERROR org.nd4j.common.config.ND4JClassLoading - Cannot find class [org.nd4j.linalg.jblas.JblasBackend] of provided class-loader.
21:24:06.515 [main] ERROR org.nd4j.common.config.ND4JClassLoading - Cannot find class [org.canova.api.io.data.DoubleWritable] of provided class-loader.
21:24:08.092 [main] INFO org.nd4j.nativeblas.NativeOpsHolder - Number of threads used for linear algebra: 4
21:24:08.095 [main] INFO org.nd4j.linalg.cpu.nativecpu.CpuNDArrayFactory - Binary level Generic x86 optimization level AVX/AVX2
21:24:08.131 [main] INFO org.nd4j.nativeblas.Nd4jBlas - Number of threads used for OpenMP BLAS: 4
21:24:08.155 [main] INFO org.nd4j.linalg.api.ops.executioner.DefaultOpExecutioner - Backend used: [CPU]; OS: [Windows 10]
21:24:08.155 [main] INFO org.nd4j.linalg.api.ops.executioner.DefaultOpExecutioner - Cores: [8]; Memory: [4,0GB];
21:24:08.155 [main] INFO org.nd4j.linalg.api.ops.executioner.DefaultOpExecutioner - Blas vendor: [OPENBLAS]
21:24:08.162 [main] INFO org.nd4j.linalg.cpu.nativecpu.CpuBackend - Backend build information:
GCC: “10.2.0”
STD version: 201103L
DEFAULT_ENGINE: samediff::ENGINE_CPU
HAVE_FLATBUFFERS
HAVE_OPENBLAS
21:24:08.417 [main] INFO org.deeplearning4j.nn.multilayer.MultiLayerNetwork - Starting MultiLayerNetwork with WorkspaceModes set to [training: ENABLED; inference: ENABLED], cacheMode set to [NONE]
21:24:08.461 [main] ERROR org.deeplearning4j.common.config.DL4JClassLoading - Cannot create instance of class 'org.deeplearning4j.nn.layers.mkldnn.MKLDNNBatchNormHelper'.
java.lang.NoSuchMethodException: org.deeplearning4j.nn.layers.mkldnn.MKLDNNBatchNormHelper.&lt;init&gt;(java.lang.Class, org.nd4j.linalg.api.buffer.DataType)
at java.base/java.lang.Class.getConstructor0(Class.java:3349)
at java.base/java.lang.Class.getDeclaredConstructor(Class.java:2553)
at org.deeplearning4j.common.config.DL4JClassLoading.createNewInstance(DL4JClassLoading.java:103)
at org.deeplearning4j.common.config.DL4JClassLoading.createNewInstance(DL4JClassLoading.java:89)
at org.deeplearning4j.common.config.DL4JClassLoading.createNewInstance(DL4JClassLoading.java:74)
at org.deeplearning4j.nn.layers.HelperUtils.createHelper(HelperUtils.java:93)
at org.deeplearning4j.nn.layers.normalization.BatchNormalization.initializeHelper(BatchNormalization.java:74)
at org.deeplearning4j.nn.layers.normalization.BatchNormalization.&lt;init&gt;(BatchNormalization.java:70)
at org.deeplearning4j.nn.conf.layers.BatchNormalization.instantiate(BatchNormalization.java:94)
at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.init(MultiLayerNetwork.java:714)
at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.init(MultiLayerNetwork.java:604)
at org.deeplearning4j.nn.modelimport.keras.KerasSequentialModel.getMultiLayerNetwork(KerasSequentialModel.java:260)
at org.deeplearning4j.nn.modelimport.keras.KerasSequentialModel.getMultiLayerNetwork(KerasSequentialModel.java:249)
at org.deeplearning4j.nn.modelimport.keras.KerasModelImport.importKerasSequentialModelAndWeights(KerasModelImport.java:297)
at MLSCM.d4jTesting.SCMGenerator.&lt;init&gt;(SCMGenerator.java:55)
at MLSCM.d4jTesting.SCMGenerator.main(SCMGenerator.java:89)
Exception in thread "main" java.lang.RuntimeException: java.lang.NoSuchMethodException: org.deeplearning4j.nn.layers.mkldnn.MKLDNNBatchNormHelper.&lt;init&gt;(java.lang.Class, org.nd4j.linalg.api.buffer.DataType)
at org.deeplearning4j.common.config.DL4JClassLoading.createNewInstance(DL4JClassLoading.java:108)
at org.deeplearning4j.common.config.DL4JClassLoading.createNewInstance(DL4JClassLoading.java:89)
at org.deeplearning4j.common.config.DL4JClassLoading.createNewInstance(DL4JClassLoading.java:74)
at org.deeplearning4j.nn.layers.HelperUtils.createHelper(HelperUtils.java:93)
at org.deeplearning4j.nn.layers.normalization.BatchNormalization.initializeHelper(BatchNormalization.java:74)
at org.deeplearning4j.nn.layers.normalization.BatchNormalization.&lt;init&gt;(BatchNormalization.java:70)
at org.deeplearning4j.nn.conf.layers.BatchNormalization.instantiate(BatchNormalization.java:94)
at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.init(MultiLayerNetwork.java:714)
at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.init(MultiLayerNetwork.java:604)
at org.deeplearning4j.nn.modelimport.keras.KerasSequentialModel.getMultiLayerNetwork(KerasSequentialModel.java:260)
at org.deeplearning4j.nn.modelimport.keras.KerasSequentialModel.getMultiLayerNetwork(KerasSequentialModel.java:249)
at org.deeplearning4j.nn.modelimport.keras.KerasModelImport.importKerasSequentialModelAndWeights(KerasModelImport.java:297)
at MLSCM.d4jTesting.SCMGenerator.&lt;init&gt;(SCMGenerator.java:55)
at MLSCM.d4jTesting.SCMGenerator.main(SCMGenerator.java:89)
Caused by: java.lang.NoSuchMethodException: org.deeplearning4j.nn.layers.mkldnn.MKLDNNBatchNormHelper.&lt;init&gt;(java.lang.Class, org.nd4j.linalg.api.buffer.DataType)
at java.base/java.lang.Class.getConstructor0(Class.java:3349)
at java.base/java.lang.Class.getDeclaredConstructor(Class.java:2553)
at org.deeplearning4j.common.config.DL4JClassLoading.createNewInstance(DL4JClassLoading.java:103)
… 13 more

@Acander make sure to update your dependencies by running mvn -U $YOUR_COMMAND.
We already merged the fix for this:

You can see the tests working as follows:

The layers now look like this:

In my comment above, I'm not sure what was confusing, but let me clear up a few things:

  1. I told you to use a workaround by commenting out the creation of the helper. The helper is optional and just meant to be an accelerator for people to use. I tried to describe the nature of the error: it's a reflection error in Java; the constructor wasn't present. We added tests for that, so it shouldn't be an issue now. The workaround I gave you just commented out creating the helper so you could get past the invalid constructor.

  2. Again, this has nothing to do with model import. Model import just parses a Keras file and then tries to instantiate the equivalent configuration in DL4J. The part that fails here is not the model import; it's a completely unrelated issue, as described above. I'm sorry if I wasn't clear about that.

Hopefully that helps explain the fix as well as the nature of the issue. Let me know if you have any other issues with the updates or otherwise.

Hi @agibsonccc! I have updated Maven many times now, but I still get the same error. The merged fix you mentioned seems to involve the MKLDNNLSTMHelper class, but from reading the error above the problem seems to involve the MKLDNNBatchNormHelper. I am sorry if I seem rude! I don't mean to be. It could of course be some kind of import error on my end.

@Acander if this happens again, do you mind installing from master? For this specific issue, just run:

git clone https://github.com/eclipse/deeplearning4j
cd deeplearning4j/deeplearning4j-nn && mvn clean install -DskipTests

Snapshots may not have updated. I believe it’s fixed now, but do that just in case.
In order to check this for yourself, go here:
https://oss.sonatype.org/content/repositories/snapshots/org/deeplearning4j/deeplearning4j-nn/1.0.0-SNAPSHOT/
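For anyone reading along: if your pom.xml does not already have the snapshot repository enabled, the usual setup (a sketch, assuming the standard Sonatype coordinates from the snapshots page; check the official docs for the authoritative version) looks roughly like:

```
<repositories>
  <repository>
    <id>sonatype-nexus-snapshots</id>
    <url>https://oss.sonatype.org/content/repositories/snapshots</url>
    <releases><enabled>false</enabled></releases>
    <snapshots><enabled>true</enabled></snapshots>
  </repository>
</repositories>
<!-- then depend on the -SNAPSHOT version, e.g. 1.0.0-SNAPSHOT for deeplearning4j-nn -->
```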

This is the specific module where your problem is occurring; you can browse snapshots for any artifact there by groupId/artifactId/version.

Generally, scroll down to the bottom to see when the last snapshots were updated.
Cross check this with when the pull request was merged:

And no, you're not being rude at all! We're trying to get your problem solved and I'd like to help. Good luck!