Unsupported keras layer type UpSampling3D

Issue Description
I saw that DL4J supports UpSampling3D, but when I tried to load my .h5 model, the following error popped up:

Exception in thread "main" org.deeplearning4j.nn.modelimport.keras.exceptions.UnsupportedKerasConfigurationException: Unsupported keras layer type UpSampling3D. Please file an issue at https://github.com/eclipse/deeplearning4j/issues.
    at org.deeplearning4j.nn.modelimport.keras.utils.KerasLayerUtils.getKerasLayerFromConfig(KerasLayerUtils.java:334)
    at org.deeplearning4j.nn.modelimport.keras.KerasModel.prepareLayers(KerasModel.java:218)
    at org.deeplearning4j.nn.modelimport.keras.KerasModel.<init>(KerasModel.java:164)
    at org.deeplearning4j.nn.modelimport.keras.KerasModel.<init>(KerasModel.java:96)
    at org.deeplearning4j.nn.modelimport.keras.utils.KerasModelBuilder.buildModel(KerasModelBuilder.java:307)
    at org.deeplearning4j.nn.modelimport.keras.KerasModelImport.importKerasModelAndWeights(KerasModelImport.java:172)
    at org.deeplearning4j.examples.convolution.captcharecognition.FaultRecognition.main(FaultRecognition.java:90)

Version Information

  • Deeplearning4j version
  • Platform information (OS, etc.): AWS Linux
  • No CUDA

Additional Information
My simple U-Net model was built with the original Keras, not tf.keras.

from keras.models import *
from keras.layers import *
from keras.optimizers import *
from keras.callbacks import ModelCheckpoint, LearningRateScheduler
from keras import backend as keras
from keras import metrics

Try again with beta7.

Sorry for replying so late. I am now running the beta7 version, but I still have the same issue.

I am loading model as:

String fullModel = "Model.hdf5";
ComputationGraph model = KerasModelImport.importKerasModelAndWeights(fullModel);


<name>DeepLearning4j Examples Parent</name>
<description>Examples of training different data sets</description>
<!-- Change the nd4j.backend property to nd4j-cuda-10.0-platform, nd4j-cuda-10.1-platform or nd4j-cuda-10.2-platform to use CUDA GPUs -->
<!-- Scala binary version: DL4J's Spark and UI functionality are released with both Scala 2.10 and 2.11 support -->
<hadoop.version>2.2.0</hadoop.version>  <!-- Hadoop version used by Spark 1.6.3 and 2.2.1 (and likely others) -->

That is weird, we should be supporting that layer in beta7 as far as I can tell.
Can you share your model file, or a script to create an equivalent model, so we can debug the issue?

Sure, here is how I implemented it.

I construct a simple U-Net model with:

def unet(pretrained_weights = None, input_size = (128,128,128,1), DropoutRatio = 0.5):
    inputs = Input(input_size)

    conv1 = Conv3D(16, (3,3,3), activation = 'relu', padding = 'same')(inputs)  # kernel_initializer = 'he_normal'
    conv1 = Conv3D(16, (3,3,3), activation = 'relu', padding = 'same')(conv1)
    pool1 = MaxPooling3D(pool_size=(2, 2, 2))(conv1)

    conv2 = Conv3D(32, (3,3,3), activation = 'relu', padding = 'same')(pool1)
    conv2 = Conv3D(32, (3,3,3), activation = 'relu', padding = 'same')(conv2)
    pool2 = MaxPooling3D(pool_size=(2, 2, 2))(conv2)

    conv3 = Conv3D(64, (3,3,3), activation = 'relu', padding = 'same')(pool2)
    conv3 = Conv3D(64, (3,3,3), activation = 'relu', padding = 'same')(conv3)
    pool3 = MaxPooling3D(pool_size=(2, 2, 2))(conv3)

    conv4 = Conv3D(128, (3,3,3), activation = 'relu', padding = 'same')(pool3)
    conv4 = Conv3D(128, (3,3,3), activation = 'relu', padding = 'same')(conv4)

    merge5 = concatenate([UpSampling3D(size=(2,2,2))(conv4), conv3])
    conv5 = Conv3D(64, (3,3,3), activation = 'relu', padding = 'same')(merge5)
    conv5 = Conv3D(64, (3,3,3), activation = 'relu', padding = 'same')(conv5)

    merge6 = concatenate([UpSampling3D(size=(2,2,2))(conv5), conv2])
    conv6 = Conv3D(32, (3,3,3), activation = 'relu', padding = 'same')(merge6)
    conv6 = Conv3D(32, (3,3,3), activation = 'relu', padding = 'same')(conv6)

    merge7 = concatenate([UpSampling3D(size=(2,2,2))(conv6), conv1])
    conv7 = Conv3D(16, (3,3,3), activation = 'relu', padding = 'same')(merge7)
    conv7 = Conv3D(16, (3,3,3), activation = 'relu', padding = 'same')(conv7)

    conv8 = Conv3D(1, (1,1,1), activation = 'sigmoid')(conv7)

    model = Model(input = inputs, output = conv8)
    model.compile(optimizer = Adam(lr = 1e-4), loss = modified_crossentropy, metrics = ['accuracy'])
    return model

and build a model with:

import modelBuilder
model = modelBuilder.unet(input_size = input_size, DropoutRatio= DropoutRatio)

I saved my model with:

model_checkpoint = ModelCheckpoint('model_saved_to_disk.hdf5', monitor='loss', verbose=1, save_best_only=True)

Moving on to Deeplearning4j: my project is the original dl4j-examples. I ran

mvn clean install

to build the project successfully.

Then I used MultiDigitNumberRecognition.java as a template to create a new Java class, test.java, in the folder dl4j-examples/src/main/java/org/deeplearning4j/examples/convolution/captcharecognition.

I added the following code and tried to load my model first:

String downloadPath = "model_saved_to_disk.hdf5";
File cachedKerasFile = new File(downloadPath);
ComputationGraph model1 = KerasModelImport.importKerasModelAndWeights(cachedKerasFile.getAbsolutePath());

I am new to DL4J, so I may have missed some configuration or set something up incorrectly. I would appreciate it if you could point out any issue.

Any solutions? I also tried to load someone else's model that uses UpSampling3D, and it still didn't work.

Unfortunately I haven't had time yet to look into it.

Hi, I think I may have found out what's going on. It may be a bug: the class KerasLayerUtils.java forgot to include getLAYER_CLASS_NAME_ZERO_UPSAMPLING_3D().

Is there anything I can do to work around this, or to add getLAYER_CLASS_NAME_ZERO_UPSAMPLING_3D to the Java code myself?


Thank you very much for delving into the code yourself and finding the cause of the bug. I've created an issue to track the fix:

As a workaround, you should be able to register the layer as a custom layer (see https://deeplearning4j.konduit.ai/keras-import/custom-layers#keraslayer).

I haven’t tried the following code, but I think it should work:

KerasLayer.registerCustomLayer("UpSampling3D", KerasUpsampling3D.class);
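Putting the workaround together with the loading code from earlier in the thread, a minimal sketch might look like the following. This is untested; it assumes the model file name from above and that DL4J's existing KerasUpsampling3D class lives in the keras-import convolutional layers package.

```java
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.deeplearning4j.nn.modelimport.keras.KerasLayer;
import org.deeplearning4j.nn.modelimport.keras.KerasModelImport;
import org.deeplearning4j.nn.modelimport.keras.layers.convolutional.KerasUpsampling3D;

public class LoadUnetModel {
    public static void main(String[] args) throws Exception {
        // Map the Keras layer name "UpSampling3D" onto DL4J's existing
        // import class BEFORE loading the model, so the config parser
        // recognizes the layer instead of throwing
        // UnsupportedKerasConfigurationException.
        KerasLayer.registerCustomLayer("UpSampling3D", KerasUpsampling3D.class);

        // File name is an assumption, matching the ModelCheckpoint output above.
        String path = "model_saved_to_disk.hdf5";
        ComputationGraph model = KerasModelImport.importKerasModelAndWeights(path);
        System.out.println(model.summary());
    }
}
```

The registration must happen before the first import call; once the fix for KerasLayerUtils lands, the registerCustomLayer line should no longer be needed.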

Thank you so much, it works! I'm still exploring DL4J's features and will ask more questions later. :grinning: