Importing a Keras model into DL4J

I am new to ML and DL4J but I have prior experience in Java.

A colleague trained a model in Keras and saved it as an HDF5 file. I am trying to load it in Java and run inference on it.

I am loading the model using:
ComputationGraph cg = KerasModelImport.importKerasModelAndWeights(filepath,false);

I get an error on this line:
Exception in thread "main" org.deeplearning4j.nn.modelimport.keras.exceptions.UnsupportedKerasConfigurationException: Layer output_XM could not be mapped to Layer, Vertex, or InputPreProcessor. Please file an issue at https://github.com/eclipse/deeplearning4j/issues.

output_XM is a layer in the model, so I think the model is being read and the error occurs partway through parsing it. But I am not able to understand what this error means, and searching Google for the message hasn't turned up anything useful.

The same model has also been saved as a frozen pb file.
I try to load this pb file using SameDiff with the line
sd = SameDiff.importFrozenTF(file);

Now this throws the error:

Exception in thread "main" java.lang.RuntimeException: org.nd4j.shade.protobuf.InvalidProtocolBufferException: While parsing a protocol message, the input ended unexpectedly in the middle of a field. This could mean either that the input has been truncated or that an embedded message misreported its own length.

Is there a mistake in either (or both) of the ways I am trying to load the model? If so, can someone point me to the right way to load it, or help me understand the error messages so that I can find the correct approach myself?

Thanks
Rey

@reydv that looks like an invalid file. Could you show me how you generated the Keras file?
Layer output_XM - is this a custom layer? If so, we can import those, but there is a bit of manual work.

Regarding importFrozenTF, please use the new framework:

That method is deprecated.
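For reference, here is a rough sketch of the newer samediff model-import framework. I'm assuming your ND4J version exposes org.nd4j.samediff.frameworkimport.tensorflow.importer.TensorflowFrameworkImporter with a runImport method, so check the exact class name and signature against the version you're on:

import java.util.Collections;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.samediff.frameworkimport.tensorflow.importer.TensorflowFrameworkImporter;

public class FrozenGraphImportSketch {
    public static void main(String[] args) {
        // Assumed API of the newer import framework; verify against your ND4J version.
        TensorflowFrameworkImporter importer = new TensorflowFrameworkImporter();
        // Second argument is a map of placeholder arrays; empty here since none are needed up front.
        SameDiff sd = importer.runImport("/path/to/frozen_model.pb", Collections.emptyMap());
        System.out.println(sd.summary());
    }
}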

Either way I’d appreciate a view of the graph using netron.app if you could. That or feel free to DM me the model so I can look at it in a viewer.

Thanks for your quick response. I will try out the newer method to load the frozen pb model file.

And yes, output_XM is a custom layer that computes a custom loss function.

I am sharing a graph of the model generated from the h5 file by netron.app (thanks also for the suggestion to generate a graph of the model). The graph from the frozen pb model file looks strange, and I am trying to understand it before sharing it here.

Thanks
Rey
[Image: output_xm_model_graph]

@reydv then yes, you'll need to either make that a standard loss function in Keras or implement the custom import yourself. Here's an example of how to do that:
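Roughly, you register a handler for the unknown layer before calling the importer. Below is a minimal sketch assuming output_XM can be treated as a Keras Lambda-style layer handled by KerasLayer.registerLambdaLayer; if it is a subclassed Keras layer instead, you'd register a KerasLayer subclass with KerasLayer.registerCustomLayer. The class below and the math inside it are placeholders, not your actual output_XM logic:

import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.samediff.SameDiffLambdaLayer;
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.deeplearning4j.nn.modelimport.keras.KerasLayer;
import org.deeplearning4j.nn.modelimport.keras.KerasModelImport;
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;

public class OutputXmImport {

    // Placeholder layer: the real defineLayer body must reproduce whatever
    // output_XM computes in the original Keras model.
    public static class OutputXmLambdaLayer extends SameDiffLambdaLayer {
        @Override
        public SDVariable defineLayer(SameDiff sd, SDVariable layerInput) {
            return sd.identity(layerInput); // stand-in for the real output_XM computation
        }

        @Override
        public InputType getOutputType(int layerIndex, InputType inputType) {
            return inputType; // adjust if output_XM changes the shape
        }
    }

    public static void main(String[] args) throws Exception {
        String filepath = args[0]; // path to the .h5 file

        // Register under the name the layer has in the HDF5 config, then import as before.
        KerasLayer.registerLambdaLayer("output_XM", new OutputXmLambdaLayer());
        ComputationGraph cg = KerasModelImport.importKerasModelAndWeights(filepath, false);
        System.out.println(cg.summary());
    }
}

The key point is that the registration has to happen before importKerasModelAndWeights runs, and the name passed to the register call has to match the layer name as it appears in the HDF5 config.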

Please feel free to ask follow-up questions.

Thanks for your time and for the pointer to an example implementation of a custom loss function.

I have a couple more questions:

  1. Can we subclass model or layer? Is there an example of this?

  2. Is it possible to load the weights and biases (from a previously trained model) separately for each layer after building a model?

Thanks
Rey

@reydv Yes, subclass a SameDiff layer and register it. SameDiff is a lower-level, TensorFlow/PyTorch-like API that lets you define things at the op level. Follow the example I gave you.
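To give a feel for the op-level style, here's a tiny self-contained SameDiff snippet. It is purely illustrative (a mean-squared-error style expression); the variable names and the math are placeholders, not your output_XM logic:

import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.buffer.DataType;
import org.nd4j.linalg.factory.Nd4j;

public class SameDiffOpLevelSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();

        // Two example arrays standing in for network output and labels.
        SDVariable predictions = sd.var("predictions", Nd4j.rand(DataType.FLOAT, 4, 10));
        SDVariable labels = sd.constant("labels", Nd4j.rand(DataType.FLOAT, 4, 10));

        // A simple mean-squared-error style expression built from individual ops.
        SDVariable diff = predictions.sub(labels);
        SDVariable loss = diff.mul(diff).mean();

        System.out.println(loss.eval()); // evaluates the graph and prints the scalar result
    }
}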

Your question number 2: could you tell me your goal rather than what you think the solution to your problem is? That sounds very specific, which usually means there is something else you're actually trying to achieve.