Questions on DL4J application

Hi, I’m building an application that mimics a multi-layer neural net, but is designed so that each layer is decoupled from the others.

As an example, assume we have a 2-layer NN with a convolution → dense layer structure. The standard way with the DL4J framework is to create a builder and declare the respective layers of the network. My application requires each layer to be created separately, with tensor values passed from the convolution layer to the dense layer.
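For reference, the builder-based approach I mean looks roughly like this (a minimal sketch; the layer sizes and input shape are just placeholders):

```java
// Minimal sketch of the standard builder approach (layer sizes / input shape are placeholders)
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.inputs.InputType;
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class BuilderApproach {
    public static void main(String[] args) {
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .seed(123)
                .list()
                .layer(new ConvolutionLayer.Builder(3, 3)          // convolution layer
                        .nIn(1).nOut(16)
                        .activation(Activation.RELU)
                        .build())
                .layer(new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)  // dense output layer
                        .nOut(10)
                        .activation(Activation.SOFTMAX)
                        .build())
                .setInputType(InputType.convolutional(28, 28, 1))   // placeholder input shape
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();
        // net.fit(trainingIterator);  // forward and backward passes are handled internally
    }
}
```

The whole network is declared and trained as one unit here, which is exactly the coupling I’m trying to avoid.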

I would like to ask whether the DL4J library exposes APIs at this level. In particular, I’m interested in using only the forward and back propagation logic of each module, without having to “build” the network as a whole.

Appreciate any advice here. Thanks.

Could you elaborate on this a bit? The point of a neural network is to create a set of layers that pass data from one layer to the next. If you don’t want a forward pass to propagate to another layer, wouldn’t that just be another network? I don’t quite get what you want to do here.

The typical use case for this is reinforcement learning, but you can start with external errors and see if that helps you:
https://github.com/deeplearning4j/oreilly-book-dl4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/misc/externalerrors/MultiLayerNetworkExternalErrors.java
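The gist of that example is roughly the following (sketch only; method signatures and package names shift a bit between DL4J/ND4J versions, so check it against the release you’re on):

```java
// Sketch of the external-errors pattern: forward pass, then backprop from an externally supplied error
import org.deeplearning4j.nn.gradient.Gradient;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.nn.workspace.LayerWorkspaceMgr;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.primitives.Pair;  // package differs across ND4J versions

public class ExternalErrorsSketch {
    static void trainStep(MultiLayerNetwork net, INDArray features, INDArray externalError, int iteration) {
        // Forward pass; don't clear the layer inputs, the backward pass needs them
        net.setInput(features);
        net.feedForward(true, false);

        // Backward pass driven by an error signal computed outside the network (e.g. from RL)
        Pair<Gradient, INDArray> backprop =
                net.backpropGradient(externalError, LayerWorkspaceMgr.noWorkspaces());
        Gradient gradient = backprop.getFirst();

        // Apply the updater and the parameter update manually
        int minibatch = (int) features.size(0);
        net.getUpdater().update(net, gradient, iteration, 0, minibatch, LayerWorkspaceMgr.noWorkspaces());
        net.params().subi(gradient.gradient());
    }
}
```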

@agibsonccc thanks for the follow-up.

I’m doing a research project where we have different model layers residing on different machines. In this case, we don’t want to treat the entire model as one large computational graph, but rather as separate entities with only message passing between them.

For clarity, assume we have a model of layers {1 … L}. What I’m hoping to achieve is that, for the forward pass, we simply compute the output of layer (L - 1) and feed it as input to layer L. As layers L and (L - 1) may be on different machines, we need to serialize and transmit the output from layer (L - 1) → L.
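Concretely, the forward hand-off I have in mind is something like this (just a sketch using ND4J’s byte serialization; the actual transport between machines is out of scope here):

```java
// Sketch of the forward hand-off: serialize layer (L - 1)'s activations and ship them to layer L
import java.io.IOException;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class ForwardHandoff {
    // On the machine holding layer (L - 1): serialize its output activations for transport
    static byte[] packActivations(INDArray activations) throws IOException {
        return Nd4j.toByteArray(activations);
    }

    // On the machine holding layer L: reconstruct the INDArray and use it as the layer's input
    static INDArray unpackActivations(byte[] message) throws IOException {
        return Nd4j.fromByteArray(message);
    }
}
```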

For back propagation, once the loss is computed, it is passed to layer L, which computes its own loss, weight, and bias derivatives. The loss derivative is then passed on to layer (L - 1) to carry the back propagation process further, until all layers of the model have been accounted for.
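And the backward hand-off per layer would be along these lines (again only a sketch; I’m assuming a per-layer backpropGradient-style call is available and that the layer still holds its forward-pass input):

```java
// Sketch of the backward hand-off: layer L backprops the error it received, keeps its own
// weight/bias gradients, and ships the input-gradient (epsilon) to layer (L - 1).
// Assumes the layer still holds its input from the forward pass; signatures vary by DL4J version.
import java.io.IOException;
import org.deeplearning4j.nn.api.Layer;
import org.deeplearning4j.nn.gradient.Gradient;
import org.deeplearning4j.nn.workspace.LayerWorkspaceMgr;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.primitives.Pair;  // package differs across ND4J versions

public class BackwardHandoff {
    static byte[] backpropAndPackEpsilon(Layer layerL, INDArray epsilonFromNextLayer) throws IOException {
        Pair<Gradient, INDArray> result =
                layerL.backpropGradient(epsilonFromNextLayer, LayerWorkspaceMgr.noWorkspaces());

        Gradient ownGradients = result.getFirst();        // weight/bias gradients, applied locally
        INDArray epsilonForPrevious = result.getSecond(); // dLoss/d(input of layer L)

        // ... apply ownGradients to layer L's parameters here (updater + params update) ...
        return Nd4j.toByteArray(epsilonForPrevious);      // ship back to the machine holding layer (L - 1)
    }
}
```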

Hope this helps clarify things. I’m looking to see if DL4J has the APIs needed to support this kind of model training process.

If the simpler core framework doesn’t, SameDiff should. It’s lower level.
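Something like this, as a rough sketch (op and method names from memory, so double-check them against the SameDiff docs for the version you’re on):

```java
// Rough SameDiff sketch: a single dense "layer" built from raw ops, no network builder involved
import java.util.Collections;
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.buffer.DataType;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class SameDiffSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();

        SDVariable in = sd.placeHolder("in", DataType.FLOAT, -1, 4);
        SDVariable w  = sd.var("w", Nd4j.rand(DataType.FLOAT, 4, 3));
        SDVariable b  = sd.var("b", Nd4j.zeros(DataType.FLOAT, 1, 3));
        SDVariable out = sd.nn().relu(in.mmul(w).add(b), 0.0);

        // Forward pass for one batch; gradients can likewise be requested explicitly
        // (e.g. sd.calculateGradients(...) in recent versions)
        INDArray input = Nd4j.rand(DataType.FLOAT, 2, 4);
        INDArray activations = out.eval(Collections.singletonMap("in", input));
        System.out.println(activations);
    }
}
```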

Generally, folks have used dl4j in the way I just mentioned, where you do the updates manually.
Someone also did federated learning with nd4j, which is similar:

When you say the layers themselves are on different machines, do you mean the parameters? There are different ways to interpret that; either way, you should be able to find a way to make it work.