Querying MultiLayerNetwork for configuration properties

Hi everyone,
I recently started using deeplearning4j and was wondering whether it is possible to ask a MultiLayerNetwork object, e.g., what the activation function of the output layer is or which loss function is applied.

I want to know the activation function of a network to determine what the output range is.

I am using beta7.

Thank you for helping me get started with dl4j :slight_smile:,
David

If you created the network, wouldn’t you know? Post some more context.

The reason I want to check this is primarily to avoid bugs, e.g. if I change the code at some point, but forget that I am using an old network and get weird bugs that will probably be hard to debug. The project is in an early development phase and so such changes might happen and I want to save me the hassle.

Another reason is that some of the people I work with on the project are beginners, so I want the code to be robust to wrong inputs.

The question then, of course, is: how can I know if I didn’t create the network myself, or don’t remember anymore…

This can change between network types, but here is an example. I just used autocomplete to figure this out; try that if you get stuck and need to make some tweaks.

    String s = net.getOutputLayer().conf().toString();
    String l = net.getOutputLayer().toString();
    System.out.println(l);
    System.out.println(s);

output
org.deeplearning4j.nn.layers.OutputLayer{conf=NeuralNetConfiguration(layer=OutputLayer(super=BaseOutputLayer(super=FeedForwardLayer(super=BaseLayer(activationFn=identity, weightInitFn=org.deeplearning4j.nn.weights.WeightInitXavier@1, biasInit=0.0, gainInit=1.0, regularization=, regularizationBias=, iUpdater=Nesterovs(learningRate=0.01, learningRateSchedule=null, momentum=0.5, momentumISchedule=null, momentumSchedule=null), biasUpdater=null, weightNoise=null, gradientNormalization=None, gradientNormalizationThreshold=1.0), nIn=50, nOut=1), lossFn=LossL2(), hasBias=true)), miniBatch=true, maxNumLineSearchIterations=5, seed=12345, optimizationAlgo=STOCHASTIC_GRADIENT_DESCENT, variables=[W, b], stepFunction=null, minimize=true, cacheMode=NONE, dataType=FLOAT, iterationCount=0, epochCount=0), score=0.0, optimizer=null, listeners=[ScoreIterationListener(5)]}
NeuralNetConfiguration(layer=OutputLayer(super=BaseOutputLayer(super=FeedForwardLayer(super=BaseLayer(activationFn=identity, weightInitFn=org.deeplearning4j.nn.weights.WeightInitXavier@1, biasInit=0.0, gainInit=1.0, regularization=, regularizationBias=, iUpdater=Nesterovs(learningRate=0.01, learningRateSchedule=null, momentum=0.5, momentumISchedule=null, momentumSchedule=null), biasUpdater=null, weightNoise=null, gradientNormalization=None, gradientNormalizationThreshold=1.0), nIn=50, nOut=1), lossFn=LossL2(), hasBias=true)), miniBatch=true, maxNumLineSearchIterations=5, seed=12345, optimizationAlgo=STOCHASTIC_GRADIENT_DESCENT, variables=[W, b], stepFunction=null, minimize=true, cacheMode=NONE, dataType=FLOAT, iterationCount=0, epochCount=0)
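
You can already spot the fields you asked about in that dump: activationFn=identity and lossFn=LossL2().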

Ok, thank you. I didn’t think of parsing the toString output. Not exactly user-friendly, but fine. Thanks again.
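
If string parsing ever gets too brittle, the configuration objects should also expose typed getters you can query directly. Here is a minimal sketch, assuming the 1.0.0-beta7 API; the casts to BaseLayer / BaseOutputLayer reflect my understanding of where those getters live, so check with autocomplete for your particular layer types:

    import org.deeplearning4j.nn.conf.layers.BaseLayer;
    import org.deeplearning4j.nn.conf.layers.BaseOutputLayer;
    import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
    import org.nd4j.linalg.activations.IActivation;
    import org.nd4j.linalg.lossfunctions.ILossFunction;

    public class InspectNet {
        static void printOutputLayerConfig(MultiLayerNetwork net) {
            // Same configuration object whose toString() is dumped above
            org.deeplearning4j.nn.conf.layers.Layer layerConf =
                    net.getOutputLayer().conf().getLayer();

            // Feed-forward layer configs extend BaseLayer, which holds the activation function
            if (layerConf instanceof BaseLayer) {
                IActivation activation = ((BaseLayer) layerConf).getActivationFn();
                System.out.println("Activation: " + activation);
            }
            // Output layer configs extend BaseOutputLayer, which holds the loss function
            if (layerConf instanceof BaseOutputLayer) {
                ILossFunction loss = ((BaseOutputLayer) layerConf).getLossFn();
                System.out.println("Loss: " + loss);
            }
        }
    }

For the network above this should print the identity activation and LossL2(), matching the toString dump, without any string parsing.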