Computing gradient without backpropagation

Is it possible to compute a gradient using the DL4J API without backpropagation?
For my problem, only the gradient at the output layer is needed for a given input to the network; I don’t have any loss defined.
Or is this only solvable with raw SameDiff layers?

@valb3r could you clarify the use case? Are you using RL or something like that? If so, ExternalErrors would be the way to go.
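
Roughly, the external-errors flow looks like this (just a sketch from memory; the names, shapes and the net variable are placeholders, and the external errors example in the dl4j-examples repo has the full pattern):

    // Assumes "net" is an already initialised ComputationGraph with a single
    // 3-dimensional input and a single 3-dimensional output.
    INDArray input = Nd4j.rand(32, 3);

    // Forward pass with train = true so activations are kept for the backward pass
    net.feedForward(new INDArray[]{input}, true);

    // Instead of an OutputLayer turning a loss function into dL/dOutput,
    // you supply that error signal (epsilon) yourself, one array per network output:
    INDArray externalEpsilon = Nd4j.rand(32, 3);
    Gradient gradient = net.backpropGradient(externalEpsilon);

    // gradient.gradientForVariable() now holds dError/dParam for every parameter;
    // apply it through the updater or inspect it directly. Depending on the version
    // you may also need to adjust the workspace settings for this to work.

That way no loss function ever has to be defined on the graph itself.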

@agibsonccc I’m implementing the so-called Implicit Differentiable Rendering (IDR) pipeline in Deeplearning4j (paper: https://arxiv.org/pdf/2003.09852.pdf, “Multiview Neural Surface Reconstruction by Disentangling Geometry and Appearance”, which also has a Python reference implementation). In short, it consists of:

  1. An ordinary feed-forward dense network for the Signed Distance Function (SDF) (the implicit-sdf-* layers)
  2. A special one-layer network representing the differentiable rendering step, which takes as inputs the gradient of the 1st network and its feed-forward value (the SDF of a point) (SdfDifferentiableRenderVertex; see the formula below)
  3. An ordinary feed-forward dense network for the lighting conditions and material (the appearance-* layers)
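
For context, the differentiable ray-surface intersection in the paper is (roughly, paraphrasing its Lemma 1):

    x(θ) = c + t0·v - ( f(c + t0·v; θ) / ( ∇f(x0; θ0) · v ) ) · v

where c is the camera center, v is the ray direction, t0 is the current intersection depth, and the SDF gradient ∇f(x0; θ0) is treated as a constant, i.e. a detached input. That detached gradient is exactly the quantity I need to extract from the 1st network.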

So my problem is to get the gradient of the 1st network (evaluated at the input point) propagated as a detached input into the 2nd network, but there is no loss value available for this setup. Currently I’ve ended up with something like this:

        NeuralNetConfiguration.Builder builder = new NeuralNetConfiguration.Builder()
                .updater(new RmsProp(LEARNING_RATE))
                .weightInit(WeightInit.XAVIER)
                .activation(Activation.TANH);

        ComputationGraphConfiguration.GraphBuilder graphBuilder = builder.graphBuilder()
                .backpropType(BackpropType.Standard)
                .addInputs("ray-trace-point")
                //.setInputTypes(feedForward(3))
                .appendLayer("implicit-sdf-0", new DenseLayer.Builder().activation(Activation.TANH).nIn(3).nOut(10).build())
                .appendLayer("implicit-sdf-1", new DenseLayer.Builder().activation(Activation.TANH).nIn(10).nOut(10).build())
                .appendLayer("implicit-sdf", new DenseLayer.Builder().nIn(10).nOut(3).build())
                .appendLayer("implicit-sdf-output", new OutputLayer.Builder().activation(Activation.IDENTITY).lossFunction(LossFunctions.LossFunction.L1).nIn(3).nOut(3).build())
                .addInputs("idr-sdf-gradient")
                .addInputs("idr-center")
                .addInputs("idr-ray")
                .addInputs("idr-t0")
                .addVertex(
                        "idr-render",
                        new SdfDifferentiableRenderVertex(),
                        "implicit-sdf",
                        "idr-sdf-gradient",
                        "idr-center",
                        "idr-ray",
                        "idr-t0"
                )
                .appendLayer("appearance-0", new DenseLayer.Builder().activation(Activation.TANH).nIn(3).nOut(10).build())
                .appendLayer("appearance-1", new DenseLayer.Builder().activation(Activation.TANH).nIn(10).nOut(1).build())
                .appendLayer("appearance-output",
                        new OutputLayer.Builder()
                                .activation(new ActivationTanH())
                                .nIn(1)
                                .nOut(1)
                                .lossFunction(new LossL2())
                                .build()
                )
                .setOutputs("implicit-sdf-output", "appearance-output");

        this.model = new ComputationGraph(graphBuilder.build());
        this.model.init();

but I do not see a way to get the gradient of implicit-sdf-output without having a loss.

> If so, ExternalErrors would be the way to go.

Thanks, will check what is possible there.


@valb3r hmm… then that might have to be done with pure SameDiff instead. You’ll have a lot of overhead with multiple SameDiff vertices. If you do go this route, make sure you try M1.1: there was a regression in M2 for training that will be fixed in the next release.
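
Something along these lines might work as a starting point (an untested sketch; every name and shape below is a placeholder rather than something from your pipeline). The idea is to mark the SDF output itself as the loss variable, so SameDiff will build a gradient function even though there is no real loss yet:

    SameDiff sd = SameDiff.create();
    SDVariable point = sd.placeHolder("point", DataType.FLOAT, -1, 3);

    // Tiny stand-in for the implicit-sdf MLP
    SDVariable w0 = sd.var("w0", Nd4j.rand(3, 10));
    SDVariable b0 = sd.var("b0", Nd4j.zeros(1, 10));
    SDVariable h0 = sd.math().tanh(point.mmul(w0).add(b0));
    SDVariable w1 = sd.var("w1", Nd4j.rand(10, 1));
    SDVariable b1 = sd.var("b1", Nd4j.zeros(1, 1));
    SDVariable sdf = h0.mmul(w1).add(b1);

    // Treat the summed SDF output as the "loss" so that gradients can be requested
    // even though no real loss function exists at this stage
    SDVariable sdfSum = sdf.sum("sdfSum");
    sd.setLossVariables("sdfSum");

    INDArray pts = Nd4j.rand(8, 3);
    Map<String, INDArray> grads =
            sd.calculateGradients(Collections.singletonMap("point", pts), "point");
    INDArray dSdfDPoint = grads.get("point");   // one gradient row per input point

    // dSdfDPoint can then be fed into the render/appearance part of the graph as an
    // ordinary placeholder, which effectively gives the "detached" gradient input.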

Ok, thanks, will try SameDiff