Build Custom Loss Function

I am building a regression model with dense layers. The provided MSE loss function does not accurately reflect the main concerns of my problem domain. In addition to MSE, I would like to penalize the model more heavily when it fails to predict values beyond a certain threshold. So the real loss function would be a weighted average of MSE and a penalty for getting the prediction wrong when the actual value is greater than a threshold. What's the best starting place for implementing a custom loss function? I looked at the ILossFunction interface, and it looks like one has to supply the gradient computation rather than just supplying the expression of the loss function and letting numerical gradient computation take over. I would appreciate any useful pointers. Thanks.
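To make that concrete, what I have in mind is roughly the following, where alpha is a mixing weight and t is the threshold (both just placeholders I would tune):

loss = alpha * mean((y - yhat)^2) + (1 - alpha) * mean([y > t] * (y - yhat)^2)

Here [y > t] is 1 when the actual value exceeds the threshold and 0 otherwise, so errors on large actual values receive the extra penalty.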

You are right that custom loss functions currently require implementations of both the forward pass and the backward pass.

But nothing stops you from using SameDiff to do that; for examples of general usage, see https://github.com/eclipse/deeplearning4j-examples/tree/master/nd4j-examples/src/main/java/org/nd4j/samediff and https://github.com/eclipse/deeplearning4j-examples/tree/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/samediff.

I know SameDiff isn't as well documented as we'd like it to be, but it's worth a look if you don't feel like implementing the backward pass yourself.

Paul,

Thanks for the pointers.

Take a look at SameDiffLoss.

You only have to implement:

public abstract SDVariable defineLoss(SameDiff sd, SDVariable layerInput, SDVariable labels);

and the gradient calculation is autodiff'd for you via SameDiff and wrapped up so you can use it in DL4J. It's very helpful, as it does all the heavy lifting for you.
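For the loss described above, a minimal sketch might look like the following. The class name is just illustrative, alpha and threshold are placeholder values you would tune, and the exact import paths may differ slightly depending on your DL4J/ND4J version:

import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.lossfunctions.SameDiffLoss;

public class ThresholdWeightedMseLoss extends SameDiffLoss {

    // Placeholder hyperparameters -- tune these for your domain
    private final double alpha = 0.7;       // weight of the plain MSE term
    private final double threshold = 10.0;  // actual values above this get the extra penalty

    @Override
    public SDVariable defineLoss(SameDiff sd, SDVariable layerInput, SDVariable labels) {
        // Per-output squared error
        SDVariable diff = layerInput.sub(labels);
        SDVariable sqErr = diff.mul(diff);

        // Standard per-example MSE (mean over the output dimension)
        SDVariable mse = sqErr.mean(1);

        // Mask that is 1.0 where the actual value exceeds the threshold, 0.0 elsewhere
        SDVariable mask = labels.gt(threshold).castTo(labels.dataType());

        // Squared error counted only where the label is above the threshold
        SDVariable penalized = sqErr.mul(mask).mean(1);

        // Weighted combination, one loss value per example;
        // SameDiff differentiates this for the backward pass
        return mse.mul(alpha).add(penalized.mul(1.0 - alpha));
    }
}

You could then plug it into your network configuration with something like .lossFunction(new ThresholdWeightedMseLoss()) on the output layer builder.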