Issues with modifying the source code

Hi all, for my research I need to split the forward and backward phases of training into two separate functions, whereas in DL4J both are done in one function (i.e. fit()). Can anyone give me some advice on how to achieve this? I sincerely appreciate your help!

@fubuki you can either try SameDiff (the lower-level API with more control) or use DL4J's external errors to do your own backpropagation. In that case, you can call the gradient functions on your own.
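To give you a feel for the external-errors route, here is a minimal sketch with a MultiLayerNetwork, loosely following the pattern in the DL4J test examples. The tiny two-layer config, the sizes, and the random "error" array are placeholders I made up for illustration:

```java
import java.util.List;

import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.gradient.Gradient;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.nn.workspace.LayerWorkspaceMgr;
import org.nd4j.common.primitives.Pair;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.learning.config.Adam;

public class ExternalErrorsSketch {
    public static void main(String[] args) {
        int nIn = 4, nOut = 2, minibatch = 32;

        // Placeholder network: no output/loss layer, since the error comes from outside
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
            .updater(new Adam(1e-3))
            .list()
            .layer(new DenseLayer.Builder().nIn(nIn).nOut(8).activation(Activation.TANH).build())
            .layer(new DenseLayer.Builder().nIn(8).nOut(nOut).activation(Activation.IDENTITY).build())
            .build();
        MultiLayerNetwork model = new MultiLayerNetwork(conf);
        model.init();

        // --- Forward phase, as its own step ---
        INDArray input = Nd4j.rand(minibatch, nIn);
        model.setInput(input);
        // train=true, clearInputs=false: keep the activations around for the backward pass
        List<INDArray> activations = model.feedForward(true, false);

        // --- Backward phase, driven by an error you computed yourself ---
        INDArray externalError = Nd4j.rand(minibatch, nOut);   // stand-in for dL/dOutput
        Pair<Gradient, INDArray> p = model.backpropGradient(externalError, null);

        // Apply the updater (learning rate, Adam state, etc.), then update the parameters
        Gradient gradient = p.getFirst();
        model.getUpdater().update(model, gradient, 0 /*iteration*/, 0 /*epoch*/,
                minibatch, LayerWorkspaceMgr.noWorkspaces());
        model.params().subi(gradient.gradient());
    }
}
```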

For the DL4J API, a complete test example can be found here:

Thank you for your advice! For the first solution, do you mean that I can only achieve my idea using the SameDiff API, without using DL4J?

@fubuki generally SameDiff is meant for lower-level control of graphs, similar to TensorFlow/PyTorch. You can look at:

Take a look at our quickstart as well:

It’s up to you which route to go; SameDiff is superseding DL4J as the main API long term, though. DL4J’s ComputationGraph and MultiLayerNetwork do have this functionality built in via external errors, as mentioned above, and SameDiff is capable of external errors as well.
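If you want a feel for the SameDiff style before picking a route, here is a toy sketch (the graph, names, and shapes are made up) that builds a small graph and runs only the forward pass, much like you would in TensorFlow/PyTorch:

```java
import java.util.Collections;
import java.util.Map;

import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.buffer.DataType;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class SameDiffForwardSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();

        // Toy graph: out = softmax(in * w + b); shapes are arbitrary
        SDVariable in = sd.placeHolder("in", DataType.FLOAT, -1, 4);
        SDVariable w  = sd.var("w", Nd4j.rand(DataType.FLOAT, 4, 2));
        SDVariable b  = sd.var("b", Nd4j.zeros(DataType.FLOAT, 1, 2));
        SDVariable out = sd.nn().softmax("out", in.mmul(w).add(b));

        // Forward pass only: feed the placeholder, ask for "out"
        INDArray input = Nd4j.rand(DataType.FLOAT, 3, 4);
        Map<String, INDArray> result =
                sd.output(Collections.singletonMap("in", input), "out");
        System.out.println(result.get("out"));
    }
}
```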

You may find more here:

I got it. Thank you for your guidance!

Hi, after referring to the GitHub link you shared, I am still confused about the backward process in SameDiff. Generally, the backward process uses the forward output to perform backpropagation and then updates the weights of the model using the gradient. I am not sure which function / step is the one that updates the weights of the model.

@fubuki in that case you would use external errors in combination with the fit function. You can also use calculateGradients. After specifying external errors as in the examples listed, you can set up a training configuration just as in the other training examples; see the sketch below.
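On your question about which step actually updates the weights: fit() applies the gradients through the updater you set in the TrainingConfig, while calculateGradients() only computes them and leaves the update to you. A rough sketch (toy graph and names again made up; the manual step at the end is plain SGD, just to show where the update happens):

```java
import java.util.HashMap;
import java.util.Map;

import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.autodiff.samediff.TrainingConfig;
import org.nd4j.linalg.api.buffer.DataType;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.learning.config.Adam;

public class SameDiffGradientsSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();
        SDVariable in    = sd.placeHolder("in", DataType.FLOAT, -1, 4);
        SDVariable label = sd.placeHolder("label", DataType.FLOAT, -1, 2);
        SDVariable w = sd.var("w", Nd4j.rand(DataType.FLOAT, 4, 2));
        SDVariable b = sd.var("b", Nd4j.zeros(DataType.FLOAT, 1, 2));
        SDVariable out = sd.nn().softmax("out", in.mmul(w).add(b));
        sd.loss().logLoss("loss", label, out);
        sd.setLossVariables("loss");

        // fit() would update the weights via this updater; here we only set it up
        sd.setTrainingConfig(new TrainingConfig.Builder()
                .updater(new Adam(1e-3))
                .dataSetFeatureMapping("in")
                .dataSetLabelMapping("label")
                .build());

        Map<String, INDArray> placeholders = new HashMap<>();
        placeholders.put("in", Nd4j.rand(DataType.FLOAT, 8, 4));
        placeholders.put("label", Nd4j.rand(DataType.FLOAT, 8, 2));

        // Backward phase only: compute gradients, no weights are touched yet
        Map<String, INDArray> grads = sd.calculateGradients(placeholders, "w", "b");

        // The actual weight update, done by hand here as plain SGD
        double lr = 1e-2;
        w.getArr().subi(grads.get("w").mul(lr));
        b.getArr().subi(grads.get("b").mul(lr));
    }
}
```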

I will try it out. Thank you for your help!