Incremental training

Does DL4J support incremental training?

It depends on what you mean by incremental training. In principle, all SGD-based training is incremental, since you train your model on different batches.

You can always reload your model and continue training.
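
A rough sketch of what that can look like in DL4J, assuming the model is a `MultiLayerNetwork` that was saved with `ModelSerializer` (the class, method name, and file handling here are just illustrative):

```java
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.util.ModelSerializer;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

import java.io.File;
import java.io.IOException;

public class ContinueTraining {

    /** Loads a saved model, trains it further on new data, and saves it again. */
    public static MultiLayerNetwork continueTraining(File modelFile,
                                                     DataSetIterator newData,
                                                     int epochs) throws IOException {
        // 'true' also restores the updater state (e.g. Adam moments),
        // so training resumes where it left off instead of starting cold.
        MultiLayerNetwork net = ModelSerializer.restoreMultiLayerNetwork(modelFile, true);

        for (int i = 0; i < epochs; i++) {
            newData.reset();
            net.fit(newData);
        }

        // Persist the updated model, again keeping the updater state.
        ModelSerializer.writeModel(net, modelFile, true);
        return net;
    }
}
```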

Yes, exactly: load a trained model, add new training data, and train on top of it. I'm wondering if the model may encounter “catastrophic forgetting” and forget what it has learned previously.

That is always something that may happen when training a neural network - it doesn’t really matter which framework you are using for that.

In some cases it might even be a feature - for example, if you have data that changes over time and you need the trained network to forget about previous training.

If you want to ensure that it doesn't happen, all you have to do is train on a union of the old and new data.
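
For example, here is a minimal sketch of that idea, assuming your old and new examples are available as lists of `DataSet` objects (the class, method, and parameter names are made up for illustration):

```java
import org.deeplearning4j.datasets.iterator.impl.ListDataSetIterator;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.dataset.DataSet;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class UnionTraining {

    /** Trains on the union of old and new examples so the old data is not forgotten. */
    public static void trainOnUnion(MultiLayerNetwork net,
                                    List<DataSet> oldData,
                                    List<DataSet> newData,
                                    int batchSize,
                                    int epochs) {
        // Combine both sources into one list and shuffle it so each batch
        // mixes old and new examples.
        List<DataSet> union = new ArrayList<>(oldData);
        union.addAll(newData);
        Collections.shuffle(union);

        DataSetIterator iter = new ListDataSetIterator<>(union, batchSize);

        for (int i = 0; i < epochs; i++) {
            iter.reset();
            net.fit(iter);
        }
    }
}
```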

Is there an option to combine multiple models and make an inference? I am just trying out various options to avoid retraining when more data is added to the system.

There is a lot of research literature on that topic. But all of it depends on your actual use case.

Is there any specific option available in DL4J? Like loading the computation graphs from models m1 and m2 and combining them to make an inference. I understand it's quite tricky …just exploring options.

If you mean ensemble models, then no, there is no out-of-the-box way that I'd be aware of in DL4J. But it isn't something that would be hard to implement on your own. You basically load the models you want to use and run a (weighted) majority vote. The answer with the most votes wins.
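
A minimal sketch of such an ensemble, assuming the individual models are saved `MultiLayerNetwork`s and you run a simple unweighted vote over their argmax predictions (the class name and structure here are just illustrative):

```java
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.util.ModelSerializer;
import org.nd4j.linalg.api.ndarray.INDArray;

import java.io.File;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class MajorityVoteEnsemble {

    private final MultiLayerNetwork[] models;

    public MajorityVoteEnsemble(File... modelFiles) throws IOException {
        models = new MultiLayerNetwork[modelFiles.length];
        for (int i = 0; i < modelFiles.length; i++) {
            // Load each saved model; the updater state is not needed for inference.
            models[i] = ModelSerializer.restoreMultiLayerNetwork(modelFiles[i], false);
        }
    }

    /** Returns the class index that most models voted for (unweighted vote). */
    public int predict(INDArray features) {
        Map<Integer, Integer> votes = new HashMap<>();
        for (MultiLayerNetwork model : models) {
            INDArray probs = model.output(features, false); // class probabilities
            int predicted = probs.argMax(1).getInt(0);      // this model's vote
            votes.merge(predicted, 1, Integer::sum);
        }
        // Pick the class with the most votes.
        return votes.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .get().getKey();
    }
}
```

For a weighted vote you would add each model's weight instead of 1, or average the probability vectors before taking the argmax.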

You can make this as complicated as you want, e.g. you can train a model that decides which model should get an input and decide alone on the output, or build a whole hierarchy of things like that, or even combine that with other libraries. The sky is the limit - you are using Java after all.