Hi everyone, I heard this is a good place to post questions. I am working on a new approach and will try to describe it below.
Right now I am working with standard feedforward MultiLayerNetworks, using dropout for regularization and building several noisy models this way (saving all seeds and the resulting models). After training, however, I want to do scoring (Evaluation) with some of the input features dropped out, rather than only using dropout during fit/optimization. The idea is to use Monte Carlo simulation to find important inputs (features) and correlations between inputs. I have a system with a limited number of examples and far too many features right now: n = 9000 examples and m = 1200 features.
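To make the intent concrete, here is a rough, untested sketch of one Monte Carlo scoring step, where "dropping" a feature just means zeroing its column in the test inputs before evaluation (the method name and the `dropped` indices are my own placeholders, not an existing API):

```java
import org.deeplearning4j.eval.Evaluation;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.api.ndarray.INDArray;

/** One MC scoring step: zero out the 'dropped' feature columns, then evaluate. */
static double scoreWithDroppedFeatures(MultiLayerNetwork model, INDArray features,
                                       INDArray labels, int[] dropped, int numClasses) {
    INDArray masked = features.dup();           // copy so the real test set is untouched
    for (int j : dropped) {
        masked.getColumn(j).assign(0);          // "drop" feature j by zeroing its column
    }
    INDArray out = model.output(masked, false); // inference mode: training dropout is off
    Evaluation eval = new Evaluation(numClasses);
    eval.eval(labels, out);
    return eval.f1();                           // cheap score used as the acceptance criterion
}
```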
I am looking through the documentation and source code (I have changed ND4J before and got a PCA contribution merged into the main repo in the past) and am having trouble seeing which API methods and classes would enable this in DL4J, which is a much bigger codebase. I looked through the Layer class, since it would probably be most efficient to alter the model in place: MC sampling involves making changes, then accepting/rejecting them based on my criteria, and the Evaluation itself is very cheap. I have also looked at the TransferLearning documentation, but I could not find anything there either. I don't want to do any refitting; I just want to implement my random walker, which disconnects/reconnects nodes with acceptance/rejection criteria based on evaluations.
Does anyone know a way to effectively disconnect nodes within layers after model.fit() training and before Evaluation? I'm assuming the capability is in the code somewhere, and I only need it on the native x86 CPU backend.
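The closest thing I have found so far is mutating the parameter arrays directly. Something like the sketch below is what I had in mind, assuming the DL4J convention that dense-layer weights are shaped [nIn x nOut] and that getParam() returns a live view into the flattened parameters (both true in the versions I've looked at); the method name and the greedy accept rule are just placeholders for my walker logic:

```java
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.api.ndarray.INDArray;

/** Propose disconnecting input feature j; keep the change only if it scores better. */
static boolean tryDisconnectInput(MultiLayerNetwork model, int j, double currentScore,
                                  INDArray features, INDArray labels, int numClasses) {
    // "0_W" is the first layer's weight matrix, shape [nIn x nOut] for dense layers.
    // getParam() returns a view into the flattened params, so edits hit the live model.
    INDArray w0 = model.getParam("0_W");
    INDArray savedRow = w0.getRow(j).dup();    // copy so a rejected move can be undone

    w0.getRow(j).assign(0);                    // disconnect: zero all of feature j's weights
    double proposed = scoreWithDroppedFeatures(model, features, labels,
                                               new int[0], numClasses);

    if (proposed < currentScore) {             // rejection criterion (placeholder for mine)
        w0.getRow(j).assign(savedRow);         // reconnect: restore the saved weights
        return false;
    }
    return true;
}
```

Is mutating the parameter views like this actually safe/supported, or is there an API I've missed that does this more cleanly?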