Is it possible to binarize the code layer of a plain autoencoder (DenseLayer) or the output layer of a VariationalAutoencoder? My naive approach would be to regularize the activation [1], but I can’t see an obvious way to do that with DL4J. Am I missing something?
If activation regularization is not possible, are there any other suggestions on how I could extract a binary code from the autoencoder?
You can remove whatever layers you want and create neural networks from that.
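For instance, the TransferLearning API can strip layers off a trained net and attach new ones. A rough, untested sketch, where previousNet, the layer sizes, and the empty fine-tune configuration are all assumptions for illustration:

```java
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.nn.transferlearning.FineTuneConfiguration;
import org.deeplearning4j.nn.transferlearning.TransferLearning;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

// Assume 'previousNet' is a trained 784 -> 250 -> 784 autoencoder.
// Drop its decoder layer, keep the trained 784 -> 250 encoder weights,
// and attach a smaller code layer plus a fresh decoder.
MultiLayerNetwork deeperNet = new TransferLearning.Builder(previousNet)
        .fineTuneConfiguration(new FineTuneConfiguration.Builder().build())
        .removeLayersFromOutput(1)   // strip the 250 -> 784 decoder layer
        .addLayer(new DenseLayer.Builder().nIn(250).nOut(30)
                .activation(Activation.SIGMOID).build())   // new, smaller code layer
        .addLayer(new DenseLayer.Builder().nIn(30).nOut(250)
                .activation(Activation.RELU).build())
        .addLayer(new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
                .nIn(250).nOut(784).activation(Activation.SIGMOID).build())
        .build();
```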
For semantic hashing, I would look at using normal autoencoders trained end to end where you do:
autoencoder, autoencoder, … (repeat for however many encoding layers you need, typically 2 to 3)
then mirror that architecture in reverse for the decoder.
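Concretely, an end-to-end setup in DL4J might look roughly like this. The layer sizes (784 → 250 → 30 → 250 → 784), activations, and updater are only illustrative assumptions:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.learning.config.Adam;
import org.nd4j.linalg.lossfunctions.LossFunctions;

// Symmetric encoder/decoder trained end to end, with a sigmoid code layer
// so the codes can later be thresholded to 0/1.
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .seed(123)
        .updater(new Adam(1e-3))
        .list()
        .layer(0, new DenseLayer.Builder().nIn(784).nOut(250).activation(Activation.RELU).build())
        .layer(1, new DenseLayer.Builder().nIn(250).nOut(30).activation(Activation.SIGMOID).build()) // code layer
        .layer(2, new DenseLayer.Builder().nIn(30).nOut(250).activation(Activation.RELU).build())
        .layer(3, new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
                .nIn(250).nOut(784).activation(Activation.SIGMOID).build())
        .build();

MultiLayerNetwork net = new MultiLayerNetwork(conf);
net.init();
// reconstruction training: net.fit(features, features);
```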
“Binarizing” just comes down to a sigmoid threshold you set for 0/1. Sometimes semantic hashing just uses the floats with a nearest-neighbor lookup, with the weights as an index of sorts.
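Getting the binary codes out afterwards could then look like this (a sketch, assuming the network above with the sigmoid code layer at index 1 and a batch of inputs in features):

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.ops.transforms.Transforms;

// Pull out the code-layer activations and threshold them at 0.5.
INDArray code = net.activateSelectedLayers(0, 1, features); // float codes in [0, 1]
INDArray bits = Transforms.round(code);                      // 0.5 threshold -> 0/1 hash bits
```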
Thanks for this idea! I have a very rough sense of how this approach might work. However, my experiments with org.deeplearning4j.nn.conf.layers.AutoEncoder failed. I also don’t see where the transfer learning step would occur (I’m quite new to deep learning and to DL4J): given my first trained autoencoder network (e.g. for the MNISTAutoencoder, starting with 784 → 784), how could parts of this network be transferred to the next network iteration (e.g. 784 → 250 → 784)? Or did I completely misunderstand you?
Either way, I have done some more experiments with a custom BaseActivationFunction that activates only to -1 or 1 and uses TanhDerivative for the backprop step. The results look quite promising [1], so I’d consider my problem solved.
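A minimal sketch of one way such a “binary tanh” activation could be implemented (not the exact code used here: it computes the tanh derivative with element-wise ops instead of calling the TanhDerivative op directly, and the Pair import path differs between ND4J versions):

```java
import org.nd4j.common.primitives.Pair;
import org.nd4j.linalg.activations.BaseActivationFunction;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.ops.transforms.Transforms;

/**
 * Forward pass emits only -1 or 1 (the sign of the pre-activation),
 * while the backward pass uses the ordinary tanh derivative so
 * gradients still flow (straight-through style).
 */
public class ActivationBinaryTanh extends BaseActivationFunction {

    @Override
    public INDArray getActivation(INDArray in, boolean training) {
        // sign(x): -1 for negative, +1 for positive (exactly 0 maps to 0 here)
        return Transforms.sign(in, false);
    }

    @Override
    public Pair<INDArray, INDArray> backprop(INDArray in, INDArray epsilon) {
        assertShape(in, epsilon);
        // tanh'(x) = 1 - tanh(x)^2, element-wise, chained with the incoming epsilon
        INDArray tanh = Transforms.tanh(in, true);
        INDArray dLdz = tanh.muli(tanh).rsubi(1.0).muli(epsilon);
        return new Pair<>(dLdz, null);
    }

    @Override
    public String toString() {
        return "binarytanh";
    }
}
```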