Hi! I’m playing around with TransferLearning (for educational purposes only). I have trained a very simple ComputationGraph that detects horizontal and vertical lines in an NxM matrix. Now I want to create a second ComputationGraph that takes the exact output of the first model as its input and outputs the intersections (if any).
However, since an OutputLayer can’t be used as an input, I don’t know how to do that.
I also can’t just skip the OutputLayer: it is implicitly a DenseLayer, so even if I add IdentityLayers to intercept the output earlier, I won’t get exactly (1:1) the correct results to use as inputs for the new model.
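To make the 1:1 mismatch concrete: an OutputLayer applies its own learned dense transform plus an activation (e.g. softmax) on top of whatever feeds into it, so activations intercepted before it generally differ from the final model output. A minimal plain-Java sketch of that effect (no DL4J involved; all names and numbers here are mine, just for illustration):

```java
import java.util.Arrays;

public class OutputLayerDemo {
    // Softmax, as a typical OutputLayer activation applies
    static double[] softmax(double[] z) {
        double max = Arrays.stream(z).max().getAsDouble();
        double sum = 0.0;
        double[] out = new double[z.length];
        for (int i = 0; i < z.length; i++) {
            out[i] = Math.exp(z[i] - max);
            sum += out[i];
        }
        for (int i = 0; i < out.length; i++) out[i] /= sum;
        return out;
    }

    // The dense transform y = Wx + b that the OutputLayer implicitly performs
    static double[] dense(double[][] w, double[] b, double[] x) {
        double[] y = new double[w.length];
        for (int i = 0; i < w.length; i++) {
            y[i] = b[i];
            for (int j = 0; j < x.length; j++) y[i] += w[i][j] * x[j];
        }
        return y;
    }

    public static void main(String[] args) {
        double[] hidden = {0.2, 0.8};             // activations intercepted before the OutputLayer
        double[][] w = {{1.5, -0.3}, {0.1, 2.0}}; // learned (non-identity) output weights
        double[] b = {0.05, -0.1};
        double[] out = softmax(dense(w, b, hidden));
        // The final output differs from the intercepted activations:
        System.out.println(Arrays.toString(hidden) + " vs " + Arrays.toString(out));
    }
}
```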
I see two possibilities here:
1. Adding an IdentityLayer before the OutputLayer and creating a custom ActivationFunction that copies the previous layer’s n-th activation straight to the current layer’s n-th neuron.
2. Adding an IdentityLayer before the OutputLayer and somehow setting every n-th-to-n-th connection to 1.0 and all other connections to 0.0, then freezing those weights.
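The second possibility amounts to a dense layer whose weight matrix is the identity and whose bias is zero: with a linear activation it then forwards every activation unchanged, and "freezing" just means those weights are never updated during training. A plain-Java sketch of that check (no DL4J; the helper names are hypothetical):

```java
import java.util.Arrays;

public class IdentityDenseDemo {
    // y = Wx + b with a linear (identity) activation
    static double[] dense(double[][] w, double[] b, double[] x) {
        double[] y = new double[w.length];
        for (int i = 0; i < w.length; i++) {
            y[i] = b[i];
            for (int j = 0; j < x.length; j++) y[i] += w[i][j] * x[j];
        }
        return y;
    }

    // Frozen identity weights: 1.0 on the diagonal, 0.0 everywhere else
    static double[][] identity(int n) {
        double[][] w = new double[n][n];
        for (int i = 0; i < n; i++) w[i][i] = 1.0;
        return w;
    }

    public static void main(String[] args) {
        double[] activations = {0.2, 0.8, -0.3};
        double[] passedThrough = dense(identity(3), new double[3], activations);
        // Identity weights plus zero bias reproduce the input 1:1
        System.out.println(Arrays.equals(activations, passedThrough)); // prints "true"
    }
}
```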
Why can’t there be an OutputLayer that is not dense?