@huzpsb great, appreciated. Dense layers shouldn't be hard. Let me do a quick POC for you.
Looking at this first and just running it, I started with the following:
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

import java.io.File;

public class MLPOpExecution {
    public static void main(String... args) throws Exception {
        // Log every op the executioner runs, with result shapes and sample values
        Nd4j.getExecutioner().enableDebugMode(true);
        Nd4j.getExecutioner().enableVerboseMode(true);

        // Load the saved model (true = also load the updater state)
        MultiLayerNetwork multiLayerNetwork = MultiLayerNetwork.load(new File("mlp.zip"), true);

        // Push a random input through so each op in the forward pass gets logged
        INDArray rand = Nd4j.rand(1, 22);
        multiLayerNetwork.output(rand);
        System.out.println(multiLayerNetwork.summary());
    }
}
With verbose/debug mode enabled, this is what I got:
Executing op: [matmul]
About to get variable in execute output
node_1:0 result shape: [1, 30]; dtype: FLOAT; first values [0.301271, 0.0868343, -0.256554, 7.70856e-39, 0.438268, 0.0257094, -0.084022, 0.444107, -0.379782, 1.11918, -1.41936e-38, 0.617232, 0.200801, 0.589107, 0.287842, 0.503616, 8.49135e-39, 0.311303, 0.866285, -0.190678, -0.125231, -0.385527, -0.679998, -0.228666, 0.481709, 2.11147e-38, 0.764672, -0.839745, 0.0416826, -0.221541]
Executing op: [add]
About to get variable in execute output
node_1:0 result shape: [1, 30]; dtype: FLOAT; first values [0.38315, 0.0807286, -0.296927, -0.00299783, 0.454721, 0.0242642, -0.0802173, 0.503197, -0.407838, 1.1786, -1.60034e-18, 0.669206, 0.229167, 0.623709, 0.316148, 0.511799, -0.0030775, 0.434601, 0.880709, -0.227417, -0.115579, -0.390552, -0.6831, -0.172079, 0.545913, -0.00552176, 0.859666, -0.883162, 0.0380682, -0.167185]
Executing op: [matmul]
About to get variable in execute output
node_1:0 result shape: [1, 30]; dtype: FLOAT; first values [0.265153, 0.244696, 0.161446, 0.157242, -0.187973, -0.0313901, -0.388538, -0.040024, -0.226831, -2.85167e-38, -0.0510315, 0.35548, 0.372838, -0.158745, 0.314486, 0.138599, 0.455396, 0.269985, 0.477674, 0.407454, -0.257615, 0.574969, 0.456761, 0.208015, -0.108402, 0.655384, 0.406791, 0.118828, -0.386967, 0.436312]
Executing op: [add]
About to get variable in execute output
node_1:0 result shape: [1, 30]; dtype: FLOAT; first values [0.225367, 0.312929, 0.167147, 0.117386, -0.147852, -0.0765696, -0.438198, -0.0206834, -0.162802, -0.00561494, -0.113384, 0.470834, 0.449319, -0.118911, 0.31141, 0.155436, 0.452145, 0.362151, 0.610688, 0.501175, -0.223319, 0.569607, 0.412892, 0.189691, -0.154787, 0.599811, 0.43007, 0.109047, -0.293645, 0.51485]
Executing op: [matmul]
About to get variable in execute output
node_1:0 result shape: [1, 30]; dtype: FLOAT; first values [-0.0996243, 0.208537, 0.261221, 0.0982679, 0.0328799, 0.298155, -7.56417e-40, 0.0997858, 0.0696631, 0.101161, 0.230413, 0.341019, 0.0622007, 0.103668, 0.375, 0.0267858, 5.19955e-39, 0.125258, 1.09221e-38, 0.0244133, 0.103144, 0.267562, -0.0267185, 0.0209421, 0.0765266, 0.500674, 0.250974, 0.15208, 0.211295, 1.58391e-38]
Executing op: [add]
About to get variable in execute output
node_1:0 result shape: [1, 30]; dtype: FLOAT; first values [0.0261627, 0.160388, 0.266522, 0.113639, 0.0757348, 0.290564, -0.000198537, 0.161714, 0.153787, 0.0441523, 0.189115, 0.47859, 0.19817, 0.144805, 0.350771, 0.000570966, -0.0016779, 0.168824, -0.114423, 0.00561272, 0.218659, 0.290733, -0.0846675, 0.131817, -0.0589332, 0.531538, 0.243041, 0.246128, 0.250609, -3.67472e-20]
Executing op: [matmul]
About to get variable in execute output
node_1:0 result shape: [1, 3]; dtype: FLOAT; first values [-0.102108, -0.53706, 0.433646]
Executing op: [add]
About to get variable in execute output
node_1:0 result shape: [1, 3]; dtype: FLOAT; first values [-0.158008, -0.450346, 0.386856]
Executing op: [softmax]
About to get variable in execute output
node_1:0 result shape: [1, 3]; dtype: FLOAT; first values [0.28811, 0.215079, 0.49681]
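Reading the shapes off that log, mlp.zip looks like a 22 -> 30 -> 30 -> 30 -> 3 MLP with a softmax output. For reference, here's a rough sketch of how a network with those dimensions could be declared in DL4J (the hidden activations, loss function, and class name here are my guesses, not anything read out of mlp.zip):

import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class MLPSketch {
    public static void main(String... args) {
        // Layer sizes taken from the log: 22 in, three hidden layers of 30, 3 out.
        // IDENTITY for the hidden activations is a guess - the log shows no separate
        // activation op between the matmul/add pairs. MCXENT is likewise assumed;
        // the training loss isn't visible in an inference log.
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .list()
                .layer(new DenseLayer.Builder().nIn(22).nOut(30).activation(Activation.IDENTITY).build())
                .layer(new DenseLayer.Builder().nIn(30).nOut(30).activation(Activation.IDENTITY).build())
                .layer(new DenseLayer.Builder().nIn(30).nOut(30).activation(Activation.IDENTITY).build())
                .layer(new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .nIn(30).nOut(3).activation(Activation.SOFTMAX).build())
                .build();
        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();
        System.out.println(net.summary());
    }
}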
So the main ops are just matmul, add, and softmax: one matmul/add pair per dense layer, plus the final softmax.
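To make that concrete, here's a minimal sketch of the same forward pass written by hand against those three ND4J ops. The weights and biases are random placeholders, not the ones inside mlp.zip:

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.ops.transforms.Transforms;

public class DenseOpsByHand {
    // One dense layer = a matmul followed by an add, exactly the pairs in the log
    static INDArray dense(INDArray x, INDArray w, INDArray b) {
        return x.mmul(w).add(b);
    }

    public static void main(String... args) {
        INDArray x = Nd4j.rand(1, 22); // same input shape as the POC above

        // Random stand-ins for the real weights/biases stored in mlp.zip
        INDArray h = dense(x, Nd4j.rand(22, 30), Nd4j.rand(1, 30));
        h = dense(h, Nd4j.rand(30, 30), Nd4j.rand(1, 30));
        h = dense(h, Nd4j.rand(30, 30), Nd4j.rand(1, 30));
        INDArray logits = dense(h, Nd4j.rand(30, 3), Nd4j.rand(1, 3));

        // Final softmax, matching the last op in the log
        INDArray out = Transforms.softmax(logits);
        System.out.println(out); // shape [1, 3], values sum to 1
    }
}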
I’ll update this post with some flags for you in a bit.