Basic deeplearning4j classification example

Hi All!

Need a very simple example for a jump-start in Java: Basic deeplearning4j classification example - Stack Overflow. If somebody can share one, it'd really save me some time :slight_smile:

What’s wrong with the example that Susan posted? Did you have a more specific problem with it? If you have arrays of floats, just turn them into an INDArray using putScalar(), like in the example, or by using Nd4j.create(double[][]).
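For example, a minimal sketch of both approaches (the shapes and values are made up for illustration):

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class FloatsToINDArray {
    public static void main(String[] args) {
        // Two examples with four features each (made-up numbers).
        double[][] data = {
                {0.1, 0.2, 0.3, 0.4},
                {0.5, 0.6, 0.7, 0.8}
        };

        // Option 1: build the INDArray in one call.
        INDArray features = Nd4j.create(data);   // shape [2, 4]

        // Option 2: fill it element by element with putScalar().
        INDArray features2 = Nd4j.zeros(2, 4);
        for (int row = 0; row < data.length; row++) {
            for (int col = 0; col < data[row].length; col++) {
                features2.putScalar(new int[]{row, col}, data[row][col]);
            }
        }

        System.out.println(features);
    }
}
```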

To transform the output of a neural network back into a double array, just call .toDoubleMatrix() or .toDoubleVector() on the result.
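A sketch of that direction too, assuming you already have a trained MultiLayerNetwork (the helper name predict is hypothetical):

```java
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.api.ndarray.INDArray;

public class PredictExample {
    // Hypothetical helper: 'model' is assumed to be an already-trained network.
    public static double[][] predict(MultiLayerNetwork model, INDArray features) {
        INDArray output = model.output(features);  // one row of class scores per input example
        return output.toDoubleMatrix();            // back to plain Java doubles
        // For a single example you can instead use:
        // double[] scores = output.getRow(0).toDoubleVector();
    }
}
```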

Hi! It’s totally fine, thank you very much! I’ve already started working on my project based on her example.

I had posted this message here before Susan replied on Stack Overflow, but for some reason my post only appeared now.

@eduardo I continued working on my model that is based on Susan’s example and tried to use the GPU for training, but I can’t say it became dramatically faster. GPU load was 2-3%, sometimes 5%, and memory utilization was 1.8/6 GB on a network like this with a ~10k-example training set:

=======================================================================
LayerName (LayerType)   nIn,nOut   TotalParams   ParamsShape           
=======================================================================
layer0 (DenseLayer)     52,208     11,024        W:{52,208}, b:{1,208} 
layer1 (DenseLayer)     208,104    21,736        W:{208,104}, b:{1,104}
layer2 (OutputLayer)    104,42     4,410         W:{104,42}, b:{1,42}  
-----------------------------------------------------------------------
            Total Parameters:  37,170
        Trainable Parameters:  37,170
           Frozen Parameters:  0
=======================================================================

If you can give me a quick hint on where to start looking for mistakes, it would be great :).

It is quite normal for small networks like that to be slower on the GPU, since it spends most of its time coordinating thousands of cores rather than doing the computation. If you use a neural network like YOLO, which has over 60,000,000 parameters (and much more computationally intensive convolution operations), on a much larger data set, you’ll see the GPU outperform the CPU.

To see the difference with your data, try a large minibatch size of 1024 or 4096 (as much as fits in your GPU memory) and try doubling or tripling the width of the dense layers. I can’t guarantee that such an over-parameterized network will be any good, but it should show the performance difference.
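Something like this sketch, for example. The widths here just quadruple your current layers and the Adam learning rate is an arbitrary placeholder, so don’t read anything into the exact numbers:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.learning.config.Adam;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class WideNetBenchmark {
    public static void main(String[] args) {
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .updater(new Adam(1e-3))   // placeholder learning rate
                .list()
                // 52 inputs and 42 classes as in your summary; widths scaled 208 -> 832, 104 -> 416.
                .layer(new DenseLayer.Builder().nIn(52).nOut(832)
                        .activation(Activation.RELU).build())
                .layer(new DenseLayer.Builder().nIn(832).nOut(416)
                        .activation(Activation.RELU).build())
                .layer(new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                        .activation(Activation.SOFTMAX).nIn(416).nOut(42).build())
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();
        System.out.println(net.summary());

        // Then build your DataSetIterator with a batch size of 1024-4096, e.g.:
        // new RecordReaderDataSetIterator(recordReader, 4096, labelIndex, numClasses)
    }
}
```

With a batch that large, each iteration pushes far more parallel work to the GPU at once, which is where it starts to pull ahead of the CPU.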