Can I use an AMD or Nvidia graphics card to speed up MNIST training and recognition?

Any examples of that?

Models that work on MNIST are usually very small, and even the entire MNIST dataset is less than 50 MB uncompressed.

GPUs usually excel when they are used with large models and data sizes; otherwise they tend to be bottlenecked by PCIe communication latency.

Because MNIST data is so small and the models used on it are equally small, you will typically need to use very large batch sizes to see any benefit at all when using a GPU.

But in principle you can take any MNIST example (see https://github.com/eclipse/deeplearning4j-examples/tree/master/dl4j-examples) and switch to the CUDA backend in the pom.xml (see https://github.com/eclipse/deeplearning4j-examples/blob/master/dl4j-examples/pom.xml#L32-L33), and you will be ready to use an Nvidia GPU.
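As a rough sketch of what that pom.xml change looks like: you replace the CPU backend dependency with the CUDA one. The exact artifact name and the CUDA version suffix below (`nd4j-cuda-11.6-platform`) are assumptions that depend on your DL4J release and installed CUDA toolkit, so check the linked pom.xml and the backend docs for the coordinates that match your setup.

```xml
<!-- CPU backend: remove or comment this out -->
<!--
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-native-platform</artifactId>
    <version>${dl4j-master.version}</version>
</dependency>
-->

<!-- CUDA backend: the "11.6" suffix must match your installed CUDA toolkit
     version; treat the exact artifact name/version here as an example only -->
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-cuda-11.6-platform</artifactId>
    <version>${dl4j-master.version}</version>
</dependency>
```

No code changes are needed beyond this; ND4J picks up whichever backend is on the classpath at runtime.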

As for AMD GPUs, those are currently not supported.

Thank you, treo :handshake::handshake::handshake: