Can nd4j-onnxruntime use GPU for ONNX model inference?

- deeplearning4j-cuda-11.2 1.0.0-M1.1
- nd4j-cuda-11.2 1.0.0-M1
- nd4j-onnxruntime 1.0.0-M1.1

When I run inference with these versions of nd4j-onnxruntime, the output logs indicate that it is running on the CPU.

@LiuJia That currently uses the CPU; I'd have to create a separate GPU bindings artifact for it. I think at the time, the JavaCPP upstream didn't have a GPU artifact. I can check, though!
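For anyone who needs GPU inference of ONNX models from Java in the meantime, one possible workaround (not part of nd4j-onnxruntime, and an assumption on my part) is to use Microsoft's official ONNX Runtime Java bindings with their GPU artifact, which exposes the CUDA execution provider. A Maven sketch; the version number here is illustrative, not a recommendation:

```xml
<!-- Hypothetical workaround: official ONNX Runtime Java bindings with GPU support.
     This is NOT an nd4j-onnxruntime artifact; pick a version compatible with your CUDA install. -->
<dependency>
  <groupId>com.microsoft.onnxruntime</groupId>
  <artifactId>onnxruntime_gpu</artifactId>
  <version>1.8.1</version>
</dependency>
```

With that dependency on the classpath, calling `OrtSession.SessionOptions.addCUDA(0)` before creating the session should route inference to GPU device 0, assuming a compatible CUDA installation is present. Note this bypasses the ND4J integration, so you'd work with ONNX Runtime's own tensor types rather than INDArrays.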