Can nd4j-onnxruntime use GPU for ONNX model inference?
- deeplearning4j-cuda-11.2 1.0.0-M1.1
- nd4j-cuda-11.2 1.0.0-M1
- nd4j-onnxruntime 1.0.0-M1.1
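For reference, this is how I believe the dependencies above would look in a Maven `pom.xml` (the group IDs are my assumption based on the usual `org.deeplearning4j` / `org.nd4j` conventions; please correct me if they differ):

```xml
<!-- Assumed coordinates for the versions listed above -->
<dependency>
    <groupId>org.deeplearning4j</groupId>
    <artifactId>deeplearning4j-cuda-11.2</artifactId>
    <version>1.0.0-M1.1</version>
</dependency>
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-cuda-11.2</artifactId>
    <version>1.0.0-M1</version>
</dependency>
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-onnxruntime</artifactId>
    <version>1.0.0-M1.1</version>
</dependency>
```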
When I run ONNX model inference with this version of nd4j-onnxruntime, the output logs indicate that it is executing on the CPU rather than the GPU. Is GPU execution supported, and if so, what configuration is required to enable it?