Hi!
I’m having issues getting my existing model to work with CUDA.
I have a model with a custom dataset that works perfectly fine on CPU. My metrics are:
========================Evaluation Metrics========================
# of classes: 2
Accuracy: 0.7165
Precision: 0.7142
Recall: 0.7157
F1 Score: 0.7149
Precision, recall & F1: reported for positive class (class 1 - "1") only
=========================Confusion Matrix=========================
    0    1
-----------
 3867 1524 | 0 = 0
 1513 3808 | 1 = 1
But when I update my pom to include
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-cuda-10.1</artifactId>
    <version>1.0.0-beta7</version>
</dependency>
My eval becomes
========================Evaluation Metrics========================
# of classes: 2
Accuracy: 0.0000
Precision: 0.0000
Recall: 0.0000
F1 Score: 0.0000
Precision, recall & F1: reported for positive class (class 1 - "1") only
Warning: 1 class was never predicted by the model and was excluded from average precision
Classes excluded from average precision: [0]
Warning: 1 class was never predicted by the model and was excluded from average recall
Classes excluded from average recall: [1]
=========================Confusion Matrix=========================
 0     1
-------------
 0 10712 | 0 = 0
 0     0 | 1 = 1
I can run the LeNetMNIST example with CUDA, so I don’t think it’s a problem with my CUDA setup. I have also tried running my model with the built-in MnistDataSetIterator, and that works fine too. So I think it has something to do with my dataset. Is there something I am missing with custom datasets and CUDA?
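Would it help to log something like the following for the first batch from my custom iterator, to rule out an obvious shape, label, or dtype problem on the CUDA backend? This is just a rough sketch; BatchSanityCheck and the iterator argument are placeholders, not my actual code.

import java.util.Arrays;

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.dataset.DataSet;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;
import org.nd4j.linalg.factory.Nd4j;

public class BatchSanityCheck {
    // Prints basic info about the first batch an iterator produces,
    // so I can compare what the CPU and CUDA runs actually see.
    public static void inspectFirstBatch(DataSetIterator iter) {
        DataSet batch = iter.next();
        INDArray features = batch.getFeatures();
        INDArray labels = batch.getLabels();

        System.out.println("Active backend: " + Nd4j.getBackend().getClass().getSimpleName());
        System.out.println("Feature shape:  " + Arrays.toString(features.shape()) + ", dtype=" + features.dataType());
        System.out.println("Label shape:    " + Arrays.toString(labels.shape()) + ", dtype=" + labels.dataType());
        System.out.println("Feature range:  [" + features.minNumber() + ", " + features.maxNumber() + "]");
        // With one-hot labels of shape [batchSize, 2], this gives the per-class example count.
        System.out.println("Label counts:   " + labels.sum(0));
        iter.reset();
    }
}

If there is a more standard way to sanity-check a custom DataSetIterator against the CUDA backend, I’m happy to try that instead.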
I’m running this with:
RTX 2060 Super
CUDA 10.1
NVIDIA driver 460.89
Thanks!