GPU training, CPU scoring?

We are doing the following:

  1. Train with GPUs using the GPU bindings.
  2. Serialize that model to disk.
  3. Deserialize using the CPU bindings, which fails with an error (see below).

Is this use case supported, and is a GPU really required for scoring?
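
For reference, a minimal sketch of this kind of save/load round trip, assuming DL4J's ModelSerializer and a stand-in network (the actual serialization API and file path in the failing setup aren't shown in this thread; model.zip is a hypothetical path):

    import java.io.File;

    import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
    import org.deeplearning4j.nn.conf.layers.OutputLayer;
    import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
    import org.deeplearning4j.util.ModelSerializer;
    import org.nd4j.linalg.activations.Activation;
    import org.nd4j.linalg.lossfunctions.LossFunctions;

    public class SaveLoadRoundTrip {
        public static void main(String[] args) throws Exception {
            // Tiny stand-in network; the real one is whatever was trained on the GPU box.
            MultiLayerNetwork net = new MultiLayerNetwork(new NeuralNetConfiguration.Builder()
                    .list()
                    .layer(new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
                            .nIn(4).nOut(2)
                            .activation(Activation.IDENTITY)
                            .build())
                    .build());
            net.init();

            File modelFile = new File("model.zip"); // hypothetical path

            // Step 2: serialize the network to disk (true = also save updater state).
            ModelSerializer.writeModel(net, modelFile, true);

            // Step 3: deserialize. This call is the same whether nd4j-cuda or
            // nd4j-native is on the classpath; the file holds the configuration
            // and weights, not anything backend-specific.
            MultiLayerNetwork restored = ModelSerializer.restoreMultiLayerNetwork(modelFile);
            System.out.println(restored.summary());
        }
    }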

    [main] INFO org.nd4j.linalg.factory.Nd4jBackend - Loaded [JCublasBackend] backend
    [main] INFO org.nd4j.nativeblas.NativeOpsHolder - Number of threads used for linear algebra: 32
    [main] INFO org.nd4j.nativeblas.Nd4jBlas - Number of threads used for OpenMP BLAS: 0
    Could not load classifier from file /home/acoustic/angak/acoustic_training/augment/output/dcase48K.cfr: unread block data

This looks odd. The serialized model does not depend on the backend that was used to train it.
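
One quick way to rule out a classpath mix-up is to print which backend actually loaded at scoring time (a minimal sketch; the class names in the comment are assumptions based on the JCublasBackend shown in the log above and the nd4j-native artifact):

    import org.nd4j.linalg.factory.Nd4j;

    public class BackendCheck {
        public static void main(String[] args) {
            // Prints the ND4J backend picked up from the classpath, e.g.
            // JCublasBackend for nd4j-cuda or CpuBackend for nd4j-native.
            System.out.println(Nd4j.getBackend().getClass().getSimpleName());
        }
    }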

What you’ve posted here isn’t enough for us to tell what’s wrong. Can you tell us more about your problem?

Sorry, I posted this prematurely. The person who saw this error can no longer reproduce it. Thanks.