Memory management

Hi all! Sorry about the naive question, but is there any rule of thumb for parametrizing the training setup in relation to RAM? I have a 16GB Linux machine with a Titan V with 12GB of memory. I’m trying to train a VGG16 with a minibatch size of 32 and it fails; with a minibatch size of 16 it works. Nevertheless, the GPU is only at 60% usage… The JVM parameters are the following:
"-Xms8G", "-Xmx8G", "-Dorg.bytedeco.javacpp.maxbytes=12G", "-Dorg.bytedeco.javacpp.maxphysicalbytes=14G"
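
In case it's relevant, something like this (just a rough sketch using JavaCPP's Pointer counters; the class name is made up) should show which limits the process actually picked up:

```java
import org.bytedeco.javacpp.Pointer;

// Illustrative check: print the limits the JVM and JavaCPP actually see,
// to confirm the -Xmx / maxbytes / maxphysicalbytes settings took effect.
public class MemoryLimitsCheck {
    public static void main(String[] args) {
        long mb = 1024L * 1024L;
        System.out.println("JVM heap max (Xmx):       " + Runtime.getRuntime().maxMemory() / mb + " MB");
        System.out.println("JavaCPP maxBytes:         " + Pointer.maxBytes() / mb + " MB");
        System.out.println("JavaCPP maxPhysicalBytes: " + Pointer.maxPhysicalBytes() / mb + " MB");
    }
}
```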

Off-heap memory is barely used… only around 780MB…
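
If it helps, off-heap usage can also be sampled during training with something like the following (again just a sketch; the class and method names are made up):

```java
import org.bytedeco.javacpp.Pointer;

// Illustrative snippet: log off-heap usage (e.g. between epochs) to see how
// close the process gets to the configured maxbytes / maxphysicalbytes limits.
public class OffHeapUsageLogger {
    public static void log() {
        long mb = 1024L * 1024L;
        // Bytes currently tracked by JavaCPP deallocators (off-heap allocations)
        System.out.println("JavaCPP totalBytes:    " + Pointer.totalBytes() / mb + " MB");
        // Resident memory of the whole process, as seen by the OS
        System.out.println("Process physicalBytes: " + Pointer.physicalBytes() / mb + " MB");
    }
}
```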

Cheers,

/rp