I built a neural network in DL4J and ran some training experiments.
First I ran it on my local machine (4 cores / 8 threads, 8 GB RAM, SSD). The performance log and results were:
Time: about 50 minutes
Accuracy: 83.7%
Next I ran the same network on a Google Cloud VM with 1 core and 6.5 GB RAM:
Time: about 40 minutes
Accuracy: the same 83.7%
Screenshots (dark theme is the local machine, white is the Google Cloud VM):
So my question is: why is there such a difference in training time, and why is the run faster on the machine with the weaker hardware?
In addition: how can I increase performance? The same network in TensorFlow trains in about 10 minutes, which is 4 times faster than DL4J.
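For reference, this is roughly how I would instrument the runs to compare them iteration by iteration (a minimal sketch; `TrainingDiagnostics` and the frequency value are my own, the listeners are standard DL4J classes):

```java
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.optimize.listeners.PerformanceListener;
import org.deeplearning4j.optimize.listeners.ScoreIterationListener;

public class TrainingDiagnostics {
    // Attach listeners that log throughput (samples/sec, batches/sec)
    // and the current score every `freq` iterations, so the local run
    // and the Google Cloud run can be compared on the same metric.
    public static void attach(MultiLayerNetwork model, int freq) {
        model.setListeners(new PerformanceListener(freq, true),
                           new ScoreIterationListener(freq));
    }
}
```

With something like `TrainingDiagnostics.attach(model, 10);` before `fit()`, both logs would show samples/sec directly instead of just total wall-clock time.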
Maybe I can enable more threads? My CPU has 8 logical threads, but only 4 of them seem to be used.
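From what I understand (my assumption, not verified), the ND4J native backend does its math in OpenMP/BLAS threads controlled by the `OMP_NUM_THREADS` environment variable, not by the JVM. A quick check would be something like:

```java
public class ThreadCheck {
    public static void main(String[] args) {
        // Logical cores visible to the JVM (should print 8 on my machine,
        // since 4 physical cores with hyper-threading expose 8 threads).
        System.out.println("availableProcessors = "
                + Runtime.getRuntime().availableProcessors());

        // Assumption: ND4J's native (OpenBLAS/MKL) backend reads
        // OMP_NUM_THREADS at startup; if unset, it picks its own default.
        // To pin it, the JVM would be launched with e.g.
        //   OMP_NUM_THREADS=8 java -jar myapp.jar
        System.out.println("OMP_NUM_THREADS = "
                + System.getenv("OMP_NUM_THREADS"));
    }
}
```

Is that the right knob, or does DL4J decide the thread count some other way?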
I really don't understand this difference.