Hello! How do I choose a video card for deep learning with especially high efficiency for its cost? Currently there are almost no video cards available because of miners, and the ones that are cost an insane amount of money.
- Old video cards may have good performance for their price (for example, the NVIDIA GeForce GTX 780 delivers around 4000 GFLOPS but has a low compute capability), yet they may no longer be supported.
- I’ve been looking at cards like the 900 and 1000 series, Tesla, Quadro and such, but they either don’t have enough RAM for deep learning or are too expensive.
- The GTX 1030 and similar small GPUs offer about 2.5 times more FLOPS per cost compared to the GTX 1660 (1 TFLOPS vs. 5 TFLOPS), but their memory capacity is only 2 GB.
- The Tesla K80 has 24 GB of RAM, but it is probably not supported due to its CUDA compute capability of 3.7. The price is very low for that memory size.
- The Tesla M40 and M60 have compute capability 5.2, which is still somewhat low, but their price is also low for their memory size.
- I also saw the Jetson modules, but I’m not sure they’re currently supported. If they are, which module is better: Nano, TX, Xavier NX, or AGX Xavier?
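For what it’s worth, the “FLOPS per cost” metric from the list above is easy to compute directly. A minimal sketch, where the prices are made-up placeholders (plug in real local prices, not these assumptions):

```python
# Rough FLOPS-per-dollar comparison. The prices below are illustrative
# assumptions, NOT real market quotes.
def flops_per_dollar(tflops, price_usd):
    """Theoretical FP32 TFLOPS per dollar spent."""
    return tflops / price_usd

cards = {
    # name: (approx. FP32 TFLOPS, assumed price in USD)
    "GTX 1030": (1.0, 80),
    "GTX 1660": (5.0, 1000),  # inflated mining-era price assumption
    "Tesla K80": (8.7, 150),  # dual-GPU board, compute capability 3.7
}

for name, (tflops, price) in cards.items():
    print(f"{name}: {flops_per_dollar(tflops, price):.4f} TFLOPS/$")
```

With these placeholder prices the GTX 1030 comes out 2.5 times better per dollar than the GTX 1660, matching the ratio above, but raw FLOPS/$ ignores the 2 GB memory ceiling and compute-capability support.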
How do I choose a video card for deep learning? I saw that at least 4 GB of RAM is recommended, but found no explanation about CUDA cores, tensor cores, interface speed, and so on. I’m worried I’ll buy a video card that isn’t supported or has insufficient memory.
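On the memory-size worry: a back-of-the-envelope estimate of training VRAM helps more than a blanket “4 GB” rule. This sketch assumes Adam-style training (weights + gradients + two optimizer states per parameter) and ignores activations and framework overhead, so treat it as a lower bound:

```python
# Rough VRAM estimate for training a model. The "copies" accounting
# (weights, gradients, two Adam moment buffers) is a simplifying
# assumption; activations and framework overhead add more on top.
def training_vram_gb(num_params, bytes_per_value=4, optimizer_states=2):
    """Approximate GB for weights + gradients + optimizer states."""
    copies = 1 + 1 + optimizer_states  # weights, grads, Adam m and v
    return num_params * bytes_per_value * copies / 1024**3

# Example: a ~25M-parameter model (ResNet-50 scale) in FP32
print(f"{training_vram_gb(25_000_000):.2f} GB")  # ~0.37 GB before activations
```

Activations usually dominate at larger batch sizes, which is why 2 GB cards run out of memory long before this estimate suggests they should.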
There are also tensor cores, whose efficiency I cannot compare against CUDA cores.