
I want a gaming computer that won't limit my future ML learning. Are there any suggestions for that use case?


There's no such limitation, as long as you buy an actual mid- to high-tier GPU. Even an ancient GTX 1070 would be more than enough - and for sufficiently large datasets, even an RTX 3090 will take hours to process whatever you're crunching.

Just buy a PC that you like for gaming (with an Nvidia GPU) and don't worry about ML yet - it's incredibly unlikely that you can pick something that would limit you in any way. Small datasets will run on anything, large datasets will take hours to process no matter what you run them on. It's not a "limit".


Some off-the-shelf gaming PCs are not very Linux friendly though, so they should watch out for that, especially the laptop varieties. Getting a lot of the ML stuff working locally in Windows is a nightmare.


You're probably better off building your own machine and dual booting Windows and Linux. Here's a good guide for ML requirements, only a little out of date (published before the release of the 3080):

http://timdettmers.com/2018/12/16/deep-learning-hardware-gui...


just make sure it's NVidia. whatever graphics card you want -- all their consumer cards will work great for deep learning.

make sure your motherboard and processor support whatever the newest version of PCIe is -- a major factor with deep learning is bandwidth moving data on/off the GPU.
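
for example, once the machine is built you can sanity-check that transfer bandwidth yourself. a minimal sketch with PyTorch (assumes an NVidia GPU and a working torch install; the tensor size and the quoted PCIe figures are just illustrative):

    # rough host->GPU bandwidth check -- a sanity check, not a proper benchmark
    import time
    import torch

    x = torch.empty(1024, 1024, 256, dtype=torch.float32).pin_memory()  # ~1 GiB in pinned host memory
    torch.cuda.synchronize()
    start = time.perf_counter()
    y = x.to("cuda", non_blocking=True)
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start
    gib = x.numel() * x.element_size() / 2**30
    print(f"host->device: {gib / elapsed:.1f} GiB/s")  # PCIe 3.0 x16 lands around 12-13 GiB/s, 4.0 roughly double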

AMD GPUs can theoretically be used for machine learning, but right now software support is lacking -- you will spend more time configuring and installing than learning. (AMD CPUs are fine though.)

it doesn't really matter that much though -- any gaming PC with a new-ish NVidia card can be used to do quite a bit of interesting ML.


This is also a reason why it might make sense to hold off unless you have some kind of time-sensitive project.

Nvidia came to dominate the market at a time when AMD wasn't making particularly competitive GPUs, but that isn't really the case anymore. Outside of the halo cards so expensive that hardly anybody buys them anyway, the current and soon-to-launch (in less than a month) AMD GPUs are competitive on performance.

The result is that a lot of large customers, who see value in not being locked into a single supplier, are going to be pushing for frameworks that work across multiple vendors. And then you could plausibly be wasting your time learning Nvidia-specific technology which is about to become disfavored. So you might want to wait and see.


I tried to go red twice. Red team has been winning at perf/$ for a decade! I thought I did my homework and established compatibility and suitability for the purposes I cared about. Unfortunately, both times I eventually ran into unanticipated incompatibilities I couldn't work around. I wound up paying the green tax anyway and also the price spread + ebay fees. Oof.

Twice bitten... once shy? In any case, I'm going to let someone else be the guinea pig this time.


i think most people would just use TF/PyTorch and ignore the specific technology on the backend. not much GPU specific stuff to learn -- very, very few deep learning people write their own CUDA code.
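
to make that concrete -- a minimal sketch of what user-level PyTorch code looks like (assumes a working torch build for your GPU; on the ROCm builds the device still shows up under torch.cuda, so nothing vendor-specific appears in your code):

    # the same user code runs on a CUDA build or a ROCm build of PyTorch
    import torch
    import torch.nn as nn

    device = "cuda" if torch.cuda.is_available() else "cpu"  # ROCm builds report True here too
    model = nn.Linear(128, 10).to(device)
    x = torch.randn(32, 128, device=device)
    loss = model(x).sum()
    loss.backward()  # runs on whichever backend the wheel was built against
    print(device, loss.item())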

so the question is just -- when will it be very simple to install these packages for AMD GPUs, with enough mathematical operations implemented and optimized to let you do the things you want to do.

right now things sort of work, but it's definitely in a bleeding edge early adopter state. it's seemed like AMD is on the cusp of catching up for a couple years now, but it's taken longer than I expected.


> very few deep learning people write their own CUDA code.

True, but even once TF/PyTorch support AMD well it's highly possible that an unanticipated CUDA dependency will pop up in one's computational journey. NVidia subsidized CUDA seminars for a decade and now it's all over the place, both in the flagship frameworks and in the nooks and crannies.


That's terrible advice. While Navi2 might finally be competitive(?), not being able to run most models due to CUDA/ROCm differences would seriously limit one's ML work.


Buy whatever makes you feel happy. I agree with gambiting that anything you choose won’t limit you.



