Hacker News

One big caveat is that they use a parallel algorithm on a 44-core machine to compete with a single GPU.

I can train a neural network on my laptop's GPU, but its CPU has only 8 cores.
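The comparison above hinges on how many cores a parallel algorithm can actually keep busy. As a toy illustration (not the paper's algorithm), Python's standard library can query the core count and fan a workload out across it; with genuinely CPU-bound work you would use processes rather than threads:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    """Sum a half-open range [lo, hi); stands in for one worker's share of the work."""
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=None):
    """Split summing 0..n-1 into one chunk per available core."""
    workers = workers or os.cpu_count() or 1
    step = -(-n // workers)  # ceiling division: chunk size per worker
    chunks = [(i, min(i + step, n)) for i in range(0, n, step)]
    # Threads keep the sketch portable; swap in ProcessPoolExecutor for real CPU-bound work.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum(1_000_000))  # same result as sum(range(1_000_000))
```

The point of the sketch: throughput scales with `os.cpu_count()` only if the work divides cleanly into independent chunks, which is exactly what the 44-core-vs-GPU comparison assumes.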



It all depends on economics. What you have now doesn't matter much a couple of years down the line, when the simple passage of time will show whether this research has been built upon or not. 44+ cores are already available in workstations, and with the 4th generation of Threadripper (supposedly shipping this year) that core count will probably get even cheaper than it is now. GPUs, on the other hand, are absurdly expensive right now for several reasons (people being home and gaming due to covid, crypto miners, covid-constrained parts supply), but we'll see in a year.

Also, I'm pretty sure the goal of this research wasn't to let you train a DNN on your laptop, but to democratize access to training power. Maybe some day a mid-sized organization will be able to train a competitive model on a bunch of off-the-shelf (or rented) servers...



