
The cost of GPU time isn't just the cost you see (buying the hardware up front, paying for a service if the GPUs aren't yours, paying for electricity if they are) but also the cost to the environment. Data centre power draw is increasing significantly, and the recent explosion in LLM creation is part of that.

Yes, things are getting better per unit (GPUs get more efficient, and purpose-built AI accelerators are an order of magnitude more efficient than general-purpose GPUs, etc.), but is per-unit efficiency improving faster than the number of compute units in use is growing at the moment?
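As a back-of-the-envelope illustration of that question: total energy only falls if per-unit efficiency improves faster than deployment grows. The growth rates below are assumptions for the sake of the sketch, not measured figures.

  # Hypothetical numbers: per-unit efficiency gain vs. growth in deployed compute.
  efficiency_gain_per_year = 0.30    # assumed: each unit needs 30% less energy per year
  deployment_growth_per_year = 0.60  # assumed: 60% more compute units deployed per year

  energy_per_unit = 1.0
  units = 1.0

  for year in range(1, 6):
      energy_per_unit *= (1 - efficiency_gain_per_year)
      units *= (1 + deployment_growth_per_year)
      print(f"year {year}: relative total energy = {energy_per_unit * units:.2f}")

  # With these assumed rates, total energy still grows ~12% per year
  # (0.70 * 1.60 = 1.12), even though each unit gets cheaper to run.

If those two rates were reversed, total energy would shrink; the point is that the answer depends entirely on which rate is larger.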


