
PyTorch + CUDA setup is a headache I've seen a lot of people run into at my uni, and one I've never had to deal with thanks to uv. Good tooling really does go a long way in these things.
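For reference, this is roughly how uv gets pointed at a CUDA-specific PyTorch wheel index; the index name, CUDA version (cu121), and project metadata here are just illustrative:

    [project]
    name = "example"
    version = "0.1.0"
    requires-python = ">=3.10"
    dependencies = ["torch"]

    # Pull torch from the CUDA 12.1 wheel index instead of PyPI.
    [[tool.uv.index]]
    name = "pytorch-cu121"
    url = "https://download.pytorch.org/whl/cu121"
    explicit = true

    [tool.uv.sources]
    torch = { index = "pytorch-cu121" }

With that in place, a plain uv sync resolves and locks the matching CUDA build along with everything else.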

Although I must say that for certain Docker pass-through cases, the debugging logs just aren't as detailed.



uv doesn’t fundamentally solve the issues. It didn’t invent venv or pip.

What fundamentally solves the issue is to use an ONNX version of the model.
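To make that concrete, here's a minimal onnxruntime sketch; model.onnx and the tensor names below are placeholders for whatever the exported graph actually declares:

    import numpy as np
    import onnxruntime as ort

    # "model.onnx" and the tensor names are placeholders for the real export.
    session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

    feed = {
        "input_ids": np.array([[101, 2023, 102]], dtype=np.int64),
        "attention_mask": np.ones((1, 3), dtype=np.int64),
    }
    outputs = session.run(None, feed)  # None = return every declared output
    print(outputs[0].shape)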


Do you know if it's possible to run ONNX versions of models on a Mac?

I should try those on the NVIDIA Spark; it'd be interesting to see whether they're easy to work with on ARM64.


Yup. The beauty of it is that the underlying AI accelerator/hardware is completely abstracted away. There's a CoreML ONNX execution provider, though I haven't used it.

No more fighting with hardcoded cuda:0 everywhere.
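As a rough sketch of what that looks like (assuming an onnxruntime build that includes the CoreML execution provider, and a placeholder model path):

    import onnxruntime as ort

    # Which accelerators this onnxruntime build can use; on a Mac whose build
    # includes the CoreML EP this lists "CoreMLExecutionProvider", on an
    # NVIDIA machine with onnxruntime-gpu it lists "CUDAExecutionProvider".
    print(ort.get_available_providers())

    # Prefer CoreML, fall back to CPU. The calling code stays identical on
    # every platform -- no device strings like cuda:0 anywhere.
    session = ort.InferenceSession(
        "model.onnx",  # placeholder path
        providers=["CoreMLExecutionProvider", "CPUExecutionProvider"],
    )
    print(session.get_providers())  # the providers actually in use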

The only pain point is that, unless the model is very popular, you'll often have to manually convert the PyTorch version from Hugging Face to ONNX yourself.
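For that conversion step, a hand-rolled export with torch.onnx.export looks roughly like this; the checkpoint, tensor names, and dynamic axes are just examples, and for popular architectures the Hugging Face optimum library automates most of it:

    import torch
    from transformers import AutoModel, AutoTokenizer

    # Example checkpoint; substitute the model you actually need.
    model_id = "sentence-transformers/all-MiniLM-L6-v2"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # torchscript=True makes the model return plain tuples, which the ONNX
    # exporter handles more reliably than transformers' ModelOutput objects.
    model = AutoModel.from_pretrained(model_id, torchscript=True).eval()

    dummy = tokenizer("a representative input", return_tensors="pt")
    torch.onnx.export(
        model,
        (dummy["input_ids"], dummy["attention_mask"]),
        "model.onnx",
        input_names=["input_ids", "attention_mask"],
        output_names=["last_hidden_state"],
        dynamic_axes={  # allow variable batch size and sequence length
            "input_ids": {0: "batch", 1: "seq"},
            "attention_mask": {0: "batch", 1: "seq"},
            "last_hidden_state": {0: "batch", 1: "seq"},
        },
    )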



