Pytorch + CUDA is a headache I've seen a lot of people have at my uni, and one I've never had to deal with thanks to uv. Good tooling really does go a long way in these things.
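If anyone wants to see what that looks like, here's a minimal pyproject.toml sketch based on uv's PyTorch guide; the index name and CUDA version are just examples, swap in whatever matches your driver:

```toml
[project]
name = "demo"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = ["torch>=2.4"]

[tool.uv.sources]
# Pull torch from the PyTorch CUDA wheel index instead of PyPI.
torch = { index = "pytorch-cu124" }

[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true  # only used by packages that opt in via tool.uv.sources
```

After that, `uv sync` resolves the CUDA build for you and everyone on the project gets the same wheels.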
Although I must say that for certain Docker passthrough cases, the debugging logs just aren't as detailed.
Yup. The beauty of it is that the underlying AI accelerator/hardware is completely abstracted away. ONNX Runtime even has a CoreML execution provider, though I haven’t used it.
No more fighting with hardcoded cuda:0 everywhere.
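Roughly, provider selection looks like this; a sketch, assuming onnxruntime is installed and with "model.onnx" as a placeholder path:

```python
import onnxruntime as ort

# Ask the runtime what's actually available and fall back gracefully,
# instead of hardcoding a device string. Provider names here are ONNX
# Runtime's standard identifiers.
preferred = ["CUDAExecutionProvider", "CoreMLExecutionProvider", "CPUExecutionProvider"]
providers = [p for p in preferred if p in ort.get_available_providers()]

session = ort.InferenceSession("model.onnx", providers=providers)  # placeholder path
print("Running on:", session.get_providers()[0])
```

The same script runs unchanged on an NVIDIA box, an Apple Silicon laptop, or a CPU-only CI runner.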
The only pain point is that you’ll often have to manually convert a PyTorch model from Hugging Face to ONNX unless it’s very popular.
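The optimum package can do that export for you; a rough sketch (the checkpoint name is just an example):

```python
# pip install optimum[onnxruntime]
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer

name = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)

# export=True converts the PyTorch weights to ONNX on the fly
model = ORTModelForSequenceClassification.from_pretrained(name, export=True)
model.save_pretrained("distilbert-onnx/")  # writes model.onnx plus config
tokenizer.save_pretrained("distilbert-onnx/")
```

For less common architectures you may still end up hand-rolling a torch.onnx.export call, which is where it gets fiddly.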