A 32B version of Qwen2.5-Coder is coming soon, as noted in the README of their repository.
On Hugging Face: "Qwen2.5-Coder-32B has become the current state-of-the-art open-source coder LLM, with its coding abilities matching those of GPT-4o." - https://huggingface.co/Qwen/Qwen2.5-Coder-7B
I’ve tried a bunch of different models that are essentially different instruction tunings of the same base models, and that claim seems generally true in my experience. I don’t think you can fine-tune your way into a significantly better code model. At best you get one that follows instructions better, but not one that writes noticeably better code or solves harder problems.