Blur is, perhaps surprisingly, one of the degradations we know best how to undo. It's been studied extensively because there are so many applications: microscopes, telescopes, digital cameras. The usual tricks revolve around inverting blur kernels and making educated guesses about what the blur kernel and the underlying image look like. My advisors and I were even able to train deep neural networks using only blurry images, under a fairly mild assumption of approximate scale-invariance at the training-dataset level [1].
Just to add to this: intentional/digital blur is even easier to undo, since the source information is still mostly there. You just have to find the inverse of the transform that was applied.
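A minimal sketch of why a known digital blur is so invertible (all names and sizes here are illustrative, not from the thread): circular convolution is just multiplication in the Fourier domain, so if you know the kernel exactly and there's no noise, you can divide it back out.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 64x64 "image" and a known small Gaussian blur kernel.
img = rng.random((64, 64))
x = np.arange(-3, 4)
g1 = np.exp(-x**2 / 2.0)
kernel = np.outer(g1, g1)
kernel /= kernel.sum()

# Blur by circular convolution: multiply spectra in the Fourier domain.
K = np.fft.fft2(kernel, s=img.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * K))

# Naive inverse filter: divide by the kernel's spectrum. This only
# works because the blur is exact (digital) and noise-free, and this
# particular Gaussian spectrum never hits zero.
recovered = np.real(np.fft.ifft2(np.fft.fft2(blurred) / K))

print(np.max(np.abs(recovered - img)))  # down at floating-point level
```

With camera blur this breaks down immediately: the kernel is only approximately known, and sensor noise gets amplified wherever the kernel's spectrum is small.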
This is how one of the more notorious pedophiles[1] was caught[2].
Right, diffusion models basically invert noise (random Gaussian samples added independently to every pixel), but the same framework can also work with blur instead of noise.
Generally, when you're dealing with a blurry image, you can reduce the strength of the blur up to a point, but there's always some amount of information that's impossible to recover. At that point you have two choices: either leave it a bit blurry and call it a day, or introduce (hallucinate) information that isn't actually in the image. Diffusion models generate images by hallucinating information at every stage so that the final result is crisp, but in many deblurring applications you'd rather stay faithful to what's actually there and accept the small amount of blur left at the end.
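The "reduce the blur up to a point, leave the rest" choice can be sketched with a Wiener-style regularized inverse (values here are illustrative assumptions, not from the thread): where the kernel's spectrum is large you invert it, and where the blur destroyed a frequency you roll off to zero instead of amplifying noise, which is exactly what leaves a faithful but still slightly blurry result.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: a wide Gaussian blur plus sensor noise, so the
# naive inverse filter is no longer an option.
img = rng.random((64, 64))
x = np.arange(-8, 9)
g1 = np.exp(-x**2 / (2 * 3.0**2))          # sigma = 3 Gaussian
kernel = np.outer(g1, g1)
kernel /= kernel.sum()

K = np.fft.fft2(kernel, s=img.shape)
blurred = (np.real(np.fft.ifft2(np.fft.fft2(img) * K))
           + 0.01 * rng.standard_normal(img.shape))   # additive noise

# Wiener-style filter: conj(K) / (|K|^2 + eps). Where |K| >> eps this
# behaves like 1/K; where |K| is tiny (frequencies the blur destroyed)
# it goes to ~0 instead of blowing up the noise. Those frequencies stay
# missing: the output is sharper but not hallucinated.
eps = 1e-3
W = np.conj(K) / (np.abs(K)**2 + eps)
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

err_blurred = np.mean((blurred - img)**2)
err_restored = np.mean((restored - img)**2)
print(err_blurred, err_restored)  # restoration reduces error, not to zero
```

Tuning `eps` is the knob between the two choices: smaller values recover more detail but amplify noise, larger values stay safe and blurry. A generative model replaces that rolled-off band with plausible invented content instead.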
[1] https://ieeexplore.ieee.org/document/11370202