If your program isn't memory safe, it's very often the case that someone can make your program run their program -- and the kernel has no way of knowing that your program didn't intend to modify itself.
W^X/NX bits and other mitigations don't totally obviate the issue, as ROP gadgets can be used to defeat them. There's a whole domain of computer security dedicated to that arms race, and no evidence that it's stopping any time soon.
On the other hand, a Rust program without unsafe should simply never execute code outside of its defined control flow: it isn't possible to hijack control flow via the stack or heap, so a running program under arbitrary user input can only ever explore execution paths that would exist in the absence of memory unsafety. No ROP gadgets or heap/stack smashing, no overflow overwriting a function pointer in memory, no overwriting a function's parameters to force impossible branches to be taken, and so on.
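To make that concrete, here's a minimal sketch (not from the comment above, just an illustration) of what safe Rust does with the classic out-of-bounds write: the access is bounds-checked at runtime, so an attacker-controlled index produces a deterministic panic rather than silently overwriting adjacent memory. `std::hint::black_box` is used only to keep the compiler from rejecting an obviously-constant out-of-bounds index at compile time.

```rust
use std::hint::black_box;
use std::panic;

fn main() {
    let mut buf = [0u8; 4];
    buf[2] = 7; // in bounds: fine

    // Simulate an attacker-controlled index. In safe Rust the write below
    // cannot smash the stack or heap; it panics before touching memory.
    let attacker_index: usize = black_box(8);
    let result = panic::catch_unwind(|| {
        let mut b = [0u8; 4];
        b[attacker_index] = 1; // out of bounds -> panic, not corruption
    });
    assert!(result.is_err());
    println!("out-of-bounds write was stopped: {}", result.is_err());
}
```

A panic is still a denial-of-service bug, of course, but it's a defined, non-exploitable behavior: the program aborts along a path the language specifies, instead of continuing with corrupted state.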
I recall at least one shipping video game where "someone" was "my future self" -- the developers exploited a buffer overflow after-the-fact in the already-shipped game (in a EULA dialog?) in order to make it run an updater.
> Not every program is intended to be connected to the internet or have user provided input.
While true, I think the vast majority of programs *do* do one of those two things (and increasingly, both), whereas the majority of programmers seem to think that what they work on "isn't a security issue".
It's this disconnect that is giving us the Internet of Crap. Your phone apps, your fridge, your video games, your text editor -- every program I interact with every day -- it's all a security issue.
The classic program safety vs programmer time tradeoff.