The problem with that is that C++26 Contracts are just glorified asserts. They trigger at runtime, not compile time. So if your LLM-generated code would have worked 99% of the time and then crashed in the field... well, now it will work 99% of the time and (if you're lucky) call the contract-violation handler in the field.
Arguably that's better (more predictable misbehavior) than the status quo. But it's not remotely going to fix the problem with LLM-generated code, which is that you can't trust it to behave correctly in the corner cases. Contracts can't make the code magically behave better; all they can do is make it misbehave better.
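A minimal sketch of the point: the check below only fires when the bad call actually happens at runtime. The function and its precondition are invented for illustration; since no shipping compiler fully supports C++26 `pre()`/`post()` yet, a plain `assert` stands in for the contract, but the runtime-only behavior is the same either way.

```cpp
#include <cassert>
#include <cstddef>

// C++26 would spell the contract roughly as:
//   int first_positive(const int* a, std::size_t n) pre(n > 0);
// Until then, assert is the closest stand-in. Either way, nothing is
// verified at compile time -- a caller passing n == 0 compiles fine
// and only trips the check (or the violation handler) when it runs.
int first_positive(const int* a, std::size_t n) {
    assert(n > 0 && "precondition: array must be non-empty");
    for (std::size_t i = 0; i < n; ++i)
        if (a[i] > 0) return a[i];
    return -1;  // no positive element found
}
```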
In my experience, LLMs don't reason well about expected states, contracts, invariants, etc.
Partly because they don't have long-term memory and are often forced to reason about code in isolation.
Maybe this means all invariants should go into AGENTS.md/CLAUDE.md files, or into doc strings so a new human reader will quickly understand assumptions.
Regardless, I think a habit of putting contracts to make pre- and post-conditions clear could help an AI reason about code.
Maybe instead of suggesting a patch to cover up a symptom, an AI may reason that a post-condition somewhere was violated, and will dig towards the root cause.
This applies just as well to asserts, too.
Contracts/asserts actually need to be added to tell a reader something.
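To make that concrete, here is a toy example (the `Account` type and its invariant are hypothetical, just for illustration). Each assert names the assumption it encodes, so if one fires, it points at the violated assumption — whoever (human or AI) is debugging can chase the root cause instead of patching the downstream symptom.

```cpp
#include <cassert>

// Toy type whose stated invariant is "balance never goes negative".
// The asserts document the pre-/post-conditions in the code itself,
// so a reader -- or an LLM given this file -- sees the assumptions
// rather than having to infer them.
struct Account {
    long balance_cents = 0;

    void withdraw(long amount_cents) {
        assert(amount_cents >= 0 && "precondition: amount is non-negative");
        assert(amount_cents <= balance_cents && "precondition: sufficient funds");
        balance_cents -= amount_cents;
        assert(balance_cents >= 0 && "invariant: balance stays non-negative");
    }
};
```

If the invariant assert ever fires, the bug is in the arithmetic or the preconditions above it, not in whatever code later reads the balance — which is exactly the "dig toward the root cause" behavior described above.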