OK, I think I understand what this is about: the vulnerability that they reported (and Microsoft fixed) is that there was a trick you could use to run your own code with root privileges inside the container - when the system was designed to have you only execute code as a non-root user.
It turned out not to really matter, because the container itself was still secured - you couldn't make network requests from it and you couldn't break out of it, so really all you could do with root was mess up a container that only you had access to anyway.
I'd give that one engineer the credit for doing things better, not Microsoft. Microsoft's overall security culture is terrible - look at the CISA report.
Okay, so I give the team that put this together credit. Hopefully the parent company sees from this that it's worth letting teams invest in quality and security work over features.
In the modern world, vulnerabilities come in chains. Asserting that "the container itself was still secured" is just a statement that the attackers didn't find anything there. But container breakouts and VM breakouts are known things. All it takes is a few mistakes in configuration or a bug in a virtio driver or whatever. This is a real and notable result.
The problem is that you're encouraging people to keep stuff like this to themselves until they can chain it into an exploit they'd get paid for, which is the opposite of what Microsoft wants - they'd much rather you report it now, so that if an exploit is later found that requires root, they're already protected.
The simple question for Microsoft to answer is - does it matter to them if attackers have root access on the container? If the answer is yes then the bug bounty for root access should at least pay something to encourage reporting. If the answer is no then this shouldn't have been marked as a vulnerability because root access is not considered a security issue.
But a $5 wrench isn't a critical security vulnerability just because someone somewhere might one day find the right person to apply it to in order to extract important credentials.
Not really the right metaphor. A $5 wrench isn't a "vulnerability" because it's $5! Tools that are accessible to everyone are part of the threat model, not something you can eliminate or avoid. This trick, by contrast, is novel.
Like, suppose your personal cult was built around an "unopenable" bolt-tightened box. Then someone invents the wrench in an attempt to open it. That would be a clear "security vulnerability", right?
Not a serious one if all the wrench actually gets you is access to the room that contains the box that no known tool can open, which is a closer analogy to what happened.
Again, though, you're taking "all that gets you" as a prior when (abandoning the metaphor) container and VM escapes are routine vulnerabilities. They just weren't the subject of this particular team who wanted to hack on AI. You don't do security analysis by presuming the absence of vulnerabilities!
Modern security is defense in depth. The AI pre-prompting setup was the first layer, and it was escaped. The UID separation inside the container was another, and it was broken. The container would have been next. And hopefully there are network firewalls and egress rules on top of that, etc... And all of those can and have failed in the past.
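As a sketch of what that UID-separation layer typically looks like (hypothetical code, not Microsoft's actual implementation - the function name and UIDs are my own):

```python
import os

def drop_privileges(uid, gid):
    """Permanently drop from root to an unprivileged UID/GID.

    This is the kind of UID-separation layer described above: the
    service starts as root to do its setup, then sheds those
    privileges before it ever runs user-controlled code.
    """
    if os.geteuid() != 0:
        raise PermissionError("must start as root to drop privileges")
    os.setgid(gid)  # drop the group first, while we still have root
    os.setuid(uid)  # then the user; irreversible after a plain setuid()
    # From here on, attempts to regain root should fail.
    assert os.geteuid() == uid
```

The ordering matters: calling `setuid()` first would leave the process unable to change its group afterwards. The reported bug is essentially a way to get code running *before* (or despite) a step like this.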
And by the same logic, an exploit that breaks out of the sandbox isn't really anything if it needs root to work... so if a hacker had both, Microsoft wouldn't care about them selling those bugs, because neither one is serious on its own. See, perfect security, and it didn't cost them anything.
Microsoft has a bug bounty program that is credible and well run.
Suing people who responsibly disclose security issues to you is a disastrous thing to do. Word spreads instantly and now you won't get any responsibly disclosed bug reports in the future.
They're not. It's better to think of Copilot as a collaborative storytelling session with a text autocomplete system, which some other program is rudely hijacking to insert the result of running certain commands.
Sometimes the (completion randomly selected from the outputs of the) predictive text model goes "yes, and". Other times, it goes "no, because". As observed in the article, if it's autocompleting the result of many "yes, and"s, the story is probably going to have another "yes, and" next, but if a story starts off with a certain kind of demand, it's probably going to continue with a refusal.
Funny how that sounds like the opposite of how people might work. Get enough 'no's from someone and they might finally cave in; get enough 'yes'es and they might get sick of doing everything you ask.
It's narrowing down the space of all possible conversations. One with a lot of 'no's is probably a conversation with someone who says no a lot. An early LLM result was that you got higher-quality translations if you demarcated the answer with "the expert French translator says:" instead of just "French translation:".
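To make that concrete, here's a hypothetical sketch of the framing trick (the function name and exact prompt wording are my own, not from any particular paper):

```python
def frame_translation_prompt(text, expert=True):
    """Build a completion-style translation prompt.

    Prefixing the answer slot with an expert persona narrows the
    model's distribution toward conversations where the completion
    really is an expert translation, rather than any plausible one.
    """
    if expert:
        return f"English: {text}\nThe expert French translator says:"
    return f"English: {text}\nFrench translation:"
```

Both prompts ask for the same thing; the "expert" framing just changes which region of conversation-space the model is completing from.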
Sales people are specifically trained to manipulate people by asking them questions that they will say ‘yes’ to because once people start to say yes, they tend to continue to say it.
Only when certain pressure is applied. If you're paying attention when someone's doing this to you, you can feel (and disregard) the tendency to keep saying "yes".