I mean... so did a lot of other rulers. As far as emperors go, Aurelius wasn't that bad. You have to judge historical people by their peers, not by your own modern standards.
Right. But Cursor _said_ they had some magic. At some point you have to trust vendors. I don't know exactly how AWS guarantees eleven nines of durability on S3. But I sure hope that they do.
Here is what they say: at the very top, they explain that LLMs are inherently unreliable. It looks like they offer security tools and safeguards, but they also provide an auto-run option. A vendor can't really be held responsible for someone shooting themselves in the face. You can argue that they shouldn't provide that option, but it's what people want, so they do, with warnings.
It sounds like this user either didn't use the security controls, approved prompts they didn't understand, or disabled the checks entirely. Having worked in IT/tech for a big chunk of my life and seen all the dumb crap that even people who know better do, I would bet my house on that being the most likely scenario, rather than Cursor somehow being at fault here.
Yeah, and when the junior dev who also convinced you in the interview that they're smart and have something special deletes prod, guess what... not that dev's fault either.
You absolutely do not. When someone makes an unbelievable claim, such as having magic guardrails for LLMs that prevent dangerous actions (what would that even mean?!), you don’t have to trust that claim.
If you trust someone’s claim without justification, that’s on you.
> At some point you have to trust vendors. I don't know exactly how AWS guarantees eleven nines of durability on S3. But I sure hope that they do.
Trust is earned; it's built on reputation at the individual, corporate, and industry-wide levels. AWS has 20 years of reputation on which I can judge the value of their promises.
Not only has the LLM industry (it is not "AI" and never will be) absolutely not earned anything like that level of trust, the thing the technology has proven most effective at is, in fact, scamming. Making up something that looks/sounds convincing, especially if you aren't thinking too hard about it, is what these models are best at. Combine that with a lot of money flying around, and trust levels should be somewhere around "Elon Musk promises".
At this point there have been so many blatant examples of why you should never give an LLM "agent" control over production systems, but the allure of just giving some vague direction to a chatbot and telling it not to screw things up is just irresistible to some, like Sideshow Bob stepping on rakes [1].
If everyone around you is whacking themselves in the face with the rake, and you know you can avoid it just by using your brain and not stepping on the rake, or avoid it entirely by keeping your rakes contained, but a rake vendor comes to you swearing they've built a new rake that won't whack you in the face even if you leave it right in your walking path, do you trust them?
My grandparents would have loved this. They spent most of the mornings scanning through obituaries for old friends who had died. Might be one of those bittersweet hobbies you get into when you reach your 80s.
Slightly off topic, but I find it a testament to how software has already eaten the world when friggin Michelin has a tech blog. What's next? General Electric releasing a frontend framework?
To be fair, if the author's description of this Karen is truthful, it sounds more like somebody who uses whatever leverage they have to make other people miserable. Did you see Everything Everywhere All at Once? Those people exist in real life too.
I'd guess they are selling access to other people somehow. Like it used to be the case that a stolen phone would rack up enormous overseas call charges until it was reported and disabled.
If your goal is to just burn as much money as possible, as fast as possible, simply spamming expensive image/video generation requests would probably do the trick, if the key's rate limits are high enough.
There's also a practice, which primarily seems to occur in China, where stolen keys are resold via proxy services. A single key can provide access to thousands of users, racking up costs very fast (again, assuming the rate limits are high enough).
I work in this space for a competitor to Persona, so take my opinion as potentially biased, but I have two points:
1. just because the DPA lists 17 subprocessors doesn't mean your data gets sent to all of them. As a company, you put all your subprocessors in the DPA, even ones you don't currently use. We have a long list of subprocessors, but any one individual going through our system is only going to interact with two or three at most. Of course, Persona _could_ be sending your data to all 17 of them, legally, but I'd be surprised if they actually do.
2. the article makes it sound like biometric data is some kind of secret, but especially your _face_ is going to be _everywhere_ on the internet. Who are we kidding here? Why would _that_ be the problem? Your search/click behavior or connection metadata would seem a lot more private to me.
Because it should still be my choice as to what you do with it, which data you associate with it, and how you store it. Removing that choice is anti-privacy.
In pretty much every other situation, you have way less choice about what happens with a photo of your face.
When your face is on your LinkedIn profile, anyone can download it and do whatever they want with it. Legally. Here, the vendor has to tell you how they use it.
Someone downloading it randomly is not the same as me volunteering information said random person wouldn't otherwise have and having that information be stored next to my image in a database that can be breached.
All for a checkmark next to my profile that says I'm a real human.
Why not show a summary of who actually received the data? It should be easy to implement. You could also add what data is retained and an estimate of how long it is kept for. It could be a summary page that I can print as a PDF after the process is complete.
I'd consider that a feature that would increase trust in such a platform. These platforms require trust, right?
> We have a long list of subprocessors, but any one individual going through our system is only going to interact with two or three at most.
So, in aggregate, all 17 data leeches are getting info. They are not getting info on all your users, but different subsets hit different subsets of the "subprocessors" you use.
And there's literally no way of knowing whether or not my data hits "two" or "three" or all 17 "at the most".
> but especially your _face_ is going to be _everywhere_ on the internet. Who are we kidding here? Why would _that_ be the problem?
If you don't see this as a problem, you are a part of the problem
I agree that DPAs, as they are written today, aren't good. I was just pointing out that the reality probably isn't as bad as the article made it sound.
> If you don't see this as a problem, you are a part of the problem
I think you're misunderstanding me. I'm just saying that there are way bigger fish to fry in terms of privacy on the internet than passport data. In the end, your face is on every store's CCTV camera, every friend's phone, and every school yearbook since you were a kid. Unless you ask all of them to delete it too once they're done with it.
But it makes a big difference if some CCTV camera captures my face and comes up with "unknown person" or if it finds my associated passport and other information.
By the way, ever since facebook was a thing I always asked my friends not to tag me in any photos and took similar measures at every opportunity to keep my data somewhat private.
> I agree that DPAs, as they are written today, aren't good.
That is, multiple regulations already explicitly restrict the amount of data you can collect and pass on to third parties.
And yet you're here saying "it's not that bad, we don't send egregious amounts of data to all 17 data brokers at once, only to 2 or 3 at a time, no big deal".
> In the end, your face is on every store's CCTV camera, every friend's phone
If you don't see how this is a problem already, and is now exacerbated by huge databases cross-referencing your entire life, you are a part of the problem
> your _face_ is going to be _everywhere_ on the internet. Who are we kidding here? Why would _that_ be the problem?
It's a strange logic. "Evil thing X will happen anyway so it's acceptable for me to work in a company doing evil thing X". You should be ashamed of building searchable databases of faces