
I mean... so did a lot of other rulers. As far as emperors go, Aurelius wasn't that bad. You have to judge historical people by their peers, not by your own modern standards.

My take is that you should judge historical people based on the choices they had.

Strictly speaking nobody has to do anything.

This reminds me a lot of the Three-Body Problem series. There it's aliens sabotaging science; in reality it was always the NSA.


The NSA can't touch China without shattering its entire commerce with the US.

They are improving like crazy on clean energy, and the Oil Jihad can't do anything about it.


Why did this get downvoted so much?


Right. But Cursor _said_ they had some magic. At some point you have to trust vendors. I don't know exactly how AWS guarantees eleven nines of durability on S3. But I sure hope that they do.
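For a sense of scale: "eleven nines" is AWS's published per-object annual durability figure; the bucket size below is made up for illustration.

```python
# Expected object loss per year under 11-nines durability.
durability = 0.99999999999   # S3's advertised per-object annual durability
loss_prob = 1 - durability   # chance a given object is lost in a year (~1e-11)
objects = 10_000_000         # hypothetical bucket with 10 million objects

expected_losses = objects * loss_prob
print(expected_losses)       # ~1e-4: roughly one lost object every 10,000 years
```

In other words, a 10-million-object bucket would statistically lose one object every ten thousand years, which is why nobody can verify the claim empirically; you trust it on reputation.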


Here is what they say. At the very top they explain that LLMs are inherently unreliable. They offer security tools and safeguards, but they also provide an auto-run option. A vendor can't really be held responsible for someone shooting themselves in the face. You can argue that they shouldn't offer the option at all, but that's what people want, so they do, with warnings.

It sounds like this user either didn't use the security controls, approved prompts they didn't understand, or disabled the checks entirely. Having worked in IT/tech for a big chunk of my life so far and seen all the dumb crap that even people who know better do, I would bet my house on that being the most likely scenario rather than Cursor somehow being at fault here.

https://cursor.com/docs/enterprise/llm-safety-and-controls


Yeah, and when you interview a junior dev who also convinces you they're smart and have something special, and they delete prod, guess what... not that dev's fault.


> At some point you have to trust vendors.

You absolutely do not. When someone makes an unbelievable claim, such as having magic guardrails for LLMs that prevent dangerous actions (what would that even mean?!), you don’t have to trust that claim.

If you trust someone’s claim without justification, that’s on you.



Yeah. It would be pretty dumb for them to make that kind of claim.

Thanks for providing that doc.


> At some point you have to trust vendors. I don't know exactly how AWS guarantees eleven nines of durability on S3. But I sure hope that they do.

Trust is earned, it's built on reputations at the individual, corporate, and industry-wide levels. AWS has 20 years of reputation on which I can judge the value of their promises.

Not only has the LLM industry (it is not "AI" and never will be) absolutely not earned anything like that level of trust, the thing the technology has proven most effective at is in fact scamming. Making up something that looks/sounds convincing, especially if you aren't thinking too hard about it, is what they're best at. Combine that with a lot of money flying around and trust levels should be somewhere around "Elon Musk promises".

At this point there have been so many blatant examples of why you should never give an LLM "agent" control over production systems, but the allure of just giving some vague direction to a chatbot and telling it not to screw things up is just irresistible to some, like Sideshow Bob stepping on rakes [1].

If everyone around you is whacking themselves in the face with the rake, and you know you can avoid it just by using your brain and not stepping on it, or avoid it entirely by keeping your rakes contained, but a rake vendor comes to you swearing they have built a new rake that won't whack you in the face even if you leave it right in your walking path, do you trust them?

1: https://www.youtube.com/watch?v=ouau9SVVrBA


I mean, AWS doesn't really "guarantee" anything, they just say if they can't meet the bar they'll refund you in credits which is equivalent to money.


The same way most people hear "legacy" and think it's something good


It is? :)


My grandparents would have loved this. They spent most of their mornings scanning through obituaries for old friends who had died. Might be one of those bittersweet hobbies you get into when you reach your 80s.


Slightly off topic, but I find it a testament to how thoroughly software has eaten the world that friggin Michelin has a tech blog. What's next? General Electric releasing a frontend framework?


Toyota has an open source game engine written in Flutter!

https://www.youtube.com/watch?v=98n32VstnpI


> general electric releasing a frontend framework

or Toyota releasing a game engine: https://www.theverge.com/games/875995/toyota-fluorite-game-e...


You'd be surprised: https://www.ethosdesignsystem.com/

(okay it's a design system, not so much a framework, but still)


Funny example since they’re known for automotive parts and their food guide. It’s almost on brand.


I mean how else are you going to get people to drive long distances and buy more tires without giving them yummy destinations to eat at?


To be fair, if the author is truthful in his description of this Karen, it sounds more like somebody who uses whatever leverage they have to make other people miserable. Did you see Everything Everywhere All at Once? Those people exist in real life too.


Slightly unrelated question: how would you spend $82k on prompts in 48 hours? Just phishing?


I'd guess they are selling access to other people somehow. Like it used to be the case that a stolen phone would rack up enormous overseas call charges until it was reported and disabled.


If your goal is to just burn as much money as possible, as fast as possible, simply spamming expensive image/video generation requests would probably do the trick, if the key's rate limits are high enough.

There's also a practice, primarily seen in China, where stolen keys are resold via proxy services. A single key can provide access to thousands of users, racking up costs very fast (again, assuming the rate limits are high enough).
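A quick back-of-envelope on the figures upthread; the per-image price here is a hypothetical placeholder, not any vendor's actual rate:

```python
# How fast image-generation spam could burn $82k in 48 hours.
total_spend = 82_000      # USD, from the story upthread
hours = 48
price_per_image = 0.25    # hypothetical cost per generation request, USD

requests = total_spend / price_per_image
per_second = requests / (hours * 3600)
print(int(requests), round(per_second, 1))   # 328000 requests, about 1.9 per second
```

Under two requests per second, sustained. A single script with modest concurrency could do that; no botnet or sophisticated scheme required, as long as the key's rate limits allow it.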


OpenClaw or a bunch of agents.


Which run on computers somewhere. So Google has a record of the source of the fraudulent calls.


I work in this space for a competitor to Persona, so take my opinion as potentially biased, but I have two points:

1. Just because the DPA lists 17 subprocessors, it doesn't mean your data gets sent to all of them. As a company you put all your subprocessors in the DPA, even if you don't use them. We have a long list of subprocessors, but any one individual going through our system is only going to interact with two or three at most. Of course, Persona _could_ legally be sending your data to all 17 of them, but I'd be surprised if they actually do.

2. The article makes it sound like biometric data is some kind of secret, but your _face_ especially is going to be _everywhere_ on the internet. Who are we kidding here? Why would _that_ be the problem? Your search/click behavior or connection metadata would seem a lot more private to me.


> Why would _that_ be the problem

Because it should still be my choice as to what you do with it, which data you associate with it, and how you store it. Removing that choice is anti-privacy.


In pretty much every other situation you have far less choice about what happens with a photo of your face.

When your face is on your LinkedIn profile, anyone can download it and do whatever they want with it. Legally. Here, the vendor has to tell you how they use it.


Someone downloading it randomly is not the same as me volunteering information said random person wouldn't otherwise have and having that information be stored next to my image in a database that can be breached.

All for a checkmark next to my profile that says I'm a real human.


> your _face_ is going to be _everywhere_ on the internet.

Why is that your assumption?


Unless you have friends without phones and live in a city without cameras, I think that's a pretty fair assumption


Those records are not connected to your ID and personal data.


Why not show a summary of who actually received the data? It should be easy to implement. You could also add what data is retained and an estimate of how long it is kept for. It could be a summary page that I can print as a PDF after the process is complete.

I'd consider that a feature that would increase trust in such a platform. These platforms require trust, right?


The problem with anyone using my face to identify me is that it's hard for me to leave home without it.


Yes, that's why people _can_ identify you by it. Identification was the _purpose_ here.


> I work in this space for a competitor to Persona

So that means you are participating in the evil that KYC services are.


> We have a long list of subprocessors, but any one individual going through our system is only going to interact with two or three at most.

So, in aggregate, all 17 data leeches are getting info. They are not getting info on all your users, but different subsets of users hit different subsets of the "subprocessors" you use.

And there's literally no way of knowing whether or not my data hits "two" or "three" or all 17 "at the most".

> but especially your _face_ is going to be _everywhere_ on the internet. Who are we kidding here? Why would _that_ be the problem?

If you don't see this as a problem, you are a part of the problem


I agree that DPAs, as they are written today, aren't good. I was just pointing out that the reality probably isn't as bad as the article made it sound.

> If you don't see this as a problem, you are a part of the problem

I think you're misunderstanding me. I'm just saying that there are way bigger fish to fry in terms of privacy on the internet than passport data. In the end, your face is on every store's CCTV camera, every friend's phone, and every school yearbook since you were a kid, unless you ask all of them to delete it too once they are done with it.


But it makes a big difference if some CCTV camera captures my face and comes up with "unknown person" or if it finds my associated passport and other information.

By the way, ever since facebook was a thing I always asked my friends not to tag me in any photos and took similar measures at every opportunity to keep my data somewhat private.


> I agree that DPAs, as they are written today, aren't good.

That is, multiple regulations already explicitly restrict the amount of data you can collect and pass on to third parties.

And yet you're here saying "it's not that bad, we don't send egregious amounts of data to all 17 data brokers at once, only to 2 or 3 at a time, no big deal."

> In the end, your face is on every store's CCTV camera, every friend's phone

If you don't see how this is a problem already, and is now exacerbated by huge databases cross-referencing your entire life, you are a part of the problem


_Your opinion is definitely biased, not potentially._


> your _face_ is going to be _everywhere_ on the internet. Who are we kidding here? Why would _that_ be the problem?

It's a strange logic. "Evil thing X will happen anyway so it's acceptable for me to work in a company doing evil thing X". You should be ashamed of building searchable databases of faces


So they’ll send the data to whichever of the 17 pay them for it.

Obviously our faces are public, but there's no easy way to tie mine to all my PII unless I give it to them.


you'd also have to check if it's a human using an AI to impersonate another AI


We try to do the same for a human using another human by making the time limits shorter.

