Hacker News | bigfishrunning's comments

from the site:

> Built for Pixelfed & the fediverse

> Loops speaks ActivityPub, so your videos can reach people on Mastodon, Pixelfed and other compatible apps — while you stay in control of your home server.
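"Speaks ActivityPub" means the server federates ActivityStreams 2.0 JSON objects to other servers' inboxes. A rough sketch of the kind of `Video` object a server like Loops might deliver; every URL and field value here is made up for illustration:

```python
# A minimal ActivityStreams 2.0 "Video" object, wrapped in a "Create"
# activity as an ActivityPub server would when federating a new post.
# All identifiers below are hypothetical examples, not real Loops URLs.
video = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Video",
    "id": "https://loops.example/videos/123",           # object's canonical URL
    "attributedTo": "https://loops.example/users/alice", # the author's actor
    "name": "My first loop",
    "url": "https://loops.example/media/123.mp4",
}

create = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": video["attributedTo"],
    "object": video,
}
```

Because Mastodon and Pixelfed understand the same vocabulary, they can render this post without knowing anything Loops-specific.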


What is the incentive for people producing AI slop to tick the "This is slop" box? Reminds me of the evil bit https://en.wikipedia.org/wiki/Evil_bit
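For reference, RFC 3514 (the April Fools' "evil bit") reserves the unused high-order bit of the IPv4 flags field for self-declared malice. A toy check of that bit, just to make the analogy concrete (packet layout per RFC 791; this is a sketch, not a real security control):

```python
import struct

# RFC 3514: the reserved high-order bit of the 16-bit flags/fragment-offset
# word (bytes 6-7 of the IPv4 header) is set to 1 on "evil" packets.
RESERVED_EVIL_BIT = 0x8000

def has_evil_bit(ip_header: bytes) -> bool:
    """Return True if the sender has helpfully self-reported as evil."""
    flags_frag = struct.unpack("!H", ip_header[6:8])[0]
    return bool(flags_frag & RESERVED_EVIL_BIT)
```

The joke, of course, is the same as the slop checkbox: the scheme only works on attackers polite enough to flag themselves.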

By boiling the ocean

The AI isn't vindictive. It can't think. It's following the example of people, who in general are vindictive.

Please stop personifying the clankers


You’re splitting hairs; I’m not assigning sentience to the AI, I’m just describing actions.

The point is that scammers will set up AI systems to attack in this way. Scammers will instruct AI to see a person who is interacting rather than ignoring as a warm lead.


Does it matter?

"It's not really writing a hit piece to destroy my reputation, it's just a next token generator"

But you're still not getting hired.


Stopping the anthropomorphization of AIs is kind of like fighting a trademark battle. Every time a perceived misuse is noticed, action must be taken!

The difference is that the action is taken, for free, by a concerned citizen, rather than by a corporate lawyer.

The outcome will be the same. Xerox and Kleenex are practically public domain, and AIs will be anthropomorphized.

Given that humans have been ascribing intention to inanimate objects and systems since time immemorial, this outcome is preordained.

The only thing you can infer from the struggle is that AIs are deep in the uncanny valley for some people.


> Given that humans have been ascribing intention to inanimate objects and systems since time immemorial, this outcome is preordained.

This is true, but there's a big difference between "My car decided not to start" and "The computer wrote a hit piece about me". In reality, both of these events came from the same amount of intention, but to lay-people, these are two very different things. Educating about those differences (and very intentionally not blurring the lines) can only be a good thing.


So I've been reading up on what the philosophers and scientists have been saying this past century or so on this very topic. I think the layman is wise to steer clear. It's a war out there.

The one thing I can tell you with certainty: if anyone is claiming certainty, they're hallucinating harder than the AI :-P (which is also what I tell lay people).

Turns out, hilariously, Claude's much-criticized "I don't know" is actually the most epistemically honest answer (following Chalmers).

[ semi randomly: I'm especially frustrated at psychology papers at the moment. I can't find a good continuous measure for affect. Almost all the protocols use discrete buckets :-/ ]


To amplify:

It's also potentially lethally stupid. What if an industrial robot arm decides to smash a €10,000 machine next door, or (heaven forbid) a human's skull? "It didn't really decide to do anything, stop anthropomorphising, let's blame the poor operator with his trembling fist on the e-stop."

Yeah, to heck with that. If you're one of those people (and you know who you are): you're overcompensating. We're going to need a root cause analysis, pull all the circuit diagrams, diagnose the code, cross-check the interlocks, and fix the gorram actual problem. Policing language is not productive (and in the real-life situation in the factory, please imagine I'm swearing and kicking things for real too: scrap metal, not humans!).

Just to be sure, in this particular case with the Openclaw bot, the human basically pointed experimental-level software at a human space and said "go". But I don't think they foresaw what happened next. They do have at least partial culpability here, but even that doesn't mean we get to just close our eyes, plug our ears, and refuse to analyze the safety implications of the system design in itself.

Shambaugh did a good job here. Even the Operator, however flawed, did a better job than just burning the evidence and running for the hills. Partial credit among the scorn to the latter.

(finally, note that there's probably 2.5 million of these systems out there now and counting, most -seemingly- operated by more responsible people. Let's hope)


> "It didn't really decide to do anything, stop anthropomorphising, let's blame the poor operator with his trembling fist on the e-stop."

It's not the operator that's to blame; it's whoever made the decision to build a skull-smashing machine whose only safety interlock is a poor operator with an e-stop. The world has gone insane, and personifying these AI systems is a way to shift blame from the decision makers to "shit happens, shrug". That's what we should be fighting back against.


Seriously, that's not how you investigate incidents.

For one, there's no single executive who pushes a red button marked "Deploy The Skull-Splitter". Rather the opposite, in fact, especially in e.g. German industry, where people very much care about and demand proper adherence to safety.

Assuming good faith; sometimes, the holes in the swiss cheese line up [1]

Advanced safety and reliability cultures don't look for people to blame [2] [3]. Your first goal is to find the causes and solve them. Occasionally, someone does deserve blame (due to e.g. malice or gross negligence), in which case you then get to blame them.

[1] https://en.wikipedia.org/wiki/Swiss_cheese_model

[2] https://en.wikipedia.org/wiki/Just_culture https://www.faa.gov/about/initiatives/cp (FAA Just Culture)

[3] https://www.atlassian.com/incident-management/postmortem/bla... https://sre.google/sre-book/postmortem-culture/ Atlassian, Google SRE


Advanced safety and reliability cultures also don't choose technologies that are unpredictable and misunderstood. Nothing is safe or reliable about these systems.

Absolutely; if you're deploying experimental systems: do your homework and assess the risks, get consent of the human participants, and stay in constant communication. If the Openclaw's operator here had done that from the start, things would have gone a lot differently.

In fact, you can imagine that if we build up a just culture around deployment of semi-autonomous agents like this, the operator wouldn't have had to remain anonymous in the first place. Best practices help everyone.


All excellent points.

Unfortunately, your most excellent point:

> Policing language is not productive

goes against the grain here. Policing language is the one thing that our corporate overlords have gotten the right and the left to agree on. (Sure, they disagree on the details, but the first amendment is in graver danger now than it has been for a long time.)

https://www.durbin.senate.gov/newsroom/press-releases/durbin...


Are we at AGI yet? No. Are we getting closer? Also no.

Neither of you know the answer to this, in any scientific or statistical manner, and I wish people would stop being so confident about it.

If I'm wrong, please give any kind of citation. You can start with defining what human intelligence and sentience is.


My argument is that we are getting closer, not that we know exactly what AGI will be. That is clearly part of it, right? If we had some boolean definition, I suspect we would already be there; figuring it out is a big part of getting there. I think my points still stand. We aren't there yet, but it is hard to deny that these things are growing in complexity and capability. On a spectrum from rock to human-level intelligence, they are getting closer to human and further from rock every day.

That Ostrich Tho

That Tires Tho

At this point, I think maybe they're training on all of the previous pelicans, and one of them decided to put a hat on it?

Disclaimer: This is an unsubstantiated claim that I made up.


When Lorenzo Milam died, I heard about his book "Sex and Broadcasting", which describes this process (as well as other details about running a community radio station). As an avid ham, I was very interested in the technical side of it, but the political side was interesting as well. I highly recommend reading it.

https://a.co/d/03EQ3Ouo

Congratulations to the OP for getting something like this off the ground!


They are, you just have to turn the pages really fast

I suspect we'll all end up without any software, once we've successfully gotten rid of anyone who can evaluate the output of an LLM

There will always be a niche of people writing software, just as today while most work in web dev or backend, there are some who work in embedded or have retro computing as a hobby.
