You’re splitting hairs; I’m not assigning sentience to the AI, I’m just describing actions.
The point is that scammers will set up AI systems to attack in this way. Scammers will instruct an AI to treat a person who interacts, rather than ignores, as a warm lead.
> Given that humans have been ascribing intention to inanimate objects and systems since time immemorial, this outcome is preordained.
This is true, but there's a big difference between "My car decided not to start" and "The computer wrote a hit piece about me". In reality, both of these events came from the same amount of intention, but to lay-people, these are two very different things. Educating about those differences (and very intentionally not blurring the lines) can only be a good thing.
So I've been reading up on what the philosophers and scientists have been saying this past century or so on this very topic. I think the layman is wise to steer clear. It's a war out there.
The one thing I can tell you with certainty: if anyone is claiming certainty, they're hallucinating harder than the AI :-P (which is also what I tell lay people).
Turns out, hilariously, that Claude's much-criticized "I don't know" is actually the most epistemically honest answer (tracing from Chalmers).
[ semi randomly: I'm especially frustrated at psychology papers at the moment. I can't find a good continuous measure for affect. Almost all the protocols use discrete buckets :-/ ]
It's also potentially lethally stupid. What if an industrial robot arm decides to smash a €10,000 machine next door, or -heaven forbid- a human's skull? "It didn't really decide to do anything, stop anthropomorphising, let's blame the poor operator with his trembling fist on the e-stop."
Yeah, to heck with that. If you're one of those people (and you know who you are), you're overcompensating. We're going to need a root cause analysis, pull all the circuit diagrams, diagnose the code, cross-check the interlocks, and fix the gorram actual problem. Policing language is not productive (and in the real-life situation in the factory, please imagine I'm swearing and kicking things -scrap metal, not humans!- for real too).
To be sure, in this particular case with the Openclaw bot, the human basically pointed experimental-level software at a human space and said "go". But I don't think they foresaw what happened next. They do have at least partial culpability here; but even that doesn't mean we get to just close our eyes, plug our ears, and refuse to analyze the safety implications of the system design itself.
Shambaugh did a good job here. Even the Operator, however flawed, did a better job than just burning the evidence and running for the hills. Partial credit among the scorn to the latter.
(finally, note that there's probably 2.5 million of these systems out there now and counting, most -seemingly- operated by more responsible people. Let's hope)
> "It didn't really decide to do anything, stop anthropomorphising, let's blame the poor operator with his trembling fist on the e-stop."
It's not the operator that's to blame; it's whoever made the decision to have a skull-smashing machine whose only safety interlock is a poor operator with an e-stop. The world has gone insane, and personifying these AI systems is a way to shift blame from the decision makers to "shit happens, shrug". That's what we should be fighting back against.
Seriously, that's not how you investigate incidents.
For one, there's no single executive who pushes a red button marked "Deploy The Skull-Splitter". Rather the opposite, in fact, especially in e.g. German industry, where people very much care about and demand proper adherence to safety.
Assuming good faith: sometimes, the holes in the Swiss cheese line up [1]
Advanced safety and reliability cultures don't look for people to blame [2] [3]. Your first goal is to look for the causes, and then you solve them. Occasionally, someone does deserve blame (e.g. due to malice or gross negligence), in which case you then get to blame them.
Advanced safety and reliability cultures also don't choose technologies that are unpredictable and misunderstood. Nothing is safe or reliable about these systems.
Absolutely; if you're deploying experimental systems: do your homework and assess the risks, get the consent of the human participants, and stay in constant communication. If the Openclaw operator here had done that from the start, things would have gone a lot differently.
In fact, you can imagine that if we build up a just culture around deployment of semi-autonomous agents like this, the operator wouldn't have had to remain anonymous in the first place. Best practices help everyone.
This goes against the grain here. Policing language is the one thing that our corporate overlords have gotten the right and the left to agree on. (Sure, they disagree on the details, but the First Amendment is in graver danger now than it has been for a long time.)
My argument is that we are getting closer, not that we know exactly what AGI will be. That is clearly part of it, right? If we had some boolean definition, I suspect we would already be there. Figuring it out is a big part of getting there. I think my points still stand. We aren't there yet, but it's hard to deny that these things are growing in complexity and capability. On a spectrum from rock to human-level intelligence, they're getting closer to human and further from rock every day.
When Lorenzo Milam died, I heard about his book "Sex and Broadcasting", which describes this process (as well as other details about running a community radio station). As an avid ham radio operator, I was very interested in the technical side of it, but the political side was interesting as well. I highly recommend reading it.
There will always be a niche of people writing software, just as today: while most work in web dev or backend, some work in embedded or pursue retro computing as a hobby.
> Built for Pixelfed & the fediverse
> Loops speaks ActivityPub, so your videos can reach people on Mastodon, Pixelfed and other compatible apps — while you stay in control of your home server.
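For the curious, "speaks ActivityPub" roughly means the server federates ActivityStreams 2.0 JSON like this. A minimal sketch; the `loops.example` URLs, IDs, and field values are illustrative assumptions, not Loops' actual payload:

```python
import json

# Rough sketch of the "Create" activity a Loops-style server might deliver
# to other servers' inboxes when a user posts a video. All URLs/IDs below
# are made up for illustration.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://loops.example/users/alice",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {
        "type": "Video",  # ActivityStreams object type for video content
        "id": "https://loops.example/videos/123",
        "name": "My first loop",
        "url": "https://loops.example/media/123.mp4",
        "attributedTo": "https://loops.example/users/alice",
    },
}

# Any receiving ActivityPub server (Mastodon, Pixelfed, ...) can render
# this without knowing anything Loops-specific.
print(json.dumps(activity, indent=2))
```

That shared vocabulary is why you can follow a Loops account from Mastodon while your account stays on your own server.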