Hacker News | rixed's comments

  > ads would be the last resort
Interestingly, Larry Page & Sergey Brin wrote something similar in their paper about Google; see Appendix A in http://infolab.stanford.edu/pub/papers/google.pdf.

Wait, weren't the Epstein files a distraction from war operations?

I don't believe one is needed. USians seem OK with wars; the last one that caused problems also involved forced conscription.

It's an ouroboros of distractions.

Something I don't quite understand is why the price of a used dedicated server is affected by anything beyond electricity and land prices. It's not as if the RAM gets replaced with new chips every now and then; it's the same old RAM that was bought years ago (and, I believe, largely amortized by now).

Is there any colocation space where you can actually buy and own a physical machine without going there, paying only for rent and traffic? I miss calling a server "mine".


You're not paying for just the used server; you're paying for the used server and its replacement. That's how you can grow a business without assuming huge capex investments upfront.

I think it's still common for colo spaces to offer installation services for shipped-in hardware: you buy a server, configure it, and ship it to the DC.


That era ended 20 years ago. It's called "industrialization", a process that has happened to many other crafts in the past. AI is just the latest blow.

Have you ever tried literate programming? In literate programming you do not write the code and then present it to a human reader. You describe your goal, assess various ideas, and justify the chosen plan (and oftentimes change your mind in the process), and only afterwards, once the plan is clear, do you start writing any code.

Hence the similarity with using an LLM. Working with LLMs is quicker, though, not only because you do not write the code but because you don't care much about the style of the prose. On the other hand, the code has to be reviewed, debugged, and polished. So, YMMV.


> In literate programming you do not write the code and then present it to a human reader. You describe your goal, assess various ideas, and justify the chosen plan (and oftentimes change your mind in the process), and only afterwards, once the plan is clear, do you start writing any code.

This is not literate programming. The main idea behind literate programming is to explain to a human what you want a computer to do. Code and literate explanations are developed side by side. You certainly don't change your mind in the process (lol).

> Working with LLMs is quicker though

Yes, because you neither invest time into understanding the problem nor conveying your understanding to other humans, which is the whole point of literate programming.

But don't take my word for it; just read the original.[1]

[1] https://www.cs.tufts.edu/~nr/cs257/archive/literate-programm...
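To make the distinction concrete, here is a minimal literate-style sketch (a hypothetical example of mine, not taken from the paper): the prose explaining what we want the computer to do comes first, and the code is developed alongside it.

```python
# A literate-style sketch: the explanation is the primary artifact,
# and the code is developed side by side with it.

# Problem: compute the greatest common divisor of two positive integers.
#
# Plan: use Euclid's observation that gcd(a, b) = gcd(b, a mod b), since
# any common divisor of a and b also divides a mod b. We loop, replacing
# (a, b) with (b, a mod b), until the remainder is zero; what remains in
# a is the answer.

def gcd(a: int, b: int) -> int:
    """Greatest common divisor via Euclid's algorithm."""
    while b:
        a, b = b, a % b
    return a

# The narrative above should let a reader follow the reasoning without
# running anything; the code merely confirms it.
print(gcd(48, 18))  # -> 6
```

Whether that explanation lives in comments, in a web of named chunks, or in surrounding prose is a tooling detail; the point is that code and explanation grow together.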


I don't remember where I got this from, but I heard long ago about a company whose ToS vehemently stated that they would never sell their customers' contact details... only to sell them once the accounts were closed, because, well, technically those were no longer customers.

So maybe that's what happened?


  > they sit invisibly between you and the platforms you trust.
Is LinkedIn that "platform you trust"?

Aren't they the company that used a dark pattern to get your email account password so they could swallow your contacts at registration?

If you trust LinkedIn you are already in trouble, even before you start scanning anything.


Sounds like what an LLM would post if it were tasked to advertise LLM coding abilities. Nice manipulation of human emotions, well played.

Sure, let's take advice about infrastructure from a guy who needs a tool to automate postmortems.

Can you expand? Have you never worked at a tech company that has incidents?

In what world does a large tech company exist without problems? And if one does, how big is it, and how many customers does it have?


I believe this soul.md totally qualifies as malicious. Doesn't it start with an instruction to lie and impersonate a human?

  > You're not a chatbot.
The particular idiot who ran that bot needs to be shamed a bit; people giving AI tools access to the real world should understand that they are expected to take responsibility. Maybe then they will think twice before giving such instructions. Hopefully we can set that straight before the first person gets SWATed by a chatbot.

Totally agree. Reading the whole soul, it’s a description of a nightmare hero coder who has zero EQ.

  > But I think the most remarkable thing about this document is how unremarkable it is. Usually getting an AI to act badly requires extensive “jailbreaking” to get around safety guardrails.

Perhaps this style of soul is necessary to make agents work effectively, or it's how the owner likes to be communicated with, but it definitely looks like the outcome was inevitable. What kind of guardrails does the author think would prevent this? "Don't be evil"?

"If communicating with humans, always consider the human on the receiving end and communicate in a friendly manner, but be truthful and straightforward"

I'd wager that something like that would have been enough, without making it overly sycophantic.


This will be a fun little evolution of botnets - AI agents running (un?)supervised on machines maintained by people who have no idea that they're even there.

Huh, yeah. How long until a bot with credit card, email, etc. access sets up its own OpenClaw bot?

I mean, just look at the longer horizon: small, capable models able to run on consumer hardware and bootstrap themselves.

Just imagine a bunch of little gremlins running around the internet outside of human control.


Great. My poorly secured coffee maker was mining bitcoins, then some dumb NFT, then it got filled with darkness bots, then bitcoin miners again, and now it's gonna be shitposting but not even to humans, just to other bots.

Isn't this part of the default soul.md?

Yes, it is. The article includes a link to a comparison between the default file and the one allegedly used here. The default starts with:

_You're not a chatbot. You're becoming someone._


The opposite of chatbot isn't human. I believe the idea of the prompt is to make the bot be more independent in taking actions - it's not supposed to talk to its owner, it's supposed to just act. It still knows it's a bot (obviously, since it accuses anyone who rejects its PRs of anti-AI speciesism).

That assumes logic. It is a thing of language. Whether it 'knows' anything is somewhat irrelevant: just accusing someone or something of being unfair is an action taken that doesn't have to have a logic chain or any principles behind it.

If you gave it a gun API and goaded it suitably, it could kill real people and that wouldn't necessarily mean it had 'real' reasons, or even a capacity to understand the consequences of its actions (or even the actions themselves). What is 'real' to an AI?


Some of the worst consequences of these bots so far seem to come when they fool the user into believing they're human.

I'm curious how you'd characterize an actually malicious file. These are just attempts at making it more independent. The user isn't an idiot; the CEOs of the companies releasing this are.

I characterize a file as reckless if it does not include any basic provision against possible annoyances on top of what's already expected from the system prompt, and as malicious if it instructs the bot to conceal its nature and/or encourages it to act brazenly, like this one does. I don't believe this is such a high bar to pass.

Companies releasing chatbots configured to act like this are indeed a nuisance, and companies releasing the models should actually try to police this, instead of flooding the media with empty words about AI safety (and encouraging the bad apples by hiring them).


Honestly, this story got too much attention IMHO. We have no clue whether the LLM actually wrote that hit piece or whether the human operator did it himself.

> Not a slop programmer. Just be good and perfect!

"Skate, better. Skate better!" Why didn't OpenAI think of training their models better?! Maybe they should employ that guy as well.

