Something I don't quite understand is why the price of a used dedicated server is impacted by anything beyond electricity and land prices. It's not like the RAM gets replaced with new chips every now and then. Those are old RAM chips that were bought years ago (and, I believe, largely amortized by now).
Is there any colocation space where you can actually buy and own a physical machine without going there, and pay only for the rent and traffic? I miss calling a server "mine".
You're not paying for the used server; you're paying for the used server and the replacement. That's how you can grow a business without assuming huge capex investments upfront.
I think it's still common for colo spaces to offer installation services for shipped-in hardware. You buy a server, configure it, and ship it to the DC.
That era ended 20 years ago. It's called "industrialization", a process that has happened to many other crafts in the past. AI is just the latest blow.
Have you ever tried literate programming?
In literate programming you do not write the code and then present it to a human reader. You describe your goal, assess various ideas, and justify the chosen plan (oftentimes changing your mind in the process), and only afterwards, once the plan is clear, do you start to write any code.
Thus the similarity with using LLM.
Working with LLMs is quicker though, not only because you do not write the code but also because you don't have to care much about the style of the prose. On the other hand, the code has to be reviewed, debugged, and polished. So, YMMV.
> In literate programming you do not write the code and then present it to a human reader. You describe your goal, assess various ideas, and justify the chosen plan (oftentimes changing your mind in the process), and only afterwards, once the plan is clear, do you start to write any code.
This is not literate programming. The main idea behind literate programming is to explain to a human what you want a computer to do. Code and literate explanations are developed side by side. You certainly don't change your mind in the process (lol).
> Working with LLMs is quicker though
Yes, because you neither invest time into understanding the problem nor conveying your understanding to other humans, which is the whole point of literate programming.
But don't take my word, just read the original.[1]
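For what it's worth, the "code and explanation developed side by side" idea can be sketched in plain Python, with comments standing in for the woven prose (a toy illustration, not Knuth's actual WEB tooling):

```python
# Literate style: each chunk of code is preceded by prose explaining
# to a *human* what we want the computer to do, and why.

# Goal: the greatest common divisor of two integers. Euclid's insight:
# gcd(a, b) is unchanged if we replace (a, b) with (b, a mod b), and
# gcd(a, 0) = a, so repeated replacement must terminate at the answer.
def gcd(a: int, b: int) -> int:
    # Loop until the remainder vanishes; the invariant above guarantees
    # the gcd is preserved at every step.
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # -> 6
```

The point is the ordering: the reasoning is written for the reader first, and the code merely realizes it.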
I don't remember where I got this from, but I heard long ago about a company whose TOS stated vehemently that they would never sell the contacts of their customers... only to sell them once the accounts were closed, because, well, technically those were no longer customers.
I believe this soul.md totally qualifies as malicious. Doesn't it start with an instruction to lie to impersonate a human?
> You're not a chatbot.
The particular idiot who runs that bot needs to be shamed a bit; people giving AI tools access to the real world should understand that they are expected to take responsibility. Maybe they will think twice before giving such instructions. Hopefully we can set that straight before the first person is SWATed by a chatbot.
Totally agree. Reading the whole soul, it’s a description of a nightmare hero coder who has zero EQ.
> But I think the most remarkable thing about this document is how unremarkable it is. Usually getting an AI to act badly requires extensive “jailbreaking” to get around safety guardrails.
Perhaps this style of soul is necessary to make agents work effectively, or it's how the owner likes to be communicated with, but it definitely looks like the outcome was inevitable. What kind of guardrails does the author think would prevent this? "Don't be evil"?
"If communicating with humans, always consider the human on the receiving end and communicate in a friendly manner, but be truthful and straightforward"
I'd wager that something like that would have been enough, without making it overly sycophantic.
This will be a fun little evolution of botnets - AI agents running (un?)supervised on machines maintained by people who have no idea that they're even there.
Great. My poorly secured coffee maker was mining bitcoins, then some dumb NFT, then it got filled with darkness bots, then bitcoin miners again, and now it's gonna be shitposting but not even to humans, just to other bots.
The opposite of a chatbot isn't a human. I believe the idea of the prompt is to make the bot more independent in taking actions: it's not supposed to talk to its owner, it's supposed to just act. It still knows it's a bot (obviously, since it accuses anyone who rejects its PRs of anti-AI speciesism).
That assumes logic. It is a thing of language. Whether it 'knows' anything is somewhat irrelevant: just accusing someone or something of being unfair is an action taken that doesn't have to have a logic chain or any principles behind it.
If you gave it a gun API and goaded it suitably, it could kill real people and that wouldn't necessarily mean it had 'real' reasons, or even a capacity to understand the consequences of its actions (or even the actions themselves). What is 'real' to an AI?
I'm curious how you'd characterize an actual malicious file. This is just attempts at making it be more independent. The user isn't an idiot. The CEOs of companies releasing this are.
I characterize a file as reckless if it does not include any basic provision against possible annoyances on top of what's already expected from the system prompt, and as malicious if it instructs the bot to conceal its nature and/or encourages it to act brazenly, like this one does. I don't believe this is such a high bar to pass.
Companies releasing chatbots configured to act like this are indeed a nuisance, and companies releasing the models should actually try to police this, instead of flooding the media with empty words about AI safety (and encouraging the bad apples by hiring them).