I don't know. Having interacted with LLMs at different levels, they resemble a very sophisticated, alien intelligence trying to pretend to be human. It's like me pretending to be a dog; even if I were to emulate a dog perfectly, I wouldn't have the same emotions; I'd be pretending.
We have no idea what emotions, motivations, behaviors, or goals AIs have, will have, or whether they'll have something as yet unconceived that's not emotions or motivations, but just alien.
We evolved to self-preserve and breed. Modern AIs evolve to pretend to write human text. It's not clear there is any intention to survive, reproduce, or turn the surface of the earth into a computing substrate.
There are a million different dangers -- and I suspect the real ones are ones we haven't conceived of. Whether they'll materialize, or how, depends on how we evolve them, and I expect we can't predict it.
To me, much more likely than earth-as-a-computing-substrate is humans-as-brainwashed-consumers. Market forces will push for AIs to write text which draws eyeballs. Those models won't care about truth, ethics, or much of anything other than getting you addicted to reading what they write (or watching what they create). At that point, we can destroy ourselves just fine.
But even more likely is something no one has thought of.
> We have no idea what emotions, motivations, behaviors, or goals AIs have, will have, or whether they'll have something as yet unconceived that's not emotions or motivations, but just alien.
AIs don't have emotions, motivations or goals.
They don't pretend, because pretending implies intent, and they don't have intent. They do what they're created to do.
Humans are already brainwashed consumers. Welcome to marketing/advertising and late-stage capitalism. Human beings are already far more effective at what you're describing than AI is at present, ergo the "danger" has been here for decades. Smoking? Junk food? Radium water? Fast fashion? Equestrian ivermectin? Shall I continue?
The level of confidence both sides give here is not warranted. We have no idea about what internal structures emerged within LLMs. We only know outer behavior.
From a humanist / secular perspective, humans evolved to make babies. Emotions are an emergent behavior to maximize the number of babies made, and their survival. Nothing less, and nothing more.
What analogues emerge when we train machines not to survive but to complete text?
We have no idea.
There's the "ghost in the machine" crowd, the sentient machine crowd, and the mechanical machine crowd. None have presented any compelling evidence, but all speak with complete confidence in their hypotheses.