An LLM generates plausible text token by token. At its core it is a deterministic function, with some randomization and some clever tricks layered on top to make it look like an agent that is dialoguing or reasoning.
Plausible text is sometimes right, sometimes not.
Humans have a world model, a model of what happens. LLMs have a model of what humans would plausibly say.
This is such a motte-and-bailey argument. Whenever someone points out that LLMs aren't actually intelligent, they're an anti-AI Luddite. But whenever an AI does something catastrophically dumb, it's absolved of all responsibility because "it's just predicting the next token".
I think they are not actually intelligent. Fix all random seeds and other sources of randomness, run the same prompt twice, and check how intelligent that looks, as a first approximation.
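For concreteness, here is a minimal sketch of that experiment, assuming the Hugging Face transformers library and the small gpt2 checkpoint (the model and prompt are just placeholders): with sampling disabled, the "answer" is a pure function of the prompt and the weights.

    # Minimal sketch: fix the randomness and run the same prompt twice.
    # Assumes the Hugging Face transformers library and the small "gpt2" model.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    torch.manual_seed(0)  # fix the seed; with greedy decoding it barely matters anyway

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The capital of France is"
    inputs = tok(prompt, return_tensors="pt")

    # do_sample=False means greedy decoding: fully deterministic,
    # so the same prompt always yields exactly the same continuation.
    out1 = model.generate(**inputs, max_new_tokens=20, do_sample=False)
    out2 = model.generate(**inputs, max_new_tokens=20, do_sample=False)

    print(tok.decode(out1[0], skip_special_tokens=True))
    print(out1.equal(out2))  # True: the two runs are token-for-token identical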
On a more technical level, very serious people have voiced doubts, for example Richard Sutton in an interview with Dwarkesh Patel [1].
No, this is not what GP said, and I don't get how you reached this conclusion. This is like saying that AES is security through obscurity because it relies on the key being secret. See [1] (linked in the OP) to understand the difference better.
I am pretty sure everyone who works in security agrees that obscurity is not security.
The point of ASLR is that even if you fully understand how it works, that knowledge won't make it any easier to bypass its protections, because ASLR works by randomizing the memory layout each time a program is loaded. That makes it a probabilistic security technique: there is always some chance that an attack goes through.
Security through obscurity in this case would be rolling your own ASLR implementation and relying on attackers not knowing your randomization strategy.
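You can see the randomization from user space. A minimal sketch, assuming Linux with glibc and ASLR enabled (the library name differs on other platforms): run it a few times and the printed address changes on every run, even though the mechanism itself is completely public.

    # Print the address where libc's printf ended up in this process.
    # With ASLR enabled, the shared library's base address (and hence this
    # value) is randomized on every run; only the per-run offset is unknown
    # to the attacker, not the mechanism.
    import ctypes
    import ctypes.util

    libc = ctypes.CDLL(ctypes.util.find_library("c"))
    addr = ctypes.cast(libc.printf, ctypes.c_void_p).value
    print(hex(addr))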
That's not what security through obscurity means. Security through obscurity has a specific meaning: it doesn't mean gaining security by hiding just anything, it means attempting to gain security by hiding how a system works.
ASLR is a well-understood mechanism that exploit writers know to expect, so it is not security through obscurity.
No, because it's still possible to find the data using standard techniques; that doesn't count as obscurity.
I.e. just because you* don't know where something is doesn't mean it's hidden through obscurity.
The reason is important, because words mean things: if knowledge of some secret counted as security through obscurity, then passwords would be security through obscurity.
In other words, just because a secret exists doesn't put that secret into the 'obscurity' category.
*: that knowledge may or may not be available to the attacker.
No. I believe that is more apples than there are atoms in the universe, so not only is it impossible to observe, it is in fundamental contradiction with the reality of our universe. No one and nothing will ever be able to observe or interact with that many apples, so a reference to that many apples is only an abstract mathematical convenience with no direct bearing on reality.
Like infinity.
I'm not sure I actually believe that; I'm just thinking out loud. But it leads me to think the question "Does infinity exist?" should be answered with the question "An infinity of what?"
If that is your hope, you are probably in for a rude awakening. The left-brained/right-brained dichotomy is a gross exaggeration, according to more recent research [1].
Well, maybe. The poster you replied to wasn't discussing literal neuroanatomy; they were using "left/right-brained" in the colloquial, metaphorical sense.
I think we are discussing the wrong problem here. I have no solution to offer, but I think the problem is not so much generated content as the environment in which it can thrive and become the content you see everywhere.
If we hadn't removed the gatekeepers everywhere (and I know there are problems with them, too), then all that technology would not be able to do much harm.
It might also have to do with incentives. The incentives in our economy are not to help and advance society, the invisible hand notwithstanding.
The only good guardrail seems to be a human in the loop.