Hacker News | kbrkbr's comments

An LLM generates plausible text token by token. At its core it is a deterministic function plus sampled randomness, with some clever tricks on top that make it look like an agent dialoguing or reasoning.
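To make "token by token" concrete, here is a deliberately tiny sketch; the vocabulary and probabilities are made up, and a real model computes them with a learned neural network, but the generation loop has the same shape:

```python
import random

# Toy autoregressive generation: at each step, obtain a probability
# distribution over candidate next tokens given the context so far,
# sample one, and append it.
def next_token_probs(context):
    # Hypothetical hand-written probabilities; an actual LLM would
    # compute these with a neural network over a huge vocabulary.
    if context and context[-1] == "the":
        return {"cat": 0.7, "dog": 0.3}
    return {"the": 0.5, "sat": 0.3, ".": 0.2}

def generate(prompt_tokens, steps=4):
    tokens = list(prompt_tokens)
    for _ in range(steps):
        probs = next_token_probs(tokens)
        token = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
        tokens.append(token)
    return tokens

print(generate(["the"]))
```

Nothing in the loop "knows" anything; it only turns a context into a distribution and samples from it.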

Plausible text is sometimes right, sometimes not.

Humans have a world model, a model of what happens. LLMs have a model of what humans would plausibly say.

The only good guardrail seems to be a human in the loop.


This is such a motte-and-bailey argument. Whenever people point out that LLMs aren't actually intelligent, they're anti-AI Luddites. But whenever an AI does something catastrophically dumb, it's absolved of all responsibility because "it's just predicting the next token".

I'm getting so tired of this.


I think they are not actually intelligent. As a first approximation: fix all random seeds and every other source of randomness, run the same prompt twice, and see how intelligent that looks.
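A minimal illustration of the point, with toy code rather than a real model: once every source of randomness is pinned to a seed, "generation" is a pure function of the prompt.

```python
import random

# With the RNG seeded, generation is deterministic:
# same seed + same prompt -> byte-identical output, every run.
def generate(prompt, seed, steps=5):
    rng = random.Random(seed)              # isolated, seeded RNG
    vocab = ["the", "cat", "sat", "on", "mat"]
    tokens = prompt.split()
    for _ in range(steps):
        tokens.append(rng.choice(vocab))   # stands in for token sampling
    return " ".join(tokens)

print(generate("the cat", seed=42) == generate("the cat", seed=42))  # True
```

The apparent spontaneity of an LLM's answers comes from sampling noise layered over exactly this kind of deterministic machinery.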

On a more technical level, very serious people have voiced doubts; for example Richard Sutton in an interview with Dwarkesh Patel [1].

[1] https://m.youtube.com/watch?v=21EYKqUsPfg&pp=ygUnZmF0aGVyIG9...


That.

"And it confessed in writing" - no, it created probabilistically token after token based on the context without any other access to what happened.

LLMs can't explain themselves in the manner relevant here, much less confess.


> Obscurity is not security.

So ASLR [1] is not a security control? I guess you are pretty much alone with this opinion.

[1] https://en.wikipedia.org/wiki/Address_space_layout_randomiza...


No, this is not what GP said, and I don't see how you reached this conclusion. This is like saying that AES is security through obscurity because it relies on the key being secret. See [1] (linked in the OP) to understand the difference better.

I am pretty sure everyone who works in security agrees that obscurity is not security.

[1] https://en.wikipedia.org/wiki/Kerckhoffs%27s_principle


ASLR is (still[1]) not security by obscurity.

[1] https://news.ycombinator.com/item?id=43408079


ASLR is, by definition, security by obscurity. Its entire purpose is to make it hard to find the memory that is in use.

The point of ASLR is that even if you fully understand how it works, that knowledge won't make it easier to bypass, because the protection comes from fresh randomization at every load, not from the mechanism being secret. This makes it a probabilistic security technique: there is always a chance that an attack goes through.

Security through obscurity in this case would be to roll your own ASLR implementation with a different randomization strategy.
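The "probabilistic security" framing can be put in numbers. Assuming n bits of address entropy (the exact figure varies by OS, architecture, and memory region; 28 bits is a rough ballpark for 64-bit Linux mmap randomization, an assumption here, not a spec), a single blind guess succeeds with probability 2^-n:

```python
# With n bits of randomized address entropy, one blind guess lands with
# probability 2**-n; over k independent guesses, the attacker's overall
# success probability is 1 - (1 - 2**-n)**k.
def success_probability(entropy_bits, guesses):
    p_single = 2.0 ** -entropy_bits
    return 1.0 - (1.0 - p_single) ** guesses

print(success_probability(28, 1))          # ~3.7e-9 for a single guess
print(success_probability(28, 1_000_000))  # ~0.0037 after a million guesses
```

This is why ASLR is usually paired with other mitigations: a patient attacker with unlimited retries erodes the probabilistic margin.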


That's not what security through obscurity means. It has a specific meaning: not gaining security by hiding just anything, but attempting to gain security by hiding how a system works.

ASLR is a well-understood mechanism that exploit writers know to expect, and thus it is not security through obscurity.


No, because it's still possible to find the data using standard techniques; it doesn't count as obscurity when discovery is still possible.

I.e. just because you* don't know where something is doesn't mean it's hidden through obscurity.

The reason matters, because words mean things: if knowledge of some secret counted as security through obscurity, then passwords would be security through obscurity.

*: knowledge that may or may not be available to the attacker.

In other words, just because a secret exists doesn't put that secret into the 'obscurity' category.
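The distinction, in code: HMAC-SHA256 is a fully public, standardized algorithm, yet it is secure because the key is secret. A secret key (or password, or randomized address) is a secret input; security through obscurity would be hiding how the system itself works.

```python
import hmac
import hashlib

key = b"correct horse battery staple"   # the only secret in the system
msg = b"transfer 100 coins to alice"

# The algorithm (HMAC-SHA256) is public knowledge...
tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

# ...so anyone holding the key can recompute and verify the tag,
assert hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).hexdigest())

# ...but knowing the algorithm without the key gets a forger nowhere.
forged = hmac.new(b"wrong key", msg, hashlib.sha256).hexdigest()
assert not hmac.compare_digest(tag, forged)
```

That is Kerckhoffs' principle in miniature: the design may be public as long as the key is not.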


No, because ASLR uses a secret.

"The universe is fundamentally just a complicated clockwork"

Unknown Ptolemy disciple


Not if you are aphantasic.

Can you observe 2.34 x 10^456789 apples?

No. I believe that is more apples than there are atoms in the universe, so not only is it impossible to observe, it is a fundamental contradiction of our universal reality. No one and nothing will ever be able to observe or interact with that many apples, so a reference to that many apples is only an abstract mathematical convenience with no direct bearing on reality.

Like infinity.

I'm not sure I actually believe that, I'm just thinking out loud. But it leads me to think the question "Does infinity exist?" should be answered with the question "An infinity of what?"


You say that as if we knew the number of atoms in the universe, or its size, age, and "duration".

But none of this can be observed either, which in my book makes your argument a bit weak.

Your "universal reality" is a construction relying in big parts on the mathematics relying on infinity as a concept.


Yeah. Cargo-cult engineering meets the Streisand effect.


The AI does nothing of the sort. It predicts tokens. That's it.

Describing the tech in anthropomorphic terms does not make it a person.


I feel like you didn't get the joke at the end.


If that is your hope, you are probably in for a rude awakening. The left-brained/right-brained divide is a crude exaggeration according to more recent research [1].

[1] e.g. https://www.sciencenewstoday.org/left-brain-vs-right-brain-t...


Well, maybe. The poster you replied to wasn't discussing literal neuroanatomy; they were using "left/right-brained" in the colloquial, metaphorical sense.


I think we are discussing the wrong problem here. I have no solution to offer, but I think the problem is not so much generated content, but the surroundings in which it can thrive and become the content you see everywhere.

If we hadn't removed the gatekeepers everywhere (and I know there are problems with them, too), then all that technology would not be able to do much harm.

It might also have to do with incentives. The incentives in our economy are not to help and advance society, the invisible hand notwithstanding.

