
Being confused as to how LLMs see tokens is just a factual error.

I think the more concerning error GP makes is drawing deductions about the fundamental nature of LLM intelligence from "bugs" in current iterations of LLMs. It's like looking at a child struggling to learn how to spell and making broad claims like "look at the mistakes this child made, humans will never attain any __real__ intelligence!"

So yeah, at this point I'm often pessimistic about whether humans have "real" intelligence. Pretty sure LLMs can spot the logical mistakes in his claims easily.



Your explanation perfectly captures another big difference between human/mammal intelligence and LLM intelligence: a child can make mistakes and (few shot) learn. An LLM can't.

And even a child struggling with spelling won't make a mistake like the one I described. It will spell things wrong and not even catch the spelling mistake. But it won't insist there is a mistake where there isn't (okay, maybe it will, but only to troll you).

Maybe talking about “real” intelligence was not precise enough and it’s better to talk about “mammal like intelligence.”

I guess there is a chance LLMs can be trained to a level where every question that has a correct answer (basically everything that can be benchmarked) will be answered correctly. Would this be incredibly useful and make a lot of jobs obsolete? Yes. Still a very different form of intelligence.


> A child can make mistakes and (few shot) learn. A LLM can’t.

Considering that we literally call the process of giving an LLM a few worked examples in its prompt "few-shot learning", I do not understand your reasoning here.

And an LLM absolutely can "acquire knowledge of or skill in (something)" within its context window (i.e. learning). And then you can bake those understandings in by training a LoRA, or by further fine-tuning.
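For what it's worth, the "learning within the context window" point is easy to make concrete. A minimal sketch of few-shot prompting (the function name and prompt format here are my own invention, not any particular API — the model weights never change, the worked examples just become part of the input):

```python
# Sketch of in-context ("few-shot") learning: instead of retraining the
# model, we prepend a few worked examples to the prompt and let the model
# infer the pattern from them.
def build_few_shot_prompt(examples, query):
    """Concatenate labeled examples and a new query into one prompt string."""
    blocks = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    blocks.append(f"Input: {query}\nOutput:")  # model completes from here
    return "\n\n".join(blocks)

# Hypothetical spelling-correction task, in the spirit of the thread:
prompt = build_few_shot_prompt(
    [("cheif", "chief"), ("recieve", "receive")],
    "beleive",
)
print(prompt)
```

Whether that counts as "learning" in the mammal sense is exactly what's being debated, but it's the mechanism the term "few-shot" refers to.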

If this is really the distinction that makes intelligence for you, the only difference between LLMs and human brains is that human brains have a built-in mechanism to convert short-term memory to long-term, and LLMs haven't fully evolved that yet.





