
> This is the same for humans, if we were asked to compute multiplication without thinking about it for longer than a few milliseconds.

Not to be a jerk but "LLMs are just like humans when humans don't think" is perhaps not the take you intended to have.

> I have never in my life applied strict logic to any problem lol.

My condolences.

No, but seriously. If you've done any kind of math beyond basic arithmetic, you have in fact applied strict logical rules.



> Not to be a jerk but "LLMs are just like humans when humans don't think" is perhaps not the take you intended to have.

No, that's exactly the take I have and have always had. The LLM's text axis is its axis of time. So it's actually even stupider: LLMs are just like humans who are trained not to think.

> No, but seriously. If you've done any kind of math beyond basic arithmetic, you have in fact applied strict logical rules.

To solve a problem, I apply the rules, plus some error. LLMs can do that.

To find the rules, I apply creativity and exploratory cycles. LLMs can do that as well, just worse.



