There is a third option: the journalist who wrote the article made the quotes up without an LLM.

I think calling the incorrect output of an LLM a “hallucination” is too kind to the companies creating these models, even if it’s technically accurate. “Being lied to” would be a more accurate description of how the end user feels.

The journalist was almost certainly using an LLM, and a cheap one at that. The quote reads as if the model had been instructed to fabricate one solely from its context window.

Lying is deliberately deceiving, but yeah, to a reader, who is in effect a trusting customer paying with part of their attention diverted to advertising, broadcasting a hallucination is essentially the same thing.



