
I don't know about OP, but I'm suggesting that the term 'hallucinate' be abolished entirely as it applies to LLMs, not redefined. It draws an arbitrary line through the middle of a set of problems that all amount to "how do we make sure the output of an LLM is consistently acceptable?" and that will all be solved with the same techniques, if at all.

