Hacker News

If we only used LLMs for use cases where they exceed human ability, that would be great. But we don't. We use them to replace human beings in the general case, and many people believe that they exceed human ability in every relevant respect. Yet if human beings failed as often as LLMs do at the tasks for which LLMs are employed, those humans would be fired, sued, and probably committed.

And yet any arbitrary degree of error can be dismissed in LLMs because "humans do it too." It's weird.



I don't think it's true that modern LLMs are used to replace human beings in the general case, or that any significant number of people believe they exceed human ability in every relevant factor.




