
Injecting bias into an already biased model doesn't make decisions smarter, it just makes them faster.



I think this continued anthropomorphism ("Have you tried asking about...") is a real problem.

I get it. It quacks like a duck, so it seems like if you feed it peas it should get bigger. But it's not a duck.

There's a distinction between "I need to tell my LLM friend what I want" and "I need to adjust the context for my statistical LLM tool and provide guardrails in the form of linting, etc."
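To make the second framing concrete, here's a minimal sketch of that kind of guardrail loop. call_llm is a hypothetical stand-in for whatever API you use, and ast.parse stands in for a real linter; the point is that the feedback is mechanical, not persuasion:

    import ast

    def generate_with_guardrail(prompt, max_retries=3):
        """Treat the LLM as a sampler: lint its output and retry with
        the concrete failure fed back as context."""
        context = prompt
        for _ in range(max_retries):
            code = call_llm(context)  # hypothetical LLM call, any API
            try:
                ast.parse(code)       # mechanical guardrail: syntax check
                return code
            except SyntaxError as err:
                # adjust the context with the error, not prose pleading
                context = f"{prompt}\n\n# previous attempt failed: {err}\n{code}"
        raise RuntimeError("no syntactically valid output within retry budget")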

It's not that adding a prose description doesn't shift the context, but it assumes a wrong model of what is going on, which I think is ultimately limiting.

The LLM doesn't really have that kind of agency.



