
> LLMs are good at writing individual functions but terrible at deciding which functions should exist.

Have you tried explicitly asking them about the latter? If you just tell them to code, they aren't going to work on figuring out the software engineering part: it's not part of the goal that was directly reinforced by the prompt. They aren't really all that smart.
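To make that concrete, here is a minimal sketch of splitting the work into an explicit design step before any implementation is requested. It assumes the OpenAI Python client; the model name, prompt wording, and example task are purely illustrative.

    # Make "which functions should exist" an explicit, separate step
    # instead of burying it inside "write the code".
    # Assumes the OpenAI Python client (openai>=1.0).
    from openai import OpenAI

    client = OpenAI()

    DESIGN_PROMPT = (
        "Before writing any code, propose the module layout for this task: "
        "list the functions that should exist, their signatures, and one line "
        "on why each belongs where it does. Do not implement anything yet."
    )

    task = "Build a CLI tool that syncs a local folder to S3 with resumable uploads."

    design = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": DESIGN_PROMPT},
            {"role": "user", "content": task},
        ],
    )
    print(design.choices[0].message.content)

    # A second call can then be asked to implement against the agreed design,
    # so the decomposition is part of the goal rather than an afterthought.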




Injecting bias into an already biased model doesn't make decisions smarter; it just makes them faster.

I think this continued anthropomorphism ("Have you tried asking them about...") is a real problem.

I get it. It quacks like a duck, so it seems like if you feed it peas it should get bigger. But it's not a duck.

There's a distinction between "I need to tell my LLM friend what I want" and "I need to adjust the context for my statistical LLM tool and provide guardrails in the form of linting etc".

It's not that adding a prose description doesn't shift the context, but it assumes a wrong model of what is going on, which I think is ultimately limiting.

The LLM doesn't really have that kind of agency.
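For what the second framing can look like in practice, here is a minimal sketch of a lint guardrail applied to generated code before it is accepted. ruff is just one example; any linter or type checker that runs outside the model plays the same role, and the helper name is made up for illustration.

    # Sketch of the "statistical tool plus guardrails" framing: generated code
    # is written to disk and only accepted if a linter passes.
    # Assumes ruff is installed and on PATH.
    import pathlib
    import subprocess
    import tempfile

    def accept_if_clean(generated_code: str) -> bool:
        with tempfile.TemporaryDirectory() as tmp:
            path = pathlib.Path(tmp) / "candidate.py"
            path.write_text(generated_code)
            result = subprocess.run(
                ["ruff", "check", str(path)],
                capture_output=True,
                text=True,
            )
        if result.returncode != 0:
            # Feed result.stdout back into the next prompt, or reject outright;
            # either way the check lives outside the model.
            return False
        return True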



