Hacker News

the user who wants it? And a preemptive retort: if the feedback is "the user / PM / stakeholder could be wrong", then... that's where we are. A "refiner" LLM can be fronted (Replit is experimenting with this, for instance).

To be clear: this is not something I do currently, but my point is that one needs to detach from how _we_ engineers do this for a more accurate evaluation of whether these things truly do not work.
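For concreteness, here is a minimal sketch of what "fronting" a refiner LLM could look like: one model pass sharpens the raw user/PM request into a spec before the implementing model ever sees it. Everything here is hypothetical; `call_llm` is a stand-in stub, not any real API, and the prompts are purely illustrative.

```python
def call_llm(system: str, prompt: str) -> str:
    """Placeholder for a real model call (e.g. an HTTP request to a model API).

    Stubbed so the sketch is self-contained; it just echoes its inputs.
    """
    return f"[{system}] {prompt}"


def refine_request(raw_request: str) -> str:
    """First pass: turn a vague user/PM/stakeholder request into a sharper spec."""
    return call_llm(
        system="You rewrite vague feature requests into precise, testable specs.",
        prompt=raw_request,
    )


def implement(spec: str) -> str:
    """Second pass: hand the refined spec to the code-generating model."""
    return call_llm(
        system="You implement the given spec.",
        prompt=spec,
    )


if __name__ == "__main__":
    user_request = "make the dashboard faster"
    spec = refine_request(user_request)  # refiner fronts the raw request
    code = implement(spec)               # implementer only sees the refined spec
    print(spec)
    print(code)
```

The point of the shape, not the stubs: the implementing model never sees the raw request, only the refiner's output, which is where "the stakeholder could be wrong" gets absorbed.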



