
This is a hopeful evolutionary path. My concern is that I can literally feel Conway's law emanating from current LLM approaches as they switch between the actual LLM and the governing code around it, which layers a bunch of conditionals of the form:

    if unspeakable_things: return negatory_good_buddy

I see this happen a few times per day: the UI triggers a cancel event on its own fake typing animation and overwrites a response that has already half-rendered the trigger-warning-inducing content.
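Concretely, the failure mode reads like a post-hoc classifier racing the token stream. A minimal sketch of that pattern, with every name (governed_stream, moderation_flags, REFUSAL_TEXT) hypothetical rather than any vendor's actual API:

    # The governing layer renders tokens as they arrive while a separate
    # moderation check trails behind on the accumulated transcript.
    REFUSAL_TEXT = "I'm sorry, I can't respond to your prompt. Please try something else."

    def moderation_flags(text: str) -> bool:
        """Stand-in for a post-hoc classifier; True on 'unspeakable things'."""
        banned = {"example_banned_phrase"}
        return any(phrase in text.lower() for phrase in banned)

    def governed_stream(stream_tokens, render, overwrite):
        rendered = []
        for token in stream_tokens:
            rendered.append(token)
            render(token)  # the user watches a half-rendered answer appear
            if moderation_flags("".join(rendered)):
                overwrite(REFUSAL_TEXT)  # the cancel event described above
                return

    # Usage with stub callbacks:
    governed_stream(iter(["Step 1: ", "shave ", "the undercut..."]), print, print)

Because the check trails the render, the answer appears and then vanishes. The obvious alternative, buffering output until the check clears, trades away perceived latency, which may be why the race ships.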

It's pretty clear from a design perspective that this is intended as a proxy for facial expressions, while also being worthy of an MVP postmortem discussion about what viability means in a product whose unintended consequences only surface at runtime.



This happened to me today on a prompt that, as far as I can tell, doesn't fit what my original post called "unspeakable things":

* design a men's haircut combining a 1/4" shaved undercut around the ears and neck with a longer 2" crown, intended to provide cover from the sun on top.

The AI then interrupted itself mid-stream yet again, after it had already answered the prompt to completion with step-by-step instructions for executing such a haircut, overwriting its answer with:

* I'm sorry, I can't respond to your prompt. Please try something else.

My general impression is that there is near-zero quality-control oversight on this team, and, to their credit, that's unusual in my experience observing and using M$ software post-Nadella.



