
There are many reasons ChatGPT could neither ask clarifying questions, nor draw on examples of clarifying questions, nor interpret all the levels and purposes of code that was never written before.

Language alone does not capture most business processes, efficiency goals, or even security restrictions; people and logic do. Little of that knowledge (!) is in the written word. It is carried mostly by past failures and by practices that are historically and culturally obvious.

Emulation does not mean 'comprehension' of all the purposes involved in the craft of handling data, whether for business or even for computation.

Most AI-solved problems amount to search reduction over spoon-fed problem domains.



Asking clarifying questions is not always socially acceptable! Then what?

Some random things that seem difficult to get an AI to deal with:

1) Things that people expect you to learn by observation, without being told and without saying them aloud.

2) Things that people expect you to know/adhere to, in spite of being told something different.

3) Strategically creating the impression that you do or do not have the ability to change your course of action.


The stuff you're describing is something GPT-4 is already adept at, much more so than writing code.



