Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

This is interesting to expanding upon.

Conceivably, prompt injection could be leveraged to make LLMs give bad advice. Almost like social engineering.



Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: