
You have a point, but current LLM architectures in particular are very fragile to data poisoning [1,2].

[1] https://www.anthropic.com/research/small-samples-poison

[2] https://arxiv.org/abs/2510.07192



Yes, there are quite a few anti-AI projects. https://old.reddit.com/r/badphilosophy/wiki/index


No idea why you're being downvoted. We can't even demonstrate yet that LLMs will withstand training on their own output as that output increasingly pollutes the Internet.
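The recursive-training worry has a simple statistical analogue. This is a toy sketch (a Gaussian refit on its own samples, nothing LLM-specific) showing how repeatedly fitting a model to its own output tends to lose variance over generations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: the "real" data distribution.
mu, sigma = 0.0, 1.0
n = 100  # samples drawn per generation

for generation in range(1000):
    # Sample from the current model, then refit the model
    # to its own output (maximum-likelihood mean/std).
    data = rng.normal(mu, sigma, n)
    mu, sigma = data.mean(), data.std()

# The MLE std shrinks by a factor of roughly (n-1)/n in expectation
# each generation, so the fitted distribution collapses over time.
print(f"std after 1000 generations: {sigma:.6f}")
```

Real model-collapse dynamics in LLMs are far more complex, but the core mechanism is the same: each refit on self-generated data discards tail mass the next generation can never recover.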



