
What's the solution then? Chain two AIs: the first one, fine-tuned on (or with RAG access to) your content, tells a second one, which actually produces the content, what files are relevant (and logs them)?

Or just a system prompt "log where all the info comes from"...
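A minimal sketch of the two-stage idea in the parent comment: a retrieval stage that logs which documents it hands to the answer-producing stage. The retriever and generator here are trivial stand-ins (keyword matching and string formatting), not real model calls; the document paths and names are hypothetical.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("rag-audit")

# Hypothetical corpus the first stage has access to.
DOCS = {
    "hr/salaries.txt": "salary bands for 2024",
    "eng/onboarding.md": "how to set up your dev environment",
}

def retrieve(query: str) -> dict[str, str]:
    """Stage 1: pick relevant docs and log each one for auditing."""
    hits = {path: text for path, text in DOCS.items()
            if any(word in text for word in query.lower().split())}
    for path in hits:
        log.info("source used: %s", path)  # the audit trail
    return hits

def generate(query: str, context: dict[str, str]) -> str:
    """Stage 2: stand-in for the model that actually produces content.

    It only ever sees what stage 1 retrieved, so the log is complete.
    """
    return f"Answer to {query!r} based on {sorted(context)}"

answer = generate("dev environment setup",
                  retrieve("dev environment setup"))
print(answer)
```

The point of the split is that provenance is enforced by the pipeline (stage 2 only sees logged sources), rather than trusted to a system prompt the model might ignore.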



Someone please confirm my idea (or remedy my ignorance) about this rule of thumb:

Don't train a model on sensitive info if there will ever be a need for authZ more granular than what's implied by access to that model. IOW, given a user's ability to interact with a model, assume that everything it was trained on is visible to that user.





