Hacker News

This is like someone saying "I am much more worried about the implications of dumb humans using flintlock muskets in the near term than I am about the theoretical threat of machine guns and nuclear weapons." Surely the potential for both misuse and mistakes goes up the more powerful the technology gets.


Rather loaded analogy. We're well aware of the practical threat nuclear weapons pose; you're assuming a lot by comparing them with AGI. It's as valid to say it's like someone in the 1980s explaining that they're much more worried about the dangers of poorly designed and operated Soviet fission reactors than about the theoretical threat of fusion (sure to become economical in the next twenty years!)


That's fair, but to keep going with the analogy: we are currently the Native Americans in the 1500s, and the Conquistadors are coming ashore with their flintlocks (ML). Should we be more worried about them, or about the future B-2 bombers, each armed with sixteen B83 nukes (AGI)?

I understand that the timeline may be exponentially more compressed in our modern case, but should we ignore the immediate problem?

In this analogy, the flintlocks could be actual ML-powered murder bots, or just ML-powered economic kill bots, both fully controlled by humans.

The flintlocks enable the already powerful to further consolidate their power, to the great detriment of the less powerful. No super AGI is necessary; it just takes a handful of human Conquistador sociopaths with >1,000x "productivity" gains to erase our culture.

I don't understand how we could ever get to the point of handling the future B-2 nuke problem, as a civilization, without first figuring out how to properly share the benefits of the flintlock.





