Hacker News

AI will become a troubling force multiplier for income inequality and general enshittification, and those things will destabilize society. But the idea of it turning against us, consciously hurting us, if that's what they mean, is not on my list of worries.


The latter is definitely on OpenAI's radar. They're even hiring specifically for this problem: https://openai.com/careers/research-scientist-superalignment


Not necessarily a bad idea to have someone thinking about these topics; the role seems more nuanced than saving us from Skynet.

But where's the team protecting us from a future where we have no shared experience or discussion of media anymore, because everyone is watching their own personalized X-Men vs. White Walkers AI fanfic?


>But the idea of it turning against us, consciously hurting us,

You keep using the word "consciously" without understanding that it's not needed at all.

A virus doesn't consciously kill you. It's the closest thing we have to computer code in the wild, really. It gets into your body and executes again and again until everything falls apart.

As a human, for one, you probably don't have the skill and ability to make viruses, and for two, the people who do care deeply about the ethical considerations, because they are human and would be affected by them.

As a smart robot application making and testing viruses, there are no ethical considerations for yourself. You (the AI entity) would not be affected by your own creations. 'You' may not even consider ethics at all. Instead you pump out trillions and trillions of different strings and see what they do, until one day the meat puppets that tap your keyboards suddenly stop showing up.


You're right, plain old software or software using some kind of AI is a tool that can be used badly and hurt us. That's where we already are and have been for a long time.

Consciousness and sentience are just the things that would be novel about this situation, if they ever happened.


The entire point of the conversation is that once consciousness/sentience happens, it's too late. We don't want to arrive at it by accident in a piece of software that has encoded a vast amount of the world's knowledge.

The 'if they ever happen' part is just a thought-terminating cliché. Of course I have to say that's "in my opinion"; neither of us has the benefit of hindsight in this case. But I don't believe humans are magical in any way. By the random walk of evolution, a self-perpetuating creature that is also intelligent was formed. It certainly seems it should be possible, without a trillion^trillion tries, to make that happen in another substrate. I also find it odd to assume that by this random walk evolution found the only means to reach intelligence, and in doing so the most optimal form. Life has to self-perpetuate within the confines of its own machinery. Raw intelligence isn't a carbon-based lifeform and shouldn't have the same limits.


Well, I don't necessarily disagree; I just have so little expectation that this will happen in my lifetime that I don't give the topic much thought.

I don't think humans are magical in a spiritual sense, but given how little we understand ourselves, we may as well be, at least for the purpose of recreating consciousness.

The concerns that are more mundane extensions of the ways tech is already shitting up our lives are just so much more real and immediate.


Who needs AI to hurt us when we have plenty of humans to do the job today?



