
That's not everyone. That's major strategic powers. If everyone (in the literal meaning of the term) had nukes we'd all be dead by now.


The nuke analogy only applies if the nukes in question also work as anti-nuclear shields. It's also a false equivalency at a much more fundamental level: AI enables all kinds of processes and innovations, not just weapons and defence.


AI of course has the potential for good—even in the hands of random people—I'll give you that.

Problem is, if it only takes one person using AI malevolently to end the world, then human nature is unfortunately something that can be relied upon to supply that person.

Preventing that scenario is likely to be far more complicated than causing it. That, in my view, is the fundamental issue: it's much easier to destroy the world with AI than to save it.

To use your own example: there are currently far more nukes than there are systems capable of neutralizing nukes, and the reason is the complexity inherent in defensive technology; it's vastly harder to build.

I fear AI may be not much different in that regard.


It's not a false equivalency with respect to the question of overriding concern, which is existential safety. Suppose nukes somehow also provided nuclear power.

Then, you could say the exact same thing you're saying now... but in that case, nukes-slash-nuclear-energy still shouldn't be distributed to everyone.

Even nukes-slash-anti-nuke-shields shouldn't be distributed to everyone, unless you're absolutely sure the shields will scale up at least as fast as the nukes.



