Hacker News

Such an entity would not be hyper-intelligent. It would be idiotic. One huge hole for me in the paperclip argument is that an AI capable of that kind of power would not be stupid enough to misinterpret a command - it would be intelligent enough to infer human desires.



Yeah, but why would it want to? I can perfectly infer the values of an earthworm, but I don't dedicate all my resources to making worms happy.


Of course it would. But it's not programmed to care about what you meant to say; it will gladly do what it was mis-programmed to do instead. You can already see this kind of trait in humans, where instinct is misaligned with the intended result, such as procreation for fun plus birth control.


You're making the assumption that human desires would matter to an AI.


Sure it would. It just wouldn't be friendly to you.



