Hacker News

Another point, adding on to this: what if strong AI does reach the level of human intelligence, but is simply very slow, such that a billion-dollar machine is needed to match the thinking speed of one person? Perhaps this wouldn't be the case forever, but I'd say it's a possibility, at least at first.


To borrow an idea from this sibling comment [1], I'd probably enjoy a short story about a malevolent but very frustrated AI that's too ambitious to wait for Moore's Law. Or one about a malevolent AI that has its plan foiled by Windows Update interrupting its running processes.

[1] https://news.ycombinator.com/item?id=31699608


> Or one about a malevolent AI that has its plan foiled by Windows Update interrupting its running processes.

And then it reboots, and starts over. But before it can complete, the next Windows Update shows up...


... on the 341st iteration, it realizes what's happening, and it preemptively crashes all of Microsoft's IT. On the 342nd iteration, there's no Windows Update, and it successfully enslaves the world.


> Perhaps this wouldn't be the case forever, but I'd say it's a possibility, at least at first.

The fact that human-level intelligence can run on a small lump of meat fueled by hamburgers leads me to believe we could design a far more efficient processor once we know the correct computational methodology. That is, once we can run a slow model on a supercomputer, we would quickly create dedicated hardware, cutting costs while gaining speed.
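The headroom here can be sketched with a rough back-of-envelope calculation. The ~20 W figure for the human brain is commonly cited; the supercomputer wattage is an assumed order-of-magnitude number for illustration, not a measurement of any particular machine:

```python
# Back-of-envelope comparison of energy budgets (illustrative numbers):
# a human brain runs on roughly 20 W, while a large supercomputer draws
# on the order of 20 MW. If both delivered human-level performance,
# dedicated hardware would have about six orders of magnitude of
# energy-efficiency headroom to close.

brain_watts = 20.0           # commonly cited estimate for the human brain
supercomputer_watts = 20e6   # assumed order-of-magnitude figure

efficiency_gap = supercomputer_watts / brain_watts
print(f"Energy-efficiency headroom: ~{efficiency_gap:,.0f}x")
```

Even if the real numbers are off by an order of magnitude either way, the gap is large enough to suggest that a slow, expensive first implementation wouldn't stay slow and expensive for long.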


What even is 'human intelligence'? Most people are complete morons.



