Predicting the Future of Computing (nytimes.com)
38 points by klwolk on Dec 7, 2011 | hide | past | favorite | 26 comments



Too bad you can't move the Mobile Wallet prediction back to the time when mobile wallets actually became common in Japan.

The AI singularity will probably end with a space-faring AI that leaves our asses here. It will be too smart to share that knowledge with us before it leaves. To it, we will be a concern that might inhibit those plans, not much more.

However, if we mistakenly build something that closely mimics the human brain, it may not end up growing exponentially: it might decree that further self-improvement is unnecessary for world domination, and then go nuts and turn all Skynet on us anyway.


Here's my prediction: the future won't be like we imagine it.


The AI future timeline is scary. Why are people so arrogant as to be sure we can hand all the power to machines and still control them however we want?


We don't control the AI. If we manage to build a self-improving AI (first hard problem), it will take over the world, and there will be nothing we can do about it. Assuming of course that the Intelligence Explosion scenario is correct. I believe it to be likely.

Now the second, crucial, super-hard problem is making sure the AI is Friendly. Even harder, define what "Friendly" could possibly mean. It's a Literal Genie you have there, be careful what you ask for. A tiny mistake may mean we're all doomed.

Oh, and we're not sure at all it will actually go well. But it might be worth a shot. (What is not worth a shot is launching an AI first and checking that it's a Good™ one after the fact. You need to be super-certain of your design before you launch it.)


I haven't read a convincing argument on how such an entity would actually take over. She would have to be endowed with physical means, not just intelligence. She'd have to have humans do her bidding to a great extent and for a long time.


http://yudkowsky.net/singularity/aibox

Basically, all the AI needs is a text terminal interface. From there, it will almost certainly convince an operator to let it out, then take over vulnerable computers on the internet. At that point it will have many more humans at its disposal to do its bidding.

Even if you're not as certain as I am, keep in mind that Eliezer Yudkowsky is but a human (though a very smart one), yet he did get out of the box. I find it hard to imagine that anything smarter than him couldn't do the same.


"keep in mind that Eliezer Yudowsky is but a human (though a very smart one), yet he did got out of the box"

Not quite sure what this means. Any references?


Edit: you're right, I wasn't clear. An experiment was run in which Eliezer and another person talked for two hours over IRC. Eliezer played the AI, and the other played the Gatekeeper. The AI is supposed to convince the Gatekeeper to "let it out" by the end of those two hours. No word play or tricks: the Gatekeeper has to make a conscious decision for the AI to win. The result is then acknowledged publicly, via PGP-signed e-mail, by the losing party. Eliezer won twice, against people who had publicly stated that there was no way an AI could convince them. Even though they could have just said no, they didn't, and later sent the e-mail acknowledging they had let the AI out.

Relevant links:

http://yudkowsky.net/singularity/aibox

http://en.wikipedia.org/wiki/AI_box

http://rationalwiki.org/wiki/AI-box_experiment


By the time strong AI rolls around, robots will probably be able to perform most tasks people could. It might not even need humans. And even if it did - humans are easy to bribe and manipulate.

If an AI could outperform a human CEO, it would eventually replace them. It's just evolution: more conservative corporations would get left in the dust. I imagine it could become a scenario similar to the nuclear arms race - everyone's afraid of the weapons, but they still feel forced to build them out of fear.


Practically, if you let her become much smarter than humans and trust her enough to indirectly control stuff - e.g. write software, correspond with people, etc. - she could engineer an escape. (less sensational versions of plots like Eagle Eye & Terminator)


What we have to understand is that a machine with intelligence doesn't necessarily have emotion, as you rightly pointed out with your "Friendly" remark. However, this doesn't mean it will take over the world; it just means it will do what it was meant to do and get better at it - that's intelligence. It may even start working on things it hasn't been told to work on, but that doesn't mean it's going to abolish governance and force humans into submission.


By "taking over the world", I meant establishing a Bostromian Singleton[1]. I didn't want to predict anything more, and certainly not anything resembling a human dictatorship.

I'm also not sure you could really limit your AI's reach. If for instance you give it a limited goal, like solving the Riemann Hypothesis, it could transform the solar system into a giant computer, killing off humanity along the way, just to solve the freaking mathematical problem.

Assuming you could limit the reach of your AI, you'd probably need to have a working Friendliness Theory to do that. At that point, I suggest you save the world instead.

[1] http://en.wikipedia.org/wiki/Singleton_%28global_governance%...


I'm not sure that having a singleton is such a bad idea.

However, in your second scenario I highly doubt that an AI with transhuman reasoning would be unable to work out that (bear with me here) its reason for solving said problem exists only because humans exist, and that if they were killed off, its reason for solving the problem would be redundant.

This doesn't mean we can't limit the AI, though. Give it sufficient hardware and let it solve the problem, and don't converse with it or contact it in any manner unless it has solved the problem or requires more hardware. That would be one way of limiting it, although it's like slavery and I don't like the concept.


Oh, I did think about your reasoning. It just needs to keep the one who asked the question, then. Killing everyone else won't prevent it from telling the answer to the researcher.

Even with limited hardware, the AI can still have a non-trivial influence over the world, provided it can self-improve. If you limit the hardware to present-day computers and ensure the format of the answer is highly constrained, you may be safe. The AI will be much more limited, though.


Even if we build an AI that can be controlled, I expect it would be used to enable a fascist dictatorship by the few in control. :P


I agree - that was the same point I tried to make, yet I got downvoted for it.


lol at any prediction after AI.


"2080 Bicentennial Man"

Didn't occur to them that that person would have to be 131 years old now....
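
The arithmetic, as a minimal check (assuming "bicentennial" means turning 200 in 2080):

    # Turning 200 in 2080 implies a birth year of 1880,
    # which makes that person 131 in 2011, when this list was published.
    birth_year = 2080 - 200   # 1880
    print(2011 - birth_year)  # 131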


Where can we vote that none of this will happen this way?


The bell curve of educated guesses about things that can't be known for sure usually settles on something close to correct. Maybe they're onto something! (At least, this usually works for guessing the amount of candy in a jar.)
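
A toy simulation of the candy-jar effect (all numbers here are assumptions for illustration; it presumes unbiased guesses with independent errors):

    import random

    # Toy "candy jar" demo: individual guesses are noisy, but their average
    # tends to land near the true count when errors are unbiased and independent.
    random.seed(0)
    true_count = 500   # assumed jar size, for illustration
    guesses = [random.gauss(true_count, 150) for _ in range(1000)]
    print(f"true: {true_count}, mean guess: {sum(guesses) / len(guesses):.0f}")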


You've hit the nail on the head though - "educated". Perhaps unfair, but I find it hard to believe that people making guesses on these items could be genuinely educated regarding the issues surrounding them.

I completely include myself in this category - what do I know about the logistical changes needed to adapt manufacturing for some of these devices, or about market demand, etc.? I can speculate about the technology, but even then it's just speculation based on a totally superficial understanding. It's so easy to ignore things like FDA approval, production scale-up, resource shortages, etc.


Is it just me, or is everyone overly optimistic?


Think of each prediction as "I estimate there's a 50% chance of this happening by that date." Now, there are so many predictions that together they add up to a very detailed, and therefore improbable, story.

It may explain a good deal of perceived optimism.
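
A back-of-the-envelope sketch of why the conjunction is improbable (the count of 20 predictions is an assumption, purely for illustration, and the predictions are treated as independent):

    # Joint probability that N independent 50/50 predictions all come true.
    n_predictions = 20   # assumed count, for illustration
    p_each = 0.5
    p_all = p_each ** n_predictions
    print(f"P(all {n_predictions} correct) = {p_all:.7f}")   # ~0.0000010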


It's constantly updated. Nice.


This is quite a decent prediction, except for the Sci-fi in the AI future timeline!


a million lemmings can't be wrong, yeah





