
Maybe it's precisely because it can't be argued against that it's not much use? There's no way, at the moment, to tell whether it will ever happen, no way to tell how intelligent we can make computers, and so on. You can't reliably argue for or against it.

So what's the point? I mean, you may want to form a group and discuss those scenarios, but that kind of substance-free fear mongering in the media isn't excusable. It's unscientific, in a way.

We don't really need to start writing traffic rules for flying cars.



I disagree that it can't be argued against. It's a set of logical steps, any one of which can be critiqued:

1 - There's no real dispute that computers will eventually be as "powerful" as brains. In fact, it's likely that they will one day be far more powerful.

2 - Given that the brain evolved naturally, there's no reason to assume humans won't eventually duplicate the software once we have the hardware. It's really only a matter of when.

3 - An AGI with equal or greater power than the human brain will, eventually, be able to change its own code and improve upon itself.

4 - Given the characteristics of intelligence as it has been observed thus far in nature, even small increases in IQ lead to massive improvements in capability that are difficult for lesser intelligences to comprehend. A sufficiently advanced AGI would not only be highly capable, but would quite possibly be too smart for us to predict how it might behave. This is a dangerous combination.

Further complicating things, we might hit step 2 by accident, without realizing we've hit it. Or some group might accomplish step 2 in secret.

What I'd like to see someone do is argue that the chance of these things happening is so small, and/or the consequences so minuscule, that it's not worth worrying about or planning for.


Step 4 is not at all obvious and would require some significant justification.

General intelligence is, well, general. My primate brain may be entirely unable to intuit what it's like to move on a 7-dimensional hypersurface in an 11-dimensional space, but thanks to the wonders of formalized mathematics I can work out and tell you anything you want to know about higher-dimensional geometry. If the super-intelligence itself is computable, and we have the machinery to verify portions of the computation ourselves, it is in principle understandable.
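
To make that concrete, here's a toy sketch (my own illustration in Python, not from the comment above): a few lines of formalized math answer a question no primate intuition can, namely the volume of an n-dimensional ball.

    # Hypothetical illustration: exact facts about 11-dimensional space,
    # computed by a 3D-bound primate via the standard Gamma-function formula.
    from math import pi, gamma

    def ball_volume(n: int, r: float = 1.0) -> float:
        """Volume of an n-ball: V_n(r) = pi^(n/2) * r^n / Gamma(n/2 + 1)."""
        return pi ** (n / 2) * r ** n / gamma(n / 2 + 1)

    for n in (2, 3, 5, 7, 11):
        print(n, ball_volume(n))
    # The unit ball's volume peaks near n = 5, then shrinks toward zero:
    # a fact no spatial intuition delivers, but the formula makes obvious.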

Of course there will be computational limits, but that's hardly anything new. There is no single person who can tell you absolutely everything about how a Boeing 787 works. Not even the engineers that built the thing. They work in groups as a collective intelligence, and use cognitive artifacts (CAD systems, simulators, automated design) to enhance their productive capacity. But still, we manage to build and fly them safely and routinely.

There is no law of nature which says that human beings can't understand something which is smarter than them. Deep Blue is way smarter than me or any other human being at chess. But its operation and move selection are no mystery to anyone who cares to investigate its inner workings.
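
To ground the Deep Blue point: its move selection was built on minimax search with alpha-beta pruning (plus a handcrafted evaluation function and custom hardware). Below is a minimal sketch of that algorithm over a made-up toy game tree; the tree and values are hypothetical, not Deep Blue's actual code.

    # Minimax with alpha-beta pruning over a hypothetical toy game tree.
    def alphabeta(node, depth, alpha, beta, maximizing, children, value):
        """Return the minimax value of `node`, skipping branches that
        cannot change the final decision."""
        kids = children(node)
        if depth == 0 or not kids:
            return value(node)
        if maximizing:
            best = float("-inf")
            for child in kids:
                best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                           False, children, value))
                alpha = max(alpha, best)
                if alpha >= beta:
                    break  # the opponent will never allow this line
            return best
        best = float("inf")
        for child in kids:
            best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                       True, children, value))
            beta = min(beta, best)
            if alpha >= beta:
                break  # the maximizer already has a better option
        return best

    TREE = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
    LEAF = {"a1": 3, "a2": 5, "b1": -2, "b2": 9}
    print(alphabeta("root", 3, float("-inf"), float("inf"), True,
                    lambda n: TREE.get(n, []), lambda n: LEAF.get(n, 0)))
    # -> 3: the maximizer takes branch "a"; leaf "b2" is never even
    # examined, because branch "b" gets pruned right after "b1".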


I agree that it's not impossible for us to understand something smarter than us. But I don't particularly like your examples. Understanding a static, well-organized, human-designed system like a 787 or a chess-playing algorithm is far simpler than understanding the thoughts and desires of a dynamic intelligence.

A better analogy would be to stick to IQ. How long would it take a group of people with an average IQ of 70 to understand and predict the workings of the mind of a genius with an IQ of 210? Probably a very long time, if ever. What if the genius was involved in direct competition with the group of people? She'd be incentivized to obscure her intentions and run circles around the group, and she'd likely succeed at both.
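
A quick aside on the scale itself (my own arithmetic, not part of the argument above): IQ is normed to mean 100 with standard deviation 15, so an IQ of 210 sits roughly 7.3 standard deviations out, rarer than one person in ten trillion.

    # Back-of-the-envelope sketch (my own): tail probability of a normal
    # distribution, i.e. P(IQ >= 210) on the standard scale.
    from math import erfc, sqrt

    def iq_tail(iq, mean=100.0, sd=15.0):
        z = (iq - mean) / sd
        return 0.5 * erfc(z / sqrt(2))

    print(iq_tail(210))  # ~1e-13: the scale has effectively run out of people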

Just how intelligent might an AGI become given enough compute power? A 3x IQ increase is a conservative prediction. 10x, 100x, or even 1000x aren't unimaginable. How can we pretend to know what such an intelligence might think about or care about?


> We don't really need to start writing traffic rules for flying cars.

This is a great metaphor, and one I wish I had thought of sooner.


It's a terrible metaphor, or at least it argues against your own position. If flying cars were in fact on the horizon, we would need discussions about airspace, commute lanes, and traffic rules. Otherwise lethal mid-air collisions would be far more probable, with significant bystander and property damage as the wreckage fell out of the sky.

And, no surprise, we're seeing exactly this discussion going on right now about drones.



