When I started school, my dream was to figure out a theory to underpin a grand unified model of artificial intelligence. Imagine my disappointment once I started studying the subject in detail.
Most functional AI nowadays consists of algorithms that are carefully tuned to solve a very specific problem in a narrowly defined environment. All of that research is pushing the boundaries of a local optimum. Right now, true AI is a pipe dream without a fundamental shift in how we approach AI-type problems. And no, machine learning/deep learning is not that shift; it is just another flavor of the same statistics that everybody already uses.
What concerns me is not Skynet; what concerns me is the exasperating over-confidence that some people have in our current AI capabilities, on Hacker News and elsewhere. Too often, we discuss such technology as a miracle tonic to various economic or social woes, but without acknowledging the current state of the technology and its limitations (or being completely ignorant of such), we might as well be discussing Star Trek transporters. And usually, the discussion veers into Star Trek territory. Proponents of self-driving cars: I AM LOOKING AT YOU.
Take self-driving cars: at least with humans, our failure modes are well-known. We cannot say the same for most software, especially software that relies on a fundamentally heuristic layer as input to the control system. To that mix, add extremely dynamic and completely unpredictable driving conditions -- tread lightly.
The key to self-driving cars is to realize that they don't have to be perfect - they just have to be better than us. It's not that the AI driver is so good - it's that human drivers are SO BAD! I agree with you that AI is a pipe dream, but I do think self-driving cars will succeed. I don't think the computers will ever match our judgement, but it's trivially easy for them to beat us on attention span and reaction time, which will make them better drivers.
> The key to self-driving cars is to realize that they don't have to be perfect - they just have to be better than us.
Again, that's skirting the issue. Do you have any idea how close self-driving cars are to being "better than us"? As someone who's done computer vision research: not close at all.
> I don't think the computers will ever match our judgement
That is exactly the problem.
> it's trivially easy for them to beat us on attention span and reaction time
Attention span and reaction time are not the hard parts of building an autonomous vehicle.
This kind of comment beautifully illustrates the problem with casual discussions about AI technology. Humans and computers have very different operating characteristics, and discussions all focus on the wrong things: typically, they look at human weaknesses and emphasize where computers are obviously, trivially superior. What about the converse: the gap between where computers are weak and humans are vastly superior? More importantly, what is the actual state of that gap? That question is often completely ignored, or dismissed outright. Which is disappointing, especially among a technically literate audience such as HN.
I suspect that the current google car is already safer than the overall average driver.
Don't forget some people speed, flee from the cops, fall asleep at the wheel, get drunk, text, look at maps, have strokes etc. So, sure, at matching peak human performance the cars have a long way to go. However, accidents often come from exactly those worst-case behaviors, and computers are very good at paying attention to boring things for long periods of time.
PS: If driverless cars on average killed 1 person each year per 20,000 cars then they would be significantly safer than human drivers.
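As a rough sanity check on that number (assuming on the order of 33,000 US road deaths per year and roughly 250 million registered vehicles - mid-2010s ballpark figures, not exact ones):

    \frac{33{,}000\ \text{deaths/yr}}{250{,}000{,}000\ \text{vehicles}} \approx 1.3\times 10^{-4}\ \text{deaths per vehicle-year} \approx \frac{1}{7{,}500},
    \qquad \text{versus the hypothesized}\ \frac{1}{20{,}000} = 5\times 10^{-5}.

So 1 death per 20,000 cars per year would indeed be roughly 2.5-3x fewer deaths per vehicle-year than human drivers produce today.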
> Don't forget some people speed, flee from the cops, fall asleep at the wheel, get drunk, text, look at maps, have strokes etc.
Again you are falling into the same pit: a nonsensical comparison of human and computer operational/failure modes. Of course computers can't have strokes. And yes, they are good at "paying attention to boring things". That is trivially true. And that's not where the discussion should be focused.
I do hope self-driving cars will be generally available sooner rather than later. What's not to like about them? But what I'm really curious about is how that availability will be qualified. Weather, road, visibility conditions? Heavy construction? Detours? Will this work in rural areas or countries that don't have consistent markings (or even paved roads!)? Will a driver still have to be at the wheel, and to what extent will the driver have to be involved?
What is really annoying are breathless pronouncements about a technology without critically thinking about its actual state and implementation. We might as well be talking about Star Trek transporters.
A car that can get sleepy or drunk people home 80% of the time would be a monumental advantage and would likely save thousands of lives a year.
Basically, an MVP that flat out refuses to operate on non-designated routes, in bad weather, or even at highway speeds could still be very useful.
PS: Classic stop-and-go traffic is another area where speeds are low and conditions are generally good. But because people can't pay attention to boring things, you regularly see accidents that create massive gridlock and cost people days per year sitting in traffic.
> I suspect that the current google car is already safer than the overall average driver.
Based on what? Even Google admits that the car is essentially blind (~30 feet visibility) in light rain. They've done little to no road testing in poor weather conditions. The vast majority of their "total miles driven" are highway miles in good weather, with the tricky city-driving bits at either end taken over by humans.
Google often leaves the impression that, as a Google executive once wrote, the cars can “drive anywhere a car can legally drive.” However, that’s true only if intricate preparations have been made beforehand, with the car’s exact route, including driveways, extensively mapped. Data from multiple passes by a special sensor vehicle must later be pored over, meter by meter, by both computers and humans. It’s vastly more effort than what’s needed for Google Maps.
A self-driving car that can only do highways and can't drive in bad weather still destroys the trucker industry overnight. And that is today.
No, I'm not drinking the AI Kool-Aid here, thinking there will be some breakout solution to the fuzzy visibility problems these cars have. The difficulty differentiating contextual moving objects correctly, knowing what is "safe" debris and what is not, etc., are all neural net problems that will take years of development and even then just be heuristics.
But you don't really need all that. All you need is something that, given a sunny day and a highway, beats the human at driving the truck, and suddenly you lose three million jobs when it hits the market.
>All you need is something that, given a sunny day and a highway, beats the human at driving the truck, and suddenly you lose three million jobs when it hits the market.
I don't see how, given a sunny day and a highway, an autonomous vehicle can quantifiably beat a human driver to the degree of 'destroying the trucker industry overnight.'
Unless you're assuming that there are few human drivers capable of managing a trip down a highway under the best possible conditions, the results between the two would have to be more or less equal. Those human drivers can, meanwhile, still manage night, sleet, snow, fog, and arbitrary backroads and detours.
It's going to be a while before they can actually eliminate the human. They've had self-driving tankers and self-driving airplanes for a while now, and they still need human operators for various reasons. They just don't have to actually drive.
> A self-driving car that can only do highways and can't drive in bad weather still destroys the trucker industry overnight.
Destroying the trucker industry with a self-driving car which can't drive in bad weather?? I don't think so!
Truckers' clients usually care a lot about predictability, and they would NOT be happy to hear "sorry, it is raining and the robot car couldn't arrive".
You might be right that they are safer, but you're totally failing to address the point. Yes, computers are a lot better than humans at lots of things, but they are worse at other things. They can't reliably tell the difference between a cat and a bench. Things like that may be important when the computer is going 80 mph with humans on board.
Why would a self-driving car need to know the difference between a cat and a bench? All it really has to know is that it is an object of a certain size and not to hit it.
The things that the car needs to know are largely standardized: the lines on the road, road signs, speed limits, etc.
>The things that the car needs to know are largely standardized: the lines on the road, road signs, speed limits, etc.
These are things humans need to know as well, and yet autonomous car cheerleaders constantly argue that human drivers are death machines, despite the fact that most human drivers know perfectly well how to follow these norms, and even deviate from them, without incident, most of the time.
I suspect that in their enthusiasm to set the bar of human intelligence as low as possible in order to make the case for autonomous cars seem urgent and necessary, some vastly underestimate the actual complexity of the problem. An autonomous car that only knows to 'avoid the boxes' has worse AI than many modern video games.
I'd settle for an AI that's better than only the worst drivers. It could monitor your driving, and only kick in if it thought you were that horrible/drunk.
To add to what has already been said, I sorta disagree with "they just have to be better than us." I would prefer the car to not just be better than an average driver, but be better than me. As any person, I'm sure I dramatically overestimate my skills, which makes that bar quite high. So it doesn't have to just be better than us, it has to be better than we (however wrongly) think we are.
Do you know there is a field studying general artificial intelligence? They have a conference[1], journal[2], and a slew of competing architectures.
What you've basically described is that mainstream "AI" is actually quite divorced from what the public thinks of as "AI" -- something matching or improving upon human-level intelligence. Still, there are people working on the original grand dream of artificial intelligence.
There is a problem with naming. There are two kinds of AI which get discussed - AI (Artificial Intelligence) and AGI (Artificial General Intelligence).
The problem with these names is that they are not easily distinguishable by the general public or others who are not already familiar with the concepts. It's like the poor coding pattern where you have two variables in your code with almost the same names - 'wfieldt' and 'wtfield' will get confused by the next person who has to maintain your code. It's the same problem with popularization of concepts.
The same thing happens in Physics, where people confuse energy with power, and think that Special Relativity must be much harder than General Relativity (it's special, right?). The words used to name things make them understandable, or not.
What's worse is that this confusion is intentional, created by people who wanted to hype their (actually valuable, but perhaps overlooked) work by confusing their 'boring' or 'limited' AI work with AGI work in the minds of those supporting or funding that work.
That's why people say things like "true AI".
There really needs to be a name for the non-AGI kind of AI, one that refers specifically to the limited-domain type of problem solving at which "AI" has been so successful. "AI" is a bad name for this.
Personally, I am partial to the metaphor of "canned thought". It shows that a human had to do the work beforehand of setting up the structure of the algorithm that is later doing the information processing, and that the process is limited by the thought that was previously put into it. But Canned Thought is also a pretty bad name for the field. It's not descriptive and it's not catchy. Anyone have any suggestions for a better name for non-AGI AI?
There really needs to be a name for the non-AGI kind of AI, one that refers specifically to the limited-domain type of problem solving at which "AI" has been so successful. "AI" is a bad name for this.
I think you are looking for Narrow Artificial Intelligence, which hasn't gained much traction. To me Narrow and General AI respectively make perfect sense for their scopes.
The core dilemma, and I'm paraphrasing I can't remember who, is that for every other problem given to computer scientists to solve, there already exists an understanding of what the problem space is and what a solution is required to do. The computer scientist just architects and implements the solution in software, but they're implementing business logic, or physics equations to guide a spacecraft, or "route this car through the mapped 3D environment and follow these rules and don't hit anything".
But sentience simply is not defined. There are no equations for it, no foundational science modeling it. It's possibly the ultimate unanswered question of all human history, and so the task given to computer engineers is: implement this thing we haven't, as a species, been able to explain or describe or comprehend for 10,000 years.
And really, it's a bit much to ask. The neuroscientists could have a eureka moment tomorrow and finally crack the code of sentience, and once it's documented I'm confident some form of rudimentary synthetic implementation would be possible within a decade. But I'm not sure many neuroscientists, or practically anyone (Tononi/Tegmark/Hawkins/?), are directly attempting to investigate and build theories of consciousness.
That theory of consciousness would be the instruction book for the programmers building the first hard AI.
But why would sentience or consciousness be necessary for (or even relevant to) a system's utility and/or hazard? Put differently, it seems like what's really relevant is its cognitive ability, model of the world, decision-making skills, and so on, all so-called "easy problems". Despite the designation they're not truly easy problems to investigate; on the other hand, neuroscientists and other researchers are making steady, semi-predictable progress on these questions every year.
So what role is left for sentience? Maybe it is the idea that only sentience could cause something to act "on its own", to have "intentions"? To that I would ask if any creature, sentient or otherwise, can be said to act free from its "programming", genetic, cultural, etc.
Alternately, maybe you think sentience would be necessary for an entity to perform unexpected actions in furtherance of subgoals (a weaker interpretation of "intention"). Or conversely: without sentience, computers can only perform actions which are trivial consequences of the orders they have been given. But, of course, even computers today do all kinds of things "automatically". If I ask for a browser tab to open on my laptop, memory gets freed and allocated, various fans turn on and off, and lots more happens that would be very difficult for me to predict (even if I designed the system). Again, sentience seems entirely beside the point.
I'm worried about the descendants of Bonzi Buddy. I'm worried about annoying little algorithms that are simple-minded but have access to vast quantities of data I'd rather they didn't, and that have means of actuating things in the real world that could prove annoying.
Everybody seems so goddamned thrilled over this stuff that nobody seems to be worried about the mundane but harmful things we can accomplish and enable with current-day technology.
"All representatives are busy, please stay on the line. Meanwhile here's a friendly word from Amazon. We notice you've been searching for baby clothes recently. We will now read a long list of baby-related products to you..."
Imagine a bot whose only job is to squat a hashtag and intermittently harass people who mention it.
Now stop imagining, because we've already got those.
Imagine another bot that just looks for bug trackers and spams incomprehensible combinations of issues generated by mining previous issues. It can be hard, on a good day, to tell the difference between a frustrated, inarticulate user and Markov-chain spam.
What concerns me is not Skynet; what concerns me is the exasperating over-confidence that some people have in our current AI capabilities, on Hacker News and elsewhere.
Yes exactly!!
Each AI winter is preceded by unrealistic expectations. If the unrealistic expectations this time are resoundingly fearful, I worry that the next AI winter will be even worse - bad enough to create legislation and basically kill the field.
>Too often, we discuss such technology as a miracle tonic to various economic or social woes, but without acknowledging the current state of the technology and its limitation
What you describe is one half of Amara's Law:
"We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run"
What you demonstrate with your comments is the other half :)
A lot of technologies were over-hyped first and underestimated later, until they just appeared to be part of our lives. I'm quite sure that, sooner or later, it'll happen with AI too.
Jeremy Clarkson made a fascinating point about the self-driving car and its "intelligence", shortly before the whole fracas thing:
If you assume these cars will some day be able to identify humans, what decision is made in a situation where the car can either swerve and (potentially) injure a bystander, or continue on a trajectory which it calculates to be certain death for its occupants?
What shocks me is how few people understand that this would not be a "decision" in the sense that the software would have a choice. It would either be coded in directly or would be a resultant output of the criteria it uses to already do its pathfinding.
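To make that concrete, here's a toy sketch of what "coded in directly or a resultant output of the pathfinding criteria" means - every name, weight, and risk number below is invented for illustration, it's nothing like real AV code:

    # Toy illustration: the "ethical decision" is just the argmin of a
    # pre-programmed cost function over candidate trajectories.
    # All names, weights, and risk numbers here are made up.

    def trajectory_cost(traj, occupant_weight=1.0, bystander_weight=1.0):
        """Combine whatever risk estimates the planner already computes."""
        return (occupant_weight * traj["occupant_risk"]
                + bystander_weight * traj["bystander_risk"])

    candidates = [
        {"name": "stay on course", "occupant_risk": 0.9, "bystander_risk": 0.0},
        {"name": "swerve",         "occupant_risk": 0.1, "bystander_risk": 0.6},
    ]

    best = min(candidates, key=trajectory_cost)
    print(best["name"])  # whichever option the weights happen to favor

The car doesn't "choose" anything in the moment; somebody chose when they wrote the weights, whether or not they realized it.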
Eh. If we get to the point where an AI has this degree of resolution, we've solved a LOT of problems that we don't know how to solve today. And the answer is that I either get to choose a setting or you save me because I'm the owner of the AI.
I had a very passionate interest in AI in high school - did a CS degree, then was lucky enough to get a paid RA post (UK - Research Associate, a contract researcher paid for by a project where the research work overlaps to a greater or lesser extent with your PhD topic).
I don't think my own interest in AI recovered from reading Drew McDermott's "A Critique of Pure Reason" - fortunately that was about 1992, and I found this really cool networked hypertext thing that was far more interesting....
Your first sentence needs some more support. We know that some of it can be explained by statistical modelling, and we don't yet know how to explain the rest of it. Why does that make you sure that it doesn't run on statistics?
What makes intelligence is organic and doesn't seem to map easily onto statistical modeling. We can oversimplify some behaviors and claim some math describes them, but at best we scratch the surface.
A lot of respected AI researchers and practitioners are writing these "AIs are really stupid" articles to rebut superintelligence fearmongering in the popular press. That's a valuable service, and everything this article says is correct. DeepMind's Atari network is not going to kick off the singularity.
I worry that the flurry of articles like this, rational and well-reasoned all, will be seen as a "win" for the nothing-to-worry-about side of the argument and lead people to discount the entire issue. This article does a great job demonstrating the flaws in current AI techniques. It doesn't attempt to engage with the arguments of Stuart Russell, Nick Bostrom, Eliezer Yudkowsky, and others who are worried, not about current methods, but about what will happen when the time comes -- in ten, fifty, or a hundred years -- that AI does exceed general human intelligence. (refs: http://edge.org/conversation/the-myth-of-ai#26015, http://www.amazon.com/Superintelligence-Dangers-Strategies-N...)
This article rightly points out that advances like self-driving cars will have significant economic impact we'll need to deal with in the near future. That's not mutually exclusive with beginning to research ways to ensure that, as we start building more and more advanced systems, they are provably controllable and aligned with human values. These are two different problems to solve, on different timescales, both important and well worth the time and energy of smart people.
It doesn't attempt to engage with the arguments of Stuart Russell, Nick Bostrom, Eliezer Yudkowsky, and others who are worried, not about current methods, but about what will happen when the time comes -- in ten, fifty, or a hundred years -- that AI does exceed general human intelligence.
That's because it would be arguing a straw-man. So there is no reason to engage the argument.
I'm not sure I understand what you mean by "straw man" here. The usual meaning is that to attack a straw man means to argue against a position that no one actually holds, but which is easier to attack than your opponent's actual position. The concerns about the long-term future of AI are real, actual beliefs held by serious people. At this point there's a respectable literature on the potential dangers of unconstrained, powerful optimizing agents; I linked a couple of examples. These arguments are well thought through and worth engaging.
By contrast, this article and many others are implicitly arguing against an actual straw man position, the position of "let's shut down AI research before it kills us all in six months", which no serious person on either side of this debate really holds (though it's understandable how someone who only read the discussion in the mainstream press could come to think this way).
at this point there's a respectable literature on the potential dangers of unconstrained, powerful optimizing agents, of which I linked a couple of examples.
There isn't really though. Superintelligence, while an interesting book, doesn't do much more than pontificate on sci-fi futures based largely on writing from Yudkowsky. Bostrom gives no clear pathway to AGI. Neither does Yudkowsky in his own writings, or Barrat (Our Final Invention). By the way, Superintelligence is kind of an offshoot book from Global Catastrophic Risks.
All of them take these interesting approaches, WBE, BCI, AGI, etc... and just assume they will achieve the goal without looking realistically at where the field is or how it would get there.
So what they do is they say: we are here right now; in some possible fantasy world we could be there. The problem is they can't connect the dots between them.
For example, find me someone who can tell me what kind of hardware we need for an AGI. Can't do it, because no one has any idea. What about interface, what is the interface for an AGI?
Even better (this was my thesis project): what group would have a strong enough requirement that they would need to build an AGI? Industry, Academia, Military? Ok great, can they get funding or resources for it?
etc...
Note, I am not saying that there is no potential that AGI could be a problem. The point here is that nobody has firm footing to say that it will definitely be a huge risk and that we need to regulate its development.
There are actually people calling for legislation to prevent/slow AGI development - see Altman's blog post and many of the writings on MIRI. So that is what I am rallying against.
Could you argue against the position directly? You kinda took a swerve in the middle there. Bostrom, Yudkowsky, and Barrat's positions are basically "superintelligence is possible, it will eventually happen in some way, and when it does ..." It's not within the scope of the quoted works to elaborate a technical roadmap or provide firm dates for the arrival of superhuman machine intelligence.
So you would like to see unencumbered research in this area. They argue that the long term outcome of this research is existentially risky and therefore should be tightly regulated, and the sooner the better. Do you have any points regarding this particular difference of opinion?
No I can't because there isn't a way to argue it. I can't argue that it would be safe because I have nothing to point to that says it will be or even could be. We simply do not know enough about how it would be built to reason on it.
It's not within the scope of the quoted works to elaborate a technical roadmap or provide firm dates for the arrival of superhuman machine intelligence.
Correct, but they don't even reference a technical road map because there isn't one. It's too early to tell.
Do you have any points regarding this particular difference of opinion?
Not really, other than my own opinions which are as baseless as the others. I mean I have my own opinions and thoughts on things but they aren't empirically based at all, which is my main contention. We can't reason, let alone start making policy on stuff that we haven't the slightest idea how to build.
Well, color me disappointed. I think there are many arguments against the viewpoints of MIRI and Bostrom, both discrediting the perceived risk of super-intelligences and the idea that we could build a "provably friendly" alternative.
But I don't think it's at all a fair criticism to say that you need a technical roadmap to engage in that debate. I would rather say that they need to be willing to discuss specific architectural details in order to even sit at the table, sure. The field of "AI safety" needs more engineering and less philosophy. But that doesn't require unity around a single roadmap, or any concept of deliverable dates.
I would rather say that they need to be willing to discuss specific architectural details in order to even sit at the table, sure.
We are in violent agreement then. When I say roadmap I certainly don't intend to mean that each step needs to be perfectly detailed.
The field of "AI safety" needs more engineering and less philosophy. But that doesn't require unity around a single roadmap, or any concept of deliverable dates.
Exactly exactly and well said! That is 100% the point. Mind meld complete.
I'm not worried about the immediate future at all. We definitely haven't reached the point where hastily-implemented regulations will do more good than harm.
That said, you put it exactly right. The arguments about the potential long-term risk are persuasive regardless of where we are today. It bothers me to see this argument ignored time and time again.
> That said, you put it exactly right. The arguments about the potential long-term risk are persuasive regardless of where we are today. It bothers me to see this argument ignored time and time again.
Actually, I am almost completely unpersuaded by the arguments of Bostrom, Yudkowsky, et al, at least in their presented strong form. Superintelligent AI is not a magical black box with infinite compute capacity -- and those, if you look closely, are the two assumptions that really underlie the scary existential risk scenarios. It is not at all clear to me that there will not exist mechanisms by which superintelligences can be contained or their proposed actions properly vetted. I have a number of ideas for this myself.
In the weak form it is true that for most of the history of AI its proponents have more or less ignored the issue of machine morality. There were people in the 90's and earlier who basically advocated for building superintelligent AI and then sitting back and letting it do its thing to enact a positive singularity. That indeed would have been crazy stupid. Kudos to Yudkowsky and others for pointing out that intelligence and morality are (mostly?) orthogonal. But it's a huge unjustified leap from "machines don't necessarily share our values" to "the mere existence of a superintelligent amoral machine will inevitably result in the destruction of humanity." (not an actual quote)
I just want to point out: the history of software, just regular software, has been typified by the New Jersey approach and the MIT approach. The former consists in just hacking together something that kinda-mostly works, releasing fast, and trying to ameliorate problems later. The latter consists in thoroughly considering what the software needs to do, designing the code correctly the first time with all necessary functionality, (ideally) proving the code correct before releasing it, putting it through a very thorough QA process, and then releasing with a pre-made plan for dealing with any remaining bugs (that you didn't catch in the verification, testing, and other QA stages).
Only the latter is used for programming important things like airplanes, missile silos, and other things that can kill people and cost millions or billions of dollars when they go wrong -- the things in which society knows we cannot afford software errors.
I don't think it's even remotely a leap to say that we ought to apply only this thorough school of engineering to artificial general intelligence, whether it's at the level of an idiotic teenager or whether it's a superintelligent scifi thingy.
Now, if we think we can't really solve the inherent philosophical questions involved in making a "world optimization" AI that cannot go wrong, then we should be thinking about ways to write the software so that it simply doesn't do any "world optimization", but instead just performs a set task on behalf of its human owners, and does nothing else at all.
But either way, I think leaving it up to New Jersey kludging is a Very Bad Idea.
No one is making "world optimization" engines. The concept doesn't even make sense when the wheels hit the road. No AI research done anytime in the foreseeable future would even be at risk of resulting in a runaway world optimizer, and contrary to exaggerated claims being made, there would be plenty of clear signs that something was amiss if it did happen, and plenty of time to pull the plug.
I think you missed the last sentence: your software doesn't need to be a "runaway world optimizer" to be a very destructive machine merely because it's a bad machine that was put in an important job. Again: add up the financial and human cost of previous software bugs, and then extrapolate to consider the kind of problems we'll face when we're using deterministically buggy intelligent software instead of stochastically buggy human intellect.
At the very least, we have a clear research imperative to ensure that "AI", whatever we end up using that term to mean, "fails fuzzily" like a human being does: that a small mistake in programming or instructions only causes a small deviation from desired behavior.
You're right: AGI might not be as powerful as some predict it can be. And there might be a way to contain it if it is. And we might be able to enforce said containment strategy on researchers worldwide. And the AGI might be benevolent even if it is super powerful and uncontainable.
The authors above are unpersuasive in arguing that these things are inevitable, but they are persuasive in arguing that these things are possible. And if they are possible, then what matters to me is their likelihood. Based on current estimates, are we talking about a 50-50 chance here? 80-20? 99-1? 99.999-0.001?
If we're talking about the future of the human race, these chances matter. But instead it seems everyone is just waving their hands at the what-ifs simply because they aren't 100%.
I agree with a lesser form of your first argument (the timescale on which the AI self-improves is probably going to be long enough for humans to deal with it), but I'm confused about how an AI with 'merely' lots of compute capacity can be countered by 'a number of ideas for this myself'.
Security for human-level threats is already very poor, and that's where we already have a relatively good threat model. If you suppose that an AI could have radically different incentives and vectors than a human, it seems implausible that you could be secure, even in practice. I suppose you could say that these would be implemented in time, but it's not at all clear to me that a humanity which has trouble coordinating to stop global warming or unilateral nuclear disarmament would recognize this in time.
On the other hand, I'm slightly puzzled by why you think there's a huge unjustified leap between lack of value alignment and threat to the human race. Does most of your objection lie in 1) the lack of threat any given superintelligent AI would pose, because they're not going to be that much smarter than humans or 2) the lack of worry that they'll do anything too harmful to humans, because they'll do something relatively harmless, like trade with us or go into outer space?
For 1, I buy that it'd be a lot smarter than humans, because even if it initially starts out as humanlike, it can copy itself and be productive for longer periods of time (imagine what you could do if you didn't have to sleep, or could eat more to avoid sleeping). And we know that any "superintelligent" machine can be at least as efficient as the smartest humans alive. I would still not want to be in a war against a nation full of von Neumanns on weapons development, Muhammads on foreign policy and Napoleons on Military strategy.
For 2... I would need to hear specifics on how their morality would be close enough to ours to be harmless. But judging by your posts cross thread this doesn't seem to be the main point.
By the way, I must thank you for your measured tone and willingness to engage on this issue. You seem to be familiar with some of the arguments, and perhaps just give different weights to them than I do. I've seen many gut level dismissals and I'm very happy to see that you're laying out your reasoning process.
Maybe it's precisely because it can't be argued against that it's not much use? There's no way we could tell in the moment whether it will ever happen, no way to tell how intelligent we can make computers, and so on. You can't reliably argue for or against.
So what's the point? I mean, you may want to form a group and discuss those scenarios, but it's not excusable to spread fear mongering in the media with no substance like that. It's unscientific in a way.
We don't really need to start writing traffic rules for flying cars.
I disagree that it can't be argued against. It's a set of logical steps, any one of which can be critiqued:
1 - There's no real dispute that computers will eventually be as "powerful" as brains. In fact, it's likely that they will one day be far more powerful.
2 - Assuming the brain evolved naturally, there's no reason to assume humans won't eventually duplicate the software once we have the hardware. It's really only a matter of when.
3 - An AGI with equal/more power than the human brain will, eventually, be able to change its own code and improve upon itself.
4 - Given the characteristics of intelligence as it has been observed thus far in nature, even small increases in IQ lead to massive improvements in capability that are difficult for lesser intelligences to comprehend. A sufficiently advanced AGI would not only be highly capable, but would quite possibly be too smart for us to predict how it might behave. This is a dangerous combination.
Further complicating things, we might hit step 2 by accident, without realizing that we hit it. Or some group might accomplish step 2 in secret.
What I'd like to see someone do is argue that the chance of these things happening is so small and/or the consequences so minuscule that it's not worth worrying about or planning for.
Step 4 is not at all obvious and would require some significant justification.
General intelligence is, well, general. My primate brain may not be able at all to intuit what it is like to move on a 7-dimensional hyper-surface in an 11-dimensional space, but thanks to the wonders of formalized mathematics I can work out and tell you anything you want to know about higher-dimensional geometry. If the super-intelligence itself is computable, and we have the machinery to verify portions of the computation ourselves, it is in principle understandable.
Of course there will be computational limits, but that's hardly anything new. There is no single person who can tell you absolutely everything about how a Boeing 787 works. Not even the engineers that built the thing. They work in groups as a collective intelligence, and use cognitive artifacts (CAD systems, simulators, automated design) to enhance their productive capacity. But still, we manage to build and fly them safely and routinely.
There is no law of nature which says that human beings can't understand something which is smarter than them. Deep Blue is way smarter than me or any other human being at Chess. But its operation and move selection is no mystery to anyone who cares to investigate its inner workings.
I agree that it's not impossible for us to understand something smarter than us. But I don't particularly like your examples. Understanding a static well-organized human-designed system like a 787 or a chess-playing algorithm is far simpler than understanding the thoughts and desires of a dynamic intelligence.
A better analogy would be to stick to IQ. How long would it take a group of people with an average IQ of 70 to understand and predict the workings of the mind of a genius with an IQ of 210? Probably a very long time, if ever. What if the genius was involved in direct competition with the group of people? She'd be incentivized to obscure her intentions and run circles around the group, and she'd likely succeed at both.
Just how intelligent might an AGI become given enough compute power? A 3x IQ increase is a conservative prediction. 10x, 100x, or even 1000x aren't unimaginable. How can we pretend to know what such an intelligence might think about or care about?
It's a terrible metaphor, or at least it argues against your own position. If flying cars were in fact something on the horizon, we would need discussions about airspace, commute lanes, and traffic rules. Otherwise lethal mid-air collisions would be much more probable, resulting in significant bystander and property damage as the wreckage falls out of the sky.
And, no surprise, we're seeing exactly this discussion going on right now about drones.
Look at the technologies underpinning the internet. We badly screwed up a lot of things that are very difficult to fix in retrospect, because we didn't invest enough time and effort into foreseeing the consequences.
Nobody worried about the fact that SMTP allows people to send millions of unsolicited messages at practically zero cost until it was far too late to fix the protocol. It is entirely plausible that an RFC describing a proof-of-work scheme could have fixed the problem from the outset, but now we're stuck with Bayesian filters and black/whitelists that sort of work acceptably most of the time.
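For reference, the kind of scheme I have in mind is roughly what Hashcash proposed: the sender burns CPU finding a nonce whose hash has N leading zero bits, which costs almost nothing for one message and a fortune for millions. A minimal sketch of the idea (not the actual Hashcash stamp format):

    import hashlib
    from itertools import count

    def mint_stamp(headers: str, bits: int = 20) -> str:
        """Find a nonce so that sha1(headers:nonce) has `bits` leading zero bits."""
        for nonce in count():
            digest = hashlib.sha1(f"{headers}:{nonce}".encode()).digest()
            if int.from_bytes(digest, "big") >> (160 - bits) == 0:
                return f"{headers}:{nonce}"  # costs ~2^bits hashes on average

    def verify_stamp(stamp: str, bits: int = 20) -> bool:
        """Verification is a single hash -- cheap for the receiving server."""
        digest = hashlib.sha1(stamp.encode()).digest()
        return int.from_bytes(digest, "big") >> (160 - bits) == 0

    stamp = mint_stamp("from:alice@example.com;to:bob@example.com")
    assert verify_stamp(stamp)

Whether that would actually have killed spam is debatable (it taxes legitimate bulk mail too), but it's exactly the kind of question that was never seriously asked while the protocol could still be changed.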
The issues we're seeing regarding surveillance and censorship could have been hugely ameliorated if our protocols were designed with greater foresight, if people like Cerf and Berners-Lee were more aware of those risks.
We look back in horror at the naive techno-utopianism of the past and the harms that resulted - tetraethyl lead, asbestos, nuclear fission, CFCs. They were all heralded as miracles of the modern age, but caused harms that we're still dealing with today. The technology industry needs to be far more circumspect about the hazards of what we do. We need less hype about changing the world, and more sleepless nights about the people we might be harming.
We need to be having big discussions right now about every aspect of technology, before we open Pandora's box. For example, the World Anti-Doping Agency already have rules prohibiting genetic doping in sport, because they know that it's a potential risk. They would rather figure out their policies now in the cold light of day, rather than in a panic when the Russian team arrive at the 2032 Olympics with a team of superhuman mutants.
AGI is a starting point for broader discussions about how society deals with increasingly powerful and ubiquitous computing technologies. Maybe it will happen, maybe it won't, but I don't think that's particularly important; what we can all agree on is that machine intelligence will only become more disruptive to society.
We need to plan now for what we would do if certain technologies come good. Driverless cars might never happen or they might be five years away, but we need to figure out what to do with the millions of Americans who drive for a living before we put them all out of work. What would be the social and ethical implications of brain-computer interfaces, of algorithmic decision-making in medicine or the criminal justice system, of ubiquitous facial recognition and tracking? How can we plan to mitigate those harms? If we don't give serious thought to those kinds of questions, then we're sleepwalking into the future.
Really I would describe these people as bikeshedding. They're pontificating about the only abstractions they understand, some hand-wavey idea of AI and then the sci-fi short stories of the imagined dangers. Because if you can't do, fanfic.
I don't pretend that an argument from authority resolves this debate, but the fact that people like Stuart Russell take these arguments seriously implies that there's a bit more substance there than you're acknowledging.
To actually argue the point a little bit, the theory of expected-utility-maximizing agents is pretty much the framework in which all of mainstream AI research is situated. Yes, most current work is focused on tiny special cases, in limited domains, with a whole lot of tricks, hacks, and one-off implementations required to get decent results. You really do need a lot of specialized knowledge to be a successful researcher in deep learning, computer vision, probabilistic inference, robotics, etc. But almost all of that knowledge is ultimately in the service of trying to implement better and better approximations to optimal decision-theoretic agents. It's not an unreasonable question to ask, "what if this project succeeds?" (not at true optimality -- that's obviously excluded by computational hardness results -- but just at approximations that are as good or better than what the human brain does).
Do Nick Bostrom and Eliezer Yudkowsky understand when you would use a tanh vs rectified linear nonlinearity in a deep network? Do they know the relative merits of extended vs unscented Kalman filters, MCMC vs variational inference, gradient descent vs BFGS? I don't know, but I'd guess largely not. Is it relevant to their arguments? Not really. You can do a lot of interesting and clarifying reasoning about the behavior of agents at the decision-theoretic level of abstraction, without cluttering the argument with details of current techniques that may or may not be relevant to the limitations of whatever we eventually build.
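To spell out the level of abstraction I mean, the whole decision-theoretic agent fits in a few lines - everything below is schematic, not a description of any particular system:

    # Schematic expected-utility maximizer: the abstraction these arguments
    # operate at, independent of how beliefs or utilities get computed.

    def choose_action(actions, outcomes, prob, utility):
        """Pick the action maximizing the sum over outcomes of P(o | a) * U(o)."""
        def expected_utility(a):
            return sum(prob(o, a) * utility(o) for o in outcomes)
        return max(actions, key=expected_utility)

    # Toy usage: two actions, two outcomes, made-up numbers.
    p_win = {"left": 0.7, "right": 0.4}
    prob = lambda o, a: p_win[a] if o == "win" else 1 - p_win[a]
    utility = lambda o: 1.0 if o == "win" else 0.0
    print(choose_action(["left", "right"], ["win", "lose"], prob, utility))  # left

All the hard, specialized work in vision, inference, learning, etc. lives inside better and better approximations of prob, utility, and the search over actions; the risk arguments live at this level and don't much care which approximation wins.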
All that talk about maximizing utility isn't about intelligence at all. It's about a sensory feedback loop, maybe. Intelligence is what lets you say "This isn't working. Maybe I should try a different approach. Maybe I should change the problem. Maybe I should get a different job". Until you're operating at that meta level, you're not talking about 'intelligence' at all, just control systems.
That's not the definition the mainstream AI community has taken, for what I think are largely good reasons, but you could define intelligence that way if you wanted. It's only a renaming of the debate though - instead of calling the things we're worried about "intelligent machines", you'd now call them "very effective control systems".
The issue is still the same: if a system that doesn't perfectly share your goals is making decisions more effectively than you, it's cold comfort to tell yourself "this is just a control system, it's not really intelligent". As Garry Kasparov can confirm, a system with non-human reasoning patterns is still perfectly capable of beating you.
Yeah, you can imagine a lizard brain being introduced into a biomechanical machine to calculate chess moves. It doesn't make a lizard more intelligent, or even add intelligence to the lizard.
If we don't regard intelligence as something different from control, then I guess birds are the most intelligent because they can navigate complex air currents. Etc. That is a poor definition of intelligence, because it's not helpful in distinguishing what we normally mean by 'smart' from mechanistic/logical systems.
And the discussion of rogue AIs is all about intelligence gone awry. Does anybody fear a control system that mis-estimates the corn crop? No, it's about a malicious non-empathetic machine entity that coldly calculates how to defeat us. And that requires more than the current AIs are delivering.
There's some good comments about some new AI tools; it's a shame that the article's premise is a straw man.
The fears of machine superintelligence are based on the belief that true AI is just around a corner. After all, we’re so advanced and the progress is only accelerating, it’s probably a few years away, at most.
Analogously, public figures started warning us about climate change, so it must be just a few years before the earth is uninhabitable. It should be pretty obvious that if we start worrying about significant threats only a "few years" before they kill us, that will, as a rule, be too late.
Carbon-driven global warming ("the greenhouse effect") was gathering support in the 60s, followed by consensus in the 80s. Yet we are just now experiencing the first years of growth without increased carbon output.
The claim by the concerned parties with respect to harmful AI is that it may be a threat in the next 25 to 50 years: 2043 and 2065 are common estimates. "A few years" isn't a good way to characterize that, interpret Gates, etc., or a reasonable understanding of when we start worrying.
Current AI technologies give us no insight about the probability of an AGI
Because that is exactly the point. We can't look at what we have built today and extrapolate that AGI will come from it.
All anyone can do is speculate wildly on AGI just as they were able to in 1956 [1]. The difference is that we see automation/computing as ubiquitous (smartphones/computers in 2015), rather than around the edges (Radio/TV 1956). Which brings it closer to home in some ways.
[1] I chose 1956 because of the Dartmouth Conference on AI
Probability of AGI is close to 100%, conditional on mankind surviving long enough. We know that human brains don't run on magic, therefore you can build a machine that can do the same thing. (Turing computability doesn't matter, if the brain relies on non-Turing-computable physical effects then a machine can rely on those as well.) The current situation where unmodified humans are the smartest creatures around is unstable in the long term.
Saying that we don't have to worry because AI development will take 1000 years is also missing the point. Research into unsafe AI is inherently faster than research into safe AI, because the latter problem is more constrained. If unsafe AI kills us all, the best time to start working on safe AI is 100 years ago, and the second best time is now.
Saying that we don't have to worry because smarter than human intelligences will have a better understanding of morality is also missing the point. All currently known mathematical formalisms for decision-making allow you to plug in arbitrary utility functions, including "immoral" ones. Thinking that future decision-making formalisms will support only "moral" utility functions is extreme wishful thinking.
Saying that we don't have to worry because brain emulations or human cognitive enhancement will come first is also missing the point. These things can be just as dangerous as AI (e.g. see Robin Hanson's descriptions of how mind uploading increases Darwinian pressures on everyone, including non-uploaded people), and can also lead to AI more quickly.
And so on.
If you want to feel safe and protected, no one can stop you. But you don't have any good arguments for feeling safe. I'm sorry, but you really don't. I've been on the lookout for such arguments for years, and I'd be happier if they existed.
>Saying that we don't have to worry because smarter than human intelligences will have a better understanding of morality is also missing the point. All currently known mathematical formalisms for decision-making allow you to plug in arbitrary utility functions, including "immoral" ones. Thinking that future decision-making formalisms will support only "moral" utility functions is extreme wishful thinking.
I'm going to play Devil's Advocate on this.
While it is true that all known decision-making formulas allow for immoral utility functions, I think it's a flaw of current approaches, such as reinforcement learning, that they only allow nonspecific utility functions. I can't write down an AI program to maximize paperclips -- there's no way, in current formalisms, to specify what that means.
I think the FAI ideologues and the mainstream machine-learning community can both rally around the prospect of being able to specify a goal in terms other than reinforcement. Scientifically, that's an entirely sensible research question for the field of AI/ML to tackle, and it also constitutes a major step away from the scenarios in which a dangerous AGI winds up accidentally programmed with random, arbitrary goals that nobody actually wanted. After all, fundamental philosophical questions aside, a machine that flies out of control at random when turned on is simply a bad machine.
Yeah, I agree that reinforcement learning is probably a bad approach to FAI. Most of our toy models involve utility functions encoded directly into the AI, not reinforcement.
That said, it's indeed very hard to directly specify a utility function involving paperclips. If our universe were a Game of Life universe and we knew exactly which configuration corresponds to a paperclip, I'd be able to do that right now. But since we don't know the true laws of physics, the "hard way" involves encoding some kind of Solomonoff prior over all possible physical universes, and a rule for recognizing paperclips in each of them. That's kind of a tall order.
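For what it's worth, here's what I mean by the Game of Life case being easy - if you know the exact configuration that counts as a "paperclip", the utility function is just pattern counting. The 2x2 block below is a made-up stand-in pattern, nothing more:

    # Toy "paperclip utility" over a Game of Life grid (lists of 0/1 cells).
    PAPERCLIP = ((1, 1),
                 (1, 1))  # stand-in pattern; a real one would be some known shape

    def count_paperclips(grid):
        """Count occurrences of the known pattern in the grid."""
        ph, pw = len(PAPERCLIP), len(PAPERCLIP[0])
        hits = 0
        for r in range(len(grid) - ph + 1):
            for c in range(len(grid[0]) - pw + 1):
                if all(grid[r + i][c + j] == PAPERCLIP[i][j]
                       for i in range(ph) for j in range(pw)):
                    hits += 1
        return hits

    def utility(grid):
        return count_paperclips(grid)  # more paperclips = more utility

The hard part in our universe is that nobody hands us the grid or the pattern, which is why the Solomonoff machinery shows up at all.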
There's a shortcut that involves encoding a precise mathematical description of a human mind into an AI, and passing the buck by saying "the AI must maximize whatever function the human mind would output, given enough time to think". That would actually be straightforward to implement, if we had a description of a human mind that we were willing to trust. Unfortunately, the naive form of that approach immediately fails due to acausal blackmail. There's no obvious fix, but some folks are trying to devise non-obvious fixes, and I suppose it can be made to work in the long run.
Of course, after you can formulate a utility function in terms of a precise description of a human mind, the next step is reformulating it in terms of the outputs of an actually existing person, say some webcam videos and written instructions. You instruct the AI to infer the simplest program that would generate these outputs (using something like the Solomonoff prior again), and then proceed as in the previous step. That part has its own problems, but I expect that it can also be made to work.
The whole approach is kind of a long shot, but I hope that I've given a sense that the problem could be solvable by careful human effort on the scale of years. There are other approaches as well.
>That said, it's indeed very hard to directly specify a utility function involving paperclips. If our universe were a Game of Life universe and we knew exactly which configuration corresponds to a paperclip, I'd be able to do that right now. But since we don't know the true laws of physics, the "hard way" involves encoding some kind of Solomonoff prior over all possible physical universes, and a rule for recognizing paperclips in each of them. That's kind of a tall order.
Or we just need an approach to AGI that gives us more conceptual abstraction than AIXI and its Solomonoff-based reasoning. Which we needed anyway, since AIXI_{tl} is "asymptotically optimal" with an additive constant larger than the remaining lifetime of the Earth.
Luckily, there's quite a large amount of bleeding-edge research into exactly that: learning "white-box" representations that are understandable and manipulable to human operators.
AIXItl isn't really the kind of AI that I like, because it's reflectively inconsistent. In any case, the time complexity of AIXItl is kind of irrelevant at this stage, because we're trying to figure out what is the right thing to optimize. Only then we should start figuring out how to optimize that thing efficiently, because we really don't want to optimize the wrong thing efficiently. I'm very skeptical that approaches based on "conceptual abstraction" can tell us the right thing to optimize, as opposed to my preferred approach (defining a utility function over mathematical objects directly).
> I'm very skeptical that approaches based on "conceptual abstraction" can tell us the right thing to optimize, as opposed to my preferred approach (defining a utility function over mathematical objects directly).
And I'm very skeptical that mathematical Platonism is useful for AI: "mathematical objects directly" do not exist in the real world, and it is very much real-world things on which we want our software to operate. "Conceptual abstraction" simply refers to a learning algorithm that possesses a representation of, for instance, a chair, that is not composed entirely of a concrete feature-set (visual edges, orientations, and colors) and can thus be deployed to generatively model chairs in general.
Computational cognitive science is working towards this sort of thing, and the results should start to hit the machine-learning community fairly soon.
> "mathematical objects directly" do not exist in the real world
Well, there's an influential minority that thinks mathematical objects are all that exists (Tegmark multiverse). I don't necessarily agree with them, but that's one way to rigorously define the domain for a utility function, in a way that is not obviously exploitable. Another way is to define utility in terms of an agent's perceptions, but that is exploitable by wireheading, and IMO that flaw is unfixable as the agent gets more powerful. I'm not aware of any other approaches that are different in principle from those two, so I'll stick with the lesser evil for now, and hope that someone comes up with a better idea.
We know that human brains don't run on magic, therefore you can build a machine that can do the same thing.
I mean, that is what I think, and what most of the community thinks, but we haven't proven it empirically - nor could we; it's unprovable.
But you don't have any good arguments for feeling safe.
Correct. Notice that I am not saying that we are safe. I personally think we aren't safe [1], but that is a different discussion which is equally as baseless.
The key here is that the argument that AGI will not be safe, and therefore needs to be regulated to make it safe, not only misses the point but ensures that we would never make one. The current level of funding for AGI development and understanding of the roadmap for AGI is absolutely dismal given its potential impact. If we now add the constraint that in order to make one, we must also ensure it doesn't threaten humanity - then you have sounded the death knell for AGI. Once we make AGI - in the sense that it is a recursively self-improving system with human-equivalent capabilities across domains - there is no way we will know what will happen. That's it.
[1] Which is why I am personally in the transhumanist camp.
Whether AGI is "tragically underfunded" or "luckily underfunded" is a matter of perspective. If the DoD announced a Manhattan-scale project to create an AI for defense, I'd be more worried than I am now. At the current levels of funding, smaller groups have at least some chance of making progress on safety research. It's also encouraging that Google is taking the problem somewhat seriously, and has instituted an "AI ethics" committee as part of the DeepMind acquisition.
Well, yes, but that's still a straw man. It's not as if Hawking and Gates and so on saw an Atari-bot and on that basis came to worry about AGI. Current AI technologies shouldn't update our estimates much about AGI, unless they push us way off of Kurzweil's (or someone else's) chart.
I do agree that we lack evidence of a broad community of AI experts who have been able to make good predictions, which potentially puts AI research even behind economists! Nonetheless, we'd expect there to be a period between "wild guessing" and "good and reliable models"; Gates and Hawking either think we're in that middle ground, or they think the risks are considerable enough that it's worth pushing towards it now.
I think that in any debate, you have a responsibility to respond to the strongest version of the opposing argument. Even if your "third option" is correct and the specific aforementioned people just "want to get in on the AI buzz", that still doesn't justify ignoring the argument itself.
The global warming comparison isn't very good. We had the capability for carbon output 50 years ago, and it has been slowly increasing ever since. But we don't yet have an AI, so there is no need to warn anyone, except out of irrational fear. Once we actually get it, it would make sense to start warning about its applications, so it doesn't get out of control.
If I apply your analogy correctly, then warning about AI now is the same as warning about global warming in the 19th century would have been. Not very logical, and certainly very paranoid.
If people had started to worry about the long-term effects of carbon output before it was already widespread, a lot of the damage it has caused could have been limited.
I don't see what's irrational about trying to solve problems before they become problems, rather than trying to do damage control after the fact.
If people had started to worry about the long-term effects of carbon output before it was already widespread, the Industrial Revolution would have been strangled at birth (the road from wood fires to solar panels leads through coal-burning steam engines; refuse to ever burn coal in large quantities and the chain is broken), and we would have stumbled around banging the rocks together until evolution optimized general intelligence out of existence or the sun autoclaved the biosphere.
Nothing in the world is more dangerous than premature attempts at safety.
Who said anything about refusing to burn coal in large quantities? The benefits of the industrial revolution may well have outweighed the costs, and no one is saying it was a bad thing – or likewise, that developments in AI are a bad thing.
But are you saying there's absolutely nothing that could've been done better in the industrial revolution? With some foresight we might have thought to develop solar power more urgently, for example, but no one thought we needed to. Worrying just a little might have changed that.
The point is not to stop progress, just to approach with caution and be mindful of what the long-term implications are.
I'm saying there is nothing that would in fact have been done better by premature worry about global warming. If your argument is that any sequence of actions will be less than optimal - compared to, say, the actions that could have been carried out by a hypothetical omniscient entity of infinite wisdom and benevolence - then that is certainly true but irrelevant. But the previous argument was that convincing actual humans in the nineteenth century to start worrying about global warming would have been net beneficial, and I'm pointing out it would have been disastrous.
And that, mind you, is still with the anachronistic application of 21st-century knowledge to the matter. If people in the 19th century had actually tried to figure out what they should be worrying about if they were going to worry two centuries prematurely, they would likely have come up with something completely different.
>With some foresight we might have thought to develop solar power more urgently, for example, but no one thought we needed to.
With some foresight, people were trying to develop solar, nuclear, and fusion all the way back in the '60s and '70s, but political interests ensured that research funding was reallocated towards shorter-term political projects.
When pharma companies produce a new drug that appears to have a positive effect, they don't yet know whether it will cause problems, and they don't have any data to back up any concerns.
So do they rush full-steam-ahead into the unknown? Do you unleash the drug to whomever wants it, without regulation? No, of course not; you approach with caution, you hold clinical trials, you collect data as you go and you look for potential problems before the drug is widespread.
Arguably a similarly controlled approach would have helped in the case of carbon output and will help in the case of AI. It's not about fear mongering or preventing progress, it's about discussing ways to approach with caution and solve problems (and minimise the damage they cause), if and when they arise.
Arrhenius probably did no empirical research in this field. He devised a generalized formula based on theoretical considerations and indirect measurements (of the moon's appearance). It would take ~60 years before a quorum of researchers believed his formula could provide accurate real-life predictions.
Moravec's 70s papers are roughly comparable to Arrhenius's formula. The basic computational power predictions (a minor extension of Moore's law) still seem to be basically correct. There's a reasonable debate to be had about whether more interesting predictions made by Moravec/Kurzweil/etc. have been validated or not.
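For a rough sense of what "a minor extension of Moore's law" compounds to (illustrative doubling time and horizon only, not Moravec's actual figures or methodology):

```python
# Illustrative only: generic exponential compounding of available compute,
# not Moravec's actual numbers.
def growth_factor(years, doubling_time_years=1.5):
    return 2 ** (years / doubling_time_years)

print(f"~{growth_factor(40):.1e}x more compute over 40 years")  # roughly 1e8-fold
```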
I think even most skeptics would agree that we have had for some time technology worthy of being called "AI". It's not "Artificial General Intelligence" for sure. And it is to date not materially strong enough to harm large numbers of people.
But AI is plausibly 50 years away from harming large numbers of people[1]. By that standard of "years until harm", global warming should have been first publicly discussed in the 1980s or 90s. That is a bit later than it happened, and later than some would like.
[1] If researchers develop models that say it's more like 100 or 500 years out, I think that would be great and very helpful. But several researchers have put out credible models which put a crisis in the 40s to 60s. Skepticism about their approaches that identifies specific flaws or produces improved models is good, much better than hand-waved dismissals.
Agreed. If we are to worry about it, then we need to do it now, not when major corporations are 2 quarters away from mass producing said robots. By then the investment of those companies and governments would be too great to just suddenly terminate all the programs. Think about all the jobs lost, corporate lobbying, etc. It would be too late.
> The fears of machine superintelligence are based on the belief that true AI is just around a corner. After all, we’re so advanced and the progress is only accelerating, it’s probably a few years away, at most.
Versus MIRI [1], quoting Nick Bostrom, whose book is arguably what most recently sparked all of the current discussion of AI risk:
> If what readers take away from language like “impending” and “soon” is that Bostrom is unusually confident that AGI will come early, or that Bostrom is confident we’ll build a general AI this century, then they’ll be getting the situation exactly backwards.
> [...]
> > My own view is that the median numbers reported in the expert survey do not have enough probability mass on later arrival dates. A 10% probability of HLMI [human-level machine intelligence] not having been developed by 2075 or even 2100 (after conditionalizing on “human scientific activity continuing without major negative disruption”) seems too low.
> > Historically, AI researchers have not had a strong record of being able to predict the rate of advances in their own field or the shape that such advances would take. On the one hand, some tasks, like chess playing, turned out to be achievable by means of surprisingly simple programs; and naysayers who claimed that machines would “never” be able to do this or that have repeatedly been proven wrong. On the other hand, the more typical errors among practitioners have been to underestimate the difficulties of getting a system to perform robustly on real-world tasks, and to overestimate the advantages of their own particular pet project or technique.
I agree with the meat of the article, though – recent AI progress has surely been oversold by the media. It's really frustrating to see people continually arguing against strawmen when there are some perfectly lovely arguments to grapple with instead!
Aside: is there some kind of law we could coin that if a rebuttal to AI risk starts off by referencing Skynet and Terminator, we can safely assume it's not going to be worth reading?
The smart guys (professors and such) have brought this on themselves by pretentiously labeling their field Artificial Intelligence. Excitable fools would move on to the next thing if it were called Computational Rationality or something similarly boring.
> The fears of machine superintelligence are based on the belief that true AI is just around a corner.
No, the fear of AGI is that the time between when we first start to see promising progress in AI and the point where AI is out of our control will be so short that we would not be able to react to prevent it.
>the time between when we first start to see promising progress in AI and the point where AI is out of our control will be so short that we would not be able to react to prevent it.
That seems pretty absurd to me. A human-level AGI is going to require a sizable amount of time and data to train, so that it knows something rather than nothing. In fact, getting to the point where self-improvement becomes viable will take a good deal longer than training/educational periods for most other tasks. The information complexity is just higher.
So while AI safety should be taken seriously, we do need to "sober up" by remembering that an AGI has to obey the rules of information theory.
What's absurd is to have a strong conviction about inviolable restrictions on systems that haven't been imagined yet. The examples you cite aren't even compelling on their face.
For instance if you include whole-brain emulations in AGI (and we should because the consequences are essentially the same) it would take zero time to train. But even without whole-brain emulations a child learns rapidly from its environment alone. An AGI could conceivably do the same but faster, and moreover could take advantage of access to the Internet.
I don't know what information theory has to do with any of that.
>What's absurd is to have a strong conviction about inviolable restrictions on systems that haven't been imagined yet.
AGI has already been imagined. And yes, it does have to obey the laws of information theory: only one bit of knowledge can be learned from sensory inputs with one bit of entropy.
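To make the bound concrete (a minimal sketch using a hypothetical 1-bit sensor and a plain empirical entropy estimate -- nothing AGI-specific):

```python
import math
from collections import Counter

def entropy_bits_per_reading(samples):
    """Empirical Shannon entropy of an observation stream, in bits/reading."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical noisy 1-bit sensor that reads 1 only ~1% of the time:
stream = [0] * 990 + [1] * 10
h = entropy_bits_per_reading(stream)
print(f"~{h:.2f} bits/reading, so at most ~{h * len(stream):.0f} bits "
      f"learnable from {len(stream)} readings")
```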
>An AGI could conceivably do the same but faster, and moreover could take advantage of access to the Internet.
Children still take 20 years to grow from infancy to being useful adults. If we hypothesize that an AGI could learn, say, 10x faster, we're assuming a two-year training time.
Not all the manifestations of AGI have been imagined, and even the ones that have, like WBE, refute your claims.
Even if you could somehow create a model precise enough to talk about information theory (define a symbol universe, map information to knowledge -- itself a not-well-defined term), it would not seem to be restrictive. Again, children take in a massive amount of sensory input.
Finally, the singularity hypothesis is that AGI could learn exponentially faster, not merely 10x. And again, WBEs don't need to learn at all.
But the fear and thought should instead be directed to how we can solve the economic and social challenges that arise when AI removes a lot of jobs. That is the issue here, not some Skynet-thingy destroying humanity.
I would agree with this article that every independent solution today does not have the slightest chance to be iteratively improved to some kind of real intelligence. Even the "generic" game playing AI is very limited outside the scope of a pixel buffer and some digital output signals.
However, I don't understand how the arguments correlate to "there is no need to worry about AI". The real worry should come from the fact that there are now many, many more people looking at the problem of AI than before, and it's a competition that now has serious money behind it.
The article mentioned the infinite monkey theorem as a counterargument, but ironically that is exactly how I think we will stumble upon generic intelligence. The more people that work on it and have the confidence, belief, and money to actually try out random things, the more likely it is that someone out there will discover the step change that will, in a day, take AI from deep prediction algorithms to eerily deliberate behavior.
Even if no one manages to shortcut our way to generic intelligence, we sure as hell will brute-force it. While we're extremely early in the world of simulating organic brains, such simulations exist. People have constructed virtual organic bodies controlled by virtual electric currents from virtual neurons in virtual brains. It's happening today, and as capacity, domain knowledge, and most importantly awareness increase, more and more people will work on the problem and bigger and bigger brains will be simulated.
I really don't see how we can question whether we will eventually reach human level intelligence running on computers. The only way I see generic machine intelligence not happening is by making people believe it is so impossible that no one ever tries to do it. And maybe that was the purpose of this article.
Here's one project simulating C. elegans (a roundworm) in a virtual environment. If they (or any other project out there working on similar problems) can prove that such a simple organism can be simulated in a computer, I think that's a big initial step towards brute-forced brain simulation.
No doubt increased funding and interests will increase the odds of stumbling upon "cheap" strong AI - a faster-than-brute-force solution.
But consider the only strong intelligence we know of. Suppose intelligence relies only on the brain (which might well not be the case): that organ consumes a large fraction of the energy available to the individual. I'm not saying evolution produces optimal solutions, but I think it's fair to say that a vastly superior algorithm would have had a good chance of conquering the world by now.
So imitating the biological processes of the brain in silicon would be rather costly no matter the gain. And that would, in the end (at best), be a one-in-seven-billion individual, i.e. no magic powers.
I guess funding will flow towards practical systems that augment some consumer, who will in the end provide a return on the investment.
Just saying: we are not short on intelligence, and it's not likely to be cheap to run massive-scale super-AI in silicon.
This is all very vague theorizing of course, but I believe that once we reach a level of artificial intelligence that resembles biological intelligence, scaling it will be quite feasible on a silicon platform. You've also removed the element of maintaining a body (exercise, eating, hygiene) as well as expanded potential interfaces (direct connection to the internet) so that even a 1:1 silicon brain would presumably be more efficient.
The next thing you can do is optimize the time step element, so that for every 1 second of thinking in a silicon brain, it has performed the equivalent of 10 seconds of thinking in a human brain. At that point, you've surpassed human intelligence. It's also safe to assume that if we do reach this point, we've learned a lot more about how biological intelligence works, and may be able to use to our advantage the beneficial intelligence properties of savants.
The really big downside with all these approaches is that the intelligence will be built from a blueprint that includes emotions, and we may do something that is very morally wrong towards the intelligence that we created.
>I would agree with this article that every independent solution today does not have the slightest chance to be iteratively improved to some kind of real intelligence. Even the "generic" game playing AI is very limited outside the scope of a pixel buffer and some digital output signals.
Why do people who are laypersons to machine learning continue to insist, despite what those in the field say, that we are learning nothing about the principles of general-domain learning from current developments? I think DeepMind is making genuine discoveries: they're certainly able to build things nobody was able to build before.
Steps are certainly being made, but I don't think anyone in the field said any of the technology that exists today can be iterated upon to get even close to the kind of AI that would eventually become superintelligent. But yes, I do believe that with the amount of people working on these problems, we may very well "hit gold" and come up with something that does have the potential to become superintelligent, before we get there through brute-forcing means.
TL;DR: current AI capabilities are exaggerated by cherry-picked sample data, and the fear fueled by famous smart people is unwarranted.
Do any of those cited give specific timelines? Even if we are very far away, do you really doubt that one day machines will have superhuman intelligence? I take that as pretty much a given, whether it's 50 or 500 years from now. What I'm not so sure of is whether fear is an appropriate response.
Altman's bacteria handwashing analogy doesn't hold up. We don't care about bacteria because they have no central nervous system or consciousness on any level. However, we go out of our way to protect animals that can feel pain and experience emotions because it's what we've decided through our intelligence and higher reasoning is the moral thing to do. Stats show that the more intelligent and educated the human, the more likely he is to behave morally as our greatest moral philosophers define it. Why would super intelligent machines buck this trend?
> However, we go out of our way to protect animals that can feel pain and experience emotions because it's what we've decided through our intelligence and higher reasoning is the moral thing to do.
We care about and protect some animals, yeah. However, we also industrially butcher hundreds of thousands of other animals every day to satisfy our goals.
> Stats show that the more intelligent and educated the human, the more likely he is to behave morally as our greatest moral philosophers define it. Why would super intelligent machines buck this trend?
There's a pretty large jump from "more intelligent => more moral" to "behaves according to human morality and satisfies human goals".
> Do any of those cited give specific timelines? Even if we are very far away, do you really doubt that one day machines will have superhuman intelligence? I take that as pretty much a given, whether it's 50 or 500 years from now
Why? If you extrapolate from the amount of progress we have made toward AGI in the last 50 years (ie, none), then it's reasonable to argue that we still will have made no progress 50 and 500 years from now.
There are intellectual problems that humans aren't capable of solving; it wouldn't make any sense to talk about "superhuman intelligence" if that wasn't the case. The currently available evidence suggests that "constructing an AGI" might very well be one of those problems.
> If you extrapolate from the amount of progress we have made toward AGI in the last 50 years (ie, none)
That's an odd way of defining progress.
> There are intellectual problems that humans aren't capable of solving; it wouldn't make any sense to talk about "superhuman intelligence" if that wasn't the case.
A superhuman intelligence doesn't necessarily have to come up with solutions humans would never think of, it just needs to come up with a solution in less time, or with less available data, or with fewer attempts.
I think you're anthropomorphising. It's not a given that machine intelligence and (human) morality/ethics are intertwined. What if the AI mind/intelligence is of such a high level that it regards us like we do bacteria?
It's probably safer to engineer the AI in such a way that it is guaranteed to be friendly than to trust that it will turn out that way and do nothing. Even a naive hedonistic calculus is probably safer than assuming that (human) ethics will result from more intelligence.
> What if the AI mind/intelligence is of such a high level that it regards us like we do bacteria?
We don't regard bacteria like that because they are not sufficiently intelligent, but because we believe they can't feel or suffer. So it's not a question of "level" of intelligence.
I've begun to think about AI more in terms of "artificial will" than "artificial intelligence". Intelligence and will seem mostly orthogonal, in that advanced computational reductionism appears capable of extremely intelligent behavior without anywhere near the amount of self-determinism shown by insects.
Whether artificial will is something that can be implemented in a Turing machine and run on silicon seems an open question. I believe the concept, when deeply considered, is almost precisely antithetical to the goal of programming languages: the goal of artificial will is to give control to the program, not the programmer. Perhaps that's possible in a Turing machine, but I have a feeling the "natural language" in which to express such a program would be like an inside-out LSD trip.
If you knew how the atoms were arranged in an animal, simulating the animal's behavior would be as simple as simulating each individual atom. In practice that's not tractable, but it shows that in theory, since the behavior of atoms is computable (blah blah quantum physics, shut up), anything an animal can do can be simulated by a Turing machine, including will. Just because it's not simple to express in a programming language doesn't mean it's not possible.
Nice article. I'd like to point out that in the Pacman example the agent is only receiving a partial picture of the environment (details are in the video description), so it's unfair to criticize it for lack of planning.
As to why this is the case, you'd have to ask the researcher, but I think it's because the observation space would be too big for the machine running the agent (in both memory and run time).
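For a rough sense of why the full observation space is prohibitive (toy grid size and cell-state count -- not the actual experiment's configuration):

```python
# Toy numbers only: a full 28x31 Pac-Man-style grid with 6 possible cell
# states, versus a 7x7 window cropped around the agent.
full_states = 6 ** (28 * 31)
window_states = 6 ** (7 * 7)
print(f"full grid:  ~10^{len(str(full_states)) - 1} possible observations")
print(f"7x7 window: ~10^{len(str(window_states)) - 1} possible observations")
```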
As a contributor to several ML projects, I am happy the mainstream media and 'thought leaders' haven't found out about and picked on early projects like OpenCog or DeepDive. We could have seen a tremendous amount of pseudoscientific BS that would have undermined important initiatives.
Indeed, we are far, far away from true AI (to me, it implies self-consciousness).
The point is even if it happens in 100 or 200 years, it will be a huge change in human history.
I guess Gates, Hawking and Stark are talking about a more distant AI creation; they are kind of long-term-thinking guys.
It depends on what you mean by self-consciousness. On the one hand it can be processes for understanding oneself and modifying oneself to better survive in the environment, on the other you have the illusory "qualia" / subjective experience. It seems possible to me to have a self-improving process and an active process to evaluate the world state and self state without consciousness, and that this could result in something life-like. In this sense, is AI far away? It doesn't have to be necessarily.
It doesn't make sense to me personally why we need subjective experience, nor what it is in a physical sense. We can have agency and variability of choice without it, but at the same time we know how powerful it is in humans. Until we know exactly how it functions, I don't think anyone can claim with any certainty whether it will happen, or how far away it is.
Agreed. As someone who works in AI (and knows its limitations) it's frustrating to see the layperson getting nervous about AI after watching Chappie and hearing some of these quotes from Gates and the like. Human level AI is way way way out.
I think it's reasonable for the layperson to get a bit nervous. Even if it's 50 or 200 years, advanced, human-brain-like-or-better AI will happen.
Imagine the public sentiment if mass media made people aware of the inevitability of the nuclear bomb 50 to 200 years in advance and people (not just the government) were actively working on developing it.
In the nuclear analogy, it's more likely than not that public awareness would have prevented nuclear plants (the good thing), while the nuclear bomb (the bad thing) would almost certainly have been developed anyway.
Maybe, it's hard to know for sure, just like it's hard to know now :)
I think that had nuclear physics been more obvious 50-200 years earlier, it may have led to a lot more practical private-sector development. In the case of power generation, this is seriously beneficial.
How will private-sector AI turn out? Will they boot up sentient AIs in Docker, then discard them? Is that OK? Who even knows?
Seems like it's a popular sentiment to think of Elon Musk as Tony Stark ;). I believe you mean to say "Gates, Hawking and Musk". Stark doesn't tend to talk about AI; he builds them.
Human-level AI seems to be a far reach, but AI in specific domains seems to be the approaching intersection. What I mean by this is the self-driving car: very discrete skills, but nowhere near human level. Let's not forget Google's "find Mittens the Cat" AI on videos.
Computers have always been very good at performing very specific tasks in highly circumscribed circumstances. What we're learning to do is expand the circumstances within which the computer can operate, but as you say, the tasks are still very highly specific.
We could probably have programmed a computer to drive a car back in the 70s, within an extremely specific configuration of roads and with no traffic or pedestrians. Now self-driving cars have very advanced sensors and can cope with other traffic, pedestrians, and a much wider range of road geometries. But such a system is still only useful for driving a car; you couldn't take the same program and teach it to control a boat or a plane, let alone play Jeopardy, predict the weather, or trade on the stock market. Systems like this are going to be very useful, but they are not taking us down a path toward strong general AI.
And it's still only useful for driving a car in a rather specific configuration of roads and a somewhat limited set of conditions. I'd argue that a car's ability to operate autonomously and rather reliably under those circumstances is leading a lot of folks to be very optimistic about how soon they'll be able to summon a robo-taxi with their smartphone.
I'm always glad to see someone cutting through the hype. Way too many people gobble up the misinformation pumped out by academia and corporations whose main goal is to increase their own funding. Add to that the wild speculative fiction of the likes of Ray Kurzweil and pop-science "news", who again are mostly interested in entertaining and shocking you to increase their own popularity and thus revenue. There's just not enough incentive to think critically, and too few of the readership are inclined to cut through the crap. It's not even socially acceptable to think critically. It's such a downer. You're better off just agreeing and not pointing out your friend's gullibility if you want to continue to have friends.
I'm not sure why this is so difficult. AI is simply going to be a highly context-sensitive answering machine that picks its answers from a massive knowledge graph (which it can grow as it spends more time interacting, with assisted learning). If there is no context, it will just pick the most commonly expected answer based on weights and priorities. This is exactly what the human mind does as well.
Everything else is parsing the sensor inputs into a language that makes sense to the AI answering machine.
So it's knowledge, category, and context. Form a big enough and comprehensive graph of interconnected elements, and traversing that graph is what AI will eventually do, apart from growing the graph and using the knowledge within it to improve itself.
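A minimal sketch of the answering-machine idea described above, with invented nodes, weights, and tags (a real system would obviously need far more than weighted edge lookups):

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeGraph:
    # node -> list of (neighbour, prior_weight, context_tags)
    edges: dict = field(default_factory=dict)

    def add(self, node, neighbour, weight, tags):
        self.edges.setdefault(node, []).append((neighbour, weight, set(tags)))

    def answer(self, node, context=()):
        """Pick the neighbour whose prior weight, boosted by context overlap,
        is highest. With no context this degrades to the 'most commonly
        expected answer', as described above."""
        context = set(context)
        candidates = self.edges.get(node, [])
        if not candidates:
            return None
        return max(candidates,
                   key=lambda c: c[1] * (1 + len(context & c[2])))[0]

kg = KnowledgeGraph()
kg.add("jaguar", "big cat", weight=0.9, tags={"animal", "zoo"})
kg.add("jaguar", "car brand", weight=0.6, tags={"vehicle", "driving"})

print(kg.answer("jaguar"))                       # -> big cat (highest prior)
print(kg.answer("jaguar", context={"driving"}))  # -> car brand (context wins)
```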
Study AGI approaches like OpenCog, HTM, etc. and look at mainstream deep learning objectively. Lose your supernatural beliefs about the mind or human exceptionalism.
What do people do? Advanced pattern-based behavior generation and unsupervised hierarchical spatial temporal pattern learning and abstraction. Logic and reasoning. Attention and goals.
I believe we will start to see somewhat convincing human-like conversational interfaces, speech, and movement from machines in less than five years. I don't even believe these things necessarily require real breakthroughs -- we can probably mostly combine existing techniques.
The most common misconception about AI is that it will "mimic human level intelligence or better". Human intelligence is an infinitesimally small sliver of possible conscious entities with agency, and whatever "wakes up" enough for headlines to declare AI is real will almost certainly be worlds apart from us.
To me, algorithmic trading and investing is already a pretty big scary AI proposition. And it's happening all over, for real profit.
Privatization and algorithmification (or whatever) of large scale human decision making seems like an enormous change. And the computers don't have to "think" in order to do this.
The next step in this scenario would be policy decision making based on AI techniques. Statistical measuring, machine learning, etc in order to decide on political details, gerrymandering, etc.
In other words, computers don't need to grow sentient and godlike. We could just delegate power to them anyway. Maybe they'll be a bit stupid; so are we, just in different ways. And they won't understand human concerns in any deep way. But they'll be efficient and profitable.
You could already look at the international market as a kind of machine or distributed algorithm. With human "computers." Replace these computers with robots, per standard capitalist efficiency procedures, and voila, the world is run by machines.
"policy decision making based on ... statistical measuring" is > 2000 years old. The modern version based on a mathematical understanding of statistics & sampling, as opposed to collecting aggregates large enough that you can just treat the error as 0, is ~80 years old.
It's old, but that doesn't invalidate the comment, which I found insightful... odd that it's getting downvoted. Machines get greater and greater amounts of control, and humans correspondingly less, over time. Even if the trend has been happening for thousands of years, it's still worthwhile to think about the end game that could come when things go to an extreme.
Yeah, it's quite fascinating. Like the "Cybersyn Project" in Chile. [0]
Project Cybersyn was a Chilean project from 1971–1973 (during the government of President Salvador Allende) aimed at constructing a distributed decision support system to aid in the management of the national economy. The project consisted of four modules: an economic simulator, custom software to check factory performance, an operations room, and a national network of telex machines that were linked to one mainframe computer.
Cybersyn was kind of a joke; it was a Star Trek set hooked up to telexes where aides manually aggregated information in the same way leaders get their briefings the world over.