
When I started school, my dream was to figure out a theory to underpin a grand unified model of artificial intelligence. Imagine my disappointment once I started studying the subject in detail.

Most functional AI nowadays consists of algorithms that are carefully tuned to solve a very specific problem in a narrowly defined environment. All the research nowadays is pushing the boundaries of a local optimum. Right now, true AI is a pipe dream without a fundamental shift in how we approach AI-type problems. And no, machine learning/deep learning is not that shift; it is just another flavor of the same statistics that everybody already uses.

What concerns me is not Skynet; what concerns me is the exasperating over-confidence that some people have in our current AI capabilities, on Hacker News and elsewhere. Too often, we discuss such technology as a miracle tonic to various economic or social woes, but without acknowledging the current state of the technology and its limitations (or being completely ignorant of such), we might as well be discussing Star Trek transporters. And usually, the discussion veers into Star Trek territory. Proponents of self-driving cars: I AM LOOKING AT YOU.

Take self-driving cars: at least with humans, our failure modes are well-known. We cannot say the same for most software, especially software that relies on a fundamentally heuristic layer as input to the control system. To that mix, add extremely dynamic and completely unpredictable driving conditions -- tread lightly.



The key to self-driving cars is to realize that they don't have to be perfect - they just have to be better than us. It's not that the AI driver is so good - it's that human drivers are SO BAD! I agree with you that AI is a pipe dream, but I do think self-driving cars will succeed. I don't think the computers will ever match our judgement, but it's trivially easy for them to beat us on attention span and reaction time, which will make them better drivers.
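
A rough back-of-the-envelope sketch of just the reaction-time part (the ~1.5 s human and ~0.1 s machine figures are ballpark assumptions, not measurements):

    speed_ms = 70 * 0.44704              # 70 mph in m/s (~31.3 m/s)
    human_reaction_s = 1.5               # assumed typical perception + reaction time
    machine_reaction_s = 0.1             # assumed sensor-to-brake latency
    print(f"human:   {speed_ms * human_reaction_s:.1f} m before braking starts")
    print(f"machine: {speed_ms * machine_reaction_s:.1f} m before braking starts")
    # ~46.9 m vs ~3.1 m - a real edge, but only on this one narrow sub-problem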


> The key to self-driving cars is to realize that they don't have to be perfect - they just have to be better than us.

Again, that's skirting the issue. Do you have any idea how close self-driving cars are to being "better than us"? As someone who's done computer vision research: not close at all.

> I don't think the computers will ever match our judgement

That is exactly the problem.

> it's trivially easy for them to beat us on attention span and reaction time

Attention span and reaction time are not the hard parts of building an autonomous vehicle.

This kind of comment beautifully illustrates the problem with casual discussions about AI technology. Humans and computers have very different operating characteristics, and discussions all focus on the wrong things: typically, they look at human weaknesses and emphasize where computers are obviously, trivially superior. What about the converse: the gap between where computers are weak and humans are vastly superior? More importantly, what is the actual state of that gap? That question is often completely ignored, or dismissed outright. Which is disappointing, especially among a technically literate audience such as HN.


I suspect that the current google car is already safer than the overall average driver.

Don't forget some people speed, flee from the cops, fall asleep at the wheel, get drunk, text, look at maps, have strokes etc. So, sure, compared to peak human performance the cars have a long way to go. However, accidents often come from those worst-case behaviors, and computers are very good at paying attention to boring things for long periods of time.

PS: If driverless cars on average killed 1 person each year per 20,000 cars then they would be significantly safer than human drivers.
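
For a rough sanity check of that number (the US figures below are ballpark assumptions, not exact statistics):

    us_traffic_deaths_per_year = 33_000     # assumed ballpark US figure
    us_registered_vehicles = 250_000_000    # assumed ballpark US figure
    vehicle_years_per_death = us_registered_vehicles / us_traffic_deaths_per_year
    print(f"human drivers: ~1 death per {vehicle_years_per_death:,.0f} vehicle-years")
    print("hypothetical driverless: ~1 death per 20,000 vehicle-years")
    # roughly 1 per 7,600 vs 1 per 20,000 - consistent with "significantly safer"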


> Don't forget some people speed, flee from the cops, fall asleep at the wheel, get drunk, text, look at maps, have strokes etc.

Again you are falling into the same pit: a nonsensical comparison of human and computer operational/failure modes. Of course computers can't have strokes. And yes, they are good at "paying attention to boring things". That is trivially true. And that's not where the discussion should be focused.

I do hope self-driving cars will be generally available sooner rather than later. What's not to like about them? But what I'm really curious about is how that availability will be qualified. Weather, road, visibility conditions? Heavy construction? Detours? Will this work in rural areas or countries that don't have consistent markings (or even paved roads!)? Will a driver still have to be at the wheel, and to what extent will the driver have to be involved?

What is really annoying are breathless pronouncements about a technology without critically thinking about its actual state and implementation. We might as well be talking about Star Trek transporters.


A car that can get sleepy or drunk people home 80% of the time would be a monumental advantage and would likely save thousands of lives a year.

Basically, an MVP that flat-out refuses to operate on non-designated routes, in bad weather, or even at highway speeds could still be very useful.

PS: Classic stop-and-go traffic is another area where speeds are low and conditions are generally good. But because people can't pay attention to boring things, you regularly see traffic accidents that create massive gridlock and cost people days per year sitting in traffic.


> I suspect that the current google car is already safer than the overall average driver.

Based on what? Even Google admits that the car is essentially blind (~30 feet visibility) in light rain. They've done little to no road testing in poor weather conditions. The vast majority of their "total miles driven" are highway miles in good weather, with the tricky city-driving bits at either end taken over by humans.

Google often leaves the impression that, as a Google executive once wrote, the cars can “drive anywhere a car can legally drive.” However, that’s true only if intricate preparations have been made beforehand, with the car’s exact route, including driveways, extensively mapped. Data from multiple passes by a special sensor vehicle must later be pored over, meter by meter, by both computers and humans. It’s vastly more effort than what’s needed for Google Maps.

http://www.technologyreview.com/news/530276/hidden-obstacles...


A self-driving car that can only do highways and can't drive in bad weather still destroys the trucker industry overnight. And that is today.

No, I'm not drinking the AI Kool-Aid here, thinking there will be some breakout solution to the fuzzy visibility problems these cars have. The difficulties of differentiating contextual moving objects correctly, knowing what is "safe" debris and what is not, etc., are all neural net problems that will take years of development and even then just be heuristics.

But you don't really need all that. All you need is something that, given a sunny day and a highway, beats the human at driving the truck, and suddenly you lose three million jobs when it hits the market.


>All you need is something that, given a sunny day and a highway, beats the human at driving the truck, and suddenly you lose three million jobs when it hits the market.

I don't see how, given a sunny day and a highway, an autonomous vehicle can quantifiably beat a human driver to the degree of 'destroying the trucker industry overnight.'

Unless you're assuming that there are few human drivers capable of managing a trip down a highway under the best possible conditions, the results between the two would have to be more or less equal. Those human drivers can, meanwhile, still manage night, sleet, snow, fog, and arbitrary backroads and detours.


It's going to be a while before they can actually eliminate the human. They've had self-driving tankers and self-driving airplanes for a while now, and they still need human operators for various reasons. They just don't have to actually drive.


> A self-driving car that can only do highways and can't drive in bad weather still destroys the trucker industry overnight.

Destroying the trucker industry with a self-driving car which can't drive in bad weather?? I don't think so! Truckers' clients usually care a lot about predictability, and they would NOT be happy to hear "sorry, it's raining and the robot car couldn't arrive".


You might be right that they are safer, but you're totally failing to contest the point. Yes, computers are lots better than humans at lots of things, but they are worse at other things. They can't reliably tell the difference between a cat and a bench - things that may be important when the computer is going 80 mph with humans on board.


Why would a self-driving car need to know the difference between a cat and a bench? All it really has to know is that it is an object of a certain size and not to hit it.

The things that the car needs to know are largely standardized: the lines on the road, road signs, speed limits, etc.
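
A deliberately naive sketch of that "object of a certain size, don't hit it" logic (every threshold here is made up for illustration):

    # Classify nothing; react only to size and distance.
    def should_brake(obstacle_size_m, distance_m, speed_ms):
        big_enough = obstacle_size_m > 0.2                # assumed size threshold
        stopping_distance = speed_ms ** 2 / (2 * 7.0)     # assume ~7 m/s^2 of braking
        return big_enough and distance_m < 1.5 * stopping_distance

    print(should_brake(obstacle_size_m=0.5, distance_m=40.0, speed_ms=25.0))  # True
    # A cat, a bench, and a cardboard box all look identical to this rule -
    # which is exactly the simplification being debated in the replies.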


>The things that the car needs to know are largely standardized: the lines on the road, road signs, speed limits, etc.

These are things humans need to know as well, and yet autonomous car cheerleaders constantly argue that human drivers are death machines, despite the fact that most human drivers know perfectly well how to follow these norms, and even deviate from them, without incident, most of the time.

I suspect that in their enthusiasm to set the bar of human intelligence as low as possible in order to make the case for autonomous cars seem urgent and necessary, some vastly underestimate the actual complexity of the problem. An autonomous car that only knows to 'avoid the boxes' has worse AI than many modern video games.


Maybe if it has to choose between avoiding a kid and a rock...


I'd settle for an AI that's better than only the worst drivers. It could monitor your driving, and only kick in if it thought you were that horrible/drunk.


To add to what has already been said, I sorta disagree with "they just have to be better than us." I would prefer the car to not just be better than an average driver, but to be better than me. Like any person, I'm sure I dramatically overestimate my skills, which makes that bar quite high. So it doesn't have to just be better than us, it has to be better than we (however wrongly) think we are.


Do you know there is a field studying general artificial intelligence? They have a conference[1], journal[2], and a slew of competing architectures.

What you've basically described is that mainstream "AI" is actually quite divorced from what the public thinks of as "AI" -- something matching or improving upon human-level intelligence. Still, there are those working on the original grand dream of artificial intelligence.

[1] http://agi-conference.org/ [2] http://www.agi-society.org/journal/


There is a problem with naming. There are two kinds of AI which get discussed - AI (Artificial Intelligence) and AGI (Artificial General Intelligence).

The problem with these names is that they are not easily distinguishable by the general public or others who are not already familiar with the concepts. It's like the poor coding pattern where you have two variables in your code with almost the same names - 'wfieldt' and 'wtfield' will get confused by the next person who has to maintain your code. It's the same problem with the popularization of concepts.
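
For instance (hypothetical names and values, just to show the trap):

    wfieldt = 0.75   # weight applied to the field (hypothetical)
    wtfield = 12.5   # width of the text field (hypothetical)
    result = wfieldt * wtfield   # easy to grab the wrong one and never notice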

The same thing happens in Physics, where people confuse energy with power, and think that Special Relativity must be much harder than General Relativity (it's special, right?). The words used to name things make them understandable, or not.

What's worse is that this confusion is intentional, created on the part of people who wanted to hype their (actually valuable, but perhaps overlooked) work, by confusing their 'boring' or 'limited' AI work with AGI work in the minds of those supporting or funding that work.

That's why people say things like "true AI".

There really needs to be a name for the non-AGI kind of AI, one which refers specifically to the limited-domain type of problem solving at which "AI" has been so successful. "AI" is a bad name for this.

Personally, I am partial to the metaphor of "canned thought". It shows that a human had to do the work beforehand of setting up the structure of the algorithm that is later doing the information processing, and that the process is limited by the thought that was previously put into it. But Canned Thought is also a pretty bad name for the field. It's not descriptive and it's not catchy. Anyone have any suggestions for a better name for non-AGI AI?


> There really needs to be a name for the non-AGI kind of AI, one which refers specifically to the limited-domain type of problem solving at which "AI" has been so successful. "AI" is a bad name for this.

I think you are looking for Narrow Artificial Intelligence [1], which hasn't gained much traction. To me, Narrow and General AI respectively make perfect sense for their scopes.

[1] http://en.wikipedia.org/wiki/Weak_AI


A name I've heard for this non general type of AI is RI (restricted intelligence).


The core dilemma - and I'm paraphrasing someone, I can't remember who - is that for every other problem given to computer scientists to solve, there already exists an understanding of what the problem space is and what a solution is required to do. The computer scientist just architects and implements the solution in software, but they're implementing business logic, or physics equations to guide a spacecraft, or "route this car through the mapped 3D environment, follow these rules, and don't hit anything."

But sentience simply is not defined. There are no equations for it, no foundational science modeling it. It's possibly the ultimate unanswered question of all human history, and so the task given to computer engineers is: implement this thing we haven't, as a species, been able to explain or describe or comprehend for 10,000 years.

And really, it's a bit much to ask. The neuroscientists could have a eureka moment tomorrow and finally crack the code of sentience, and once it was documented I'm confident some form of rudimentary synthetic implementation would be possible within a decade. But I'm not sure many neuroscientists, or practically anyone (Tononi/Tegmark/Hawkins/?), are directly attempting to investigate and build theories of consciousness.

That theory of consciousness would be the instruction book for the programmers building the first hard AI.


But why would sentience or consciousness be necessary for (or even relevant to) a system's utility and/or hazard? Put differently, it seems like what's really relevant is its cognitive ability, model of the world, decision-making skills, and so on, all so-called "easy problems". Despite the designation they're not truly easy problems to investigate; on the other hand, neuroscientists and other researchers are making steady, semi-predictable progress on these questions every year.

So what role is left for sentience? Maybe it is the idea that only sentience could cause something to act "on its own", to have "intentions"? To that I would ask if any creature, sentient or otherwise, can be said to act free from its "programming", genetic, cultural, etc.

Alternately, maybe you think sentience would be necessary for an entity to perform unexpected actions in furtherance of subgoals (a weaker interpretation of "intention"). Or conversely: without sentience, computers can only perform actions which are trivial consequences of the orders they have been given. But, of course, even computers today do all kinds of things "automatically". If I ask for a browser tab to open on my laptop, memory gets freed and allocated, various fans turn on and off, and lots more happens that would be very difficult for me to predict (even if I designed the system). Again, sentience seems entirely beside the point.


> the ultimate unanswered question

Answer: 42.


I too am not concerned about Skynet.

I'm worried about the descendant of BonziBuddy. I'm worried about annoying little algorithms that are simple-minded but have access to vast quantities of data I'd rather they not, and that have means of actuating things in the real world that could prove annoying.

Everybody seems so goddamned thrilled over this stuff that nobody seems to be worried about the mundane but harmful things we can accomplish and enable with current-day technology.


Could you be more specific? How could an algorithm be "annoying" in the real world? How could the effectiveness of an algorithm be so harmful?


"All representatives are busy, please stay on the line. Meanwhile here's a friendly word from Amazon. We notice you've been searching for baby clothes recently. We will now read a long list of baby-related products to you..."


Twitter bots.

Imagine a bot whose only job is to squat a hashtag and intermittently harass people who mention it.

Now stop imagining, because we've already got those.

Imagine another bot that just looks for bug trackers and spams incomprehensible combinations of issues generated by mining previous issues. It can be hard, on a good day, to tell the difference between a frustrated, inarticulate user and Markov-chain spam.
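
A minimal sketch of how cheap that kind of spam is to generate (toy corpus, word-level Markov chain; not any particular bot's actual code):

    import random
    from collections import defaultdict

    def train(lines):
        chain = defaultdict(list)           # word -> words seen after it
        for line in lines:
            words = line.split()
            for a, b in zip(words, words[1:]):
                chain[a].append(b)
        return chain

    def babble(chain, word, length=12):
        out = [word]
        for _ in range(length):
            if word not in chain:
                break
            word = random.choice(chain[word])
            out.append(word)
        return " ".join(out)

    issues = ["the build fails after the update",
              "the update breaks login on mobile",
              "login fails after the build step"]
    print(babble(train(issues), "the"))
    # e.g. "the update breaks login on mobile" or some scrambled recombination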

Let your imagination run wild.


> What concerns me is not Skynet; what concerns me is the exasperating over-confidence that some people have in our current AI capabilities, on Hacker News and elsewhere.

Yes exactly!!

Each AI winter is preceded by unrealistic expectations. If the unrealistic expectations this time are resoundingly fearful, I worry that the next AI winter will be even worse - as in, create-legislation-and-basically-kill-the-field worse.


> Too often, we discuss such technology as a miracle tonic to various economic or social woes, but without acknowledging the current state of the technology and its limitations

What you describe is one half of Amara's Law: "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run."

What you demonstrate with your comments is the other half :)

A lot of technologies were over-hyped first and underestimated later, until they just appeared to be part of our lives. I'm quite sure that, sooner or later, it'll happen with AI too.


Jeremy Clarkson made a fascinating point about the self-driving car and its "intelligence" shortly before the whole fracas thing:

If you assume these cars will some day be able to identify humans, what decision is made in a situation where the car can either swerve and (potentially) injure a bystander, or continue on a trajectory which it calculates to be certain death for its occupants?

What shocks me is how few people understand that this would not be a "decision" in the sense that the software would have a choice. It would either be coded in directly or would be a resultant output of the criteria it already uses to do its pathfinding.
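
A minimal sketch of what "a resultant output of the pathfinding criteria" could look like (the weights and risk numbers are invented for illustration; no real system is claimed to work this way):

    # Each candidate trajectory carries estimated harm; the "ethical decision"
    # is just whichever one minimizes a cost someone weighted up front.
    def pick_trajectory(candidates, occupant_weight=1.0, bystander_weight=1.0):
        def cost(t):
            return (occupant_weight * t["occupant_risk"]
                    + bystander_weight * t["bystander_risk"])
        return min(candidates, key=cost)

    options = [
        {"name": "stay the course", "occupant_risk": 0.9, "bystander_risk": 0.0},
        {"name": "swerve",          "occupant_risk": 0.1, "bystander_risk": 0.6},
    ]
    print(pick_trajectory(options)["name"])  # whoever set the weights made the "choice"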


Eh. If we get to the point where an AI has this degree of resolution, we've solved a LOT of problems that we don't know how to solve today. And the answer is that I either get to choose a setting or you save me because I'm the owner of the AI.


I had a very passionate interest in AI in high school - did a CS degree, then was lucky enough to get a paid RA post (UK: Research Associate, a contract researcher paid for by a project where the research work overlaps to a greater or lesser extent with your PhD topic).

I don't think my own interest in AI recovered from reading Drew McDermott's "A Critique of Pure Reason" - fortunately that was about 1992, and I found this really cool networked hypertext thing that was far more interesting....


Surprised not to see this linked in the article:

http://karpathy.github.io/2012/10/22/state-of-computer-visio...


>And no, machine learning/deep learning is not that shift; it is just another flavor of the same statistics that everybody already uses.

How do you think the human mind runs? Something other than statistics?


Human minds don't run on statistics. Some of it can be explained by statistical modeling, but we don't really understand how an organic brain works.


Your first sentence needs some more support. We know that some of it can be explained by statistical modelling, and we don't yet know how to explain the rest of it. Why does that make you sure that it doesn't run on statistics?


What makes intelligence is organic and doesn't seem to map easily onto statistical modeling. We can oversimplify some behaviors and claim some math describes them, but at best we scratch the surface.


>Human minds don't run on statistics.

Theoretical neuroscience at least partially disagrees[1]

[1] -- http://www.fil.ion.ucl.ac.uk/~karl/The%20free-energy%20princ...



