
I took the liberty of drawing a few situations in which an AI would likely have no clue what to do, even if it detected the danger. I'll be adding more, as every day on the freeway brings me new ideas (in fact I'm editing one right now). Something like a "Winograd Schema for self driving cars": http://blog.piekniewski.info/2017/01/19/what-would-an-autono...



Your artwork is great! I'm not sure #1 would really be a problem, as that situation should never occur and is totally preventable (there should always be traffic cones/barricades surrounding the cover, something the car could easily detect). In the rare case it does happen, the utility company would/should be at fault for leaving a manhole open like that.

I can also think of a few ways to prevent number 2 (basically, a combination of GPS + knowing where all intersections occur + road data from thousands of other connected cars = knowledge of where every stop sign is. Certainly not foolproof, but I think ultimately it is a problem that has possible solutions)
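Roughly, I imagine something like this (a hypothetical data structure and thresholds, nothing any vendor actually ships): the car only trusts a detected stop sign if the crowdsourced map also expects one near its GPS fix.

```python
import math

# Hypothetical crowdsourced map: positions (lat, lon) where other cars
# have repeatedly confirmed a stop sign, with a confirmation count.
STOP_SIGN_MAP = {
    (37.4275, -122.1697): 1843,   # confirmed by 1843 vehicle passes
    (37.4301, -122.1712): 976,
}

def haversine_m(p1, p2):
    """Approximate great-circle distance in meters between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6_371_000 * 2 * math.asin(math.sqrt(a))

def sign_is_plausible(detected_at, radius_m=25.0, min_confirmations=50):
    """Treat a camera-detected stop sign as genuine only if the crowdsourced
    map also expects one within radius_m of the car's GPS fix."""
    return any(
        haversine_m(detected_at, pos) <= radius_m and count >= min_confirmations
        for pos, count in STOP_SIGN_MAP.items()
    )

print(sign_is_plausible((37.42751, -122.16968)))   # True: map agrees
print(sign_is_plausible((37.50000, -122.20000)))   # False: likely a prank or error
```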

The rest of your examples are fantastic though, #3 being quite terrifying actually. They are all very intriguing thought-experiments and I look forward to seeing your future additions!


> In the rare case it does happen, the utility company would/should be at fault for leaving a manhole open like that.

They're who's at fault, but you're the one who's dead.

I'd rather my car avoids holes in the road even if they haven't been marked [yet].


> They're who's at fault, but you're the one who's dead.

This is what my dad taught me too, and why I've since always been cautious as a driver, bicyclist, and pedestrian, even when I have the right of way. Being right isn't as fun when someone's on their way to the hospital/morgue.


An open manhole will not cause you to veer out of control and die in a horrific accident. At worst, you might have a blowout. But in general, unless you have a very small car, most cars' tires are larger in diameter than the hole itself.

Provided that your tires are normal (and not the thin-sidewall don't-my-ride-look-cool tires with big rims) and in relatively normal balance - and you are going a normal speed - the wheel most likely will drop slightly and then bounce off the lip, jarring the car forcefully but likely not doing much damage.

If the tire doesn't blow (it may), then you might get a cracked rim (if alloy) or a bent rim (if steel, depending on the force and deformation), and you would probably want to have the tire inspected (because the plies may be compromised in the area of the sharp impact, which could cause premature interior delamination of the tire in the future).

Yes - ideally you want you or your car to avoid an open manhole or large pothole, but in general, it's unlikely to be the cause of a serious accident.


LIDAR imaging already scans the road for its topography. An open manhole would be easily detected (it's equivalent to a particularly deep pothole - un-navigable).

I don't know how Tesla's video-only solution would cope, but they're not Level 5 yet.


How would it show up differently from a regular pothole? What if it's almost filled with water and the only thing giving away its true nature is its roundness? The point is to explore the limits. If these aren't the limits, then they are likely very close. I'd like people to think of those cases before they jump into their car and start playing cards (like in one of the videos in this thread).


Basic geometry. A pothole deeper than a certain threshold, at a known angle from the sensor, will produce a different distance profile (namely, rather than the expected arc, you'd see a sudden increase in distance, corresponding to a non-planar surface). Depth is inferred from the deviation from the plane of the road surface around it.
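To make the geometry concrete, a minimal sketch (simplified 2D case, assumed mounting height, not a real LIDAR pipeline):

```python
import math

SENSOR_HEIGHT_M = 1.9          # assumed LIDAR mounting height above the road plane

def expected_range(beam_angle_deg):
    """Range at which a downward beam should hit a perfectly flat road."""
    return SENSOR_HEIGHT_M / math.sin(math.radians(beam_angle_deg))

def depth_below_road(beam_angle_deg, measured_range_m):
    """Extra range beyond the flat-road prediction, converted to vertical depth.
    Positive means the surface is below the road plane (a hole); negative means
    an obstacle sticking up."""
    extra = measured_range_m - expected_range(beam_angle_deg)
    return extra * math.sin(math.radians(beam_angle_deg))

# A beam 10 degrees below horizontal should hit flat asphalt ~10.9 m ahead.
angle = 10.0
flat = expected_range(angle)
print(round(flat, 2))                                   # ~10.94 m

# If the return instead comes back 2 m "too far", the surface there is
# roughly 0.35 m below the road plane -- deep enough to flag as un-navigable.
print(round(depth_below_road(angle, flat + 2.0), 2))    # ~0.35 m
```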

This is not actually a particularly interesting edge case.

The water case is more interesting, but I would counter by posing the question of how a human would know whether a water-filled pothole is safe to navigate. A clear-water pothole would not impede the LIDAR; a murky one might, and could fool LIDAR, I suppose (I cannot find any solid info on this interface).

Of course car RADAR and a visual camera are likely to also be fitted - both of which could identify a water-filled pothole that LIDAR might struggle with (RADAR can penetrate water and detect the unexpected backscatter; a camera can simply see the murky-water pothole and choose to navigate around it or stop - in both cases a human can't make a better-informed decision).


It depends on the context. Humans may make a better decision if there is something else providing the information. Perhaps the manhole cover sits at the curb triggering immediate attention. Perhaps there are cones, only in the wrong place because some drunk guy moved them for fun. Such is reality.

The problem is not so much availability of information (a sensor packed car has a lot more information than a human behind a windshield), but making sense of that information, particularly in those 0.0001% uncommon, strange cases.

Humans can connect evidence (a manhole cover sitting next to a water-filled hole) because they generally know how things work (a.k.a. common-sense knowledge, a long-standing problem in AI). Humans can infer what happened and predict what could happen. These things are not really available to AI, which these days is more like see-react, not see-anticipate-react. I have a specific post on this on my blog: http://blog.piekniewski.info/2016/11/03/reactive-vs-predicti... The blog has many other posts on the limits of today's approach and on some ideas to fix it in the future.
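A toy contrast between the two styles (purely illustrative, not code from the blog): a reactive policy acts only on what is visible right now, while a predictive one rolls a crude world model forward, so a cover lying by the kerb can imply a hole that is not yet visible.

```python
def reactive_policy(observation):
    """See-react: decide purely from what is visible right now."""
    return "brake" if observation["hazard_visible"] else "continue"

def predictive_policy(observation, world_model, horizon=5):
    """See-anticipate-react: simulate a few steps ahead for each candidate
    action and pick one whose imagined future stays hazard-free."""
    for action in ("continue", "slow_down", "change_lane", "stop"):
        state = dict(observation)
        safe = True
        for _ in range(horizon):
            state = world_model(state, action)      # crude forward prediction
            if state["hazard_visible"]:
                safe = False
                break
        if safe:
            return action
    return "stop"                                   # nothing looks safe: give up gracefully

# Toy world model: a cover lying by the kerb implies an open manhole ahead,
# even though the hole itself is not yet visible in the current frame.
def toy_world_model(state, action):
    next_state = dict(state)
    if state.get("manhole_cover_on_kerb") and action == "continue":
        next_state["hazard_visible"] = True
    return next_state

obs = {"hazard_visible": False, "manhole_cover_on_kerb": True}
print(reactive_policy(obs))                          # "continue" -- reacts only to what it sees
print(predictive_policy(obs, toy_world_model))       # "slow_down" -- anticipates the hole
```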

I agree that some of these cases may be challenging for some humans (particularly inattentive ones), but we want the self-driving car to be (much) safer than an inattentive driver, so we need to set the bar high.


Clearly if you consider any of these scenarios in isolation we can dream up a solution, but can you do that with literally any adverse scenario? That seems like the point to me.


Yes, that is exactly my point. Solving these particular situations is like typing all the possible Winograd Schema sentences into a chat bot. Certainly possible, but it does not solve the larger problem.

In reality there will always be a different situation. These situations sit in the statistical long tail (too infrequent to train on reliably, yet too frequent to ignore) and vary enormously from case to case. These are just examples.


Except if the first scenario is solvable (fairly trivially) then why should I assume there must exist an unsolvable scenario?

An argument from the vagueness of "humans have context" with regards to driving is a poor one - humans that are driving do not do a good job of analysing context because of reaction times, and do not share a common context they react to similarly.

You haven't made a compelling case, because you've yet to present a compelling example (i.e. one definitely unsolvable by reasonably usable technology on an autonomous platform). The manhole scenario is based on assumptions about the operation of LIDAR which simply aren't true, and you've had to modify it in a number of ways to try and make it tricky (i.e. when has there ever been a manhole completely full of water?)


I think you're still confused. The point isn't that any of these scenarios don't have solutions. It's that there is a practically infinite number of scenarios like them and it's impossible to plan for each and every one of them.


A sinkhole (named "Steve"?) just opened up in a highway near Oakland, CA. Initial photos showed a hole a couple feet across. Manhole-sized. Repair crews later found that under the asphalt the sinkhole was large enough to swallow a car. Human drivers recognized that something was amiss and avoided the hole. Would a self-driving car?


Let's put "ability to detect solid road in front of car" near the top of the requirements list. Whether it is a pothole, sinkhole, missing manhole cover, or missing bridge, it needs to be a mandatory requirement. From what I've seen of LIDAR imaging it seems like it is possible.


> In the rare case it does happen, the utility company would/should be at fault for leaving a manhole open like that.

Sometimes, the utility wouldn't be at fault. It has happened in the past that people would steal manhole covers to sell them for scrap (you know, to get money for their next 'fix').

This is less of an issue now, as most scrap companies won't take manhole covers anymore, unless the seller can prove they represent the locality shown on the cover, and that they have such authorization...

...but there are less scrupulous scrap companies.


Hey, I just finished another one. A school bus with a swinging open stop sign in the middle of the freeway.

Anyway, just food for thought: autonomy really requires a lot of "intelligence" and our technology is not quite there to deal with all these bizarre corner cases. I'm glad you like it.


You could discount that by noticing that a stop sign generally shouldn't move relative to the ground. But then, you may want to notice when a police car wants to pull you over by waving that thing they wave out of the car's window...

I actually love trying to make computers deal with the real world - it quickly reveals just how goddamn complicated the real world is, and how many things we think of as hard and fast are utterly arbitrary.


Right, there are cases where such a moving stop sign would actually be a real stop sign. The problem is, we are trying to program in complexity that cannot be anticipated.


I think such things can be discounted by knowing that motorways never have stop signs (same with the prank stop signs). Of course, there are a lot of things that only cause a change in driving behaviour in very specific contexts. The bus's stop sign probably only applies when the bus is stopped. Similarly, in Germany at least, a bus may turn on hazard lights at a bus stop, requiring everyone to drive past only at walking speed. A bus stopped on the shoulder of a motorway with hazard lights on is a different context again and thus should not trigger the same behaviour.

There are a lot of rules and laws and they change from country to country, or in the US' case, even from state to state. Self-driving cars must know these things and react accordingly. So I think the scenarios presented here are just a few (admittedly, more far-fetched than others) more contexts amidst the probably hundreds of others that already have to work correctly for switching safely between city and motorway driving, driving in a living street, observing right of way correctly in all circumstances (roundabouts, weird stuff like four-way stops, signs changing ROW for one intersection, or a stretch of road, lowered kerbs, people exiting a living street even though it's to the right, cars on an on-ramp and perhaps letting them in based on how far the on-ramp still continues, ...).
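Just to illustrate how quickly this rule-plus-context logic sprawls, a deliberately tiny, hypothetical sketch (an invented rule subset loosely following the examples above, not any real rule base):

```python
def required_behaviour(observation, context):
    """Map one observation to a driving obligation, given the surrounding context.
    The rules here are a hypothetical, drastically simplified subset."""
    sign, country, road = observation["sign"], context["country"], context["road_type"]

    if sign == "stop":
        # Stop signs don't occur on motorways; treat one there as noise or a prank.
        return "ignore" if road == "motorway" else "stop"

    if sign == "bus_hazard_lights":
        if country == "DE" and road == "urban" and observation["bus_stopped_at_stop"]:
            return "pass_at_walking_speed"      # German rule at bus stops
        if road == "motorway":
            return "change_lane_if_possible"    # broken-down bus on the shoulder
        return "proceed_with_caution"

    if sign == "school_bus_stop_arm":
        # Only binding while the bus is actually stopped (US-style rule).
        return "stop" if observation["bus_stopped_at_stop"] else "proceed_with_caution"

    return "proceed"

obs = {"sign": "bus_hazard_lights", "bus_stopped_at_stop": True}
print(required_behaviour(obs, {"country": "DE", "road_type": "urban"}))     # pass_at_walking_speed
print(required_behaviour(obs, {"country": "DE", "road_type": "motorway"}))  # change_lane_if_possible
```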

Stop signs are interesting in any case, since they have a characteristic shape. If we go full autonomous, then snow-covered signs must be correctly observed as well, at which point any octagon shape may be a stop sign (perhaps, again, depending on context). Same with signs that don't reflect well anymore at night.


The real sign SDVs are here will be when infrastructure starts accommodating their needs. Humans aren't really good at driving either, so we've invented a lot of ways to help them and direct their attention. Open manholes are supposed to be marked clearly, because people do miss that stuff. When there's snow on the road hiding lane markings, someone will come and clean it out. Signs are made to be retroreflective. Etc.

So at some point, I suppose the infrastructure (broadly understood - including laws) may be modified to reduce the dependence on cultural context and other things machines are weak at. So for instance, it won't be every sorta-octagonal shape that works as a stop sign; it will be required by law to be clearly visible and also have some machine-friendly accommodations, and SDVs will be free to ignore signs without those accommodations.

(Doesn't solve the prank problem, but humans are equally vulnerable to a targeted prank anyway.)


>When there's snow on the road hiding lane markings, someone will come and clean it out.

In New England, snow can completely cover the road surface for days or even weeks at a time, and ever-changing piles of snow cover the curbs and parts of the lanes. Humans just choose a path without regard to where the lanes are in the summer. On some roads this turns a four lane road into a two lane road with a lane-width snow pile between the lanes. In a few spots it turns a two lane road into a one lane, with drivers from different directions taking turns.


Remember when Boston was about to start dumping snow into the sea because they ran out of places to put it?


Yeah, there were a few spots in my neighborhood that they gave up on trying to plow, but those of us with 4WD trucks were able to get through. Not sure how a self driving truck is going to know which snowbanks it can drive through/over and which it can't. Sometimes I couldn't tell until I tried. That was fun!

Now I really wish I had taken more photos that year specifically to illustrate this sort of thing.


> Humans aren't really good at driving either,

I'm not sure where that came from. Is one fatality per 200 million miles not good? Seriously, there are millions of people travelling every day. And true, there are accidents, but I think this "Humans aren't really good at driving" mantra is not really serious (though it has been frequently repeated recently by the PR of some companies).

Attentive humans are extremely good at driving; distracted humans are much worse. But perhaps the technology should focus first on the much easier task of making sure the driver stays attentive. That would probably save many lives before we can have a real autonomous car.


> When there's snow on the road hiding lane markings, someone will come and clean it out.

In Nordic countries, you don't see the lane markings for several months, they are under snow and ice. And sometimes, when it melts in April or May, you notice you have been driving on the roadside for a few months :-)

Oh, and you don't see anything at all if you are behind a truck or a bus.


In Nordic countries...

In some parts of some Nordic countries. In my part of my Nordic country we haven't had proper snow for probably about 6 or 7 years and even then it was only for a few weeks.

But more generally: if the snow-clearing machines were also driverless, then they might be able to run more of them more often and keep the roads clearer.


It's not just a matter of running them often. Where are you going to put all this excess snow? And operating the plows won't be free even without labor.


> (Doesn't solve the prank problem, but humans are equally vulnerable to a targeted prank anyway.)

I would hesitate to use the word "equally". People are actually quite robust. In particular, the second human in a row will certainly not be tricked by the same prank that tricked the first one.


I want to emphasize the word "targeted" I used. Pranks involve an intelligent agent with malicious intent and an attacker's advantage - i.e. the prankster is free to exploit any vulnerability of their victim. People have different vulnerabilities than machines, but they still have them.


Sure they do, I agree with that. But the word "equally" suggests the susceptibility is the same. I would actually emphasise the difference. It is much easier to fool a machine than a human, particularly if we have a copy of the machine at hand and can tinker with it (see adversarial examples for deep nets). Humans are all different, so we can never expect our "adversarial example" to be 100% certain to work.


Especially if it's a familiar road.


But if it's that reliant on a special sensor it's quite susceptible to not only pranks but sabotage or plain old maintenance issues.


I know a few places where the stop sign is > 90% covered by bushes in summertime, and the continuous transversal white line is > 90% covered by gravel (and the remaining 10% has worn off). Yet people do stop. Either because they know the place, or because they "feel" there is something suspicious (the comparative size of the roads, the angle of the crossroad, the fact that there is a "pile" of gravel there, etc.). Not sure how a car could decide this by itself.


I wouldn't say never; sometimes you see them in really odd places.


That's something that's easy to train. Each of these is a bit country-specific, like how a road sign is mounted and where, but this is far from a real problem.


What is the real problem then?


>Open manhole.

Not sure why this would be a problem? The car can identify a dangerous road surface and navigate around it.

>Many stop signs.

Is it really so bad if the car does stop at every one? It's inconvenient, but not dangerous. Unless there's someone behind in which case the car should know whether there's a safe stopping distance behind and act accordingly.

>Fire in a tunnel.

This doesn't seem like it would be that difficult to detect. And also, I would expect self-driving cars to have an emergency stop button.

>Tornado.

I would imagine this could be detected as a visual anomaly, but more generally... yeah natural disasters suck.

>Potential car-jacking.

I really don't think this should be the car's responsibility. If there's a serious risk of people with guns ambushing you on a road, then it's not safe and you shouldn't be driving there. How the self-driving car reacts to that is the least of my concerns.


> I really don't think this should be the car's responsibility. If there's a serious risk of people with guns ambushing you on a road, then it's not safe and you shouldn't be driving there. How the self-driving car reacts to that is the least of my concerns.

Well, I think the idea behind this prompt was more about how it would handle humans where there shouldn't be any. I would imagine that, with the premise that the car would stop for unexpected pedestrians, a carjacking/mugging of this type would be as simple as finding a remote road, waiting for a car with a preferred target, and then intercepting - the car would acquiesce and leave the occupant vulnerable to an attack (as simple as smashing the windows with a window breaker and then attacking the occupant).

The picture has an extreme outlier, but criminals adjust to technology pretty fast, especially when it makes their lives easier. It may not be armed gunmen looking to get you, but if you can be assured you'll get a target to stop, I'm not sure why criminals wouldn't exploit it.

edit: change "attach" to "attack" as I meant it originally


This is a method of committing a carjacking that works today; it's not some kind of new exploit. Very few people will intentionally run down a person in the road, as opposed to stopping, because they think something fishy might be going on.

The scenario I'm familiar with involved a bicyclist crashing, or laying down, his bike on a slow stretch of road in an industrial area. His companions would approach the car when it stopped. The deterrent for this is a harsh criminal penalty, not AI.

It never became a commonplace crime, in spite of being relatively simple.


I think you can get on Liveleak and find lots of video evidence that there are places where drivers will not stop for precisely this reason.


Rather than train automotive AI to handle this case, we should just stipulate that if you live in a godforsaken place that might require you to run over somebody during your drive in order to survive banditry, you should keep your seatbelt fully fastened and remain aware enough to retake control of the vehicle at short notice should it stop. I'm sure some airline has a sign that could be reused for this purpose.

Or, you know, you could just drive yourself. Or pay a driver. Or stop fantasizing about embarrassingly absurd stuff that has nothing to do with the efficacy of automotive autonomy.


The entire point of the article you're discussing is that there are a million bizarre little circumstances like this that the software will probably fail to take into account, not that carjacking specifically is unsurmountable. This stuff isn't "embarrassingly absurd;" it happens daily.


> The entire point of the article you're discussing is that there are a million bizarre little circumstances like this

In defense of the article, which is pretty reasonable, it doesn't mention the ridiculous hijacking example you were harping on. That is quite a unique situation, technically and ethically.

The other examples you gave: bad roads, downed power lines, weather, and fire are all much more reasonable examples, with much more straightforward solutions available. It's essentially obstacle avoidance and exception handling. The article's example of situations involving not having any safe place to stop is even more interesting.

edit: I was referring to TFA, not to the artist who illustrated some stuff on his blog and shared it here. Which was also a fine effort...


> In defense of the article, which is pretty reasonable, it doesn't mention the ridiculous hijacking example you were harping on. That is quite a unique situation, technically and ethically.

It is clearly one of the implied reasons a bunch of armed men would be standing around on the road in that picture. What makes it "ridiculous," exactly?


> What makes it "ridiculous," exactly?

If you're on a road with armed men with hostile intent, the fact that your autonomous car is unable to offer a solution is absolutely the least of your worries. The unique properties of the armed men on the side of the road problem are not representative of the more general problems vehicle autonomy involves. Take your pick.


Speeding past or turning around are sensible actions a human driver could take that the AI probably would not. The whole point of that example is that the appropriate response to armed guys on the road is not the same in one context as another.


If they just want to steal your car, coming to a stop and abandoning the vehicle is probably the safest thing to do anyway.

But if they actually intend to harm you, you've got bigger problems, and you should probably hire a professional defensive driver instead of expecting consumer AI to support your edge case.


I'm not sure I'm explaining the scenario well - this isn't a planned attack, it's an opportunity attack where you're the victim based simply on the fact that you happened to be there, much like a mugging.

Assume the following for a moment: it is known that, when presented with pedestrians in an unexpected area, an automated car will plot a way around them or will yield until they are no longer in the way.

Suppose that, with that condition, you were having your car drive you from downtown to your small suburb, which requires going down a generally empty road (i.e., no one is around because it's late). In the distance, 4 people form a loose barrier that the car can't safely pass through, so it triggers the logic to yield to pedestrians. The people are muggers, and they quickly break the windows and proceed to mug.

Yes, it's a very specific scenario, but if such a piece of logic exists, how long until such a scenario becomes commonplace? This isn't asking AI to evaluate and protect people from targeted attacks or inventing paranoid delusions of importance; it's about figuring out how to respond to a fairly simple abuse of an often-called-for bit of logic in the AI.


What do you think a human would do differently in this scenario? If a normal person is driving home late and sees even just one person standing in the road, they'll probably slow down for them before even thinking about it, just like the car. I think if this were such an effective way to mug people, it would already be a major problem.

And again, you're talking about a car that's surrounded by cameras by design. It would be easy to include a button that immediately starts streaming all camera info to cloud storage (or indeed, just do so by default if mugging were such a huge problem).


If we define "normal person" to mean a person who lives in a safe country where carjackings are not commonplace, maybe.


Yes, that is the market I expect self-driving cars to target.


You don't see any demand at all in, say, South Africa, or Brazil? I think that's delusional.


Did I say that? I said I don't think they will be the (initial) target market, and thus the car's inability to deal with carjackers will be a non-issue.


> But if they actually intend to harm you, you've got bigger problems, and you should probably hire a professional defensive driver instead of expecting consumer AI to support your edge case.

I mean this is just reality in a lot of places; I don't see how you can just handwave it away. Cars are designed to operate in all kinds of extremes that probably don't apply to your personal situation.


>Cars are designed to operate in all kinds of extremes

Yes, and self-driving cars will not be suitable for those extremes. I don't see how that's a major problem?


I don't see a lot of demand for a car that's only appropriate for pleasant Sunday drives.


There's a huge range of common usage between "pleasant Sunday drives" and "extreme defensive driving against armed goons".

I don't think the latter is necessary for self-driving cars to be successful.


But it's just an example, is the point. We don't have big carjacking rings, but we have plenty of extreme weather, old roads (even dirt roads depending on where you are), and other extreme circumstances that maybe don't matter to somebody who never wants to leave a forty-mile radius of Mountain View.

Even in an idyll you could imagine some novel circumstance created by, say, downed power lines.


> This doesn't seem like it would be that difficult to detect. And also, I would expect self-driving cars to have an emergency stop button.

That is, if the passenger pays enough attention, which they won't. Also note: each one of these cases can indeed be programmed for. But that is like fighting the Winograd Schema by typing in all possible sentences. In reality none of this will happen, but something else will, which we don't even anticipate.


Assuming you've accounted for these scenarios, yea, everything may be just fine.

I think the point is that there are millions of different things that can go wrong, and many have a 1-in-1,000,000 occurrence. Human intelligence is able to improvise, but what will cars do?


> Human intelligence is able to improvise, but what will cars do?

Halt, and send a warning signal to any vehicles nearby. Which is better than what we can ordinarily accomplish today in those one-in-a-million occurrences.
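In other words, some kind of minimal-risk maneuver. A tiny sketch of that fallback (stub objects standing in for a real vehicle or V2X interface, which would of course be far more involved):

```python
import time

class StubVehicle:
    """Stand-in for a real vehicle interface (hypothetical, for illustration only)."""
    def __init__(self):
        self.speed_mps = 15.0
        self.hazards = False
    def decelerate(self, max_mps2, dt=0.1):
        self.speed_mps = max(0.0, self.speed_mps - max_mps2 * dt)

def minimal_risk_maneuver(vehicle, broadcast):
    """Fallback when the planner has no confident action: hazards on,
    warn nearby traffic, and brake gently to a stop in-lane."""
    vehicle.hazards = True
    broadcast({"type": "vehicle_stopping", "timestamp": time.time()})
    while vehicle.speed_mps > 0:
        vehicle.decelerate(max_mps2=2.0)

v = StubVehicle()
minimal_risk_maneuver(v, broadcast=print)   # prints the warning, then stops the stub
print(v.speed_mps, v.hazards)               # 0.0 True
```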


re: open manholes - Why would an AI have no clue what to do if it could detect the danger? Drive over the middle, drive around it, or stop. I don't think there is any reason to think AI would be worse at handling the situation than people currently are. See this manhole "catapult" video compilation.

https://www.youtube.com/watch?v=ezYV2Aas668


Nice work! Some of the situations in the drawings imply that the car needs human-level commonsense to perform properly. For certain things like 'fire in a tunnel' or 'tornado crossing the road', there could be sufficient time to warn humans to take control or at least give verbal instructions, while the car is slowing down drastically or even turning away as a safety-first precaution.

The challenge for the autonomous car would be "how to know that it doesn't really know". I wonder if some existing philosophy or theories are applicable for such a purpose in the real world. Does anyone have pointers?
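One starting point I can think of (only a sketch; real systems would need calibrated ensembles, out-of-distribution detection, and more) is to gate autonomy on the perception model's own predictive uncertainty, e.g. the entropy of its class distribution:

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a discrete probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def scene_decision(class_probs, max_entropy_nats=0.5):
    """Hand off to the human (or stop safely) when the perception model's
    output distribution is too flat to trust. The threshold is an assumption."""
    if entropy(class_probs) > max_entropy_nats:
        return "uncertain: slow down and request human takeover"
    return "confident: proceed autonomously"

# Sharp prediction ("that is a stop sign, 97%"): low entropy, carry on.
print(scene_decision([0.97, 0.02, 0.01]))
# Flat prediction ("tornado? smoke? sensor fault?"): high entropy, hand over.
print(scene_decision([0.4, 0.35, 0.25]))
```

The obvious catch is that deep nets are notoriously overconfident on inputs unlike their training data, which is exactly the "doesn't know that it doesn't know" failure mode I'm asking about.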


These are interesting, and clever, and I agree difficult, but I don't think they necessarily need to slow down the progress of self-driving cars much at all.

Tesla's data so far suggests that their current autopilot implementation is reducing crashes by 40% - http://www.theverge.com/2017/1/19/14326258/teslas-crash-rate... - and while these cases are all problematic, they're all fairly rare, and the cases in which they come up and the car reacts wrongly and that's a major problem are going to be even rarer. On top of that, self-driving car performance is only going to improve.

It doesn't really matter (in terms of the value of self-driving cars - there's a mostly independent marketing question) if there are a couple of cases where self-driving cars make the wrong choice and kill you, if there are many thousands of cases where they save your life. It's effectively changed a great many risky situations into some new and different but less likely ones.
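A back-of-the-envelope version of that tradeoff, using the ~1 fatality per 200 million miles figure quoted elsewhere in the thread and otherwise purely assumed, illustrative numbers:

```python
MILES_PER_FATALITY_HUMAN = 200e6      # figure quoted upthread for human drivers

# Illustrative assumptions, not measurements:
COMMON_CASE_REDUCTION = 0.90          # share of "ordinary" fatal crashes the AV avoids
EDGE_CASE_MILES_PER_FATALITY = 1e9    # hypothetical rate of fatal long-tail failures

human_rate = 1 / MILES_PER_FATALITY_HUMAN
av_rate = human_rate * (1 - COMMON_CASE_REDUCTION) + 1 / EDGE_CASE_MILES_PER_FATALITY

print(f"human fatality rate: {human_rate:.2e} per mile")   # 5.00e-09
print(f"AV fatality rate:    {av_rate:.2e} per mile")      # 1.50e-09
print(f"net risk ratio:      {av_rate / human_rate:.2f}")  # 0.30 -- still a net win,
# but only because the assumed edge-case rate is low; push that rate up toward the
# human baseline and the net benefit disappears despite fixing 90% of ordinary crashes.
```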

If driving a self-driving car significantly increases my odds of survival while driving, in addition to giving me huge amounts of bonus free time, then I'm definitely interested, regardless of risky but rare cases like this.


Regarding the "driving into a manhole" scenario:

This is unlikely to cause a problem with a car's direction or motion of travel; in the worst case, it may cause a blowout, but tires and rims are surprisingly tough (purposefully). I've hit potholes as large as manholes, and other than being very surprising, no damage was done.

Then again, I drive a pickup truck - I wouldn't expect something much smaller to handle it as well.

Still, the rate of speed and balance of the car would all play into the scenario. While it would be better for a car to avoid an open manhole (or pothole for that matter), it generally isn't a crazy scenario if the car hits one, either.

If you want to see and hear about crazy stories of mishaps people have, yet the car continues to be mostly drivable, check out the sub-reddit "Just Rolled Into The Shop":

https://www.reddit.com/r/Justrolledintotheshop/

You'll learn both just how stupid people are with their cars and driving, as well as just how robust vehicles actually are.


Great artwork! Coming from a country with somewhat moderate weather, I have to ask though:

> Tornado is crossing the road.

Do people really drive when there are tornados nearby? Shouldn't they be hiding in a cellar, or something?


I live in Arizona and I drive through twisters like that all the time. They are a constant feature in the desert. Along with tumbleweeds blowing along, which disintegrate nicely when hit by your car, but would appear as a large solid mass to the autonomous system and probably freak it out (as it should).


Tornado watches tend to cover large areas of potential hazard and not everyone within an area will receive advanced warning.

I've driven by two (small) tornados while travelling long distances over the last 15 or so years.


Really cool page. I think the journalists are missing that the remaining 5% of situations to be solved are much harder than the 95% of commonplace situations.


There are lots of places where ambushing drivers to rob them is pretty common. For example, things like throwing rocks from hiding (like a pedestrian overpass) to break some cars' windows and make them stop, and then some accomplices come out and attack. This was really common a few years ago near my home. Now, I think they will just need a stop sign, and do the same as #5.


I think the missing piece here is commonsense reasoning (I really like your artwork, by the way!). There has been a lot of work on commonsense reasoning in symbolic AI, but I'm not sure if anyone is working on that in machine learning.


Yup, there is a boatload of low-level knowledge missing from any AI that we build. It is painfully visible with robots (and autonomous cars are robots as well); see this DARPA challenge video (the program was led at the time, BTW, by Gill Pratt): https://www.youtube.com/watch?v=g0TaYhjpOfo&ab_channel=IEEES...

A lot of stuff we take for granted, such as supporting oneself against a nearby wall when losing balance, is completely out of reach of today's "AI".


Love the artwork!


That's awesome. Thought-provoking for sure.



