Hacker News

When read literally, there is no way that any system ever, human or automatic, will be able to drive on "any road" with "perfect safety". Crazy freak accidents happen, and human drivers regularly aren't able to handle them. On top of that, most people just can't safely drive in thick fog, icy conditions, or pouring rain. No amount of intelligence will allow a car to drive through a sufficiently flooded street.

Just as with human drivers, the important metric is whether they can drive acceptably safely in the environments they attempt to operate in. There's enough low-hanging fruit from faster reaction times and 360° sensing to make that feasible without needing to solve an AI-complete problem. Nobody's asking them to be able to drive at 70 MPH through a fog bank in Alaska with ice on the road, even if that is technically part of the requirements for Level 5 that even humans can't obtain.




If the standard of self-driving becomes something akin to how airplanes fly, then weather will be a delay, bypass, or do not fly precondition to the execution of the flight plan.

All this stuff about "it's right around the corner" vs "it'll never happen" rests on unspoken assumptions about what it means to take a "safe" "trip" "in a car" "driven" "autonomously".

Safe? Compared to humans that are alert, humans with smartphones, drunk drivers? Airplanes?

Trip? Distance? Rural vs Urban? Highway? Speed?

"in a car"? Smart fortwo? Motorcycle? RV? Sedan? SUV?

"driven"? All by software, all the time, with no windshield? Or with a human as implicit backup who can manually override when uncertainty is too great?

I think what needs to happen is that you need programs tailored to specific routes/roads. As you drive any distance, you download/cache the programs for the routes and execute them.

You aren't going to be carrying around a "general driving AI", except as emergency backup to specific route downloads.

Humans work this exact way. You have the idiot tourist drivers versus the people who commute on a route. Commuters know how different stretches of the road's concrete sound, how fast they could take the curves if they had to, and which lane to be in to anticipate merge backups.

Those commuters know how to drive those routes in winter or summer as well, deer season or not, rain or shine. So that implies conditions-specific programs as well.
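Taken literally, those per-route "programs" might be little more than cached parameter sets keyed by route. A minimal sketch in Python (all names and numbers are hypothetical, just to make the idea concrete):

```python
from dataclasses import dataclass, field

@dataclass
class RouteProgram:
    """Driving parameters tuned for one stretch of road (hypothetical)."""
    route_id: str
    max_curve_speed_mph: float
    merge_lane_hints: list = field(default_factory=list)

class RouteProgramCache:
    """Cache of downloaded per-route programs, with a generic fallback."""
    def __init__(self, generic: RouteProgram):
        self.generic = generic
        self._cache: dict[str, RouteProgram] = {}

    def install(self, program: RouteProgram) -> None:
        self._cache[program.route_id] = program

    def lookup(self, route_id: str) -> RouteProgram:
        # Use the tuned "commuter" program if we have it,
        # else fall back to the generic "tourist driver" profile.
        return self._cache.get(route_id, self.generic)

generic = RouteProgram("generic", max_curve_speed_mph=25.0)
cache = RouteProgramCache(generic)
cache.install(RouteProgram("I-90-exit-12", max_curve_speed_mph=40.0))
print(cache.lookup("I-90-exit-12").max_curve_speed_mph)  # 40.0
print(cache.lookup("unknown-road").max_curve_speed_mph)  # 25.0
```

The conditions-specific idea would just widen the cache key to (route, season/weather) pairs.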


> I think what needs to happen is that you need programs tailored to specific routes/roads. As you drive any distance, you download/cache the programs for the routes and execute them.

Wow. That's so obvious in retrospect. I wonder why I haven't considered it before, nor why I haven't read of it before either.

I imagine having stationary "traffic controllers", semi- or completely automated, that keep real-time information about conditions on the road segments they monitor and continuously assign "travel plans" to cars. An autonomous car wouldn't have to recognize weather conditions or static obstacles in a fraction of a second, because a static sensor network and processing centres would be responsible for this. All a car would have to do is follow its assigned route at the assigned speeds, and monitor its environment for dynamic obstacles.

This makes more sense than trying to pack all the intelligence into the car, and doubly more sense than having the car hooked up to the cloud. Unfortunately, I feel companies of today may find it difficult to coordinate on designing such a system.
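A toy sketch of such a stationary controller, assuming made-up segment IDs and a crude fog rule (nothing here reflects any real system):

```python
from dataclasses import dataclass

@dataclass
class SegmentPlan:
    segment_id: str
    speed_limit_mph: float  # adjusted for current conditions
    advisory: str

class TrafficController:
    """Stationary controller for a handful of road segments.

    Static sensors report conditions; cars request a plan for the
    segments they are about to traverse.
    """
    BASE_LIMITS = {"A1": 65.0, "A2": 55.0}

    def __init__(self):
        self.conditions = {seg: "clear" for seg in self.BASE_LIMITS}

    def report(self, segment_id: str, condition: str) -> None:
        self.conditions[segment_id] = condition

    def plan(self, segment_id: str) -> SegmentPlan:
        base = self.BASE_LIMITS[segment_id]
        if self.conditions[segment_id] == "fog":
            return SegmentPlan(segment_id, base * 0.5, "low visibility, halve speed")
        return SegmentPlan(segment_id, base, "proceed")

ctl = TrafficController()
ctl.report("A1", "fog")
print(ctl.plan("A1").speed_limit_mph)  # 32.5
print(ctl.plan("A2").speed_limit_mph)  # 55.0
```

The car's job shrinks to executing the plan and watching for dynamic obstacles, as described above.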


"Wow. That's so obvious in retrospect. I wonder why I haven't considered it before, nor why I haven't read of it before either."

Get ready to have your mind blown because the next leap in this line of logic is to physically fix "tracks" to the road and run the cars on these "rails" - possibly on a schedule.

It's a Star Trek future we in the United States can only dream of ...


Trains work incredibly well, but they don't solve the last-mile problem. Hi-rail trucks are pretty common in railway maintenance. There is probably some steam-punk past that could have happened in which we all drive hi-rail cars on rails for almost all highway travel, and only use them as ordinary road cars for the few last-mile trips.


Haha :). But no, we're talking PRT here, not regular trains.


Am I the only one who's terrified of the security implications of such a system? You're talking about running (from the car's perspective) untrusted code on a fleet of cars, dynamically and in real time. Even if it's cryptographically signed or whatever, you're one or a handful of exploits away from remote attackers having the capability to crash every autonomous car, simultaneously, at least in a geographical region and possibly (inter)nationally.

If security matters today, it's going to matter orders of magnitude more in a world with autonomous cars.
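For illustration, signature checking along these lines (here with a stdlib HMAC standing in for a real PKI) only proves who published a route program, not that the publisher or its signing key is uncompromised, which is exactly the worry above:

```python
import hmac
import hashlib

# Demo only: a real deployment would use per-authority asymmetric keys,
# not a shared secret baked into every car.
SHARED_KEY = b"demo-key"

def sign(program_bytes: bytes) -> bytes:
    return hmac.new(SHARED_KEY, program_bytes, hashlib.sha256).digest()

def verify_and_load(program_bytes: bytes, signature: bytes) -> bytes:
    # Constant-time comparison; reject anything unsigned or tampered with.
    if not hmac.compare_digest(sign(program_bytes), signature):
        raise ValueError("rejecting unsigned/tampered route program")
    return program_bytes

blob = b"route=A1;max_speed=55"
sig = sign(blob)
assert verify_and_load(blob, sig) == blob

try:
    verify_and_load(b"route=A1;max_speed=155", sig)
except ValueError:
    print("tampered program rejected")
```

Note what this does not defend against: if the signer itself is exploited, every car will happily run the malicious but validly signed program.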


I read the other day about geofencing being used to enforce speed limits for rental ebikes and scooters in some jurisdictions. It's not hard to imagine it progressing from speed limits to top-down traffic control programs. This probably is the future we're hurtling towards, as unprepared as ever.
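A geofenced speed cap is simple enough to sketch; here with a made-up downtown slow zone and a haversine distance check (the coordinates and limits are invented):

```python
import math

# Hypothetical geofenced zones: (lat, lon, radius_m, cap_mph)
ZONES = [(47.6097, -122.3331, 800.0, 8.0)]  # e.g. a downtown scooter slow zone

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def speed_cap_mph(lat, lon, default=15.0):
    # Apply the tightest cap among all zones containing this point.
    caps = [cap for zlat, zlon, radius, cap in ZONES
            if haversine_m(lat, lon, zlat, zlon) <= radius]
    return min(caps, default=default)

print(speed_cap_mph(47.6099, -122.3330))  # inside the zone -> 8.0
print(speed_cap_mph(47.70, -122.33))      # outside -> 15.0
```

Going from "cap the throttle" to full top-down traffic control is a policy change more than a technical one; the enforcement primitive is the same.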


I'm talking about doing what airplanes and ATC are already doing, but in an automated way.


It's that automated part where the vulnerability arises.


What if we automate ATC first?


Who would be responsible for maintaining static sensor networks and processing centers? The current transport authorities who do such a great job maintaining dumb roads?

A lot of the problems with AVs are not just technological, but political and social in nature.


Smart roads are commonplace: https://en.wikipedia.org/wiki/Travelers%27_information_stati.... Plus we have traffic lights, adaptive speed limits, rest stop information boards, etc.

The responsibility for these systems obviously falls to the transportation authorities.

Scaling this capability up may not be easy but it's absolutely possible. As we expand our fleet of autonomous vehicles that can respond to these inputs it will become more and more useful and necessary.


This is what GM is trying to do with Super Cruise. Seems like the right way to go in my book.

https://www.cadillac.com/world-of-cadillac/innovation/super-...


The "safe" part is a huge one. We need a definition of "safe" at the formal level of ISO 26262 that is also feasible to validate. From what I have heard, validation would require billions if not trillions of miles driven, and if you retrain the neural net, you start from zero. Currently, it is not feasible to sell a safe self-driving car.


We keep forgetting that in the 1920s they said -all- cars would be flying by the year 2000, and it never happened. Heck, some could say we still don't have cars that can fly realistically; we only have flawed prototypes, and they aren't safe. That's not pessimism, it's realism. Most of the hype surrounding the self-driving car discussion is generated and promoted by people and companies on the profit-receiving end of it.

Right now it's primarily based on cameras watching painted lines on roads, which is also not a solid practice. If someone wanted to cause accidents, all they'd have to do is repaint the lines to lead off the road on a section where traffic moves above 60 MPH.

Another problem is maintenance of these vehicles. Many accidents are caused by owners not properly maintaining the car's subsystems like brakes and fluids, or not even keeping sensors and windshields clean. Things like this introduce too much performance unpredictability into the equation for self-driving cars to be a reality, and makers cannot reliably answer who will own up to responsibility in case of an accident.

There's also the concept of "free will": how will these cars work around human drivers, will everyone have to relinquish the personal right to own a car, will drivers be able to go off the mapped and monitored routes, etc.? None of those questions has been answered. For planes, boats, large haulers and buses, maybe, provided they stay in designated lanes, but for cars? I don't see it happening any time soon unless all of those questions can be answered acceptably.


Faster reaction times are "low-hanging fruit" until the reaction is wrong...

Even taking Alaskan fog banks out of the equation, a machine needs to be pretty near AI-complete not to be fatally wrong about how to react more often than about once every hundred million miles, which is the human benchmark (including inexperienced, tired, drunk and stupid drivers), if it's driving in normal road conditions without a human failsafe. The alternatives are for the road environments to be very different, or for the 360° sensing and autobraking to be primarily driver aids, which are the real low-hanging fruit for all that investment in AI processes to understand roads and control vehicles.


That’s an impressively difficult number, but I wonder where you got it from? What is a “time”? A second in dangerous conditions? A trip? How do you get a hundred million iterations of anything at human driving scale?


The rate of fatal car accidents in the US is about 1 per 1e8 miles traveled. Of course, that's a terrible measure of driver error rate: less than 1% of all collisions are fatal, and with maximally safe cars that ensured all occupants survive any possible collision at any operating speed, even the worst possible driving system would receive perfect marks.


In safety-critical systems, failures are usually scored as (severity) × (probability), sometimes with a 'detectability' factor included as well.

So a resulting 'acceptable' metric could factor in those less severe cases even if they occur at a higher probability. Scores outside this range would then trigger a redesign to bring it within acceptable boundaries.

I think the difficulty will be in 1) getting a consensus on what the acceptable score should be and 2) getting enough data to estimate it with statistical significance.
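For example, an FMEA-style scoring pass over some hypothetical failure modes (all ratings and the threshold are invented for illustration, not taken from any standard):

```python
# FMEA-style risk scoring: each failure mode gets severity, occurrence,
# and detectability ratings on a 1-10 scale; their product is the
# Risk Priority Number (RPN). The threshold below is made up.
ACCEPTABLE_RPN = 100

failure_modes = {
    "phantom braking on highway": (7, 4, 3),   # severe-ish, occasional, detectable
    "missed pedestrian at night": (10, 2, 6),  # catastrophic, rare, hard to detect
    "lane jitter in light rain":  (3, 6, 2),
}

def rpn(severity, occurrence, detectability):
    return severity * occurrence * detectability

for name, (s, o, d) in failure_modes.items():
    score = rpn(s, o, d)
    verdict = "ok" if score <= ACCEPTABLE_RPN else "REDESIGN"
    print(f"{name}: RPN={score} -> {verdict}")
```

This captures the point above: a less severe failure can be acceptable at a higher probability, while a severe, hard-to-detect one triggers a redesign even if it's rare.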


It's actually not a very useful quote because, as you say, taken literally it's an impossibly high bar. To be useful, what's required is either

1.) Demonstrably better than human (whatever that means exactly) for a well-defined subset of roads and conditions such as interstate highways under some subset of weather or

2.) Demonstrably better than human for any roads and conditions that a typical adult human would typically be able to navigate door-to-door safely.

1. is a very useful driver assist system, and likely a big win for safety, but you still need a sober adult driver available to take over with reasonable notice. 2. is what you need for robo-taxis to be practical.


There are open questions about whether or not intermediate levels of automation might actually be more dangerous because they lull human drivers into a false sense of security, so I don't even know that 1 is necessarily sufficient.


My assumption with 1 is in the vein of "we're approaching an exit in two miles. Please be ready to take over." i.e. planned disengagement. Yes, any automation system that may fail by going "OMG. Do something now!" is worse than useless. I'm not sure the question is even open.


>> Nobody's asking them to be able to drive at 70 MPH through a fog bank in Alaska with ice on the road, even if that is technically part of the requirements for Level 5 that even humans can't obtain.

What is required is a vehicle that can decide not to drive at 70 MPH when it's going through a fog bank in Alaska. Humans can drive that fast in such low visibility, but we (often) have enough brains not to.


> Humans can drive that fast in such low visibility, but we (often) have enough brains not to.

I think your "(often)" is doing a lot of work in this sentence.

There are many unsafe conditions that don't require an icy road plus fog plus highway speeds: any bad thunderstorm is likely to combine poor visibility and the risk of puddles/hydroplaning, for example. But the highways don't clear out during summer thunderstorms; people seem (from their actions) content to take the risk.

If self-driving cars become common and maintain a high safety standard, I think we'll also need to see a culture shift. People will have to become comfortable saying (and hearing) "the roads aren't safe right now, so I'll not be there on time."


>But the highways don't clear out during summer thunderstorms;

Part of this is that people (and I include myself in this) tend to have a mindset of slow down but power on. (Although in my experience not enough people slow down enough.) But it's often also a reality that pulling over isn't really safe. And, even if you can get to an exit, in the case of something like a snowstorm you may have a long cold night in your car if you decide to wait it out.


>> If self-driving cars become common and maintain a high safety standard, I think we'll also need to see a culture shift.

Good idea. In fact, at the moment self-driving cars aren't anywhere near as safe as humans (even humans who forget to bring their brains), so I think we could already benefit tremendously from such a "safety culture".


This is the key thing here - Uber, for instance, doesn't need to solve L5 self-driving. They need to figure out which routes in their network are frequent, which of those their self-driving cars can handle, and route each ride appropriately.

It will never be a case of Uber flicking a switch and replacing all their human drivers. It will be a very gradual transition to self-driving that may never cover more than 25-30% of all trips.

Is that worth the billions of dollars they're pouring into self-driving? https://www.uber.com/newsroom/company-info/ says they complete 14 million trips a day. That's 5.1 billion trips a year (and growing); 10% of that is ~500 million trips a year. Assuming the average trip is $13, that would be $6.5 billion per year in revenue, which I think is more than enough to turn a significant profit even accounting for R&D, capital expenses to maintain the fleet, etc.
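A back-of-the-envelope check of that arithmetic (the $13 average fare is an assumption; rounding 511M trips down to 500M gives the quoted $6.5B):

```python
trips_per_day = 14_000_000          # figure from Uber's newsroom page
trips_per_year = trips_per_day * 365
print(trips_per_year)               # 5,110,000,000 -> ~5.1 billion

autonomous_share = 0.10             # assume 10% of trips go self-driving
avg_fare = 13.0                     # assumed average trip price in dollars
revenue = trips_per_year * autonomous_share * avg_fare
print(f"${revenue / 1e9:.1f}B")     # ~$6.6B per year before rounding
```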

The reason self-driving will become a reality is purely economic. No one's developing it to change the world or reduce the number of people dying in accidents, or whatever spiel they cook up.


IFF autonomous cars can drive substantially more safely and cheaply in certain conditions and on certain roads, I think that'll cause roads and commuting patterns to adapt to the cars.

So even if they're a long way from replacing the majority of cars as we use them today, the roads and the way we use cars in the future might be different.

I'm not saying it will be. It's just something I rarely hear discussed.


The important things that self-driving cars can't do aren't freak occurrences. They're elementary things like responding correctly to hand signals from someone directing traffic, or reading and obeying street signs written in plain English.



