Tesla’s Push to Build a Self-Driving Car Sparks Dissent Among Its Engineers (wsj.com)
245 points by dcgudeman on Aug 24, 2017 | 383 comments



The level-2 driving that Tesla is pushing seems like a worst case scenario to me. Requiring the driver to be awake and alert while not requiring them to actually do anything for long stretches of time is a recipe for disaster.

Neither the driver nor the car manufacturer will have clear responsibility when there is an accident. The driver will blame the system for failing and the manufacturer will blame the driver for not paying sufficient attention. It's lose-lose for everyone: the company, the drivers, the insurance companies, and other people on the road.


Requiring the driver to be awake and alert while not requiring them to actually do anything for long stretches of time is a recipe for disaster.

Everybody who's looked at this seriously agrees. The aviation industry has looked hard at that issue for a long time. Watch "Children of the Magenta".[1] This is a chief pilot of American Airlines talking to pilots about automation dependency in 1997. Watch this if you have anything to do with a safety-related system.

[1] https://www.youtube.com/watch?v=pN41LvuSz10


Wiener's Eighth and Final Law: You can never be too careful about what you put into a digital flight-guidance system. - Earl Wiener, Professor of Engineering, University of Miami (1980)

It seems that we are locked into a spiral in which poor human performance begets automation, which worsens human performance, which begets increasing automation. The pattern is common to our time but is acute in aviation. Air France 447 was a case in point. - William Langewiesche, 'The Human Factor: Should Airplanes be Flying Themselves?', Vanity Fair, October 2014

Eventually mean/median system performance deteriorates as more and more pure slack and redundancy needs to be built in at all levels to make up for the irreversibly fragile nature of the system. The business cycle is an oscillation between efficient fragility and robust inefficiency. Over the course of successive cycles, both poles of this oscillation get worse which leads to median/mean system performance falling rapidly at the same time that the tails deteriorate due to the increased illegibility of the automated system to the human operator. - Ashwin Parameswaran (2012)

... from my fortune clone @ http://github.com/globalcitizen/taoup


That said, aviation accident rates have been falling asymptotically while this "troubling trend" has been going on. In the US we've had no fatalities on domestic commercial flights since like 2009, and only the two at SFO a couple of years ago on international flights to the US. This is on a couple of billion flights with hundreds of billions of passenger departures.


Airliner pilot discipline is orders of magnitude better than your average driver.


I'm a private pilot (general aviation, single-engine Cessnas). For the most part, our discipline is orders of magnitude higher than all but the best drivers. Airline pilots are orders of magnitude better than us. You are absolutely correct.


Not to mention the myriad of safety and control mechanisms in place on the aircraft (redundant systems etc) and away from it (air traffic control, IVR etc)


Airline pilot discipline was also probably pretty good fifty years ago yet the accident rate has dropped as the tech has improved.


A good pilot can sometimes recover from being given a bad aircraft. If the aircraft get better, of course the accident rate will improve.


Airline pilot selection also plays a role.


That's why we currently have 1.3 million people dying on the road in the US alone. It might make the matter of transitioning to self-driving worse, but it also makes manual driving worse.


The WSJ article includes NHTSA data indicating 'only' 35,000 traffic fatalities per year. Where are you getting this 1.3 million number from? That would be equivalent to a full 1% of the US population dying on the road every 2.5 years.
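A rough sanity check (back-of-envelope, assuming a US population of roughly 325 million):

    us_population = 325e6          # rough 2017 US population (assumption)
    claimed_deaths_per_year = 1.3e6
    nhtsa_deaths_per_year = 35e3   # figure cited in the WSJ article

    # fraction of the population killed on the road over 2.5 years
    print(claimed_deaths_per_year * 2.5 / us_population)  # ~0.01, i.e. about 1%
    print(nhtsa_deaths_per_year * 2.5 / us_population)    # ~0.0003, i.e. ~0.03%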


you're right. I googled for "number of car deaths in US" (or so I thought) and that number came up. 1.3M seems to be the number for world wide fatalities.


That number sounds like the worldwide number, not US. Many places have much, much worse road fatalities per capita.


Commercial air travel will serve just shy of 4 billion departures this year, with about 40 million flights.

You are correct in the ratio (1:100) but I think you mistook departures for flights and extrapolated 2 orders of magnitude too far.


> This is on a couple of billion flights

You sure about that number? Seems incredibly high to me.


"​Airlines are expected to operate 38.4 million flights in 2017, up 4.9%."

Maybe 3 billion passengers/year?

From http://www.iata.org/pressroom/pr/Pages/2016-12-08-01.aspx


This is a well-researched area, in addition to being pretty obvious to everyone. Personally, I stopped using cruise control years ago. If I have to pay attention, I'm better off driving.

This isn't just a matter for Tesla. The auto industry is rapidly heading for much better assistive driving systems. There's no way that the people heads-down in their cell phones are going to do this less once they realize they don't really need to pay attention.

Will accident rates get better overall anyway? Who knows? But systems that aren't intended for autonomous use are going to get used that way.


There was a cruise control ish system I heard about on a car (a Mercedes, iirc) in Germany when I was a kid: instead of a target speed, you set a maximum speed, such that pressing normally down on the gas allowed the car to accelerate up to that speed. It trims down the excess gas to avoid exceeding the maximum speed. If you released the gas, the car would coast; and if you pushed the pedal down close to the floor (i.e. "to the metal"), it would allow you to exceed the speed you'd set, for example to momentarily accelerate in order to pass.
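Roughly, the logic as I understood it (a toy Python sketch with made-up thresholds, not how Mercedes actually implements it):

    def throttle_command(pedal, speed, max_speed, kickdown=0.95):
        # pedal: driver's request, 0..1; speed and max_speed in km/h
        if pedal >= kickdown:      # pedal (nearly) floored: kick-down, ignore the limit
            return pedal
        if speed < max_speed:
            return pedal           # below the limit the car behaves as usual
        return 0.0                 # at the limit, trim the excess gas and coast
                                   # (a real system would modulate rather than cut to zero)

    print(throttle_command(pedal=0.4, speed=120, max_speed=120))  # 0.0: trimmed at the limit

Releasing the pedal entirely just coasts, exactly as described above.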

I loved this system and have always wished for something like this on a car in the US. I've never liked the common cruise control in the US -- where you set a target and it applies the gas for you -- because I didn't like how removing my foot from the gas pedal moved my foot fractions of a second further from the brake in an emergency.


This is called a speed limiter and I LOVE it in my Mercedes. I set the limiter to match the road's speed limit and I can just drive without worrying that I'll get caught speeding. No need to constantly look down at the speedo.


Pointless in most reasonably modern cars, which will chime to alert you for any such problems (oil, fuel, etc). Much better to pay attention to the road itself.


Clearly you don't live somewhere where speed limits are rigorously enforced. Setting a limiter frees you up from having to match the limit yourself and allows you to pay more attention to the road.


I think this was meant to be a response to this sibling post:

https://news.ycombinator.com/item?id=15095626

Probably did not wait long enough for the reply button to appear.


Except you should be checking your dashboard instruments every few seconds. Temperature, oil pressure, speed, fuel. I'm constantly doing this while driving. Road ahead. Mirrors. Instruments. Road ahead. over and over.


What are you driving that you need to check your temperature, oil pressure, and fuel every few seconds? If you don't, and you didn't have to check your speed, you could shorten your cycle to mirrors and road.


With a hot air balloon you have lots of time to respond.

Though I'd check my ballast too, and probably read a book.


> I've never liked the common cruise control in the US -- where you set a target and it applies the gas for you -- because I didn't like how removing my foot from the gas pedal moved my foot fractions of a second further from the brake in an emergency.

It's not even just that: at least with cars as of 2015, the system seems to be worse at managing the gas than humans are. I'm not a huge fan of cruise control as defined above because I find it makes me really inattentive (my problem, of course), but the benefit of not using the cruise control is that it seems like you get much better mileage. My family used to have to drive regularly between Minnesota, Wisconsin, and Illinois for a few years, and I would always get better mileage not using cruise control than my brothers (who used cruise control) would. The difference would often be as much as half a tank of gas or more on older cars (2000-2010), and on relatively newer cars (2015) it'd still be a difference of a fair number of gallons.

I think the systems just aren't good at predicting when to coast and when to accelerate, and for very hilly regions, this means a lot of wasted gas.


I wouldn't be surprised if you were allowing speed to fall on those hills, while the cruise control systems were downshifting to get the power to maintain speed.


This feature exists in my Jaguar XF. Similar to the cruise control, you can set a 'max speed' and the car will simply not accelerate past that speed. I don't think I've tried flooring it to see if it would let me exceed it in that case though.


It will; there is normally a "kick-down" point where it will accelerate at max, for emergency situations. This also normally disengages the max limit.

Another feature is speed warnings, which "beep" at you when you exceed them. Currently they only seem to be singular, but it should be possible to integrate them with satnav and speed-sign recognition. I expect these would be safer, especially if they linked up with a "black box" to report excessive violations to a parent, insurance company, police etc.


It’s almost always standard in new European cars. I use it when I am in a speed-camera infested area.


That's just a speed limiter. Have it on my 4yo Volvo. Can be overridden by depressing the pedal fully to the floor (not a kickdown, before someone jumps on that aspect - it's a manual car)


That system sounds pretty fantastic compared to the dumb cruise control I have in my car. Though it's not as predictable for the people that don't understand how it works.


That matches the behavior of the speed control I've seen in Renault cars.


For me cruise control is just giving my gas pedal foot a break on long drives. I still have to constantly adjust the speed with my hands and the brake itself. I'm also steering and watching what I'm doing.


I drive with cruise control extensively. I tweak the speed with the up/down buttons as needed to compensate for passing or being passed. Thumb is ever ready on the cancel button which smoothly initiates coasting.

This way my foot can readily cover the brake pedal and I initiate braking much quicker than off-gas > on-brake.


> I stopped using cruise control years ago. If I have to pay attention, I'm better off driving.

I view cruise control as a safety feature. I can keep my foot hovering over the brake pedal instead of on the accelerator, reducing reaction time in a crisis. Maintaining attention on the road has never been a problem for me, though I suppose the hovering-foot posture helps.


I recently drove an Accord with lane-following and speed-adaptive cruise control. It completely failed (but thankfully by refusing to try) in the one instance that I wanted to use it (stop-and-go traffic), but it was nerve-wracking when it was on as it kept losing the lane. People naturally alter the throttle and speed when going around corners and up slight inclines. It feels alien when that doesn't happen.


I have a Subaru with lane-following and adaptive cruise. Lane follow fails miserably because it keeps losing the lane boundaries, even on clearly marked roads. But adaptive cruise control works extremely well, and particularly shines in stop-and-go traffic.

One catch there is that it'll stop automatically, but it won't ever go if it came to a full stop - you need to tap the gas to reactivate cruise. If the car in front of you starts moving, but you don't move, it'll alert you (both an audible chime and a visual cue on the dashboard) to remind you that you're supposed to do that. I suppose that's a kind of a safeguard to keep the human alert?


I also drive a new-ish Subaru, adaptive cruise seems to work well in some situations (low speed stop-and-go, like you mentioned) but at high speeds it is terrifying... A speed that is reasonable on straight highways is pretty jarring around curves. If my foot is on the gas I'll subconsciously make the needed adjustment, but with cruise on I don't usually react in time.

This could partly be a consequence of living in the Pacific Northwest... lots of winding mountain highways!


I'm in PNW as well. I find that it works great on major freeways - e.g. I commute over I-90, and it works great there. On I-405 as well. On the mountain routes, like say parts of SR-202, yeah, it's ill advised.


On most highways you can maintain speed through curves, but not in the mountains, and they do warn with plenty of signs. So CC is not a good fit for mountain driving.


That sounds at least somewhat workable. The Accord seemed to drop out of cruise control if the speed dropped below 15-20 Mph, so it was entirely useless in stop-and-go, and even slow-and-go.


I've got the new Hyundai Ioniq with adaptive cruise control and lane assist. Last weekend I drove 160 miles with both enabled for the first time. I just found it shifted my focus. I was much more aware of what was going on outside the car. Setting the max speed to 75 mph enabled it to follow the car in front very effectively and overcame the incline and corner speed issue. Only problem was having to intervene to prevent under-taking.


I second the effectiveness and pleasantness of Hyundai's implementation. My 2015 Sonata has those features (it's standard now, but was introduced that year for the Limited trim). It, combined with auto-hold to apply the brakes when the car stops, made road trips and slow-going commutes so much more pleasant. It sucks for stop-and-go traffic, since it disengages when the car completely stops. But I can't fault a cruise control system for sucking at a scenario that doesn't actually involve cruising.


The 2018 model year (I believe) has full stop-and-go support. The previous two model years would cut out somewhere around 25 mph (this is what I have). It really is only for freeway driving outside of heavy traffic. I've used a 2018 CRV that has stop-and-go support and it's quite nice.

The lane tracking isn't that great, but I don't mind it that much. I don't use it much for normal driving, but I found it it's pretty fantastic in heavy crosswinds. The car does a pretty good job of keeping the lanes (assuming you can see them well enough) so you basically drive like normal instead of having to constantly fight wind gusts.

Under normal conditions it doesn't do enough to be terribly useful unless you're not paying enough attention… at which point you shouldn't be using it anyway.


Surely 'speed-adaptive cruise control' is an oxymoron?


Yeah, I stopped using cruise control years ago too - I moved to Los Angeles.


> Personally, I stopped using cruise control years ago.

Cruise control seems fairly harmless - you still have to keep lanes, and keeping your foot on the gas isn't particularly demanding either. I largely use cruise control because I am able to save on gas that way, by avoiding unnecessary acceleration/deceleration. Combination with lane following and holding of distance is more problematic imo.


It's mostly that I tend to drive on roads with at least a moderate amount of traffic. It tempts me to not optimally mix with other vehicles. I just got out of the habit of using it.


The traffic aware cruise control in the Tesla is very good in traffic. Especially stop-and go traffic. My passengers always tell me to switch it on because it gives a smoother ride than me. It's also much more relaxing.


Personally I use cruise control almost constantly, for the exact same reason. When you're watching the speedo, you're not watching the road.


> There's no way that the people heads-down in their cell phones are going to do this less once they realize they don't really need to pay attention.

The counter-argument is that they do this without Autopilot anyway. Given that they're already not paying attention, adding in Autopilot seems like a net gain.


Hence my comment that maybe accident rates improve anyway. Although it's hard to predict the delta between distracted driving/no automation and oblivious driving/imperfect automation. It's at least plausible that you have fewer but worse accidents when someone's watching a YouTube video and the car suddenly panics.


I’ve long wondered if people who extensively use advanced assist systems will see deterioration in manual driving aptitude, and if that deterioration will be restricted to the operating domain for the assist systems or be more general.


That's a well-documented concern in aviation. Hard to imagine it will be less prevalent in a population that doesn't even need to get re-certified now and then. General skills deterioration is probably not as big an issue as having to take over very quickly, but it is an issue.


I wonder how much more the average driver's skill can degrade. We never re-train so isn't there a natural degradation already? Will automation make that worse or will it not be significant?


For the typical (certainly US) driver, the initial training is just to get to a minimally viable set of skills so they can pass their driving test. The vast majority of people aren't taking performance driving courses to get their drivers licenses. I'd pretty much guarantee that almost every driver is more skilled 10 years after they get their license.


I cannot comment on driving experience/skill since I don't have a driver's license but I frequently observe drivers without a working understanding of the traffic rules and signs, even though they once learned that in the theory classes.


I'm not sure that experience implies skill but you make a good point.


I've found that CC allows me to be more focused on the road, rather than looking at my speed every so often. My CC controls are on the wheel, so I can adjust it just by moving my finger, and I always keep a foot on the (accelerator) pedal ready to react as if I was maintaining speed with my foot. I know plenty of people that "rest" their foot while using CC, and that is just asking for an accident since the reaction time is longer.


If we're going to borrow from aviation, why don't automakers develop some rudimentary automation (not autonomy) that would help avoid the most common kinds of crashes? For example, one thing that is automated in a modern aircraft is changing flight level. The pilot can command a change to a given flight level and the automation takes care of it. Why don't we have a "change lanes" command for cars? Changing lanes is a leading cause of car collisions, and even people who do it successfully forget to use their signals, check their blind spots, etc. It seems like this level of automation (not autonomy!) would be easier to achieve.


"Lane change assist" is already increasingly common in high-end cars.

https://www.autobytel.com/car-buying-guides/features/10-cars...


Achieved. Tesla autopilot can change lanes. It also has collision avoidance on all the time.


Tons of cars nowadays come with emergency automatic braking.


Tesla offers this in their current Autopilot solution. Turn on the turn indicator and the car changes lanes if it is safe to do so.


This is the same "aviation industry" which, domestically, has had a perfect safety record since around 2009? Not near perfect, not 99%, but literally zero fatalities?

Also, as a pilot, I can tell you that the Tesla Autopilot functions very much like what we have in planes. It steers the vehicle and significantly decreases workload while increasing overall safety but needs to be constantly monitored and can easily be misconfigured or operate in unexpected ways.


If you've never driven a Tesla you do not know what a joy it is to use the autopilot in stop-and-go traffic. I see 2-3 accidents daily caused by your "alert" drivers rear ending cars. Your statement may be true for long-haul rides, but I'm pretty sure the numbers will come out ahead for the auto-steerer for the normal commute.


> I see 2-3 accidents daily caused by your "alert" drivers rear ending cars.

Remind me not to drive in your neighborhood. (This appears to be hyperbole.)


"neighborhood"? Who commutes across a neighborhood as the full extent of their commute? I suspect it's within a rounding error. Driving across a city, however, I definitely see this. Daily. And "accident" I expect to mean everything from a light tap rear-ending to a full fledged crash.

People rear-end each other in my city every day. On my commute, I'll come across two a day. Wet days, a few more. The other five directions headed into the city I would expect to see similar statistics to my experience. Just listening to traffic radio, there's going to be at least five crashes around the city; almost always more. They don't report on fender benders.


About six years ago, I would drive from San Mateo to Oakland at least twice a week, after work, to meet my then-girlfriend. So that's down 92, across the bridge, and up to Oakland. It's about 30 miles, all on freeway, and at that time of day all in heavy (though not all stop-and-go) traffic.

I don't think I ever saw three separate rear-ends in a single drive. I can't say for certain that I ever saw two. I didn't drive it every day, but I probably did 100 such commutes.

You sound either like you're massively exaggerating or live someplace with apocalyptic traffic.


>I don't think I ever saw three separate rear-ends in a single drive. I can't say for certain that I ever saw two. I didn't drive it every day, but I probably did 100 such commutes.

I drive 30 miles through Chicago traffic on the Interstate every day. I see at least 3-4 accidents per week (people pulled over to the side of the road, out of the way, or at the accident investigation sites). Most of these are minor fender benders. I'm sure if I was in rush hour traffic for the full 4-6 hours (not just my 90 minutes of commute) I would see way more. They mostly happen in near bumper-to-bumper stop-and-go traffic (someone misses the guy in front of them stopping) or when traffic unexpectedly comes to a standstill from going 10-20 MPH.


Every single day of this work week so far there has been at least one and as many as 5 accidents on the same 5 mile stretch of I-95 headed into Waltham, MA.

Every morning and at least two of the evenings. And the weather has been reasonable. I typically see 4 to 5 Teslas a day during my commute... driving sedately, and they aren't the ones involved in the accident.

Maybe we'll break the streak tomorrow and have no accidents.


The guy I was talking to said he saw 10 a week minimum.

Yeah, if you spend 4-6 hours a day in traffic, you'd see a lot of accidents. That... seems uncontroversial.


Here are the traffic stats for Canada[0]. In 2015 there were 116,000 "personal injury collisions". That is ~300 injuries a day across Canada.

The Greater Toronto Area has approx. 20% of Canada's population (~35M). If we assume that 300 injuries are evenly distributed across Canada, which seems unlikely due to how bad the driving conditions are on the 401 and DVP, there are ~60 injuries per day in the GTA of various severities.

I don't think someone encountering ~3 per commute during rush hour is unreasonable.

[0]: https://www.tc.gc.ca/eng/motorvehiclesafety/tp-tp3322-2015-1...
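Spelling out the back-of-envelope numbers (same uniform-distribution assumption as above):

    injuries_2015 = 116000                # personal injury collisions, Canada, 2015 [0]
    per_day = injuries_2015 / 365         # ~318 per day across Canada
    gta_share = 0.20                      # GTA is roughly 20% of Canada's ~35M people
    print(per_day, per_day * gta_share)   # ~318 nationally, ~64 per day in the GTA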


You drove 30 miles up 880 twice a week and never saw a rear-end accident? You should play the lottery. I don't know if I see three every day, but I do see one every week. This week's was some clown in the #2 lane, _staring_ down at his phone, rear-end into a stopped car at ~20 MPH. His car was completely totaled with one of the wheels spinning off to the shoulder.

A few months ago I saw a guy, face-down in his phone, smash into an existing accident that was already surrounded by fire trucks. That was amazing.


I didn't say I never saw a rear-end accident, I said I never saw three in a single commute and am not sure whether I saw two in a single commute.

The guy I was replying to said that he saw at least 2 every commute, often more. (Edit: Actually, sorry, 2 every day, not every commute).


How do you have the bandwidth to look at other drivers' postures while driving?


I ride my bike in the Peninsula. I personally witness 2-4 accidents monthly ranging from fender-bender/taps to full-on smash-ups. Maybe 2-3 daily is an exaggeration, but they're plentiful. It's a rare day that goes by where the major freeways don't have slowdowns because of wrecks. I'd be surprised if the typical commuter on 101 saw less than 1 per day on average, actually.


Between San Jose and San Francisco on 101 pretty common to see at least 1 fender-bender per commute.


Driving up and down 280 over a year's time, maybe once a year there will be a single day with three separate fender benders, all appearing to have happened within a 30-minute window before I passed by. But I've never witnessed 3 actually happen as I was driving.


I'm in the UK, but I used to drive an hour each way to work which was about 25 miles. In a year I saw like one accident and that was someone being rear-ended in slow moving traffic.


Just take any job that requires a significant highway commute, you'll see more than that.

Whenever you get full speed traffic occasionally interrupted by traffic jams (from whatever cause, other accident, tolls, weather, low angle sunlight, construction, etc.), you'll get a higher incidence of rear-enders. Especially when the tail of the slow/stopped traffic is at a point just past a hill or curve.

I got rear-ended myself some years ago in just such a situation, clear sky & dry road. The traffic ahead had slowed dramatically driving into a section where the bright, low winter sun was in everyone's eyes, we couldn't see that before the gentle bend & rise in the road, I saw the slowing traffic & had to brake hard, the person behind me braked late and hit me even though I'd tried to stretch my braking as much as possible to give her more room. There was another similar accident minutes later just behind us.

This kind of rapid-slowing situation in tight, fast traffic will likely get out of hand even for automated cars, unless there is car-to-car communication. This is because the slight delay for each successive slowing car (the accordion effect) accumulates to the point where eventually the available reaction time decreases and the required deceleration rate increases past the performance envelope. At that point, a crash is inevitable.

With car-to-car communication and automation, the last car in the pack can start braking almost simultaneously with the first one and avoid this.
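A toy model of the delay arithmetic (made-up but plausible numbers; it assumes equal braking ability and a fixed per-driver reaction time, so it doesn't capture the full accordion amplification, but it shows what the per-car delay costs and what car-to-car broadcast buys):

    v0 = 27.0       # initial speed, m/s (~60 mph)
    gap = 20.0      # bumper-to-bumper spacing in tight traffic, m
    t_react = 1.2   # per-driver reaction time, s

    # Each car starts braking t_react after the car ahead does. With equal braking
    # ability both cover the same distance once braking, so each following car loses
    # v0 * t_react of its gap just waiting to react.
    lost = v0 * t_react
    print("gap lost to reaction delay: %.1f m vs available gap: %.1f m" % (lost, gap))
    print("chain collision" if lost > gap else "no collision")

    # With car-to-car broadcast of the braking event, every car can begin braking
    # almost simultaneously, so the v0 * t_react term largely disappears.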

So, no, it's not hyperbole, it's ordinary.


>This kind of rapid-slowing situation in tight, fast traffic will likely get out of hand even for automated cars, unless there is car-to-car communication. This is because the slight delay for each successive slowing car (the accordion effect) accumulates to the point where eventually the available reaction time decreases and the required deceleration rate increases past the performance envelope. At that point, a crash is inevitable.

Is this really true?

It seems like, as long as the following delay between cars is greater than that reaction delay, there should be no such "accordion effect."


Rapid slowing is fairly common on 280. I find it safest to be in the left-most lane, where you can use the shoulder to come to a safe stop. Rapid slowing is one reason I'd probably get a level-2 autonomous car.


"I got rear-ended once in fine conditions" != "I pass 2-3 rear-ends on my commute every day"


I didn't say it was -- I opened by agreeing, then also provide my example, as a lead-in to the car-to-car communication point.

And yes, when you have an urban highway commute of any distance, it is not unusual to see that many crashes. maybe not every day, but not far off, and enough that you cannot rely upon commute times, precisely because the crashes are so unpredictable.

You might try actually reading other posts before replying with trivial inaccurate potshots. sheesh


I don't think car-to-car matters. You can't rely on it being accurate or present. The car will simply have to drive in such a way that it can always stop within the stretch of visible clear road.


Yes, if you can stay out of Los Angeles, I highly recommend it.

2 is average for about a 60 mile drive during slightly off-rush. I suspect rush is higher.

And certain areas just seem to attract idiots.


See meaning drive past, not "observe the collision."


> The level-2 driving that Tesla is pushing seems like a worst case scenario to me

What are you measuring? The current autopilot already appears to be materially safer, in certain circumstances, than human drivers [1]. It seems probable Level 2 systems will be better still.

A refrain I hear, and used to believe, is that machine accidents will cause public uproar in a way human-mediated accidents don't. Yet Tesla's autopilot accidents have produced no such reaction. Perhaps assumptions around public perceptions of technology need revisiting.

> Neither the driver nor the car manufacturer will have clear responsibility when there is an accident

This is not how courts work. The specific circumstances will be considered. Given the novelty of the situation, courts and prosecutors will likely pay extra attention to every detail.

[1] https://www.bloomberg.com/news/articles/2017-01-19/tesla-s-a...


That's not what the concern is based on. It's rooted in what we've learned about autopilot on planes and dead men's switches in trains. Systems that do stuff automatically most of the time and only require human input occasionally are riskier than systems that require continuous human attention, even if the automated portion is better on average than a human would be. There's a cost to regaining situational awareness when retaking control that must be borne exactly when it can't be afforded, in an emergency.


> It's rooted in what we've learned about autopilot on planes and dead men's switches in trains

Pilots and conductors are trained professionals. The bar is lower for the drunk-driving, Facebooking and texting masses.

> Systems that do stuff automatically most of the time and only require human input occasionally are riskier than systems that require continuous human attention, even if the automated portion is better on average than a human would be

This does not appear to be bearing out in the data [1].

[1] https://www.bloomberg.com/news/articles/2017-01-19/tesla-s-a...


You're misunderstanding the data and the concern. Currently, Tesla Autopilot frequently disengages as part of its expected operation, handing control back to the driver. Thus, the human driver remains an attentive and competent partner to the autopilot system. That data is based on today's effective partnership between human and computer.

The concern is that as level 2 autopilot gets better and disengagements go down, the human's attentiveness will degrade, making the remaining disengagement scenarios more dangerous.


> The concern is that as level 2 autopilot gets better and disengagements go down, the human's attentiveness will degrade, making the remaining disengagement scenarios more dangerous

A Level 2 autopilot should be able to better predict when it will need human intervention. If the autopilot keeps itself in situations where it does better than humans most (not all) of the time, the system will outperform.

My view isn't one of technological optimism. It's derived from the low bar set by humans.


The problem is that in L2, the bar for the system as a whole is set by the low bar for humans, specifically their reactions in an emergency. If the computer safely drives itself 99% of the time but in that 1% when the human needs to take control, the human fucks up, the occupants of the vehicle are still dead. And what people are saying here is that L2 automation increases the risk that the human will fuck up in that 1%, by decreasing their situational awareness in the remainder of time.

That's why Google concluded that L5 was the only way to go. You only get the benefit of computers being smarter than humans if the computer is in charge 100% of the time, which requires that its performance in the 1% of situations where there is an emergency must be better than the human's performance. That is the low bar to meet, but you still have to meet it.


> If the computer safely drives itself 99% of the time but in that 1% when the human needs to take control, the human fucks up, the occupants of the vehicle are still dead. And what people are saying here is that L2 automation increases the risk that the human will fuck up in that 1%, by decreasing their situational awareness in the remainder of time.

Humans regularly mess up in supposedly-safe scenarios. Consider a machine that kills everyone in those 1% edge cases (which are in reality less frequent than 1%) and drives perfectly 99% of the time. I hypothesise it would still outperform humans.

Of course, you won't have 100% death in the edge cases. Either way, making the majority of travel safe in exchange for making edge cases more deadly to untrained drivers has a simple solution: a higher bar for licensing human drivers.


> I'd hypothesise that a machine that kills everyone in those 1% edge cases (which are actually less frequent than 1%) but drives perfectly 99% of the time would still outperform humans.

Well, no.

Some quick googling suggests that the fatality rate right now is roughly 1 per 100 million miles. So, for certain fatality in the case of human control to be an improvement, it would have to happen only about once in the lifespan of about every 500 cars. In other words, the car would, for all practical purposes, have to be self-driving.
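Spelled out (the car-lifetime mileage is my own rough assumption):

    fatality_rate = 1 / 100e6     # ~1 fatality per 100 million miles today
    car_lifetime_miles = 200e3    # assumed typical lifetime mileage of one car

    # If every edge case the machine hands back were certainly fatal, those edge
    # cases would have to be at least this rare just to match today's human record:
    lifetimes_between_edge_cases = 1 / (fatality_rate * car_lifetime_miles)
    print(lifetimes_between_edge_cases)   # ~500 car lifetimes between edge cases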


"Of course, you won't have 100% death in the edge cases. Either way, making the majority of travel safe in exchange for making edge cases more deadly to untrained drivers has a simple solution: a higher bar for licensing human drivers."

The part that really bothers me (for some reason) is that those edge cases are frequently extremely mundane, uninteresting driving situations that even a child could resolve. They simply confuse the computer, for whatever reason.

I'm genuinely interested to see how consumers react to a reality wherein their overall driving safety is higher, but their odds of being killed (or killing others) are spread evenly across all driving environments.

Imagine the consumer (and driving habits) response to the first occasion wherein a self-driving car nicely drives itself through a 25 MPH neighborhood, comes to a nice stop at a stop sign, and then drives right over the kid in the crosswalk that you're smiling and waving at. Turns out the kid's coat was shimmering weirdly against the sunlight. Or whatever.


> making the majority of travel safe in exchange for making edge cases more deadly to untrained drivers has a simple solution: a higher bar for licensing human drivers.

You are still misunderstanding the concern. The problem is not poorly trained drivers. The problem is that humans become less attentive after an extended period of problem-free automated operation.

I hear you trying to make a Trolley Problem argument, but that is not the issue here. L2 is dependent on humans serving as a reliable backup.


> You are still misunderstanding the concern. The problem is not poorly trained drivers. The problem is that humans become less attentive after an extended period of problem-free automated operation.

I understand the concern. I am saying the problem of slow return from periods of extended inattention is not significant in comparison to general human ineptitude.

Level 2 systems may rely on "humans serving as a reliable backup," but they won't always need their humans at a moment's notice. Being able to predict failure modes and (a) give ample warning before handing over control, (b) take default action, e.g. pulling over, and/or (c) refusing to drive when those conditions are likely all emerge as possible solutions.

In any case, I'm arguing that the predictable problem of inattention is outweighed by the stupid mistakes Level 2 autopilots will avoid 99% of the time. Yes, from time to time Level 2 autopilots will abruptly hand control over to an inattentive human who runs off a cliff. But that balances against all the accidents humans regularly get themselves into in situations a Level 2 system would handle with ease. It isn't a trolley problem, it's trading a big problem for a small one.


If you actually look at the SAE J3016_201609 standard, your goalpost-moving takes you beyond level 2. "Giving ample warning" puts you in level 3, whereas "pulling over as a default action" puts you in level 4.

The original point - that level 2 is a terrible development goal for the average human driver - still stands.


Yeah, you're talking about level 3. Most people think that's not a realistic level because "ample warning" requires seeing far into the future. Better to go straight to L4.

Also, you are definitely invoking the trolley problem: trading a big number of deaths that aren't your fault for a smaller number that are. Again, not the issue here. L2 needs an alert human backup. Otherwise it could very well be less safe.

But I would say the thrust of your argument is not that off, if we just understand it as "we need to go beyond L2, pronto".


NO, a higher licensing bar for human drivers will NOT solve the problem, it would only exacerbate it (and I'm ALL FOR setting a higher licensing bar for humans for other reasons).

The problem here is NOT the untrained driver -- it is the attention span and loss of context.

I've undergone extensive higher training levels and passed much higher licensing tests to get my Road Racing license.

I can tell you from direct experience of both that the requirements of high-performance driving are basically the same as the requirements to successfully drive out of an emergency situation: you must 1)have complete command of the vehicle, 2) understand the grip and power situation at all the wheels, AND 3) have a full situational awareness and understand A) all the threats and their relative damage potential (oncoming truck vs tree, vs ditch, vs grass), and B) all the potential escape routes and their potential to mitigate damage (can I fit through that narrowing gap, can I handbrake & back into that wall, do I have the grip to turn into that side road... ?).

Training will improve #1 a lot.

For #2, situational awareness, and #3, understanding the threats and escapes, there is no substitute for being alert and aware IN THE SITUATION AHEAD OF TIME.

When driving at the limit, either racing or in an emergency, even getting a few tenths of a second behind can mean big trouble.

When you are actively driving and engaged, you HAVE CURRENT AWARENESS of road, conditions, traffic, grip, etc. You at least have a chance to stay on top of it.

With autopilot, even with the skills of Lewis Hamilton, you are already so far behind as to be doomed. 60 mph=88 feet/sec. It'll be a minimum of two seconds from when the autopilot alarms before you can even begin to get the situation and the wheel in hand. You're now 50 yards downrange, if you haven't already hit something.
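The arithmetic, for reference (two seconds at highway speed is already more than 50 yards):

    speed_mph = 60
    ft_per_s = speed_mph * 5280 / 3600   # = 88 ft/s at 60 mph
    takeover_s = 2.0                     # rough time to get the situation and wheel in hand
    distance_ft = ft_per_s * takeover_s
    print(ft_per_s, distance_ft, distance_ft / 3)   # 88 ft/s, 176 ft, ~59 yards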

Even with skills tested to exceed the next random 10,000 drivers on the road, the potential for this situation to occur would terrify me.

I might use such a partial system in low-risk situations like slow traffic, where it's annoying and the energies involved are fender-bender level. Otherwise, no way. Human vigilance and context switching is just not that good.

I can't wait for fully-capable autodriving technology, but this is asking for trouble.

Quit cargo-culting technology. There is a big valley of death between assist technologies and full-time automation.


You make an important point. This is something I see a lot of people gloss over in these discussions.

It's a question that both sides of the discussion claim answers to, and both sound reasonable. The only real answer is data.

As you've said, killing 100% of the time in the 1% scenarios may very well be better than humans driving all the time. Better, as defined by less human life lost / injuries.

Though, one minor addition to that - is human perception. Even if numerically I've got a better chance to survive, not be injured, etc - in a 99% perfect auto-car, I'm not sure I'd buy it. Knowing that if I hear that buzzer I'm very likely to die is.. a bit unsettling.

Personally I'm just hoping for more advanced cruise control with radar identifying 2+ cars ahead of me knowing about upcoming stops/etc. It's a nice middle ground for me, until we get the Lvl5 thing.


The statement at the end of your comment made me wonder if there will be a time in the future where you cannot disengage the automation in the car you're currently in unless you have some sort of advanced license; Something like the layman's version of the CDL.


That solution does not work; it will just increase the number of people driving without a license. For example, in France the driving license is quite hard to obtain: you need around 20-30 hours of tutoring before you can attempt the test, and it's not a sure thing to get it. So the consequence is that there are a lot of drivers without a license, who are involved in a high number of accidents.


> If the computer safely drives itself 99% of the time but in that 1% when the human needs to take control, the human fucks up, the occupants of the vehicle are still dead

Not dead, which I feel is important to point out. Involved in an incident, possibly a collision or loss of lane, but really it's quite hard to get dead in modern cars. A quick and dirty google shows 30,000 deaths and five and a half million crashes annually in the US - that's half a percent.

So in your hypothetical the computer drives 99% of the time, and of the 1% fuckups, less than 1% are fatal.


Why not just mix in consensus-control, artificially generated disengagements?

Even if the system has high confidence in its ability to handle a situation, if sufficient time has passed, request the driver resume control.

Then fuse the driver's inputs with the system's for either additional training data or backseat safety driving (e.g. the system monitoring the human driver).


I like your creative thinking, but that wouldn't work. An immediate problem is it would only train the driver to pay attention when they hear a disengagement chime. L2 depends on the driver to monitor the autopilot continuously.

More productively, Tesla currently senses hands on the wheel. Perhaps they could extend that with an interior camera that visually analyzes the driver's face to ensure their eyes are on the road.


They actually have a driver-facing camera in the Model 3, which is presumably coming to their other cars in a future refresh.


Recent Honda CRVs can have an attention monitoring system in them. I'm not sure how it works, but it does seem to detect when the driver isn't looking around.


If the automation prevents more accidents than it causes, is it still that much of a concern? The results so far say no.


>What are you measuring? The current autopilot already appears to be materially safer, in certain circumstances, than human drivers [1].

Actually the study explicitly doesn't show that.

First of all, the study purely measures accident rate before and after installation, so miles driven by humans are in both buckets. Second of all, the study is actually comparing Tesla before and after the installation of Autosteer, and prior to the installation of Autosteer, Traffic Aware Cruise Control was already present. According to the actual report:

The Tesla Autopilot system is a Level 1 automated system when operated with TACC enabled and a Level 2 system when Autosteer is also activated.

So what this report is actually showing is that Level 2 enabled car is safer than a Level 1 enabled car. Extrapolating that to actual miles driven with level 2 versus level 1 is beyond the scope of the study and comparing level 1 or level 2 to human drivers is certainly beyond the scope of the study.


> Actually the study explicitly doesn't show that

You are correct. We do not have definitive data that the technology is safe. That said, we have preliminary data that hints it's safer and nothing similar to hint it's less safe.


>That said, we have preliminary data that hints its safer and nothing similar to hint it's less safe.

Safer than? Human driving? No, we don't.


Safer than level 1 autonomy.

> So what this report is actually showing is that Level 2 enabled car is safer than a Level 1 enabled car.

which seems to disagree with the leading statement of the first comment in this thread:

> The level-2 driving that Tesla is pushing seems like a worst case scenario to me


"What are you measuring? The current autopilot already appears to be materially safer, in certain circumstances, than human drivers [1]. It seems probable Level 2 systems will be better still."

As far as I know it is indeed correct that autopilot safety is statistically higher than manual driving safety (albeit with a small sample size).

However, something has always bothered me about that comparison ...

Is it fair to compare a manually driven accidental death (like ice, or wildlife collision) with an autopilot death that involves a trivial driving scenario that any human would have no trouble with ?

I don't know the answer - I'm torn.

Somehow those seem like apples and oranges, though ... as if dying in a mundane (but computer-confusing) situation is somehow inexcusable in a way that an "actual accident" is not.


"Appears" is the operative word. The new system is going to kill somebody. It hinges on building a whitelist of geolocated problematic radar signatures to avoid nuissance braking [1]. It's only a matter of time before a real danger that coincides with a whitelisted location causes a crash.

[1] https://www.tesla.com/blog/upgrading-autopilot-seeing-world-...


> What are you measuring? The current autopilot already appears to be materially safer, in certain circumstances, than human drivers

That's a good question. Clearly, existing self-driving tech is safer than human drivers on average. However, "average" human driving includes texting while driving, drunk driving, falling asleep at the wheel, etc. Is the appropriate comparison the "average" driver, or a driver who is alert and paying attention?


> Is the appropriate comparison the "average" driver, or a driver who is alert and paying attention?

The most appropriate comparison set would be the drivers who will replace themselves with autopilot-steered vehicles.


> A refrain I hear, and used to believe, is that machine accidents will cause public uproar in a way human-mediated accidents don't. Yet Tesla's autopilot accidents have produced no such reaction. Perhaps assumptions around public perceptions of technology need revisiting.

Have there been any Tesla autopilot fatalities with the right conditions to spark outrage? That's a sincere question as maybe I've missed some which would prove your point.

The only major incident I'm aware of is one in which only the driver of the car was killed. In an accident like that it is easy to handwave it away pretty much independent of any specifics (autopilot or no).

A real test of public reaction would involve fatalities to third parties, particularly if the "driver" of the automated vehicle survived the crash.


I'm surprised you believe this. Drivers run people down every day and nobody even investigates the cause. Motorists kill about a dozen pedestrians every month in New York City and historically only half of those people get even a failure-to-yield ticket. Meat-puppets are demonstrably unfit to operate vehicles in crowded urban environments, everybody knows this, and nobody is outraged when the people die.


Indeed, it's probably best not to measure the utility of this tech based on preemptive predictions of how an emotional public will react or the reactions of outrage-driven media with terribly short attention spans.

The actual performance of these machines will be the ultimate test. If it does consistently improve safety then I don't really see many barriers existing here; the current unknowns and semantics surrounding it will be worked out in markets and in courts over an extended period of time and will ultimately be (primarily) driven by rationality in the long run.


Exactly. This will be decided by insurance underwriters and actuaries ultimately.

The safest option will be the way the market will be incentivised, despite all the noise around it this is the ultimate rational market.

Insurance is so boring it is interesting to me.


> The current autopilot already appears to be materially safer, in certain circumstances

It depends on how you measure this. We always talk about humans being bad at driving. Humans are actually amazingly good drivers conditioned upon being alert, awake, and sober. Unfortunately a good fraction of people are in fact not alert. If you don't condition on this, then yes, humans suck.

(Put another way, the culture we, including companies such as Tesla, foster of working people overtime is probably more responsible for car accident deaths than anything else.)

The FAA takes pilot rest time seriously. Considering car accident deaths exceed airplane deaths by a few orders of magnitude, it's about time the rest of the world take rest equally seriously as well.


I agree that level 2 isn't an ideal position, but it has also proven to be better than human drivers in preventing fatalities. In all the miles that Tesla's level 2 cars have driven there has been, what, 1 fatality? In that instance there was the exact question of responsibility that you suggested, but that still seems preferable to the status quo if lives are saved.


We need independent numbers on this. Comparing with the same population, the same price range of vehicles, the same road sections. Age, level of education, price of the vehicle, absence of hands-free cell interface, lack of seat-belt alarm seem to be way better predictors in USA of fatalities than having autopilot.

Comparing autopilot Tesla fatalities versus average fatality rate on one road section is dishonest.


It's not as precise as I'd like, but there has been an independent investigation of the safety of Autopilot. After the first fatality while on Autopilot, the US National Highway Traffic Safety Administration wanted to determine whether (as many fear) Autopilot posed a danger to drivers, and found that Autopilot was safe enough to keep on the roads and that Autopilot led to a 40% reduction in crashes: https://techcrunch.com/2017/01/19/nhtsas-full-final-investig...


Of Autopilot or Autosteer?


One is a part of the other, so "both" seems like the natural answer.


Do you have the numbers on how many miles Tesla's level 2 cars have actually been driven while using the feature? I see this sort of argument a lot in regards to Google's self-driving tests, and while it seems convincing to me, it doesn't seem realistic to me that's there a big enough pool of data to make that claim definitively.


From Wikipedia [1]

>According to Tesla there is a fatality every 94 million miles (150 million km) among all type of vehicles in the U.S.

>By November 2016, Autopilot had operated actively on hardware version 1 vehicles for 300 million miles (500 million km) and 1.3 billion miles (2 billion km) in shadow mode.

Those numbers are 9 months old and only apply to Autopilot v1 and not the Autopilot v2+ introduced late last year. I wouldn't be surprised if the current number is in the 500+ million mile range with only a single fatality. The sample size is obviously small, but there seems to be a clear improvement over manual control.

[1] - https://en.wikipedia.org/wiki/Tesla_Autopilot

EDIT: With chc's and my post we have 3 numbers and dates for reported Autopilot miles. Projecting that forward at a linear rate (which is conservative given Tesla's growth) would put us at roughly 750 million miles today.
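The projection, spelled out (month counts are approximate; the linear-rate assumption is the conservative one noted above):

    # cumulative Autopilot miles reported at three points in time (millions)
    data = [(0, 140), (2, 222), (3, 300)]   # months after Aug 2016: Aug, Oct, Nov 2016

    # average monthly growth over the reported span
    rate = (data[-1][1] - data[0][1]) / float(data[-1][0] - data[0][0])  # ~53M miles/month

    months_to_now = 9   # Nov 2016 -> Aug 2017
    print(data[-1][1] + rate * months_to_now)   # ~780, i.e. roughly 750M+ miles today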


It's great seeing that more and more data is being collected about this all the time. I'm a huge proponent of this tech.

What I wonder when I see these statistics, though, is whether all miles are really equal? For example, are Tesla drivers more comfortable using Autopilot in "easy" driving situations? Is there really a one-to-one correspondence in the distribution of the kinds of miles driven with Autopilot on vs. normal cars?

Furthermore, the metric commonly cited is "fatalities ever N miles." Are there fewer fatalities because Autopilot is great, or because Teslas are safer in general? Has there been a comparison between fatalities with/without Autopilot strictly on Teslas? Even then, it seems to me we are subject to this potentially biased notion of "miles" I mentioned previously. The Wikipedia article you mentioned cites a 50% reduction in accidents with Autopilot, but the citation is to an Elon Musk interview. I haven't yet seen anything official to back this up, but if anyone has any information on this, I'd love to see it!


Isn't that easily countered by comparing Tesla Model S' overall rate of accidents vs another similar vehicle, with similar safety rating, including all self-driven and human-driven miles? There should be a proportional reduction.


Yeah, I think so! That's exactly why I mentioned the accident rate reduction cited in the Wikipedia article shared above.

I'd love to see official work that explores that angle (rather than a claim from an interview, which is what the Wikipedia article refers to), I just haven't seen any document/study about it yet.


Yes, but Autopilot can only be activated in the safest of conditions, while the 94 million miles number takes in all types of driving. The comparison doesn't work because Autopilot usage self-selects for the miles where a human driver would also be much less likely to crash.


It was 140 million a year ago[1], and 222 million last October[2], so I guess a conservative estimate would be 600 million miles on autopilot (assuming that's about two months of change and usage remained steady).

[1]: https://www.wired.com/2016/08/how-tesla-autopilot-works/

[2]: https://twitter.com/elonmusk/status/784487348562198529


In fairness to Tesla, that is probably why they are pushing for level 4/5 so hard. It's true that if level 2 drives perfectly for a month, let alone a year, the human backup system is going to degrade markedly. We're not there yet, but it's coming.

That's got to open up some liability questions. You can bet when people die it will be tested in court. You could make a case that Tesla's going to be liable for many level 2 accidents in the long run anyway, so might as well go all in ASAP.


I have been under the impression that because Tesla dispensed with the idea of LIDAR, their solution would never be workable. I am still surprised at their ability to avoid liability for the incidents attributed to the "Autopilot".


I don't see how you'd draw that conclusion. It seems possible to have a workable, purely optical solution. Our workable solution today relies purely on optical (human drivers).


Cameras don't have the same dynamic range as the human eye. A fairer test would be giving someone remote control of a Tesla with only access to the video footage.

I think cameras will hit parity with the human eye, but the question is will it happen before or after Lidar becomes more affordable and compact.


Couldn't you just have multiple cameras of different sensitivity? Cameras are pretty cheap.


The eye's dynamic range at any point in time is actually pretty bad. It's easy to beat the range of the retina with a camera sensor. The human eye can also adjust iris size and light sensitivity, and a camera can match it by adjusting iris size and exposure time. You'd probably want a secondary camera for full night vision, but that's not very relevant for driving with headlights anyway.


The best cameras nowadays have more dynamic range than human eyes. Although I'm sure Tesla is not using this in its cars.


Optical plus audio, haptic and inertial sensing. And smell, but maybe we can ignore that one.


I have a Tesla with this and I LOVE it. Everyone I know on the Tesla forums etc. loves it as well. Please do yourself a favor and try it out for a road trip or similar. It's a game changer.


Same. I love reading opinions about how dangerous or bad the Tesla "autopilot" is, or that it doesn't work, from people who have never owned or driven in a Tesla with autopilot.

I've probably gone about 8,000 miles on autopilot in mine (AP1) and it is truly amazing. After a road trip stint of ~200 miles, I feel much more energized and less fatigued in the Tesla with autopilot 99% of the way than I do in previous non-autopilot cars. It really is a game changer. You may think regular driving doesn't require much brain activity, or that the Tesla cruise control and auto steering don't do much, but you really don't realize how much your brain is working to handle those things until you can take a step back and let the car do most of the work. Then you can focus on other things while on the road that you didn't notice before. The behavior of other drivers, for example. I can watch a car near me with greater focus and see what that person is doing.

Regardless, if you have not driven one, I highly encourage it. You really need to take it out on the highway and drive it for 30 miles+ to really understand how amazing it is. I've driven other cars with "autopilot", and just like car reviews say, they are nowhere close. (Mercedes, Cadillac, Volvo, others with just auto cruise control). It's just one more reason why current Tesla owners are so fanatic about their car, there is nothing else like it and most likely won't be until ~2020 (maybe).


>Same. I love reading opinions about how dangerous or bad the Tesla "autopilot" is, or that it doesn't work, from people who have never owned or driven in a Tesla with autopilot.

Would you say the same if you happened to get involved in a serious autopilot accident, though? That's the question.

It's very easy to be all roses before one sees any issues.


So from what I've seen, there have been a few occasions where an owner has used the autopilot as a scapegoat for their own faults, only to later admit they were at fault or wait for Tesla and third party investigators to conclude that autopilot was not even engaged, etc.

For instance, take the single Autopilot fatality in a Tesla to date, where the guy was coming up to the crest of a hill with a white 18-wheeler crossing perpendicular to the highway. Yes, the autopilot misread the 18-wheeler as a sign above the road. (This issue is not fixed.) At the same time, though, the guy completely disregarded Tesla's instructions to keep your eyes on the road at all times. Turns out, he was watching a show on his phone.

But yes, I would still say the same thing if I was using autopilot properly as intended, i.e., not watching movies while in the driver's seat of a moving car (which is against the law regardless). I don't think there are any serious accidents to date where the driver was using it properly and following the rules. As Tesla states, autopilot is in beta (and most likely always will be); that's not to say it is unsafe, but the driver must stay aware, follow the rules, and know what autopilot is and is not capable of.

I'd say it took me about two weeks of first using autopilot to understand its capabilities.

Also the best part, it keeps improving in my vehicle through updates. It's pretty impressive how good the updates from Tesla are.


Could say the same about people "feeling safer" when driving themselves, which is what the opposing side clings to. (Sure isn't statistics.)


>Could say the same about people "feeling safer" when driving themselves, which is what the opposing side clings to

Only people have been driving themselves for a century, and have a pretty good idea of how safe it is, including how safe it is for them and their skills / city / etc, as opposed to some "one size fits all" average.

>(Sure isn't statistics.)

Well, it can't be statistics, because all we've got is the BS "statistics" from the self-driving car companies. Only a sucker for new technology would accept those, as there are tons of differences between their testing and regular driving. They take the cars out in optimal conditions (not rain, snow, etc.), they drive specific routes (not all over the US), they often have supplementary humans on board to take over (do they count the times when humans had to take control and saved the day as "self-driving accidents" or not?), and tons of other things besides.


Just curious - have you driven one in situations that became emergencies in which autopilot disengaged? What was that experience like and how does it compare to emergencies you have experienced without autopilot available?


Not sure exactly what you are asking, as in, an emergency happened while driving with autopilot and then it disengaged randomly? Or I was driving and had an emergency and I had to engage autopilot?

I've never seen autopilot disengage by itself. It will after 3 warnings to the driver if they don't touch the wheel; I've never let it get that far, though.

There was one time, not quite an emergency, when my contact fell out of my eye into my lap somewhere, and because I had autopilot on, it safely gave me the few extra seconds I needed to look down and find it. Without autopilot, looking away from the road for that many seconds would be considered very dangerous.


I meant a situation where an emergency happened outside your car and autopilot required you to take control (not without warning). Did you feel like you were more or less capable of dealing with an emergency on the road when you had been using autopilot, than you would have had autopilot not been previously engaged?


Other companies are also shipping Level-2 systems. US law says that the driver has responsibility for accidents with a Level 2 system. And Tesla shows constant reminders to drivers.


The law is one thing, but if a system works most of the time, can you really blame a driver for drifting off and doing something else? It's asking a lot for a driver to remain focused on babysitting a robot. I'd rather just drive.


True but if it does reduce the overall accident rate, it might still make sense. Better experience for the driver and better safety are still welcome features. If you'd rather just drive then you're increasing the chance for accidents. Everyone thinks they're a great driver.


I'd rather not drive but if my choices are driving or staring out the window with nothing to do I'd rather just drive to help pass the time. Safety is a factor, but it isn't the only factor. I'd be incredibly safe sitting at home in an empty room staring at the walls, but I choose to do things that are less safe than that so that life isn't miserable.


There was a news story of how somebody lit a grill with the cover closed and it blew up. Yes, I still blame the person.


How well does the Tesla reminder system work for you?


I have to disagree as an owner/operator of a vehicle with this feature set. I consider it a major improvement to traditional cruise control, a modern iteration. When viewing it from the angle of full self-driving / autonomy (level 5) I can see how this argument can be framed, but that's not how us mere mortals view it.


I agree, but it's really difficult to draw the line.

I believe automatic transmission (AT) car drivers are more easily distracted than those driving stick shift (manual transmission, MT).

But AT makes life easy for so many people, that no one even considers if AT is making drivers less attentive.

A classic example of this, the worst case I have seen in person, is a woman talking on a mobile phone held in one hand while switching lanes and taking a left turn at a traffic signal. Things like this just wouldn't be possible if she were driving a stick shift.

In that driver's case, a Tesla-like Level 3-5 driving system would probably be ideal, making things much safer for the drivers around her.

So, should we go back to MT for everyone or nothing? Just to be under the impression that it's much safer? Or should every vehicle be a level 3 to level 5 autonomous vehicle?

I think people choosing any of the above options will have valid points and research to prove their points. Only time will tell when and how any of those research results continue to hold true.


> Things like this just wouldn't be possible if that woman was driving a stick shift.

FWIW, in the UK where manual is more or less the norm, using a phone whilst driving is still fairly common (the penalties were increased recently due to people ignoring the law). From what I've seen, a lot of people will use the gear stick with the phone in the same hand (i.e. brief break in conversation whilst they switch gear).


I've definitely seen people drive manual AND talk on their phones by holding them between their head and shoulder.


Possible. Hence way more dangerous than autonomous systems


What about tossing in tests every twenty minutes or so where you are required to "drive" for one minute (but the car remains in charge) and your driving is scored against the AI's? Maybe if you fail badly enough, you're scored as drunk and have to pull over.
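
Something like this toy sketch, maybe; every name, interval, and threshold here is hypothetical, just to pin down the idea:

  # Toy sketch of the periodic attention check proposed above.
  # All names, intervals, thresholds, and the `car` object are hypothetical.
  CHECK_INTERVAL_S = 20 * 60  # run a shadow drive roughly every 20 minutes (scheduling not shown)
  FAIL_THRESHOLD = 0.5        # minimum acceptable score relative to the AI's own driving
  def shadow_drive_score(driver_inputs, ai_inputs):
      # Compare the driver's steering/brake inputs during the one-minute
      # shadow drive against what the AI actually did; 1.0 is a perfect match.
      errors = [abs(d - a) for d, a in zip(driver_inputs, ai_inputs)]
      return 1.0 - min(1.0, sum(errors) / max(1, len(errors)))
  def attention_check(car):
      driver_inputs, ai_inputs = car.record_shadow_inputs(duration_s=60)
      if shadow_drive_score(driver_inputs, ai_inputs) < FAIL_THRESHOLD:
          car.request_pull_over(reason="failed attention check")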


>Requiring the driver to be awake and alert while not requiring them to actually do anything for long stretches of time is a recipe for disaster.

I heard that this is why autobahns are made with unnecessary curves/turns.


There will likely be a shift in the liability approach, and drivers will be made liable for accidents caused by inadequate technology.

This approach has worked well (for some parties) in many other areas, for example in education: now the teachers are the only ones left responsible for a structurally failing education system.

Now, if this were the 1930s, with hundreds of independent automakers, perhaps the invisible hand of the market would fix this.


Yeah, really seems terrible. Look at the data rather than theory crafting.

https://www.bloomberg.com/news/articles/2017-01-19/tesla-s-a...


But L2 requires the driver's active attention, as they have to have their hands on the wheel every 60 seconds (or something like that). As long as Tesla doesn't remove that constraint, there shouldn't be any worries.

I don't see how you can get to L5 without L2 and the learning that is going on in these cars today.


>I don't see how you can get to L5 without L2 and the learning that is going on in these cars today.

You can use the automation as a backup safety feature until it's L4. For example, the car has the ability to stop if you try running a red light, but you otherwise are required to drive the vehicle.

Automated cars don't just have to be better than humans, they have to be as good as humans with automated safety features.


As a California driver, I always have to monitor motorcycles that are lane splitting and give them space when I see one.

So there is always something to do.


Yes, it is the worst case scenario. It's also the best you're gonna get. Toyota already got the nod from the courts to kill people via technology bugs in their cars (see the 'unintended acceleration' controversy that killed someone (maybe 2? I forget) a few years ago due to bugs that were absolutely preventable if they'd splashed out and spent a couple grand on static analysis tools). So if you think companies are going to break with their tried-and-true policy of hiring the cheapest, least experienced person they can find to slap together whatever halfway works, ignoring every warning from their engineers that they need better tools, that they need more time, that the system needs more testing to be safe, etc., you're going to be disappointed.

This isn't building a bridge. If a company builds a bridge and it collapses and kills people, and it turns out they didn't hire qualified structural engineers or that the CEO ignored warnings from the engineers and pushed the project to hit a scheduled release window or keep profits high, the CEO goes to prison for criminal negligence. With self-driving cars, it's a COMPLETELY different story. You're talking about SOFTWARE. No company that's killed people with software has ever been found guilty of criminal negligence. And they won't be for the foreseeable future.

This is how self-driving cars will go. I'll give you whatever you want if I'm wrong. I'm that confident. A company, using standard modern business practices (that means doing absolutely every single thing research has shown destroys productivity and ensures the end product will be as unreliable and poorly engineered as possible: open floor plan offices, physical colocation of workers, deadlines enforced based on business goals rather than engineering goals, business school graduates being able to override technical folks, objective evidence made subservient to whoever in the room is most extroverted, aggressive, or loud, etc. You know, you probably work in exactly the kind of company I'm talking about, because it's almost every company in existence. Practices optimized over a century to perform optimally at assembly-line manufacturing processes, and absolutely antithetical to every single aspect of mental work.) will rush to be first to market. Maybe they'll sell a lot, maybe not. That's hard to call. What's NOT hard to call is the inevitable result. Someone dies. Doesn't matter if it's unavoidable or not. No technology is perfect. That doesn't matter either. It won't matter what "disclaimers" the company tries to pull claiming it's in the driver's hands. The courts won't care about that either.

But... they will absolutely get away with it. They will not be fined, they will not be forced to change their practices (most likely they will not even be made to REVEAL their development practices at all). You see, if the courts bother to ask what their practices are, their lawyers will point out it doesn't matter. There's no such thing as "industry standard practices" that you could even CLAIM they failed to follow. So their software had bugs. As far as the court is concerned, that's a fact of life, it's unavoidable, and no company can be held responsible for software bugs. Not even if they kill people.

So they'll get away with it - in the courts. In the court of public opinion? Nope. You see, even if they made their self-driving cars out of angel bones and captured alien predictive technology and it never so much as disturbed someone's hairdo, they are destined to fail as far as the public is concerned. Because human beings are, shocker, human beings. They have human brains. Human brains have a flaw that we've known about for ages. Well, by "we", I mean psychologists and anyone who's ever cared enough to learn Psych 101 basics about the brain. There is an extremely strong connection between how in-control a person feels and how safe they feel. Also, feeling safe is stupendously important to humans. This is why people are afraid of flying. If things go wrong, there's nothing they can do. (The same is true when they're driving a car, but people also wrongly think they have some control over whether they have an accident or not. No evidence suggests they have the ability to avoid most accidents.) If the self-driving car goes wrong while they're not paying attention, there's nothing they can do. People will be as afraid of them as they are of flying.

And if you haven't noticed, our society deals poorly with fear. We LOVE it way too much. We obsess over it. We spend almost every waking hour talking about it and patting ourselves on the back about what we're doing or going to do to fix our fears and the fears that threaten others. Mostly imagined fears, of course, because we're absurdly safe nowadays. So it will be the only thing talked about until unattended-driving laws get a tiny extension to cover the manufacture of any device which claims to make unattended driving safe. It'll pass with maybe 1 or 2 nay votes from reps paid for by Uber, but that's it.


"People will be afraid of them as they are of flying."

This is a great analogy, because at the dawn of flight, many people were really, really really afraid of flying -- for very good reasons: the airplanes of the day were incredibly dangerous.

Yet people still flew, and flew more and more, despite many very public disasters in which hundreds of people died, and the airline industry grew and flourished.

Now most people don't think twice about flying, as long as they can get a ticket they can afford. Sure, some people are still afraid of flying, but most of even them fly anyway if they have to, and the majority aren't afraid at all or don't think about it.


Sure, planes were introduced at a time when people were willing to step back and tell themselves 'OK, I don't feel great about this but that's just my emotions running away with themselves, I really shouldn't be scared so I should just do it'. Those times are over. Suggesting people should question, much less actively resist, their most primitive impulses is seen as a direct threat to their person. It simply isn't done.

When a mom says 'I'm not putting my children in one of those killmobiles' and someone says 'well actually, ma'am, it's much safer, and you're endangering your child's life significantly by taking the wheel', that person gets punched in the face and lambasted on social media as an insensitive creep. That's just how it goes.


> There's no such thing as "industry standard practices" that you could even CLAIM they failed to follow.

Are you sure? What about IEC 61508 and ISO 26262? The latter especially, as it was derived as a vehicle-specific version of the IEC standard.

It's an industry-wide standard:

https://en.wikipedia.org/wiki/ISO_26262

...geared specifically to ensuring the safety of the electrical, electronic, and software systems in vehicles.

Look it up - you'll find tons of stuff about it on the internet (unfortunately, you pay out the * for the actual spec, if you want to purchase it from ISO - it's the only way to get a copy, despite the power of the internet, afaict).

...and that's just one particular spec; there are tons of others covering all aspects of the automobile manufacturing spectrum to attempt to ensure safe and reliable vehicles.

Are they perfect? No. Will they prevent all problems? No.

But to say there aren't any standards to look to isn't true.


Unless it works better than a human >90-99% of the time.

Mostly, the drivers, insurance companies, and other people on the road have fewer accidents.


We don't usually switch to a new technology unless it's > 120% better than what exists today. The cool factor will not be enough for practical adoption.


Tons of technology is adopted for smaller marginal gains. No clue where you got this idea from.


I absolutely disagree. It can't just be better than humans; it needs to be flawless. Any problems where people get killed will delay public acceptance of the technology by decades, even if "statistically it's safer than humans". People don't give a damn about statistics, they give a damn about tabloids shouting "self driving cars kill another innocent person!!!". We literally can't afford that.


If the aim is to have a technology that's 100% fail-safe, we can stop pursuing self-driving cars now. We can in fact stop pursuing any kind of technology, because the economic costs of making anything completely fail-safe are usually prohibitive. Also, isn't history rife with counterexamples to your argument? Planes have crashed, people died, tabloid headlines were written about it, and people still fly, probably because it's awfully convenient.


Yes, and every time there's a plane crash while on autopilot, Boeing/Airbus will order that every single plane of that type is grounded until a fix is found. For airlines it's not a huge deal, since they have insurance for that kind of thing and they can always use other planes they have, or rent them, to substitute in their schedules. Now imagine there are hundreds of thousands of, say, self-driving Teslas (or any other brand) and your national safety regulator orders that they all be disabled until a fix is found after a particularly nasty crash. If you get up in the morning and can't use your car because it has been remotely disabled for whatever reason, you will be furious - or at least I know I would be. I'd sell it and buy a normal manual car the same day.

My point is that all of this affects the public perception of self-driving cars - and if we want them to succeed, we need to make absolutely sure that that perception is good. We can't have the nonsense Tesla is trying to pull off at the moment, where they call their system "autopilot" even though they know it cannot detect obstacles at around pillar height, gets blinded by the sun, and can swerve into the oncoming lane before just switching off. These are not theoretical problems - both happen in cars that are out there right now. And if it happens to a regular Joe Smith, then Mr. Smith will think the technology is crap, and we can't let that happen.


Also, demonstrably safer than driving, by time or by distance.


After a long history of fatalities and fixes.


So you're saying we should let more people die (via human driving) to avoid people getting upset over tabloids ignoring statistics?


It seems like the person you're responding to believes that deaths from low-quality self-driving cars will cause the technology to never come to fruition, meaning fewer lives saved in the long run.


We are already "letting more people die" for economic reasons. We could mandate that all cars on the road have the most modern 15+ airbag systems, but it's too costly. We could mandate a 30mph speed limit enforced in hardware, but it's too costly economically, yet I don't think anyone can argue that it wouldn't save lives.

We make these exact choices all the time. I am saying we should "let more people die" now, so that we can save more later. That's not a novel concept.


Right? Every single economic decision we make could be framed as a 'letting people die' choice.


Then that's our own fault for being too stupid to have self-driving cars.

Ideally, we should embrace them even if they are slightly more dangerous than human drivers, because we are getting the benefit of the time that would otherwise be spent driving.


I think there's real-world evidence that this is not the case. There have already been deaths due to Autopilot, and the reaction you're describing here didn't happen.


Tesla's system doesn't have enough sensors. Musk forced his engineers to try to do this almost entirely with vision processing, and that was a terrible decision. Vision processing isn't that good yet. Everybody else uses LIDAR.

I've been saying for years that the right approach was to take the technology from Advanced Scientific Concepts' flash LIDAR and get the cost down. I first saw that demonstrated in 2004 on an optical bench in Santa Monica. It became an expensive product, mostly sold to DoD. It's expensive because the units require exotic InGaAs custom silicon and aren't made in quantity. Space-X uses one of their LIDAR units to dock the Dragon spacecraft with the space station.

Last year, Continental, the big century-old German auto parts maker, bought the technology from Advanced Scientific Concepts and started getting the cost down.[1] Volume production in 2020. Interim LIDAR products are already shipping in volume. Continental is quietly making all the parts needed for self-driving. LIDAR. Radar. Computers. Actuators. Cameras. Software for sensor integration into an "environment model". They design and make all the parts needed, and provide some of the system integration.

Apple and Google were trying to avoid becoming mere low-margin Tier I auto parts suppliers. Continental, though, is quite successful as a Tier I auto parts supplier. Revenue of €40 billion in 2016. Earnings about €2.8 billion. Dividend of €850 million. They can make money on low-margin parts.

Continental may end up quietly ruling automatic driving.

[1] https://www.continental-automotive.com/en-gl/Passenger-Cars/...


Interesting research on Continental. FWIW, looks like Tesla started using Continental's radar to replace Bosch just this week: https://teslamotorsclub.com/tmc/posts/2266769/

I suspect that if/when LIDAR is cheap enough, Tesla will use it.

In the meantime they outfit every single car with the best hardware that is realistic from a cost standpoint today, instead of waiting til 2020.


In the meantime they outfit every single car with the best hardware that is realistic from a cost standpoint today, instead of waiting til 2020.

And so far, it just sits there and does nothing useful, since the self-driving software that can do the job safely with those sensors doesn't exist.


It depends on what you're optimizing for. Others using LIDAR are optimizing for speed to market, while potentially sacrificing ability to solve the problem as fully. Musk's argument is that we know for certain that the entire road system can be navigated by visual cues, because that's how humans do it. We do not know for certain that this is possible with LIDAR.


  Musk's argument is that we know for certain that the 
  entire road system can be navigated by visual cues
We know for certain that human brains can be assembled out of regular atoms, but raising funds for a company that manufactures brains would be getting rather ahead of our current level of technology. The same might be true of computer vision and autonomous vehicles.


Please don't quote with code blocks. It messes with formatting on both mobile and desktop.


  It messes with formatting on both mobile
HN has never been mobile friendly, (element spacing, upvote button size) so I'm not terribly worried about that. If YC wanted HN to work on phones, they would fix their CSS.

  and desktop
Oh?


Block quoting can screw up desktop rendering on HN if the quoted line is too long. Say you put a 1000 character line in a literal quote:

  1.......1000
The whole page would render at the length of that line. So this text that I'm writing now, it'd normally wrap once or twice depending on your window's width. Instead, it'll only wrap after approximately 1000 characters. This is why when I block quote, I put in a line break every 60-70 characters (similar to what you did a few posts up).

I know this behavior happens with IE, I believe I've seen it in other desktop browsers but can't verify at the moment.


On desktop long block quotes go in a scrolling box. One that's pretty wide, so it doesn't usually cause a hassle to have to scroll it. But not so wide it breaks anything.


Additionally to what Jtsummers said, quoting on desktop can drastically reduce the line length compared to the rest of the lines in the comment, which is both ugly and annoying, especially since it makes it more difficult to read since your eyes are whipping back and forth.

There are a multitude of mobile apps for consumption of HN, most of which solve the code-block-line-overflow problem by adding a horizontal scroll, which only makes the problem worse by forcing users to scroll side to side to read the full quote.

Just use `>` at the start of the line to signify that it's a quote, please.


> There are a multitude of mobile apps for consumption of HN, most of which solve the code-block-line-overflow problem by adding a horizontal scroll,

HN does that natively - just drag the code block around with your finger. Still sucks to use though, so > quoting is the way to go.


So you believe the technology gap for assembling a human brain out of regular atoms is similar to that for navigating a road using cameras?


JFC. I don't get why otherwise rational people become so stupidly aggressive when faced with analogies.


Obviously not, the former is far easier.


Humans also learn how to drive using several orders of magnitude less training data than self-driving car efforts are using. Do you think that means we should strip down the training data sets to human-equivalent levels?

People and machines behave and understand the world differently. Just because it works for people doesn't mean it'll work for computers.


How on earth do you reach that conclusion? Every person has at least 16 years of high bandwidth sensor data experience before getting behind a wheel, plus millions of years of evolutionary training built into the structure of their brain.

A vast amount of data has produced every driver.


Data specific to driving is still quite limited for people, even when counting all that passenger time in childhood. The average driver in the US drives maybe 10-15k miles a year, so optimistically a person may have 200-250k passenger miles before driving? Consider that Waymo's cars drove 3 million real world miles in 2016 alone, and 25 billion virtual miles in a simulation, and all those miles are potentially used in a model shared by all the cars.

Now if you want to just look "total experience in the world as a whole" the numbers look a bit different, but if anything that just accentuates the differences here. We don't currently have a way to teach computers to construct mental models of how the world works the same way humans think about it, which would be necessary to use training data that was about the world as a whole.


What's more important, we can teach people much more effectively than machines by communicating concepts. Imagine the driving performance of a hypothetical person who is not capable of speech, so the only way to train them is having them drive in a simulator, and feed them candy when they get it right, while administering electric shocks for every driving error.


Humans are pretty bad at driving though, so I'm not sure that basing self driving technology on us is a good idea.


Humans are bad at driving because they can get distracted and can only look in one direction at a time. A computer system based on visual processing shouldn't have either of those issues.

I agree fundamentally though that it's a weak argument to say a certain approach should be taken just because it's how humans do it.


The argument is not that it's because how humans do it. The argument is that the entire system was designed to be consumed as visual cues.


As far as we know, humans are better drivers than every other species in the universe.


> Musk's argument is that we know for certain that the entire road system can be navigated by visual cues, because that's how humans do it.

I'm hesitant to argue against Musk, yet the actual goal is to navigate better than humans, and it seems reasonable to suspect that wider-range sensors would be a good (and maybe necessary) foundation for achieving that.


Just watching the current system navigate at night on poor, lightly striped streets is frightening enough to keep me away. Search YouTube for autopilot fails and there are some videos from just this year.


You seem to be assuming that the limiting factor is cost and parts. To me it appears the limiting factor is software. AIUI no one has the software to do level 4 autonomous driving, at any price. So why would a parts supplier end up "ruling" automatic driving?


I don't think that's what he said. Parts and hardware aren't the limiting factor, but they enable better software to be created (note Tesla in his example).

Without good hardware you're stuck jumping through hoops to do all the processing in software, which leaves less time to react quickly and accurately.


I've seen you plug flash LIDAR (especially ASC's unit) several times here, but is anyone actually using it on SDVs? I've seen things that could have been flash LIDAR on test cars, but never as the primary sensor.

I worked on a team that evaluated the ASC unit a few years ago, but they found it unusable due to bloom issues. Has that changed?


I don't know how Continental is handling the problem of staring into the sun. Here's a recent overview of flash LIDAR detector options.[1] That's more about the detectors used for looking down from aircraft, where ranges may be thousands of meters. Automobiles don't need that kind of range.

Flash LIDAR has a tradeoff between range and field of view. You can concentrate your energy or fan it out. There's been some interest in systems where you can narrow the output beam when you need to. Or you can combine wide-beam short-range units with narrow-beam long range ones.

There are other flash LIDAR companies, but many of them are vaporware. Quanergy, which bills itself as "the leading provider of solid state LIDAR sensors" announced a unit last year, but doesn't seem to have shipped. Velodyne wants to come out with a solid state product. TetraView is trying for a low-end semiconductor solution that uses common sensors. Luminar and LeddarTech want to use MEMS mirrors.

A long-range narrow-beam MEMS-steerable flash LIDAR might be useful. Look out 300m in the direction you're going when at high speed, and use other wide-angle sensors for a side view and in cluttered city environments.

Somebody is going to get this working at an acceptable price point reasonably soon.

[1] http://spie.org/newsroom/6466-comparing-flash-lidar-detector...


Why are flash LIDAR the way forward? Is it just packaging, and the ability to blend the sensor with the lines of the car? Cost? It certainly isn't image quality AFAIK.

(as an aside: I don't buy the moving parts argument -- a modern car has thousands, many of which have much tougher jobs than "spin at 900 RPM while on".)


I recently heard about a company building automotive headlights with, let's say, "selective lighting", using micro-mirrors. I was then surprised to find out micro-mirror devices have been available for quite a long time. [1] I wonder why they haven't been used for a solid state LIDAR yet; they seem to be _very_ well suited for this. Are there any limitations because of the power the LIDAR needs?

[1] https://en.wikipedia.org/wiki/Digital_micromirror_device


>Tesla's system doesn't have enough sensors. Musk forced his engineers to try to do this almost entirely with vision processing, and that was a terrible decision. Vision processing isn't that good yet. Everybody else uses LIDAR.

I think I agree with this, but is LIDAR expected to work in the rain?


I think this is the wrong question to be asking.

Instead ask: "In good weather, can LIDAR-less systems match the safety of LIDAR?" The answer is "not yet"

Until LIDAR-less systems can match the safety of LIDAR they will simply be banned from the roads, or limited to certain situations such as rainy weather.

Regulators, politicians and society will not allow Tesla to operate a system which in good weather has a much higher accident rate than is necessary - and they will not accept "cost reasons" as a valid excuse for not installing a LIDAR.

Laws are frequently named after a single child who died.


"In a gold rush, sell shovels"


You forget the (smaller, repeatedly broken up) American auto counterpart of Continental: Delphi. Google them and you'll find they're doing pretty well of late.


This is an example of Tesla's RADAR sensor at work.

https://m.youtube.com/watch?v=BE2lQK_0CDw

And Tesla's engineers aren't the first to bellyache about being asked to make the impossible a reality.


The only industry to have produced truly driverless public transportation systems is the rail industry. Not aeronautics. Rail systems happen to be my business, and what I read here makes me very worried.

I don't think the majority understands what safety means in mass transportation. It's not about running miles and miles without accidents and basically saying "see?". It's about demonstrating /by design/ that the /complete/ system over its /complete/ lifetime will not kill anyone. In terms of probability of failure, it translates into demonstrated hazard rates of less than 1E-9, /including the control systems/. This takes very special techniques, and if it could have been done using only vehicle sensors, it would have been adopted by us long ago. I am also sorry to report that doubling cameras and sensor fusion will not get you to an acceptable safety level. We've tried that too, rookies.

Is it "fair", to use Elon's argument? After all, isn't additional safety enough compared to existing situation. Ah but we have been there too! For driver assistance it is indeed better. Similar systems were deployed during the second half of 20th century (e.g. KVB, ASFA, etc). But the limit is clear. It only /improves/ driver's failure rate. It does not substitute for the driver. If you substitute, you have to do much much much better. Nobody will ride a driverless vehicle provided the explanation that it is, you know, "already an improvement when compared to a typical driver". Is it fair? Maybe not, but that's the whole point for entrusting lives to a machine.


It seems like you're taking your own opinion as a universal truth. There are many people (myself included) who will happily ride in a well-tested autonomous vehicle, particularly if the alternative is a taxi, uber, or bus driver who is just going through the motions. 5x safer is still an improvement in my eyes, even if it's still short of the gold standard for aviation.


No personal opinion here; just facts from the only ones around who have done this before. Your argument is flawed. We have been there as well. Here is the catch: you are willing to take the risk, but society as a whole will not. As soon as you release /total/ control to a machine, the public authorities that authorized this become responsible for your life and the lives your car may threaten. The requirements for said authorities to accept such a thing will be staggering.

In other words, it is you who is taking your own opinion for universal truth. You reason in a model where the driver is the only party responsible for reckless driving.


I'd disagree here too. Trains and planes have extremely catastrophic failure modes, while cars do not. Cars have the added benefit of being able to quickly come to a stop in most situations.

An even bigger driving factor (pun intended) will be that the cost of a driver relative to operation is much, much higher in a car. A solution that is better on both safety and cost will be quickly adopted.


For the first paragraph: I don't understand the logic behind your comparison between a train failure mode and a car failure mode. However, your assumption about braking capability is wrong, and it is a crucial point.

It does not matter how well your car can brake. Can your autonomous driving system guarantee it will always brake as hard and as fast as required when you will need it? Can you guarantee that the system will not have a bug the day it needs to brake?

Trains brake very well. There are even rubber-wheeled metros that brake so well we have to limit them in order not to send everyone flying in the wagons :).

But in the safety calculations, standards like IEEE 1474 assume degraded braking capability, and also assume that the preceding train is stopped. In other words, to be declared safe for mass usage, you can't assume the average case. You must assume the adverse case. You will not have a driver to notice that the car brakes poorly, or to be confident enough to drive very close to the preceding car.
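
To make that concrete, here's a minimal sketch of the kind of worst-case separation calculation such a model implies; the numbers are illustrative, not taken from the standard:

  # Minimal worst-case ("preceding vehicle already stopped") separation sketch,
  # in the spirit of a safe-braking model. All numbers are illustrative only.
  def required_separation_m(speed_mps, reaction_s, nominal_decel_mps2, degraded_factor):
      worst_decel = nominal_decel_mps2 * degraded_factor  # assume degraded braking
      reaction_distance = speed_mps * reaction_s          # travelled before braking starts
      braking_distance = speed_mps ** 2 / (2 * worst_decel)
      return reaction_distance + braking_distance
  # 30 m/s (~108 km/h), 1 s detection delay, 3 m/s^2 nominal braking degraded to 60%:
  print(required_separation_m(30, 1.0, 3.0, 0.6))  # ~280 m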

For the second paragraph, again, this is certainly true for driving assistance, but will most likely not be enough for driverless, as it was / is not enough for trains. Of course you may disagree.

EDIT: this is a good reference: https://www.linkedin.com/pulse/safe-braking-model-explained-...


Dude, this is not logical. If a system kills fewer people, we should use it. Otherwise, you're just choosing to kill more people for a false sense of authority/security. Not only foolish, but I resent that you believe you can make the choice for others.

You are using appeals to authority and emotion rather than the scientific method.


He's arguing from a position of someone who's familiar with the socio-political process. It's a different axis than technical arguments or ideological arguments.

I have no substantial comment other than to note that when a driverless car is insurable by Liberty Mutual at normal rates, with all liability held by Tesla, then it's probably reasonably safe.


Maybe buses would be the difficult middle ground: disastrous consequences of failure, with at least the same operational complexity as cars.

However, I don't see why the fears of an industry that needed extremely good results immediately should be extended to one in which all signs point to progressive improvement. The danger is that the general public is quick to jump to blind trust, but that has to be weighed against a vastly different ratio of people who are (or can be) in control to people affected, at least for cars and trucks, which as far as I know are the first POIs in this industry. Buses might get automation later than the rest of the fleet (no source, just off the top of my head).


A 5x reduction in accident rate (plus the convenience of self-driving), advertised and lobbied for in the right way, can be translated into huge political power. That could, with time, convince authorities. And that wasn't the case for the other systems you mention.


"Huge"? Are you so sure?

First, you do not need self driving car to achieve 5x improvement. Good driver assistance systems will get you there.

Then, put yourself in the shoes of the politician who is authorizing this technology for mass use. Picture the number of cars that are going to be running everywhere once this authorization is given, and ask yourself: what will happen to him once the first hundred people have died because of a wrong autopilot decision (and the figures will be on that order, since you are OK with a 5x reduction)? Do you think he will be sleeping well at night? Do you think his political career will be better off, especially if you consider my first point? Would you step into his shoes?


> just facts

> You are willing to take the risk, but society as a whole will not.

What are you talking about? People demonstrably ride planes that are not completely safe. Airlines that are nowhere near completely safe still get tons of customers.


Planes are not driverless. This is what I am talking about.


Oh, I thought you were accepting the characterization of planes as the gold standard. Because planes have the same scary part, of giving up control.

But more importantly a couple paranoid regulators don't really represent "society as a whole", so you're not drawing on particularly relevant experience here when you talk about something as locked-down as rail or planes. With the constant looming death toll of human car crashes and state by state regulation there's a lot of room for getting these systems on the road.


Aerospace is the /absolute/ leader for driver assistance. They have decades of experience, especially in the field of ergonomics and brilliantly crafted semi-automated procedures[1].

But in the end, we all accept the risk of riding planes that we don't control because we entrust our lives to /trained pilots/, not because of such systems. As an illustration, the debate is still vigorous about whether or not a computer should be allowed to sit between the pilot and the actuators [2]. It is also the case for cars, especially after the Toyota blunders [3], so I do believe all this body of experience is relevant and cannot be easily "disrupted".

[1] https://en.wikipedia.org/wiki/Traffic_collision_avoidance_sy... [2] https://aviation.stackexchange.com/questions/149/what-are-th... [3] http://www.edn.com/design/automotive/4423428/Toyota-s-killer...


Facts? Do you have sources?


The requirements for safety in railway systems are well defined in EN 50129, EN 50126 and EN 50128. You are required to meet these standards for an ISA (Independent Safety Assessor) to give a positive report, and then for the applicable transportation authority to grant revenue service authorization. As you can imagine, the willingness of a given passenger to take a certain amount of risk is not part of the process described in these standards.

EDIT: these standards are applied worldwide, despite the EN prefix.

EDIT: good reference site: http://en50126.blogspot.it/2008/07/velkommen.html?m=1


I think the mapping between rail and motor vehicle transportation is not necessarily a direct one. Rail (excluding public transit like subway) is more infrastructural and largely unseen, and cars are basically a daily experience for the majority of the population; this fundamentally changes people's perceptions of each.

> Nobody will ride a driverless vehicle provided the explanation that it is, you know, "already an improvement when compared to a typical driver".

I don't necessarily think this is true (perhaps age-correlated?). Let's set aside the issue of whether or not we can do it, for now, and assume that we have a scenario where self-driving cars are safer than human drivers.

In this context, I can easily imagine a political campaign à la "Think of the children!" that paints human drivers as fundamentally unsafe, advocating for a self-driving mandate in urban areas. Perhaps with a Cash-for-Clunkers type of deal to aid the transition. I am not saying this is desirable, merely plausible; it has all the elements of good politics: an easily-grasped bright-line dichotomy, emotional manipulation, and massive corporate benefits (for vehicle manufacturers, self-driving software vendors, and transportation providers like Uber).


Driverless trains are so far only used in mass transit, so... I am afraid they are part of daily experience.

For the second part of your argument, there are multiple problems, again backed by examples in the railway world.

First, a politician will think twice before casting a devilish image of drivers. Having all the professional drivers against you is essentially political hara-kiri. There is a reason it took 10 years to migrate Paris Line 1 to driverless operation, and it is not technology.

Second, you would not believe how difficult it is to reach consensus on the fact that drivers are unsafe compared to a machine. Still today, even after decades of operations without incident, there are people in the industry who argue otherwise... The most telling example is the high-speed derailment in Santiago de Compostela, essentially due to the fact that the driver was considered a good enough guarantee to drive a high-speed passenger train at up to 200 km/h... Sigh.

EDIT: References [1] https://www.witpress.com/Secure/elibrary/papers/978184564494... [2] https://en.m.wikipedia.org/wiki/Santiago_de_Compostela_derai...


> Driverless trains so far are only used in mass transit so... I am afraid these are daily experience.

Not in the daily lives of the vast majority of Americans.

The three train systems I've used on a regular basis in the last half decade (NJ Transit's NE Corridor, Amtrak's Northeast Regional, and Caltrain) very much do not have driverless trains.

Two of those are in the top 10 commuter rail systems in the US.


You think the rail industry understands what safety means? Trains have literally one degree of freedom in their controls: speed up and slow down. In spite of that simplicity, we still see high-fatality train crashes with regularity. Somehow sticking people in a paper-thin aluminum tube 6 miles up going 600 mph can be done with zero fatalities, but we can't get trains to stop killing people.

I'm ranting here, but you're on an undeservedly high horse.


No problem at all, I was half expecting such reaction.

http://www.uitp.org/sites/default/files/Metro%20automation%2...

There has been 1 accident in 30 years attributed to a driverless rail mass transit control system, despite millions of kilometers travelled and constant worldwide growth. Yes, rail has understood what driverless safety means. You may call it a high horse, but driverless /is/ a high horse. That's my point.

In a way, your comment confirms this success. You think it is easy because you are used to it at an instinctive level. And that is the goal, precisely.

But it is not as simple as you imagine. For instance, trains have multiple degrees of freedom. As an illustration, train builders carefully calculate the dynamic envelope of the cars (e.g. the worst lateral deformation due to bogie flexibility) and check it against the tunnel wall geometry.

http://www.railsystem.net/structure-gauge-and-kinematic-enve...


Fair point, I guess I was referring more to this comment:

> I don't think the majority understands what safety means in mass transportation.

And inferring that you meant that rail understands safety, generally, better than any other mode of transportation, which I took issue with. I believe you 100% when you say that only one accident in 30 years has been attributed to driverless trains.

What I find horrifying about rail is how simple PTC should be: "am I exceeding the maximum speed for this stretch of rail? is there a train in front of me? if yes to either, slow down" and yet Amtrak says it will take billions of dollars and decades to install. I'm thinking of accidents like the NE Corridor derailment in 2015.
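
The naive version really is that short; here's a toy sketch of the check as described, with the caveat that real PTC also has to handle braking curves, signalling integration, fail-safety, and interoperability, which is presumably where the cost lives:

  # Toy version of the check described above; real PTC is vastly more involved.
  def ptc_should_brake(speed, speed_limit, distance_to_obstruction_m, stopping_distance_m):
      over_speed = speed > speed_limit                          # exceeding the limit for this stretch
      too_close = distance_to_obstruction_m < stopping_distance_m  # can't stop before the train ahead
      return over_speed or too_close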


I can really understand your frustration on PTC.

PTC is a problem of ROI for operators. The incentive is apparently not great, so it will take decades. Also, and this is frequent in rail, any complicated solution that leverages the existing infrastructure will be explored first, further entangling the situation. In more controlled economies, the state long ago mandated the technology. [1] Some would say it is a case where the free market cannot be trusted to make the correct decisions. It is a societal debate, no longer an engineering topic. [2]

[1] Not too bad: https://en.m.wikipedia.org/wiki/Positive_train_control

[2] https://en.m.wikipedia.org/wiki/Rail_Safety_Improvement_Act_...


You're again confirming the opposite:

> the derailment was caused by the train's engineer (driver) becoming distracted by other radio transmissions and losing situational awareness, and said that it would have been prevented by positive train control, a computerized speed-limiting system


I know that was the cause. I'm lamenting that the rail system has been unable to deploy PTC because of the cost, which, to a nonprofessional, seems astronomical for what should be a simple system.

Remember, I was initially referring to rail's approach to safety generally, not just in the context of automation.


Are we talking about metros or trains? Metros are built out of the way. They shouldn't kill people, because there are only a few stations along the track where they could kill people.

The second you put the train in the open air where other people are, where weather is, where nature is, those things kill people on the regular.

Get out of your tunnel, and suddenly your "success" has nothing to do with the control system.


Metros are the only driverless systems in mass transit so far. There are some ongoing projects for driverless freight but I consider that it is not relevant to this particular discussion.

I don't want to sound obnoxious, but driverless metros routinely operate in the open air :). Tunnels are not a mandatory condition for such systems. They manage strong winds, strong rains with reduced adhesion, etc. They manage the possibility of people or non-protected trains intruding on the track. A lot of brainpower was put into automating decisions related to fire scenarios or emergency evacuations. Metro lines like Paris Line 1 are extremely busy. There are plenty of ways to kill people, yet they don't. Just the "stuck handbag" scenario is a nightmare to manage. Yet they handle it.

A driverless system is truly a massive piece of software. You guys should come in the tunnels and have fun with us!


> Metros are built out of the way. They shouldn't kill people because there are a few stations of track that they can kill people.

This is demonstrably false. There are many metro systems with tracks on viaducts and at-grade (e.g. "open air"), with GoA 4 control systems.

These systems can and do detect objects on the track. They also deal with severe weather (heavy thunderstorms, snow, etc).

I think you're discounting the experience of delivering railway systems, and the lessons learned in deploying them that could be applied to self-driving cars.


I'm not saying you're wrong, because I have absolutely no authority on the subject, but apart from knowing some terminology, how do we know you have any expertise either?

> If you substitute, you have to do much much much better. Nobody will ride a driverless vehicle provided the explanation that it is, you know, "already an improvement when compared to a typical driver". Is it fair? Maybe not, but that's the whole point for entrusting lives to a machine.

I've heard Elon mention this, and while I don't know exactly how it's measured, he claimed that fully autonomous cars would have to be 10x "better" at driving than a human before they would be allowed. I'm paraphrasing when I say "better", but I'm sure I could find the video of him talking about it.

At any rate, your comment is interesting.


You are correct. I will edit and add some supporting references.

I don't know where Elon gets his numbers from, but according to EN 50126 practitioners, human error is in the range of 2E-4 to 1E-3, whereas safety functions are classified at least SIL 2, which means 1E-6. In other words, the system must be 1000x better than a human.

Here is a good reference: http://en50126.blogspot.it/2009/10/safety-integrity-levels-s...


All due respect to your experience with the rail industry and its impressively high bar for safety, in the auto industry, if they can get the death rate down to 48 people killed every day in the US, that will be twice as safe as we are currently.


Sure, and this goal can be reached without driverless operation. I am really supportive of driver assistance. The point I am trying to make is that bringing driverless on top of that is a challenge that seems to be underestimated.


> This take very special techniques and if that could've been done using only vehicle sensors, it would have been adopted by us long ago

If human flight was possible, surely they would have done it hundreds or thousands of years ago.

Your argument disregards not only advancement in theoretical knowledge, but also advancement in engineering in the form of computational power and sensor sensitivity.


You confuse technological advances with safety techniques. These are very different things. Safety techniques are theoretical principles that are used to keep technology safe. Typical examples of safety techniques in systems design are readback, diversity, majority voting, and coding of information.

In the case of establishing the correct position of vehicles, whatever technology is used will have an error of measurement. This error accumulates over time, especially if you are measuring displacement. You need reference points to keep your positional uncertainty within an acceptable value. An autonomous vehicle uses a map containing features that can be used as reference points and triangulates its position based on that. A human brain does this constantly using its eyes, ears and memory. It is an example of diversity: you correlate the displacement perceived by your inner ear with what your eyes see, using a reference learned in your brain about your surroundings and your sensors' abilities.

To reach 1E-9, we cannot rely on such self-learned things: we can agree that the probability of the landscape changing is quite a bit higher than 1E-9. In the case of trains, driverless trains use coded beacons/loops or GPS diversity. Such techniques imply that the infrastructure around your vehicle collaborates in the safety of the system. Hence my statement.
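
As a toy illustration of why the infrastructure has to cooperate, here is a sketch of on-board odometry error that grows until an external beacon resets it; the error rates and spacings are invented for illustration:

  # Toy model: odometry error grows with distance travelled since the last
  # trackside beacon, and is reset (bounded) each time a beacon is passed.
  def position_uncertainty_m(distance_m, beacon_spacing_m, error_per_m=0.002, beacon_error_m=0.1):
      since_last_beacon = distance_m % beacon_spacing_m
      return beacon_error_m + error_per_m * since_last_beacon
  print(position_uncertainty_m(10_000, beacon_spacing_m=300))  # ~0.3 m, bounded by the beacons
  print(position_uncertainty_m(10_000, beacon_spacing_m=1e9))  # ~20 m and growing: effectively no beacons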

This being said, the rail industry has long dreamed of a completely vehicle-centric solution. There are high stakes: reduced costs, competitive advantage, etc. Last attempt here: http://www.alstom.com/products-services/product-catalogue/ra...


This is a good summary of modern risk culture. It has good sides and bad sides. On the good side, you have things like commercial aircraft and trains that, despite them being inherently dangerous, manage to achieve extremely low fatality rates.

On the bad side, this risk culture destroys large-scale innovation, and even safety in the long run. The problem is that we've adopted an attitude that safety always comes first, meaning it is immoral to do something in a less safe way, no matter what the other benefits might be. This means we get a regulatory, tort, and engineering culture that is willing to keep using existing systems, because they are grandfathered in and therefore "reasonable", but will only adopt new systems if they can be shown to be perfectly safe.

This culture is fairly new. I date it to sometime in the seventies. Ralph Nader and the Pinto were both symptoms and causes. You can see the transition in, for example, how America responded to the Apollo 1 fire vs. the Challenger accident.

Since you're a rail engineer, let's look at rail mass transit systems. The NYC subway has a limit of roughly 30 trains per hour, a two-minute headway. All of the braking rates, margins of safety, and signaling systems do their job, and you never see two trains hitting each other. During rush hour, though, trains are packed way over capacity, and this is mostly because of this headway limitation. If you were to imagine this on the freeway, you'd have to leave two miles between you and the next car. The cost of this level of safety is that about half a million people have to spend an hour every day in miserable conditions. Many of them choose to drive or take cabs instead, to avoid this.
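
To make the arithmetic explicit (back-of-the-envelope only, assuming a 60 mph freeway speed):

trains_per_hour = 30
headway_s = 3600 / trains_per_hour            # 120 s between trains
freeway_speed_mph = 60
gap_miles = freeway_speed_mph * headway_s / 3600
print(f"{headway_s:.0f} s headway = {gap_miles:.0f} miles of separation at {freeway_speed_mph} mph")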

When they make this decision, none of them even give the slightest thought that driving is maybe 10x more dangerous than taking the train. They understand that safety for both is plenty good, and the way that they spend two hours of their day, 13% of their waking hours, is a lot more important than some tiny difference in their risk of dying on the way to work.

Ultimately, while very well-intentioned, this safety culture is inhuman. It's pessimistic. It says "nothing in your life could possibly be so important that it's worth any possibility of you being injured."

So, this is what we face now, over and over. With self driving cars, it's, "sure, the chance of you getting killed is half as much as if you had driven, plus you get all of that time back to have a nice conversation, read a book, or idly stare out the window, but since it doesn't meet our aviation/rail transit level of safety, you can't have it." You even see it with kids -- your twelve year old kids can't be allowed out of the sight of an adult, or you're a dangerous, neglectful parent. It does not matter that childhood is the process of learning how to be an adult, and that becoming an adult is a process of progressively mastering greater and greater freedoms -- the most important thing is that we are never seen to expose children to even the most minute level of risk.

I really want us to have a real conversation about acceptable risk, informed consent, and human progress. We need it badly to regain our souls.


> On the bad side, this risk culture destroys large scale innovation, and even safety in the long run.

So true. It's an issue in General Aviation, where there has been huge progress in safety devices (electronic cockpit gauges, airbag seat belts, etc.) but it has been illegal to install them in older aircraft because of the FAA's very slow, expensive certification process. Finally, in the last year or so, the FAA got serious about what's called "Part 23 reform", which will vastly streamline the process for safety upgrades in older aircraft.

Also, I don't want this comment to be interpreted as shitting on the FAA. I'm a libertarian-leaning former liberal who generally has very low confidence in our government, but I consider the FAA, with its insane safety record of which I'm a massive fanboy, to be an exception.


It is a very nice summary and I agree with you. Thanks for that.


What befuddles me is that in all these discussions about self-driving cars seemingly no one refers to the massive body of knowledge in this area that comes from the aviation world.

I've posted variants of this same comment several times and I'm starting to feel like a broken record.

Look at studies of efforts to make planes safer by removing the human element. While efforts like autopilot have made things safer it reaches the point where more automation can reduce safety as pilots are no longer alert and/or don't trust the instruments and/or can't fully manually override the automation.

Call it the uncanny valley of automation safety.

Bridging that last few percent for true automation (ie where vehicles aren't designed to have drivers or pilots at all) is going to be _incredibly_ difficult, to the point where I'm not convinced it won't require something resembling a general AI.

All of this is why I think driverless cars are going to take much longer than many expect.


There's a big difference: commercial pilots are highly trained, even-tempered, and take their job seriously. Most drivers are lazy, distracted, and apt to do something stupid in an emergency. It's very hard to make something safer than a commercial pilot. It's much easier to make something safer than a typical driver.


Yeah that validates his point. If the arguably most highly trained vehicle operators around lose situational awareness and fail to recover after automation fails, what do you think will happen with untrained drivers?


That argument overlooks a selection bias: aircraft systems fail and commercial pilots take over and land without mishap all the time. Components inside airplanes, at airports, and even inside air traffic control can and do fail.

One example is the attack on the Chicago air traffic control system. Dozens and perhaps even over a hundred aircraft suddenly were flying around with no oversight. Every single pilot took local control, negotiated with the other pilots, and collectively were able to either land or divert without incident.


most drivers are lazy, distracted and accident prone.

If this had any element of truth roads would be empty.

This kind of sweeping and bigoted dismissal of other people is a bit too self serving in the context of self driving cars and is made too casually and too often on HN now to allow balanced discussion.


I agree. I think it will be very hard to make something that drives better than a really good human driver who is focused on driving. Getting better than the 50+% of people I see with their phones clamped to their faces while they drive (despite that being illegal in Austin) is a much easier (though still difficult) task.

The other thing is that something going wrong with a plane in the air is a pretty big deal. You can't just pull over and wait. If you assume a "first, do no harm" principle of robotics for driverless cars, the failure mode should in most cases be "pull over and wait." This can still cause problems in many situations, but people do it now.


> Most drivers are lazy, distracted, and apt to do something stupid in an emergency.

Er, citation needed. I think the vast majority of drivers are good drivers — otherwise, vehicular transport would be a disaster.


There are around 1.25M vehicle fatalities every year worldwide [0]. It is a disaster. Driving has killed more people than the world wars.

"Good drivers" -- we have no benchmark to measure against. Maybe it's amazing that 10x more people aren't killed, or maybe it's dismal that anyone is killed at all. When we have autonomous vehicles, we'll have a reference to compare against. I predict that "bad" will be the only word to describe the current situation.

[0] https://en.wikipedia.org/wiki/List_of_countries_by_traffic-r...


Yeah, that's true at least where I live. Most times I drive I'll see someone do something stupid or inattentive (or even do something stupid myself!), but I see thousands of people driving normally, i.e. driving well.

Countless times I've seen pedestrians or cyclists throw themselves in front of traffic without warning, and every time the drivers have stopped without incident. A collision is by far the more exceptional case.


Call it the uncanny valley of automation safety.

I am not disputing your assessment, but please don't discount liability. Planes can pretty much fly themselves today - there are no significant technology issues with the idea of "taxi away, take off, fly to destination, land, taxi to gate". All of this happens in what is perhaps the most regulated traffic environment on the planet.

The issue is with creating the code that deals with "oh shit" scenarios. Whilst it is probably possible, and even feasible, to write code to cover every possible failure scenario, who is going to be left holding the can when it fails (all systems have a non-zero probability of failure)?

Who will be held responsible? The outsourced company that coded the avionics/flight control software? The airplane manufacturer? The airline company? The poor fucker that wrote the failing logic tree that was supposed to deal with that specific failure scenario, but was forced to work overtime the 47th day in a row when that particular code was cut?

It is a liability nightmare, and when you add up the cost of creating a software system that must never fail, the increased insurance premiums, the PR/marketing work to convince the unwashed masses that this is actually safer, and the whole rest of the circus required to make this a reality, you will find that pilot costs are not all that bad. Especially since pilots have significant downward pressure on real earnings these days anyway.


> Planes can pretty much fly themselves today

but

> The issue is with creating the code that deals with "oh shit" scenarios.

So they fly themselves except they don't?

That's kind of my point: what makes anyone think truly driverless cars are going to happen anytime soon when a human is required to deal with these "oh shit" scenarios? What's more, I think the "oh shit" scenarios for cars are FAR more complicated. With planes someone else deals with scheduling for take off and landing. While in flight, the plane simply needs to not fly into other objects and maintain speed, direction and altitude.

As for liability, I agree. It's a nightmare, particularly when the standard will probably be "did the software cause injury or death?" when the standard should be "what is the incidence of injury or death compared to a human driver?"

I mean, that'll be little comfort to the family of someone killed in an accident. We humans seem to have a weird tolerance for humans negligently killing other humans.


> We humans seem to have a weird tolerance humans negligently killing other humans.

Really? If anything I'd have said it was the other way round. Humans get jailed for negligently killing other humans with vehicles, and they sometimes get jailed or banned from driving for negligently driving in a way that might have endangered another human. On the other hand, the prevailing opinion in this thread seems to be that whilst it's entirely appropriate to punish bad driving by humans, similarly egregious errors made by software should be tolerated provided their average accident rate is lower than the humans'.


You could argue that in the "oh shit" scenarios for a car, the proper action is to always stop. Most human drivers will instinctively stomp on the brakes if they see anything they're not expecting, and this is pretty much what today's autonomous software does.

Recovering from the "oh shit" scenario is the difficult part, but human pilots often can't recover either; after all, it makes little sense to try to fix an engine fire while flying, so they opt to land instead.


>the proper action is to always stop

It's not. But it's a reasonable first reaction which is why we end up doing it. (That or swerving.)

But as soon as we realize the thing that made us twitch is a squirrel or a plastic bag, our forebrain takes the foot off the brake or straightens the wheel.


So why is it unreasonable to think that a computer can do this? This, being take a reasonable first reaction to a situation, namely stop, then follow up with a proper action once more data is available.


You don't stop though. You start to put your foot on the brake and then you take it off. Presumably, for a computer which doesn't really have different classes of reaction times in the same way, should never brake in the first place.


I don't think that presumption is true, it's a high bar that doesn't really provide much benefit to achieve. If a computer decides to tap the brakes because it thinks an "oh shit" scenario is coming up, why is that suddenly a huge transgression?


The point is that computers don't really have the same type of reflexes that humans have. The theory is that everything is pretty fast. (OK, they can run a background analysis in the cloud, but that's presumably too slow to be useful.) Computers are generally not going to respond with "reflexes" and then change their minds once they've had time to think about it for half a second.

Computers could possibly be designed with these sorts of decision making patterns if there were a need to but I'm not aware of that being done today.


> Computers are generally not going to respond with "reflexes" and then change their minds once they've had time to think about it for half a second.

Well, I disagree on this point, as that's essentially how regression works, and so indirectly how neural networks work. The data the car gets isn't all available immediately; the information it takes in over that half second is useful data that aids in classification and decision making.

Just as a quick example, take https://tenso.rs/demos/rock-paper-scissors/ and think of the classifier as "making a decision", and it switches its decision based on the most recent information.
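
A toy sketch of that "react first, refine later" pattern (my own illustration; the thresholds and per-frame confidences are made up):

frames = [0.55, 0.60, 0.35, 0.10, 0.05]   # P(obstacle) over roughly half a second

braking = False
for t, p_obstacle in enumerate(frames):
    if p_obstacle > 0.5 and not braking:
        braking = True                    # cheap precaution on weak early evidence
        print(f"frame {t}: possible obstacle, start gentle braking")
    elif p_obstacle < 0.2 and braking:
        braking = False                   # later frames say plastic bag, release
        print(f"frame {t}: reclassified as harmless, release brake")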


The point is that all presumably happens "instantaneously" from a human perspective. Hence the claims that autonomous vehicles have no lag in responding to events.


> after all it makes little sense to try and fix an engine on fire while flying

No, but you can ditch an engine that is on fire.

> instead opting to land.

That is supposed to be the outcome of any successful flight.

Autoland is possible with an engine out, even at low altitude (on final approach):

http://www.askcaptainlim.com/flying-on-the-boeing-777-flying...

So as far as the software goes that's business as usual and not even an 'oh shit' scenario.


Right, so why does the negative opinion towards self-driving cars seem to be that a computer isn't allowed to slow down to give it more time to react, which it would just treat as business as usual?


Well, for one you're passing your incompetence off to other drivers to deal with, something that will inevitably lead to accidents behind the car that slows down without any actual reason, for another because driving is a lot more complex than flying when it comes to automation. You might expect the opposite but pilots routinely describe their careers as 30 years of boredom punctuated by 30 seconds of sheer panic.


And why is that any worse than the current situation with humans?


This is a very important topic that I am surprised does not receive enough coverage. Thank you for bringing it up.

It will be particularly interesting if accident blame is placed on the 'dumb cars', and then insurance companies do a 180 and charge MORE for 'dumb cars' operated by humans. Once they put this information in their pricing models, I assume its 'stuck' in there until the next major NTSB report is published.

As complacency sets in over a couple months & years, the accident rates will likely swing from "dumb" human operated machines back to "Level 4 Highly Intelligent Teslas/UBERS/Argo AI", and that market might get a real shock when the pendulum comes back their way!


I agree, the devil is going to be in the almost infinite edge cases: the visual negotiation that goes on between drivers at box junctions, dealing with bad or aggressive drivers who ignore right of way or tailgate, ethical decisions in all the "Kobayashi Maru" no-win situations (extreme weather, black ice, highway pile-ups, mechanical failures).

An advanced AI may well be able to identify whether an object coming towards the windscreen is a bird, bat, leaf or a rock, but what will its intuition be about how much of a problem it is likely to be? Should it swerve to avoid a raccoon and risk whiplash for passengers? Should it aim to avoid large insects if the owner is vegan?

Also, people are very used to mild lawbreaking. We expect a cab driver to double park and let us out the car if there are no available parking spaces, but would an AI be authorised to bend the law or would it have to find the nearest parking space, which may be 5 blocks away and could be taken by the time it gets there?

I suspect we will get autonomous drone-like flying cars long before we get full autonomy in city centres or rural areas, because flying through a mostly obstacle-free space with an ability to avoid collisions on three axes seems much more reachable?


> Should it aim to avoid large insects if the owner is vegan?

Only if the other cars on the road around it are also owned by vegans.


>or don't trust the instruments

Thankfully this isn't as big of an issue with driving on the ground. Airplanes don't have sensors that give them the same precision as a car's wheel rotation or proximity to nearby objects.


On the contrary, this is a much larger issue on the ground. A commercial airplane spends most of its flight time in clear air, in an airway assigned to it by 24/7 air traffic control, safely separated from large objects with which it can collide, and largely safe from disruptions which necessitate sudden control inputs to avoid an immediate crash.

Cars don't operate in a comparable problem-space.


Proximity to other objects doesn't matter when your air speed indicator isn't accurate and you can't get lift.

https://en.wikipedia.org/wiki/Air_France_Flight_447


Biggest news buried at the end. It says that several engineers have quit since October 2016 (including the head of autopilot) when Tesla started selling "fully autonomous driving" hardware upgrade packages. Says the engineers don't agree the hardware is capable of supporting this and that it was ultimately a marketing decision.


If I were one of those engineers, and didn't believe in the claims being made, I'd personally be quite worried of being held personally liable if the company gets sued in the case of accidental death. Last thing I'd want is my bug being responsible for someone dying.

E.g. I would quit too.


Good thing it would be almost impossible to attach personal liability to an employee of a corporation unless they intentionally designed the system to be unsafe with malicious intent.


Frankly, I would be more worried about someone dying using a system that I had a hand in than whatever effect it might have on my career.


> Last thing I'd want is my bug being responsible for someone dying.

Is exactly what was written above.


In the context of a lawsuit. Which is different from not wanting it for its own sake.

But quitting doesn't retroactively remove your hand from the system if that's your main worry...


A lot of people move jobs, especially in LA. Is there a first-hand link to one of these engineers criticizing the systems? (Not trolling, I just cannot get past the WSJ paywall.)


I just ordered a Model S with Autopilot, and as I've been reading the comments on the various Tesla forums, I'm not sure I'm ever going to use it. Some of the stories are honestly terrifying (sudden deceleration on the highway, swerving into other lanes, etc).


Absolutely do not use it.

I am one of the biggest Tesla fans out there. I fucking love the company. But Autopilot in its current form is nothing short of dangerous.

I took a test drive in a Model S a couple months ago and enabled Autopilot at the Tesla rep's encouragement while on a straight stretch of route 90 near Boston. We were going 70mph, a safe speed.

The car came to a point where the highway curved, and a slight deceleration is required to navigate the curve correctly.

Little did I know, Autopilot stays at the speed you set and does not alter it as the environment requires, short of not hitting the car in front of you. So of course it tried to take the curve at 70mph and swung out of the lane almost instantly, prompting immediate corrective action from me to avoid a serious accident.

I couldn't believe the Tesla rep hadn't made this clear. I was required to have my hands on the wheel, but the position of my hands doesn't ensure that I'm mentally ready for egregious errors on the car's part and prepared to correct them at a split second's notice at all times.

Operational question mark aside, as an investor I was also astonished that the software was still in such a rudimentary state that it didn't know to slow down on curves. I found this troubling. It was scarcely more advanced than cruise control, to be honest.

It's the one place where I think Elon is really gambling with people's lives as well as his company's credibility, the former being an infinitely worse transgression than the latter.


That's not true at all. AP (at least HW2 AP in my car) does definitely slow down for curves. This is more apparent on surface roads, which have more extreme curves. My car will slow down ~15MPH on one curve, whereas I would only slow myself ~5MPH. It's definitely a part of the system.


Yeah, this is 100% false. I've owned a Tesla for over a year. I take several curves on a daily basis with the speed set to around 82 mph. Going into the curve, the car slows down to about 65 mph, which is the speed every other car on the road takes that curve. It's even stated in the AP documentation and release notes. Go ask other Tesla owners as well. I've never heard of a Tesla being "swung" out of a lane on a curve.

Tesla uses a combination of the visual lanes, cars in front, and an accelerometer, to determine the curve and how much it needs to slow down.

Other than a freak situation last year (not on a curve; it was the hill crest with the white truck crossing the highway), every SINGLE accident has been shown to be the driver's fault with autopilot disengaged. Not a single accident of the kind you described has happened.


I am glad to hear this.

But it is not 100% false that the Tesla I test drove functioned this way. It is, in fact, 100% true. This happened last month.


You may however be overestimating how much a car like the S needs to slow down in a turn. It has a really low center of gravity. Maybe it didn't need to slow down at all and you were just being paranoid.

Can you provide a map link to the stretch of road where this happened?


I don't think I was being paranoid....I'm generally a very, very fast driver who prefers the inner lane and 85mph+ speeds given the opportunity. I do know the car has amazing handling but it unequivocally started swerving heavily out of the lane at the speed it was traveling at.

It's route 90 between Allston and Back Bay. It looks like a pretty gentle curve so looking at the map certainly makes my description of events seem questionable, but I can only tell you that it occurred as I described.

I understand your skepticism.


Are you sure auto steer was engaged? I also had a Tesla for a couple of years and AP absolutely slows down in curves consistently.


It gets worse, if your Tesla is following a car in front of you, and they switch lanes, but you can’t switch lanes because another car is coming from behind in that lane, the Autopilot will switch nonetheless.

This almost killed a tester from the German federal motor vehicle approval agency. Their overall report is devastating, and shows the Tesla autopilot is little more than a glorified cruise control, marketed in a very deceptive way.


I'm not sure what to say when the code I recently turned in (and passed) for the path planning project of term 3 in Udacity's Self-Driving Car Engineer works better at changing lanes than Tesla's system:

https://github.com/andrew-ayers/udacity-sdc/blob/master/Term...

Then again, it does have a failure mode where occasionally, for some reason, it will direct the car to change lanes into the path of a much faster moving vehicle in the lane being changed to. Most of this is because it only runs the behavior planner every second or so in the simulation, and probably does get everything perfectly correct in the prediction part (I honestly am not sure where the problem lies, though).
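
Not the actual project code, but a sketch of the kind of check that catches that failure mode: before committing to a lane change, project the gap a few seconds forward so a much faster car approaching from behind in the target lane isn't treated as "clear".

def lane_change_safe(ego_s, ego_v, others, target_lane, min_gap=15.0, horizon=3.0):
    """others: list of (s, v, lane) tuples -- Frenet s in metres, speed in m/s, lane id."""
    for s, v, lane in others:
        if lane != target_lane:
            continue
        gap_now = s - ego_s                          # positive means ahead of us
        gap_later = gap_now + (v - ego_v) * horizon  # where the gap will be in a few seconds
        closing_from_behind = gap_now < 0 and gap_later > -min_gap
        if abs(gap_now) < min_gap or closing_from_behind:
            return False
    return True

# A car 30 m behind but 15 m/s faster is not safe to merge in front of:
print(lane_change_safe(ego_s=100, ego_v=20, others=[(70, 35, 1)], target_lane=1))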


The problem Tesla has is that their system only has ~ 40 meters visibility to back or front.

That means if you're on the Autobahn, at say 130 km/h in the right lane, and a Porsche is coming from behind at 300 km/h, the Tesla will not be able to see it, and will consider the lane free.
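
The arithmetic on that scenario (using the sensor range and speeds stated above, not measured values):

sensor_range_m = 40
ego_kmh, overtaking_kmh = 130, 300
closing_speed_ms = (overtaking_kmh - ego_kmh) / 3.6   # about 47 m/s
print(f"time from first detection to contact: {sensor_range_m / closing_speed_ms:.2f} s")
# roughly 0.85 s -- far less than it takes to abort a lane change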


Remind me never to drive on a German Autobahn.


It’s quite interesting, because obviously an entirely different class of issues becomes apparent when the speed between two lanes on a highway can differ by a factor of 4.

This is what you get when the speed limit actually is "unlimited".


This is not true at all. Teslas do not use the car in front to switch lanes.


It depends on what mode you set it to, but under some circumstances, it does.


What would these circumstances be? I have never seen my car do this.


When the Tesla can not reliably detect lanes (for example, due to too dense traffic), it starts to just follow whatever vehicle is in front, and determines the lane from that vehicles movement.

This can lead to major issues, as mentioned.


When it cannot reliably detect lanes, it disengages autosteering while making a very obvious warning sound that you cannot possibly miss.


Not always, in highway traffic the lane markings are frequently too obscured to be readable by the camera, but if it believes that the cars in front are driving in the lanes, it just uses them.

Otherwise autopilot wouldn't work at all on highways.


No, it does not use the car in front of you to switch lanes.


Sounds like the issue was the steering input, not the speed. I call BS on there being a single non-construction stretch of I-90 you can't take at 70mph in a Model S. Hell, the true speed is probably much closer to 110.

Normally I'd call the issue "the driver", but since the car was automated, it was an input failure, not a curve that couldn't be negotiated at that speed.


Elon's goal is to get humanity to another planet. The Tesla thing is just a way to get money to do that. I thought this stuff was commonly known?

I do agree with his goal. And maybe its worth some percentage of Tesla buyer's lives? Let's not talk too much about this, okay?


I think you are being sarcastic, and people don't get it.


Actually not.


You're getting a lot of naysayers here who take one experience and extrapolate it to mean everyone. There are a lot of Tesla drivers out there who use it every day to and from work. I suggest you try using and evaluating the system yourself.


Would you still be concerned in a situation where you were the only car on the road?


Yes, there are lots of complaints about it handling curves poorly, misreading overpasses as solid objects, etc. etc.


> In May 2015, Eric Meadows, then a Tesla engineer, engaged Autopilot on a drive in a Model S from San Francisco to Los Angeles. Cruising along Highway 1, the car jerked left toward oncoming traffic. He yelped and steered back on course, according to his account and a video of the incident.

Is this video online?


> Mr. Meadows said he was later dismissed for what he was told were “performance issues.” Tesla declined to comment on Mr. Meadows but noted that the incident happened months before the release of the technology, giving the company plenty of time to work out problems that had been discovered during test drives.

We used to treat our test pilots with the highest regard. How low have we fallen?


This is the third or fourth time I've heard that story repeated, and the details change each time Meadows tells it, in my opinion. The date and viability of the system change each time I've read it, and that leads me to believe that his motivation may not be altruistic but vengeful instead. He probably did get fired from Tesla for performance issues, and so he feels the need to fire back at them by citing an incident that happened in the very early stages of the technology's development.

We treated test pilots with the highest regard because they were separated from the technology/machinery itself. There was no personal stake in it for them other than making it out alive and exposing issues with the systems being tested. In the current cases, the "test pilots" have a vested interest in the system because they are the ones designing it and deploying it. That's why I have to stifle my gut reaction when I hear stories like Meadows'. Which is more likely: that Tesla would risk its entire reputation by releasing software that was dangerously inadequate, or that he's a little butthurt about getting fired and threw a bit of a tantrum?


> We treated test pilots with the highest regard because they were separated from the technology/machinery itself. There was no personal stake in it for them other than making it out alive and exposing issues with the systems being tested.

I think this is an extremely salient and important point, and I've never thought of it before!

Test pilots only care about a good product, not success or failure of the product. That's not the case anymore.

Maybe this would be a good time to propose a new "test pilot corps" for cars, made up of perhaps some elite drivers who don't work for any car company.

Or heck, maybe the army wants to test self driving cars!


> Maybe this would be a good time to propose a new "test pilot corps" for cars, made up of perhaps some elite drivers who don't work for any car company.

Yes, having more neutral testing programs like CA's standardized testing would be nice, particularly if the tests were mandatory for all manufacturers.

I'm surprised there isn't a national testing system in place for that, but I guess it is our system to leave most control to the states.


Exactly. I would love to see a "test pilot corps", but that's never going to happen as long as the major players involved in these technologies are companies that are looking to make the most profit possible. A "test pilot" that has access to all the different self-driving programs is a prime point of weakness for any kind of confidentiality or security that might be in place for these programs.


> I would love to see a "test pilot corps" but that's never going to happen as long as the major players involved in these technologies are companies that are looking to make the most profit possible

Wouldn't top companies lobby for a standard when they can meet it and other companies can't? Doesn't this happen in every industry, e.g. food and drug, aerospace, manufacturing, all the time?

> A "test pilot" that has access to all the different self-driving programs is a prime point of weakness for any kind of confidentiality or security that might be in place for these programs.

How do you propose evaluating them, then? Just use people as guinea pigs and see which system causes the fewest accidents? Do you think people will elect politicians who completely ignore public safety to satisfy the whims of corporations?


Public video of a problem during testing is a big no-no. Here in Germany, nobody shows the issues Porsche test drivers have during >300 km/h tests on public highways at night, only if they cause a major accident.


This isn't exactly an isolated incident, YouTube has lots of videos of autopilot steering wildly off course. The biggest problem is that Tesla allows turning on autopilot on roads that are not a highway and feature significant turns and hills obscuring "perfect lane vision", and the system is not prepared to handle that at all:

https://www.youtube.com/watch?v=ZBaolsFyD9I

https://www.youtube.com/watch?v=IOnuKrzCLYc


Wow, if I were a passenger in the second video, I would insist on taking over if a human were driving like that. I don't know anything about the technical issues, but this is so not ready for prime time; judging from that Autopilot test drive, it seems super amateurish.


I'd like to point out that the first video you linked is nearly 2 years old.


What's your point? The second video is only a month old and in that video the car still drives in the way only a total jackass would drive, in the most favorable possible conditions for a self-driving car: clear and sunny with little traffic, clear road markings, and no pedestrians or bicycles.


This is software released to the public which doesn't pass a basic d(yaw rate)/dt sanity check and steers hard into oncoming traffic.

Not sure who cares about version here. Your airplane went down? Yeah, should be fixed now, no worry, try the new version!
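
For what it's worth, the kind of sanity check being referred to could look something like this (my own sketch, not anyone's production code): reject steering commands that imply an implausibly fast change in yaw rate.

MAX_YAW_ACCEL = 0.8   # rad/s^2, assumed plausibility limit

def sane_steering(prev_yaw_rate, cmd_yaw_rate, dt):
    yaw_accel = (cmd_yaw_rate - prev_yaw_rate) / dt
    return abs(yaw_accel) <= MAX_YAW_ACCEL

# A command swinging the yaw rate from 0 to 0.5 rad/s within 50 ms fails the check:
print(sane_steering(0.0, 0.5, 0.05))   # False -> clamp it or hand back to the driver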


That's absolutely terrible. If a police cruiser were behind you while the car was driving like that, you'd have a hard time arguing you were in control of the vehicle and avoiding a citation. Unacceptable.


If this is May 2015, this is prior to the system in question (HW2 or AP2) even existing in prototypical forms.

The previous system (HW1 or AP1) is a MobileEye chip, which is found in most other cars that also have "lane keeping" features. They all use the same chip with the same model provided by MobileEye, and can choose to overlay features on top of it or augment the system with other sensors (such as radar on the Model S/X). For that system, it is essentially the same performance whether it's a $150k Tesla or $30k Honda.


A reference to Chris Lattner:

"In recent months, the team has lost at least 10 engineers and four top managers—including Mr. Anderson’s successor, who lasted less than six months before leaving in June."


Since we're finally getting some refutations to Self-Driving Hype, let me drop some quotes here:

“I tell adult audiences not to expect it in their lifetimes. And I say the same thing to students”

"Merely dealing with lighting conditions, weather conditions, and traffic conditions is immensely complicated. The software requirements are extremely daunting. Nobody even has the ability to verify and validate the software. I estimate that the challenge of fully automated cars is 10 orders of magnitude more complicated than [fully automated] commercial aviation."

- Steve Shladover, transportation researcher at the University of California, Berkeley

http://www.automobilemag.com/news/the-hurdles-facing-autonom...

"With autonomous cars, you see these videos from Google and Uber showing a car driving around, but people have not taken it past 80 percent. It's one of those problems where it's easy to get to the first 80 percent, but it's incredibly difficult to solve the last 20 percent. If you have a good GPS, nicely marked roads like in California, and nice weather without snow or rain, it's actually not that hard. But guess what? To solve the real problem, for you or me to buy a car that can drive autonomously from point A to point B—it's not even close. There are fundamental problems that need to be solved."

- Herman Herman, director of the Carnegie-Mellon University Robotics Institute

https://motherboard.vice.com/en_us/article/d7y49y/robotics-l...

"While I enthusiastically support the research, development, and testing of self-driving cars, as human limitations and the propensity for distraction are real threats on the road, I am decidedly less optimistic about what I perceive to be a rush to field systems that are absolutely not ready for widespread deployment, and certainly not ready for humans to be completely taken out of the driver’s seat."

- Mary Cummings, director of the Humans and Autonomy Laboratory at Duke

https://www.commerce.senate.gov/public/_cache/files/c85cb4ef... [pdf]

All quotes pulled from this article (which is really quite good and you should read it in full):

https://www.nakedcapitalism.com/2016/10/self-driving-cars-ho...


The easy part is relatively easy, but it's hard to conceive how the hard part will be solved.

Just driving around New York City for a while makes me think that generalized autonomous driving, as a problem, is essentially "solving" strong AI. Consider the case of approaching a complex intersection during rush hour. There's a traffic cop in the intersection waving his hands around. You reach the intersection, the light is green, you want to proceed straight, but cars are blocking the way because they're backed up into the intersection on the cross street. The cop points directly at you, making eye contact, blows a whistle, and shouts at you, pointing and yelling "right right right". You're uncertain whether he means you should try to weave around the blocking cars, but he blows the whistle again and it becomes clear he is telling you you cannot go straight, and you must divert and make a right turn right at the intersection. He gestures again, indicating that he wants you to turn into the nearest lane on the cross street, then points to the car behind you and indicates that it should also turn, but into the center lane. You nod, and he looks away to another car.

This happened. So a self-driving car would presumably have to understand and interpret shouted commands, realize that they are the one being shouted at by someone with the right authority, recognize gestures, somehow be able to engage in the equivalent of recognized eye contact, be able to make an OK gesture, and have some sort of theory-of-mind about the traffic cop as well as the drivers of other cars.

Not easy.


Even worse: I've been on several mountain roads with stretches of one-way traffic, where either an officer has to signal a switch in lane direction every few minutes (which might not be clear otherwise, especially around a bend), or cars have to occasionally reverse to let opposing traffic through. Don't think I'll ever be letting an AI do that!


I was in a small bus on a switchback mountain road in Peru. The bus stopped at a low-lying turn, and we could see that a muddy stream was racing across the road at its low point, pouring away into the valley off the downhill edge it was eroding. The pavement under the stream was gone. The driver got out and found a couple of what looked like 2-by-8 pieces of lumber, and placed them across the stream, adding some rocks and rubble underneath like track ballast. He then slowly tiptoed the bus across these creaking muddy boards, leaning out the window to stay on track. We were not swept over the edge, as far as I recall.

Needless to say, not a situation for "auto-steer".

I'm sure this has been thought through, and I imagine the solution involves zones requiring different levels of autonomy and capability, some means of zone discovery or classification for unmarked areas, and self-driving cars refusing to continue automatically when overmatched.


I think you can argue that at least those are outlier very rural sorts of places. It's harder to write off major US cities. (To be clear, interstates are still compelling uses but they're not universal self-driving.)


That kind of thing happens occasionally around Boston when snow piles turn two lane streets into one lane, and it's common anywhere where construction or an accident partly blocks a road.


I often think of Manhattan wrt the autonomous taxi believers. (Which while extreme for the US is precisely the sort of place they'd presumably have to handle.) I sorta go: "Have you ever been there and looked around?" Design an AI that can get cross-town in Manhattan at rush hour and I start erecting a shrine to our robotic overlords.


There's quite a bit of research on the Duke site around, among other things, issues with inattentiveness when supervising automated systems.

https://hal.pratt.duke.edu/publications

I saw Cummings speak on a panel a couple years back. She talked about automation in aircraft (she's a former Navy fighter pilot) and she made the comment that humans don't handle boredom well.


Level 2 still does not drive smoothly, as many have confirmed on their forums. It does require you to jiggle the wheel every few minutes to ensure you're alert. There's also https://www.hbsslaw.com/cases/tesla-autopilot-2-ap2-defect


It's amazing to me that Tesla is able to sell a car in its price range that lacks basic features that come standard on a $17k Corolla, like adaptive cruise or automatic emergency braking, especially since they effectively reduced the capabilities of their cars by rolling out AP2. If any other company tried to pull that, they'd be laughed out of the room, but somehow, Tesla is cheered.


You've clearly never driven a Tesla if you think a $17k Corolla has more advanced adaptive cruise control. I've driven my Model S over 15,000 miles, with probably over 50% of that on Autopilot (mainly interstates), and NEVER had an issue with the adaptive cruise control. It's absolutely flawless. (Autopilot has its quirks, although it's still an extremely useful feature, and the primary reason I bought a Tesla.)


Do you have the Mobileye hardware?



how'd you do that?


Add http://facebook.com/l.php?u= to the front of the URL.


Honestly, I don't understand why the automobile industry doesn't learn from the airline industry. Airplanes have worked out how to balance autopilot capabilities with the need for pilots to remain engaged and attentive for years. Simply implement drive-by-wire, similar to Airbus's fly-by-wire systems. A driver's inputs to the controls would still be required, but the autonomous systems could prevent or limit certain actions (such as accelerating into a stopped vehicle or swerving off the road).
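
A minimal sketch of that "limit, don't replace" idea (my own illustration with made-up numbers, not any manufacturer's logic): the driver's pedal input passes through, but gets clamped when the time-to-collision to the car ahead is too small.

def limited_throttle(driver_throttle, gap_m, closing_speed_ms, min_ttc_s=2.0):
    if closing_speed_ms <= 0:          # not closing on anything
        return driver_throttle
    ttc = gap_m / closing_speed_ms     # seconds until contact at current speeds
    if ttc < min_ttc_s:
        return 0.0                     # refuse to accelerate into the obstacle
    return driver_throttle

print(limited_throttle(0.8, gap_m=12, closing_speed_ms=10))   # 0.0 (blocked)
print(limited_throttle(0.8, gap_m=80, closing_speed_ms=10))   # 0.8 (passed through)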


Airline pilots are professionals, car drivers are just trying to get somewhere... paying attention isn't their full-time job. Also building autopilot software for a near empty 3-dimensional space is much easier than for complex roads of varying shapes, moving obstacles, country regulations and different road markings...


I talked to a pilot once who asserted that this is far from settled in aviation: he described the Boeing way, and the Airbus way, as two separate schools of thought. Boeing's systems keep the humans in the loop more at the expense of the autonomous systems, Airbus does the opposite. Empirically, it doesn't seem to make a difference with regards to safety outcomes.


Aerospace engineer here, this Airbus / Boeing divide rings true. There's a 99% Invisible episode that talks about some of these differences, more from an Airbus side of things:

http://99percentinvisible.org/episode/children-of-the-magent...


There's a norm of competence in airline pilots. We can't say the same for automobile drivers.


Correct. There's also the small detail that while autopilot systems might be a rounding error in the cost of an airplane, it would be a significant financial burden for average joe/jill driver.


Plus the sky is, on average, empty.


Can't we? (about the vast majority of automobile drivers)


Depending on where you set your standards, sure. But we have an expectation in the United States that safety technology has to accommodate darn near everyone.

We have a law that airbags have to accommodate unbelted passengers: http://www.iihs.org/iihs/sr/statusreport/article/35/6/1

Now sure, a passenger can be different than the driver but it's the same philosophy.

The number of illegal maneuvers I see every day on my commute is astonishing - not using blinkers, intruding on crosswalks, not moving over for emergency vehicles, following too closely, etc. It doesn't help that the only traffic law enforcement is really around speeding / running red lights / DUIs.

The problem with autonomous cars isn't the autonomous cars - it's accommodating non-autonomous actors. It only takes one google car hitting an old lady who's chasing a duck on the street to become CNN breaking news for the next 3 months a la that airliner that disappeared.


Many of the current cars are 100% drive by wire (ECU/power brakes/electric power steering), but that doesn't mean anything.

Probably quite a good part of the hundreds-of-millions price of a passenger plane is the autopilot (even 1% means $1M). And even at $1M per plane, 99% of a plane's autopilot works because it can assume that the current plane is the only one in a large vicinity of a point in space. This is assured by a centralized third party (the control tower) that is not really automated but rather a very stressful human job (that's why air traffic controllers are well paid). This is not the case with cars: most of the work is having each individual car detect, with complex but not very good sensors and software, what is around it, in a swarm of other moving objects that do not communicate.


> Many of the current cars are 100% drive by wire (ECU/power brakes/electric power steering), but that doesn't mean anything.

Electric power assisted steering. Few cars on the road currently are correctly termed steer-by-wire. Actually, only the properly-optioned Infiniti Q50 comes to mind.


> Many of the current cars are 100% drive by wire (ECU/power brakes/electric power steering), but that doesn't mean anything.

You can still brake when the ECU does something stupid. You can still brake if you lose vacuum (engine not running); it's "just" much harder to do so.


What does "drive-by-wire" achieve other than removing a shaft between the steering wheel and steering rack? Cars already have collision avoidance without full self-driving capabilities.


They are doing stuff like this.

Lane departure warning and collision warning systems are pretty prevalent. And even autobraking collision avoidance systems are getting pretty widespread.


Good for the engineers having more ethics than VW, and resigning when they are asked to go farther than the technology allows.


VW engineers broke laws to do what they were asked to do. Tesla engineers are just doing the best that they can and taking longer than marketing would like. If that's a moral equivalence, then I guess we're all VW engineers.


* having more ethics than <insert car manufacturer here>

As we now know, all of them are cheating.

(Also, "having more ethics" sounds really strange in my ears. "... being more ethical than those at <XY>"?).


FYI I've found a way around the WSJ paywall - copy and paste the title into Facebook and click on that link.




Please no paywall articles. Please.


Typical wsj paywall. Web link no longer seems to get around it, either, unfortunately.


Please stop posting paywalled articles. Especially WSJ. This community represents the future of the internet. I don't know what the answer is for making sure content providers get paid, but the WSJ model isn't it. So let's vote with our attention (or lack thereof) and kill this annoying practice before it makes the internet an even more walled and unpleasant place.


Your post lacks information on the part of why the WSJ model isn't acceptable. Paying money for a newspaper isn't exactly unprecedented, and there are plenty of people who are fine with that model.


They want us to commit to a subscription, using a credit card, in US currency. That is significantly different from the newspaper model, because it requires some trust on my part, and it is quite awkward; my CC sits in a firesafe when I am not traveling. Subscriptions also encourage FOMO behavior, something I am currently having a bit of trouble with, personally.

I too would prefer that we do not allow subscription content here. If the story is significant, then it will be covered elsewhere.



