
The level-2 driving that Tesla is pushing seems like a worst case scenario to me. Requiring the driver to be awake and alert while not requiring them to actually do anything for long stretches of time is a recipe for disaster.

Neither the driver nor the car manufacturer will have clear responsibility when there is an accident. The driver will blame the system for failing and the manufacturer will blame the driver for not paying sufficient attention. It's lose-lose for everyone: the company, the drivers, the insurance companies, and other people on the road.



Requiring the driver to be awake and alert while not requiring them to actually do anything for long stretches of time is a recipe for disaster.

Everybody who's looked at this seriously agrees. The aviation industry has looked hard at that issue for a long time. Watch "Children of the Magenta".[1] This is a chief pilot of American Airlines talking to pilots about automation dependency in 1997. Watch this if you have anything to do with a safety-related system.

[1] https://www.youtube.com/watch?v=pN41LvuSz10


Wiener's Eighth and Final Law: You can never be too careful about what you put into a digital flight-guidance system. - Earl Wiener, Professor of Engineering, University of Miami (1980)

It seems that we are locked into a spiral in which poor human performance begets automation, which worsens human performance, which begets increasing automation. The pattern is common to our time but is acute in aviation. Air France 447 was a case in point. - William Langewiesche, 'The Human Factor: Should Airplanes be Flying Themselves?', Vanity Fair, October 2014

Eventually mean/median system performance deteriorates as more and more pure slack and redundancy needs to be built in at all levels to make up for the irreversibly fragile nature of the system. The business cycle is an oscillation between efficient fragility and robust inefficiency. Over the course of successive cycles, both poles of this oscillation get worse which leads to median/mean system performance falling rapidly at the same time that the tails deteriorate due to the increased illegibility of the automated system to the human operator. - Ashwin Parameswaran (2012)

... from my fortune clone @ http://github.com/globalcitizen/taoup


That said, aviation accident rates have been falling asymptotically while this "troubling trend" has been going on. In the US we've had no fatalities on domestic commercial flights since like 2009, and only the two at SFO a couple of years ago on international flights to the US. This is on a couple of billion flights with hundreds of billions of passenger departures.


Airliner pilot discipline is orders of magnitude better than your average driver.


I'm a private pilot (general aviation, single-engine Cessnas). For the most part, our discipline is orders of magnitude higher than all but the best drivers. Airline pilots are orders of magnitude better than us. You are absolutely correct.


Not to mention the myriad of safety and control mechanisms in place on the aircraft (redundant systems etc) and away from it (air traffic control, IVR etc)


Airline pilot discipline was also probably pretty good fifty years ago yet the accident rate has dropped as the tech has improved.


A good pilot can sometimes recover from being given a bad aircraft. If the aircraft get better, of course the accident rate will improve.


Airline pilot selection also plays a role.


That's why we currently have 1.3 million people dying on the road in the US alone. It might make the matter of transitioning to self-driving worse, but it also makes manual driving worse.


The WSJ article includes NHTSA data indicating 'only' 35,000 traffic fatalities per year. Where are you getting this 1.3 million number from? That would be equivalent to a full 1% of the US population dying on the road every 2.5 years.


You're right. I googled for "number of car deaths in US" (or so I thought) and that number came up. 1.3M seems to be the number for worldwide fatalities.


That number sounds like the worldwide number, not US. Many places have much, much worse road fatalities per capita.


Commercial air travel will serve just shy of 4 billion departures this year, with about 40 million flights.

You are correct in the ratio (1:100) but I think you mistook departures for flights and extrapolated 2 orders of magnitude too far.


> This is on a couple of billion flights

You sure about that number? Seems incredibly high to me.


"​Airlines are expected to operate 38.4 million flights in 2017, up 4.9%."

Maybe 3 billion passengers/year?

From http://www.iata.org/pressroom/pr/Pages/2016-12-08-01.aspx


This is a well-researched area, in addition to being pretty obvious to everyone. Personally, I stopped using cruise control years ago. If I have to pay attention, I'm better off driving.

This isn't just a matter for Tesla. The auto industry is rapidly heading for much better assistive driving systems. There's no way that the people heads-down in their cell phones are going to do this less once they realize they don't really need to pay attention.

Will accident rates get better overall anyway? Who knows? But systems that aren't intended for autonomous use are going to get used that way.


There was a cruise-control-ish system I heard about on a car (a Mercedes, iirc) in Germany when I was a kid: instead of a target speed, you set a maximum speed, such that pressing normally down on the gas allowed the car to accelerate up to that speed. It trims down the excess gas to avoid exceeding the maximum. If you released the gas, the car would coast; and if you pushed the pedal down close to the floor (i.e. "to the metal"), it would allow you to exceed the speed you'd set, for example to momentarily accelerate in order to pass.
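
Conceptually it's just a throttle-trimming rule. A minimal sketch of that behavior (my own illustration, with invented names and thresholds, not any manufacturer's actual logic):

  # Illustrative only; a real system blends this smoothly instead of hard cutoffs.
  def effective_throttle(pedal, speed, max_speed, kickdown=0.95):
      # pedal in [0, 1]; speed and max_speed in km/h
      if pedal >= kickdown:
          return pedal       # pedal close to the floor: override the limit, e.g. to pass
      if speed >= max_speed:
          return 0.0         # trim the excess gas so the set maximum isn't exceeded
      return pedal           # below the limit: the driver's foot drives the car as usual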

I loved this system and have always wished for something like this on a car in the US. I've never liked the common cruise control in the US -- where you set a target and it applies the gas for you -- because I didn't like how removing my foot from the gas pedal moved my foot fractions of a second further from the brake in an emergency.


This is called a speed limiter and I LOVE it in my Mercedes. I set the limiter to match the road's speed limit and I can just drive without worrying that I'll get caught speeding. No need to constantly look down at the speedo.


Pointless in most reasonably modern cars, which will chime to alert you for any such problems (oil, fuel, etc.). Much better to pay attention to the road itself.


Clearly you don't live somewhere where speed limits are rigorously enforced. Setting a limiter frees you up from having to match the limit yourself and allows you to pay more attention to the road.


I think this was meant to be a response to this sibling post:

https://news.ycombinator.com/item?id=15095626

Probably did not wait long enough for the reply button to appear.


Except you should be checking your dashboard instruments every few seconds. Temperature, oil pressure, speed, fuel. I'm constantly doing this while driving. Road ahead. Mirrors. Instruments. Road ahead. Over and over.


What are you driving that you need to check your temperature, oil pressure, and fuel every few seconds? If you don't, and you didn't have to check your speed, you could shorten your cycle to mirrors and road.


With a hot air balloon you have lots of time to respond.

Though I'd check my ballast too, and probably read a book.


> I've never liked the common cruise control in the US -- where you set a target and it applies the gas for you -- because I didn't like how removing my foot from the gas pedal moved my foot fractions of a second further from the brake in an emergency.

It's not even just that: at least as of 2015-era cars, the system seems worse at managing the gas overall than a human is. I'm not a huge fan of cruise control as defined above because I find it makes me really inattentive (my problem, of course), but the benefit of not using cruise control is that it seems like you get much better mileage. My family used to drive regularly between Minnesota, Wisconsin, and Illinois for a few years, and I would always get better mileage not using cruise control than my brothers (who used cruise control) would. The difference would often be as much as half a tank of gas or more on older cars (2000-2010), and on relatively newer cars (2015) it'd still be a difference of a fair number of gallons.

I think the systems just aren't good at predicting when to coast and when to accelerate, and for very hilly regions, this means a lot of wasted gas.


I wouldn't be surprised if you were allowing speed to fall on those hills, while the cruise control systems were downshifting to get the power to maintain speed.


This feature exists in my Jaguar XF. Similar to the cruise control, you can set a 'max speed' and the car will simply not accelerate past that speed. I don't think I've tried flooring it to see if it would let me exceed it in that case though.


It will; there is normally a "kick down" point where it will accelerate at max, for emergency situations. This also normally disengages the max limit.

Another feature is speed warnings, which "beep" at you when you exceed them. Currently they only seem to support a single threshold, but it should be possible to integrate them with satnav and speed sign recognition. I expect these would be safer, especially if linked up with a "black box" to report excessive violations to a parent, insurance company, police, etc.
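
A rough sketch of how such a combined warning might pick its threshold (purely hypothetical; the satnav and sign-recognition inputs are assumptions, not any existing car's API):

  # Hypothetical: warn when speed exceeds the strictest limit we know about.
  def should_warn(speed, manual_limit, satnav_limit=None, sign_limit=None, tolerance=2):
      known_limits = [x for x in (manual_limit, satnav_limit, sign_limit) if x is not None]
      return speed > min(known_limits) + tolerance   # small tolerance to avoid nagging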


It’s almost always standard in new European cars. I use it when I am in a speed-camera infested area.


That's just a speed limiter. Have it on my 4yo Volvo. Can be overridden by depressing the pedal fully to the floor (not a kickdown, before someone jumps on that aspect - it's a manual car)


That system sounds pretty fantastic compared to the dumb cruise control I have in my car. Though it's not as predictable for the people that don't understand how it works.


That matches the behavior of the speed control I've seen in Renault cars.


For me cruise control is just giving my gas pedal foot a break on long drives. I still have to constantly adjust the speed with my hands and the brake itself. I'm also steering and watching what I'm doing.


I drive with cruise control extensively. I tweak the speed with the up/down buttons as needed to compensate for passing or being passed. Thumb is ever ready on the cancel button which smoothly initiates coasting.

This way my foot can readily cover the brake pedal and I initiate braking much quicker than off-gas > on-brake.


> I stopped using cruise control years ago. If I have to pay attention, I'm better off driving.

I view cruise control as a safety feature. I can keep my foot hovering over the brake pedal instead of on the accelerator, reducing reaction time in a crisis. Maintaining attention on the road has never been a problem for me, though I suppose the hovering-foot posture helps.


I recently drove an Accord with lane-following and speed-adaptive cruise control. It completely failed (but thankfully by refusing to try) in the one instance where I wanted to use it (stop-and-go traffic), and it was nerve-wracking when it was on, as it kept losing the lane. People naturally alter the throttle and speed when going around corners and up slight inclines. It feels alien when that doesn't happen.


I have a Subaru with lane-following and adaptive cruise. Lane follow fails miserably because it keeps losing the lane boundaries, even on clearly marked roads. But adaptive cruise control works extremely well, and particularly shines in stop-and-go traffic.

One catch there is that it'll stop automatically, but it won't ever go again if it came to a full stop - you need to tap the gas to reactivate cruise. If the car in front of you starts moving but you don't, it'll alert you (an audible chime plus a visual indicator on the dashboard) to remind you that you're supposed to do that. I suppose that's a kind of a safeguard to keep the human alert?


I also drive a new-ish Subaru, adaptive cruise seems to work well in some situations (low speed stop-and-go, like you mentioned) but at high speeds it is terrifying... A speed that is reasonable on straight highways is pretty jarring around curves. If my foot is on the gas I'll subconsciously make the needed adjustment, but with cruise on I don't usually react in time.

This could partly be a consequence of living in the Pacific Northwest... lots of winding mountain highways!


I'm in PNW as well. I find that it works great on major freeways - e.g. I commute over I-90, and it works great there. On I-405 as well. On the mountain routes, like say parts of SR-202, yeah, it's ill advised.


On most highways you can maintain speed through curves, but not in the mountains, and they do warn with plenty of signs. So CC is not a good fit for mountain driving.


That sounds at least somewhat workable. The Accord seemed to drop out of cruise control if the speed dropped below 15-20 Mph, so it was entirely useless in stop-and-go, and even slow-and-go.


I've got the new Hyundai Ioniq with adaptive cruise control and lane assist. Last weekend I drove 160 miles with both enabled for the first time. I just found it shifted my focus. I was much more aware of what was going on outside the car. Setting the max speed to 75 mph enabled it to follow the car in front very effectively and overcame the incline and corner speed issue. The only problem was having to intervene to prevent undertaking.


I second the effectiveness and pleasantness of Hyundai's implementation. My 2015 Sonata has those features (it's standard now, but was introduced that year for the Limited trim). It, combined with auto-hold to apply the brakes when the car stops, made road trips and slow-going commutes so much more pleasant. It sucks for stop-and-go traffic, since it disengages when the car completely stops. But I can't fault a cruise control system for sucking at a scenario that doesn't actually involve cruising.


The 2018 model year (I believe) has full stop-and-go support. The previous two model years would cut out somewhere around 25 mph (this is what I have). It really is only for freeway driving outside of heavy traffic. I've used a 2018 CRV that has stop-and-go support and it's quite nice.

The lane tracking isn't that great, but I don't mind it that much. I don't use it much for normal driving, but I found it it's pretty fantastic in heavy crosswinds. The car does a pretty good job of keeping the lanes (assuming you can see them well enough) so you basically drive like normal instead of having to constantly fight wind gusts.

Under normal conditions it doesn't do enough to be terribly useful unless you're not paying enough attention… at which point you shouldn't be using it anyway.


Surely 'speed-adaptive cruise control' is an oxymoron?


Yeah, I stopped using cruise control years ago too - I moved to Los Angeles.


> Personally, I stopped using cruise control years ago.

Cruise control seems fairly harmless - you still have to keep lanes, and keeping your foot on the gas isn't particularly demanding either. I largely use cruise control because I am able to save on gas that way, by avoiding unnecessary acceleration/deceleration. Combining it with lane following and distance keeping is more problematic, imo.


It's mostly that I tend to drive on roads with at least a moderate amount of traffic. It tempts me to not optimally mix with other vehicles. I just got out of the habit of using it.


The traffic-aware cruise control in the Tesla is very good in traffic, especially stop-and-go traffic. My passengers always tell me to switch it on because it gives a smoother ride than me. It's also much more relaxing.


Personally I use cruise control almost constantly, for the exact same reason. When you're watching the speedo, you're not watching the road.


> There's no way that the people heads-down in their cell phones are going to do this less once they realize they don't really need to pay attention.

The counter-argument is that they do this without Autopilot anyway. Given that they're already not paying attention, adding in Autopilot seems like a net gain.


Hence my comment that maybe accident rates improve anyway. Although it's hard to predict the delta between distracted driving/no automation and oblivious driving/imperfect automation. It's at least plausible that you have fewer but worse accidents when someone's watching a YouTube video and the car suddenly panics.


I’ve long wondered if people who extensively use advanced assist systems will see deterioration in manual driving aptitude, and if that deterioration will be restricted to the operating domain for the assist systems or be more general.


That's a well-documented concern in aviation. Hard to imagine it will be less prevalent in a population that doesn't even need to get re-certified now and then. General skills deterioration is probably not the issue that having to take over very quickly is, but it is one.


I wonder how much more the average driver's skill can degrade. We never re-train so isn't there a natural degradation already? Will automation make that worse or will it not be significant?


For the typical (certainly US) driver, the initial training is just to get to a minimally viable set of skills so they can pass their driving test. The vast majority of people aren't taking performance driving courses to get their drivers licenses. I'd pretty much guarantee that almost every driver is more skilled 10 years after they get their license.


I cannot comment on driving experience/skill since I don't have a driver's license but I frequently observe drivers without a working understanding of the traffic rules and signs, even though they once learned that in the theory classes.


I'm not sure that experience implies skill but you make a good point.


I've found that CC allows me to be more focused on the road, rather than looking at my speed every so often. My CC controls are on the wheel, so I can adjust it just by moving my finger, and I always keep a foot on the (accelerator) pedal ready to react, as if I were maintaining speed with my foot. I know plenty of people who "rest" their foot while using CC, and that is just asking for an accident since the reaction time is longer.


If we're going to borrow from aviation, why don't automakers develop some rudimentary automation (not autonomy) that would help avoid the most common kinds of crashes? For example something that might be automated in a modern aircraft is change flight level. The pilot can command a change to a given flight level and the automation takes care of it. Why don't we have a "change lanes" command for cars? Changing lanes is a leading cause of car collisions, and even people who do it successfully forget to use their signals, check their blind spots, etc. It seems like this level of automation (not autonomy!) would be easier to achieve.
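
To make the analogy concrete, here is a sketch of what such a command could look like (the vehicle interface and sensor checks are invented for illustration, not any real car's API):

  # Hypothetical "change lanes" command in the flight-level-change style described above.
  def change_lane(vehicle, direction):                 # direction: "left" or "right"
      vehicle.signal(direction)                        # the indicator is never forgotten
      if not vehicle.adjacent_gap_clear(direction):    # mirror + blind-spot sensor check
          vehicle.cancel_signal()
          return False                                 # abort and stay in lane
      vehicle.merge_into_adjacent_lane(direction)      # gradual, speed-appropriate move
      vehicle.cancel_signal()
      return True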


"Lane change assist" is already increasingly common in high-end cars.

https://www.autobytel.com/car-buying-guides/features/10-cars...


Achieved. Tesla autopilot can change lanes. It also has collision avoidance on all the time.


Tons of cars nowadays come with emergency automatic braking.


Tesla offers this in their current Autopilot solution. Turn on the turn indicator and the car changes lanes if it is safe to do so.


This is the same "aviation industry" which, domestically, has had a perfect safety record since around 2009? Not near perfect, not 99%, but literally zero fatalities?

Also, as a pilot, I can tell you that the Tesla Autopilot functions very much like what we have in planes. It steers the vehicle and significantly decreases workload while increasing overall safety but needs to be constantly monitored and can easily be misconfigured or operate in unexpected ways.


If you've never driven a Tesla you don't know what a joy it is to use the autopilot in stop-and-go traffic. I see 2-3 accidents daily caused by your "alert" drivers rear-ending cars. Your statement may be true for long-haul rides, but I'm pretty sure the numbers will come out ahead for the auto-steerer on the normal commute.


> I see 2-3 accidents daily caused by your "alert" drivers rear ending cars.

Remind me not to drive in your neighborhood. (This appears to be hyperbole.)


"neighborhood"? Who commutes across a neighborhood as the full extent of their commute? I suspect it's within a rounding error. Driving across a city, however, I definitely see this. Daily. And "accident" I expect to mean everything from a light tap rear-ending to a full fledged crash.

People rear-end each other in my city every day. On my commute, I'll come across two a day. Wet days, a few more. The other five directions headed into the city I would expect to see similar statistics to my experience. Just listening to traffic radio, there's going to be at least five crashes around the city; almost always more. They don't report on fender benders.


About six years ago, I would drive from San Mateo to Oakland at least twice a week, after work, to meet my then-girlfriend. So that's down 92, across the bridge, and up to Oakland. It's about 30 miles, all on freeway, and at that time of day all in heavy (though not all stop-and-go) traffic.

I don't think I ever saw three separate rear-ends in a single drive. I can't say for certain that I ever saw two. I didn't drive it every day, but I probably did 100 such commutes.

You sound either like you're massively exaggerating or live someplace with apocalyptic traffic.


>I don't think I ever saw three separate rear-ends in a single drive. I can't say for certain that I ever saw two. I didn't drive it every day, but I probably did 100 such commutes.

I drive 30 miles through Chicago traffic on the Interstate every day. I see at least 3-4 accidents per week (people pulled over to the side of the road, out of the way, or at the accident investigation sites). Most of these are minor fender benders. I'm sure if I were in rush hour traffic for the full 4-6 hours (not just my 90 minutes of commute) I would see way more. They mostly happen in near bumper-to-bumper stop-and-go traffic (someone misses the guy in front of them stopping) or when traffic unexpectedly comes to a standstill from going 10-20 MPH.


Every single day of this work week so far there have been anywhere from one to 5 accidents on the same 5-mile stretch of I-95 headed into Waltham, MA.

Every morning, and at least two of the evenings. And the weather has been reasonable. I typically see 4 to 5 Teslas a day during my commute... driving sedately, and they aren't the ones involved in the accidents.

Maybe we'll break the streak tomorrow and have no accidents.


The guy I was talking to said he saw 10 a week minimum.

Yeah, if you spend 4-6 hours a day in traffic, you'd see a lot of accidents. That... seems uncontroversial.


Here are the traffic stats for Canada[0]. In 2015 there were 116,000 "personal injury collisions". That is ~300 injuries a day across Canada.

The Greater Toronto Area has approx. 20% of Canada's population (~35M). If we assume that 300 injuries are evenly distributed across Canada, which seems unlikely due to how bad the driving conditions are on the 401 and DVP, there are ~60 injuries per day in the GTA of various severities.
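
A quick check of that arithmetic:

  injuries_per_year = 116000           # Canada, 2015 "personal injury collisions"
  per_day = injuries_per_year / 365    # ~318, i.e. roughly 300 per day nationally
  gta_per_day = per_day * 0.20         # GTA at ~20% of population: roughly 60 per day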

I don't think someone encountering ~3 per commute during rush hour is unreasonable.

[0]: https://www.tc.gc.ca/eng/motorvehiclesafety/tp-tp3322-2015-1...


You drove 30 miles up 880 twice a week and never saw a rear-end accident? You should play the lottery. I don't know if I see three every day, but I do see one every week. This week's was some clown in the #2 lane, _staring_ down at his phone, who rear-ended a stopped car at ~20 MPH. His car was completely totaled, with one of the wheels spinning off to the shoulder.

A few months ago I saw a guy, face-down in his phone, smash into an existing accident that was already surrounded by fire trucks. That was amazing.


I didn't say I never saw a rear-end accident, I said I never saw three in a single commute and am not sure whether I saw two in a single commute.

The guy I was replying to said that he saw at least 2 every commute, often more. (Edit: Actually, sorry, 2 every day, not every commute).


How do you have the bandwidth to look at other drivers' postures while driving?


I ride my bike in the Peninsula. I personally witness 2-4 accidents monthly ranging from fender-bender/taps to full-on smash-ups. Maybe 2-3 daily is an exaggeration, but they're plentiful. It's a rare day that goes by where the major freeways don't have slowdowns because of wrecks. I'd be surprised if the typical commuter on 101 saw less than 1 per day on average, actually.


Between San Jose and San Francisco on 101 pretty common to see at least 1 fender-bender per commute.


Driving up and down 280 over a year's time, maybe once a year there will be a single day with three separate fender benders, all appearing to have happened within a 30-minute window of my passing by. But I've never witnessed 3 actually happen - like happening as I was driving.


I'm in the UK, but I used to drive an hour each way to work which was about 25 miles. In a year I saw like one accident and that was someone being rear-ended in slow moving traffic.


Just take any job that requires a significant highway commute, you'll see more than that.

Whenever you get full speed traffic occasionally interrupted by traffic jams (from whatever cause, other accident, tolls, weather, low angle sunlight, construction, etc.), you'll get a higher incidence of rear-enders. Especially when the tail of the slow/stopped traffic is at a point just past a hill or curve.

I got rear-ended myself some years ago in just such a situation, clear sky & dry road. The traffic ahead had slowed dramatically driving into a section where the bright, low winter sun was in everyone's eyes, we couldn't see that before the gentle bend & rise in the road, I saw the slowing traffic & had to brake hard, the person behind me braked late and hit me even though I'd tried to stretch my braking as much as possible to give her more room. There was another similar accident minutes later just behind us.

This kind of rapid-slowing situation in tight, fast traffic will likely get out of hand even for automated cars, unless there is car-to-car communication. This is because the slight delay for each successive slowing car in the accordion effect accumulates to the point where eventually the available reaction time decreases and the required deceleration rate increases past the performance envelope. At that point, a crash is inevitable.

With car-to-car communication and automation, the last car in the pack can start braking almost simultaneously with the first one and avoid this.
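
A toy back-of-the-envelope model of that accumulation (all numbers invented; each driver is assumed to react a fixed delay after the driver ahead and then brake just hard enough to stop behind them):

  # Toy model of the accordion effect; not real traffic data. The available stopping
  # distance shrinks by (v*reaction - gap) for each successive car, so the required
  # deceleration grows down the chain until it exceeds what the tires can do.
  def required_decels(n_cars=10, v=27.0, gap=12.0, reaction=1.0,
                      lead_decel=4.0, v2v=False):
      # v in m/s (~60 mph), gap in m, decelerations in m/s^2
      decels = [lead_decel]
      avail = v * v / (2 * lead_decel)        # lead car's stopping distance
      for _ in range(1, n_cars):
          delay = 0.0 if v2v else reaction    # with car-to-car comms, no added delay
          avail = avail + gap - v * delay     # distance left when this car starts braking
          decels.append(v * v / (2 * avail) if avail > 0 else float("inf"))
      return decels

  print(required_decels(v2v=False))   # climbs past ~8 m/s^2 (roughly dry-road maximum)
                                      # within a few cars -> a crash somewhere in the chain
  print(required_decels(v2v=True))    # decreases down the chain: everyone stops comfortably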

So, no, it's not hyperbole, it's ordinary.


>This kind of rapid-slowing situation in tight, fast traffic will likely get out of hand even for automated cars, unless there is car-to-car communication. This is because the slight delay for each successive slowing car in the accordion effect accumulates to the point where eventually the available reaction time decreases and the required deceleration rate increases past the performance envelope. At that point, a crash is inevitable.

Is this really true?

It seems like, as long as the following delay between cars is greater than that reaction delay, there should be no such "accordion effect."


Rapid slowing is fairly common on 280. I find it safest to be in the left-most lane, where you can use the shoulder to come to a safe stop. Rapid slowing is one reason I'd probably get a level-2 autonomous car.


"I got rear-ended once in fine conditions" != "I pass 2-3 rear-ends on my commute every day"


I didn't say it was -- I opened by agreeing, then also provide my example, as a lead-in to the car-to-car communication point.

And yes, when you have an urban highway commute of any distance, it is not unusual to see that many crashes. Maybe not every day, but not far off, and enough that you cannot rely upon commute times, precisely because the crashes are so unpredictable.

You might try actually reading other posts before replying with trivial, inaccurate potshots. Sheesh.


I don't think car-to-car matters. You can't rely on it being accurate or present. The car will simply have to drive in such a way that it can always stop within the stretch of visible clear road.


Yes, if you can stay out of Los Angeles, I highly recommend it.

2 is average for about a 60 mile drive during slightly off-rush. I suspect rush is higher.

And certain areas just seem to attract idiots.


"See" meaning drive past, not "observe the collision."


> The level-2 driving that Tesla is pushing seems like a worst case scenario to me

What are you measuring? The current autopilot already appears to be materially safer, in certain circumstances, than human drivers [1]. It seems probable Level 2 systems will be better still.

A refrain I hear, and used to believe, is that machine accidents will cause public uproar in a way human-mediated accidents don't. Yet Tesla's autopilot accidents have produced no such reaction. Perhaps assumptions around public perceptions of technology need revisiting.

> Neither the driver nor the car manufacturer will have clear responsibility when there is an accident

This is not how courts work. The specific circumstances will be considered. Given the novelty of the situation, courts and prosecutors will likely pay extra attention to every detail.

[1] https://www.bloomberg.com/news/articles/2017-01-19/tesla-s-a...


That's not what the concern is based on. It's rooted in what we've learned about autopilot on planes and dead men's switches in trains. Systems that do stuff automatically most of the time and only require human input occasionally are riskier than systems that require continuous human attention, even if the automated portion is better on average than a human would be. There's a cost to regaining situational awareness when retaking control that must be borne exactly when it can't be afforded, in an emergency.


> It's rooted in what we've learned about autopilot on planes and dead men's switches in trains

Pilots and conductors are trained professionals. The bar is lower for the drunk-driving, Facebooking and texting masses.

> Systems that do stuff automatically most of the time and only require human input occasionally are riskier than systems that require continuous human attention, even if the automated portion is better on average than a human would be

This does not appear to be bearing out in the data [1].

[1] https://www.bloomberg.com/news/articles/2017-01-19/tesla-s-a...


You're misunderstanding the data and the concern. Currently, Tesla Autopilot frequently disengages as part of its expected operation, handing control back to the driver. Thus, the human driver remains an attentive and competent partner to the autopilot system. That data is based on today's effective partnership between human and computer.

The concern is that as level 2 autopilot gets better and disengagements go down, the human's attentiveness will degrade, making the remaining disengagement scenarios more dangerous.


> The concern is that as level 2 autopilot gets better and disengagements go down, the human's attentiveness will degrade, making the remaining disengagement scenarios more dangerous

A Level 2 autopilot should be able to better predict when it will need human intervention. If the autopilot keeps itself in situations where it does better than humans most (not all) of the time, the system will outperform.

My view isn't one of technological optimism. It's derived from the low bar set by humans.


The problem is that in L2, the bar for the system as a whole is set by the low bar for humans, specifically their reactions in an emergency. If the computer safely drives itself 99% of the time but in that 1% when the human needs to take control, the human fucks up, the occupants of the vehicle are still dead. And what people are saying here is that L2 automation increases the risk that the human will fuck up in that 1%, by decreasing their situational awareness in the remainder of time.

That's why Google concluded that L5 was the only way to go. You only get the benefit of computers being smarter than humans if the computer is in charge 100% of the time, which requires that its performance in the 1% of situations where there is an emergency must be better than the human's performance. That is the low bar to meet, but you still have to meet it.


> If the computer safely drives itself 99% of the time but in that 1% when the human needs to take control, the human fucks up, the occupants of the vehicle are still dead. And what people are saying here is that L2 automation increases the risk that the human will fuck up in that 1%, by decreasing their situational awareness in the remainder of time.

Humans regularly mess up in supposedly-safe scenarios. Consider a machine that kills everyone in those 1% edge cases (which are in reality less frequent than 1%) and drives perfectly 99% of the time. I hypothesise it would still outperform humans.

Of course, you won't have 100% death in the edge cases. Either way, making the majority of travel safe in exchange for making edge cases more deadly to untrained drivers has a simple solution: a higher bar for licensing human drivers.


> I'd hypothesise that a machine that kills everyone in those 1% edge cases (which are actually less frequent than 1%) but drives perfectly 99% of the time would still outperform humans.

Well, no.

Some quick googling suggests that the fatality rate right now is roughly 1 per 100 million miles. So, for certain fatality whenever control is handed to the human to be an improvement, those handovers would have to happen only about once per 100 million miles of driving - at roughly 200,000 miles per car, that's about once in the combined lifetimes of 500 cars. In other words, the car would, for all practical purposes, have to be self-driving.
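
The arithmetic, with the assumptions spelled out (the typical lifetime mileage is my assumption):

  fatality_rate = 1 / 100e6       # ~1 fatality per 100 million vehicle-miles today
  lifetime_miles = 200000         # assumed typical lifetime mileage of one car
  # If every handover to the human were fatal, handovers must be rarer than the
  # current fatality rate for the system to be an improvement:
  cars_per_allowable_handover = 1 / (fatality_rate * lifetime_miles)
  print(cars_per_allowable_handover)   # ~500 car lifetimes per allowable handover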


"Of course, you won't have 100% death in the edge cases. Either way, making the majority of travel safe in exchange for making edge cases more deadly to untrained drivers has a simple solution: a higher bar for licensing human drivers."

The part that really bothers me (for some reason) is that those edge cases are frequently extremely mundane, uninteresting driving situations that even a child could resolve. They simply confuse the computer, for whatever reason.

I'm genuinely interested to see how consumers react to a reality wherein their overall driving safety is higher, but their odds of being killed (or killing others) are spread evenly across all driving environments.

Imagine the consumer (and driving habits) response to the first occasion wherein a self-driving car nicely drives itself through a 25 MPH neighborhood, comes to a nice stop at a stop sign, and then drives right over the kid in the crosswalk that you're smiling and waving at. Turns out the kid's coat was shimmering weirdly against the sunlight. Or whatever.


> making the majority of travel safe in exchange for making edge cases more deadly to untrained drivers has a simple solution: a higher bar for licensing human drivers.

You are still misunderstanding the concern. The problem is not poorly trained drivers. The problem is that humans become less attentive after an extended period of problem-free automated operation.

I hear you trying to make a Trolley Problem argument, but that is not the issue here. L2 is dependent on humans serving as a reliable backup.


> You are still misunderstanding the concern. The problem is not poorly trained drivers. The problem is that humans become less attentive after an extended period of problem-free automated operation.

I understand the concern. I am saying the problem of slow return from periods of extended inattention is not significant in comparison to general human ineptitude.

Level 2 systems may rely on "humans serving as a reliable backup," but they won't always need their humans at a moment's notice. Being able to predict failure modes and (a) give ample warning before handing over control, (b) take a default action, e.g. pulling over, and/or (c) refuse to drive when those conditions are likely - these all emerge as possible solutions.

In any case, I'm arguing that the predictable problem of inattention is outweighed by the stupid mistakes Level 2 autopilots will avoid 99% of the time. Yes, from time to time Level 2 autopilots will abruptly hand control over to an inattentive human who runs off a cliff. But that balances against all the accidents humans regularly get themselves into in situations a Level 2 system would handle with ease. It isn't a trolley problem, it's trading a big problem for a small one.


If you actually look at the SAE J3016_201609 standard, your goalpost-moving takes you beyond level 2. "Giving ample warning" puts you in level 3, whereas "pulling over as a default action" puts you in level 4.

The original point - that level 2 is a terrible development goal for the average human driver - still stands.


Yeah, you're talking about level 3. Most people think that's not a realistic level because "ample warning" requires seeing far into the future. Better to go straight to L4.

Also, you are definitely invoking the trolley problem: trading a big number of deaths that aren't your fault for a smaller number that are. Again, not the issue here. L2 needs an alert human backup. Otherwise it could very well be less safe.

But I would say the thrust of your argument is not that off, if we just understand it as "we need to go beyond L2, pronto".


NO, a higher licensing bar for human drivers will NOT solve the problem, it would only exacerbate it (and I'm ALL FOR setting a higher licensing bar for humans for other reasons).

The problem here is NOT the untrained driver -- it is the attention span and loss of context.

I've undergone extensive higher training levels and passed much higher licensing tests to get my Road Racing license.

I can tell you from direct experience of both that the requirements of high-performance driving are basically the same as the requirements to successfully drive out of an emergency situation: you must 1)have complete command of the vehicle, 2) understand the grip and power situation at all the wheels, AND 3) have a full situational awareness and understand A) all the threats and their relative damage potential (oncoming truck vs tree, vs ditch, vs grass), and B) all the potential escape routes and their potential to mitigate damage (can I fit through that narrowing gap, can I handbrake & back into that wall, do I have the grip to turn into that side road... ?).

Training will improve #1 a lot.

For #2 and #3 - awareness of the grip situation, the threats, and the escapes - there is no substitute for being alert and aware IN THE SITUATION AHEAD OF TIME.

When driving at the limit, either racing or in an emergency, even getting a few tenths of a second behind can mean big trouble.

When you are actively driving and engaged, you HAVE CURRENT AWARENESS of road, conditions, traffic, grip, etc. You at least have a chance to stay on top of it.

With autopilot, even with the skills of Lewis Hamilton, you are already so far behind as to be doomed. 60 mph=88 feet/sec. It'll be a minimum of two seconds from when the autopilot alarms before you can even begin to get the situation and the wheel in hand. You're now 50 yards downrange, if you haven't already hit something.

Even with skills tested to exceed the next random 10,000 drivers on the road, the potential for this situation to occur would terrify me.

I might use such a partial system in low-risk situations like slow traffic, where it's annoying and the energies involved are fender-bender level. Otherwise, no way. Human vigilance and context switching is just not that good.

I can't wait for fully-capable autodriving technology, but this is asking for trouble.

Quit cargo-culting technology. There is a big valley of death between assist technologies and full-time automation.


You make an important point. This is something I see a lot of people gloss over in these discussions.

It's a question that both sides of the discussion claim answers to, and both sound reasonable. The only real answer is data.

As you've said, killing 100% of the time in the 1% scenarios may very well be better than humans driving all the time. Better, as defined by fewer lives lost and injuries.

Though, one minor addition to that is human perception. Even if numerically I've got a better chance to survive, not be injured, etc. in a 99%-perfect auto-car, I'm not sure I'd buy it. Knowing that if I hear that buzzer I'm very likely to die is... a bit unsettling.

Personally I'm just hoping for more advanced cruise control with radar identifying 2+ cars ahead of me knowing about upcoming stops/etc. It's a nice middle ground for me, until we get the Lvl5 thing.


The statement at the end of your comment made me wonder if there will be a time in the future where you cannot disengage the automation in the car you're currently in unless you have some sort of advanced license; Something like the layman's version of the CDL.


That solution does not work; it will just increase the number of people driving without a license. For example, in France, the driving license is quite hard to obtain: you need around 20-30 hours of tutoring before you can attempt the test, and passing is not a sure thing. The consequence is that there are a lot of drivers without a license, who are implicated in a high number of accidents.


> If the computer safely drives itself 99% of the time but in that 1% when the human needs to take control, the human fucks up, the occupants of the vehicle are still dead

Not dead, which I feel is important to point out. Involved in an incident, possibly a collision or loss of lane, but really it's quite hard to get dead in modern cars. A quick and dirty google shows 30,000 deaths and five and a half million crashes annually in the US - that's half a percent.

So in your hypothetical the computer drives 99% of the time, and of the 1% fuckups, less than 1% are fatal.


Why not just mix in consensus-control, artificially generated disengagements?

Even if the system has high confidence in its ability to handle a situation, if sufficient time has passed, request the driver resume control.

Then fuse the driver's inputs with the system's, for either additional training data or backseat safety driving (e.g. the system monitoring the human driver).
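
Roughly what I mean, as a sketch (names and thresholds invented):

  # Hypothetical: periodically hand control back even when confident, but only
  # when conditions are benign, then compare what the human does with the plan.
  def should_request_handover(seconds_since_human_drove, situation_is_benign, interval=600):
      return situation_is_benign and seconds_since_human_drove > interval

  def review_human_input(human_steering, planned_steering, tolerance=0.1):
      # a large disagreement is either a training example for the system,
      # or a sign the "backseat" system should keep watching this human closely
      return abs(human_steering - planned_steering) > tolerance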


I like your creative thinking, but that wouldn't work. An immediate problem is it would only train the driver to pay attention when they hear a disengagement chime. L2 depends on the driver to monitor the autopilot continuously.

More productively, Tesla currently senses hands on the wheel. Perhaps they could extend that with an interior camera that visually analyzes the driver's face to ensure their eyes are on the road.


They actually have a driver-facing camera in the Model 3, which is presumably coming to their other cars in a future refresh.


Recent Honda CRVs can have an attention monitoring system in them. I'm not sure how it works, but it does seem to detect when the driver isn't looking around.


If the automation prevents more accidents than it causes, is it still that much of a concern? The results so far say no.


>What are you measuring? The current autopilot already appears to be materially safer, in certain circumstances, than human drivers [1].

Actually the study explicitly doesn't show that.

First of all, the study purely measures accident rate before and after installation, so miles driven by humans are in both buckets. Second of all, the study is actually comparing Tesla before and after the installation of Autosteer, and prior to the installation of Autosteer, Traffic-Aware Cruise Control was already present. According to the actual report:

The Tesla Autopilot system is a Level 1 automated system when operated with TACC enabled and a Level 2 system when Autosteer is also activated.

So what this report is actually showing is that Level 2 enabled car is safer than a Level 1 enabled car. Extrapolating that to actual miles driven with level 2 versus level 1 is beyond the scope of the study and comparing level 1 or level 2 to human drivers is certainly beyond the scope of the study.


> Actually the study explicitly doesn't show that

You are correct. We do not have definitive data that the technology is safe. That said, we have preliminary data that hints it's safer and nothing similar to hint it's less safe.


>That said, we have preliminary data that hints its safer and nothing similar to hint it's less safe.

Safer than? Human driving? No, we don't.


Safer than level 1 autonomy.

> So what this report is actually showing is that Level 2 enabled car is safer than a Level 1 enabled car.

which seems to disagree with the leading statement of the first comment in this thread:

> The level-2 driving that Tesla is pushing seems like a worst case scenario to me


"What are you measuring? The current autopilot already appears to be materially safer, in certain circumstances, than human drivers [1]. It seems probable Level 2 systems will be better still."

As far as I know it is indeed correct that autopilot safety is statistically higher than manual driving safety (albeit with a small sample size).

However, something has always bothered me about that comparison ...

Is it fair to compare a manually driven accidental death (like ice, or a wildlife collision) with an autopilot death that involves a trivial driving scenario that any human would have no trouble with?

I don't know the answer - I'm torn.

Somehow those seem like apples and oranges, though ... as if dying in a mundane (but computer-confusing) situation is somehow inexcusable in a way that an "actual accident" is not.


"Appears" is the operative word. The new system is going to kill somebody. It hinges on building a whitelist of geolocated problematic radar signatures to avoid nuissance braking [1]. It's only a matter of time before a real danger that coincides with a whitelisted location causes a crash.

[1] https://www.tesla.com/blog/upgrading-autopilot-seeing-world-...
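
To make the worry concrete, here is the kind of logic being described, as a hypothetical sketch (not Tesla's actual code; the names are invented):

  # Hypothetical whitelist check: suppress braking for radar returns previously
  # judged to be false alarms (e.g. an overhead road sign) at a known location.
  def should_brake(radar_signature, location, whitelist, collision_likely):
      if (location, radar_signature) in whitelist:
          return False                 # braking suppressed here, no matter what
      return collision_likely
  # The failure mode: a real obstacle that happens to match a whitelisted
  # signature at a whitelisted location never triggers braking.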


> What are you measuring? The current autopilot already appears to be materially safer, in certain circumstances, than human drivers

That's a good question. Clearly, existing self-driving tech is safer than human drivers on average. However, "average" human driving includes texting while driving, drunk driving, falling asleep at the wheel, etc. Is the appropriate comparison the "average" driver, or a driver who is alert and paying attention?


> Is the appropriate comparison the "average" driver, or a driver who is alert and paying attention?

The most appropriate comparison set would be the drivers who will replace themselves with autopilot-steered vehicles.


> A refrain I hear, and used to believe, is that machine accidents will cause public uproar in a way human-mediated accidents don't. Yet Tesla's autopilot accidents have produced no such reaction. Perhaps assumptions around public perceptions of technology need revisiting.

Have there been any Tesla autopilot fatalities with the right conditions to spark outrage? That's a sincere question as maybe I've missed some which would prove your point.

The only major incident I'm aware of is one in which only the driver of the car was killed. In an accident like that it is easy to handwave it away pretty much independent of any specifics (autopilot or no).

A real test of public reaction would involve fatalities to third parties, particularly if the "driver" of the automated vehicle survived the crash.


I'm surprised you believe this. Drivers run people down every day and nobody even investigates the cause. Motorists kill about a dozen pedestrians every month in New York City and historically only half of those people get even a failure-to-yield ticket. Meat-puppets are demonstrably unfit to operate vehicles in crowded urban environments, everybody knows this, and nobody is outraged when the people die.


Indeed, it's probably best not to measure the utility of this tech based on preemptive predictions of how an emotional public will react or the reactions of outrage-driven media with terribly short attention spans.

The actual performance of these machines will be the ultimate test. If it does consistently improve safety then I don't really see many barriers existing here; the current unknowns and semantics surrounding it will be worked out in markets and in courts over an extended period of time and will ultimately be (primarily) driven by rationality in the long run.


Exactly. This will be decided by insurance underwriters and actuaries ultimately.

The safest option will be the way the market will be incentivised, despite all the noise around it this is the ultimate rational market.

Insurance is so boring it is interesting to me.


> The current autopilot already appears to be materially safer, in certain circumstances

It depends on how you measure this. We always talk about humans being bad at driving. Humans are actually amazingly good drivers, conditioned upon being alert, awake, and sober. Unfortunately a good fraction of people are in fact not alert. If you don't condition on this, then yes, humans suck.

(Put another way, the culture of working people overtime that we, including companies such as Tesla, foster is probably more responsible for car accident deaths than anything else.)

The FAA takes pilot rest time seriously. Considering car accident deaths exceed airplane deaths by a few orders of magnitude, it's about time the rest of the world took rest equally seriously as well.


I agree that level 2 isn't an ideal position, but it has also proven to be better than human drivers in preventing fatalities. In all the miles that Tesla's level 2 cars have driven, there has been what, 1 fatality? In that instance there was the exact question of responsibility that you suggested, but that still seems preferable to the status quo if lives are saved.


We need independent numbers on this. Comparing with the same population, the same price range of vehicles, the same road sections. Age, level of education, price of the vehicle, absence of a hands-free cell interface, and lack of a seat-belt alarm seem to be way better predictors of fatalities in the US than having autopilot.

Comparing autopilot Tesla fatalities versus the average fatality rate on one road section is dishonest.


It's not as precise as I'd like, but there has been an independent investigation of the safety of Autopilot. After the first fatality while on Autopilot, the US National Highway Traffic Safety Administration wanted to determine whether (as many fear) Autopilot posed a danger to drivers, and found that Autopilot was safe enough to keep on the roads and that Autopilot led to a 40% reduction in crashes: https://techcrunch.com/2017/01/19/nhtsas-full-final-investig...


Of Autopilot or Autosteer?


One is a part of the other, so "both" seems like the natural answer.


Do you have the numbers on how many miles Tesla's level 2 cars have actually been driven while using the feature? I see this sort of argument a lot in regards to Google's self-driving tests, and while it seems convincing to me, it doesn't seem realistic that there's a big enough pool of data to make that claim definitively.


From Wikipedia [1]

>According to Tesla there is a fatality every 94 million miles (150 million km) among all type of vehicles in the U.S.

>By November 2016, Autopilot had operated actively on hardware version 1 vehicles for 300 million miles (500 million km) and 1.3 billion miles (2 billion km) in shadow mode.

Those numbers are 9 months old and only apply to Autopilot v1 and not the Autopilot v2+ introduced late last year. I wouldn't be surprised if the current number is in the 500+ million mile range with only a single fatality. The sample size is obviously small, but there seems to be a clear improvement over manual control.

[1] - https://en.wikipedia.org/wiki/Tesla_Autopilot

EDIT: With chc's and my post we have 3 numbers and dates for reported Autopilot miles. Projecting that forward at a linear rate (which is conservative given Tesla's growth) would put us at roughly 750 million miles today.
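
A back-of-the-envelope check of that projection, using the three figures cited in this thread (the dates are approximate, so treat this as order-of-magnitude only):

  aug_2016, oct_2016, nov_2016 = 140e6, 222e6, 300e6   # Autopilot miles cited in this thread
  monthly_rate = (nov_2016 - aug_2016) / 3             # ~53M Autopilot miles per month
  estimate = nov_2016 + monthly_rate * 10              # Nov 2016 -> roughly now (late 2017)
  print(estimate / 1e6)                                # ~830M; "roughly 750 million" is the same ballpark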


It's great seeing that more and more data is being collected about this all the time. I'm a huge proponent of this tech.

What I wonder when I see these statistics, though, is whether all miles are really equal? For example, are Tesla drivers more comfortable using Autopilot in "easy" driving situations? Is there really a one-to-one correspondence in the distribution of the kinds of miles driven with Autopilot on vs. normal cars?

Furthermore, the metric commonly cited is "fatalities every N miles." Are there fewer fatalities because Autopilot is great, or because Teslas are safer in general? Has there been a comparison between fatalities with/without Autopilot strictly on Teslas? Even then, it seems to me we are subject to this potentially biased notion of "miles" I mentioned previously. The Wikipedia article you mentioned cites a 50% reduction in accidents with Autopilot, but the citation is to an Elon Musk interview. I haven't yet seen anything official to back this up, but if anyone has any information on this, I'd love to see it!


Isn't that easily countered by comparing Tesla Model S' overall rate of accidents vs another similar vehicle, with similar safety rating, including all self-driven and human-driven miles? There should be a proportional reduction.


Yeah, I think so! That's exactly why I mentioned the accident rate reduction cited in the Wikipedia article shared above.

I'd love to see official work that explores that angle (rather than a claim from an interview, which is what the Wikipedia article refers to), I just haven't seen any document/study about it yet.


Yes, but Autopilot can only be activated in the safest of conditions; the 94 million miles number covers all types of driving. The comparison doesn't work because Autopilot usage self-selects for the miles where a human driver would also be much less likely to crash.


It was 140 million a year ago[1], and 222 million last October[2], so I guess a conservative estimate would be 600 million miles on autopilot (assuming that's about two months of change and usage remained steady).

[1]: https://www.wired.com/2016/08/how-tesla-autopilot-works/

[2]: https://twitter.com/elonmusk/status/784487348562198529


In fairness to Tesla, that is probably why they are pushing for level 4/5 so hard. It's true that if level 2 drives perfectly for a month, let alone a year, the human backup system is going to degrade markedly. We're not there yet, but it's coming.

That's got to open up some liability questions. You can bet when people die it will be tested in court. You could make a case that Tesla's going to be liable for many level 2 accidents in the long run anyway, so might as well go all in ASAP.


I have been under the impression that because Tesla dispensed with the idea of LIDAR, their solution would never be workable. I am still surprised at their ability to avoid liability for the incidents attributed to the "Autopilot".


I don't see how you'd draw that conclusion. It seems possible to have a workable, purely optical solution. Our workable solution today relies purely on optical (human drivers).


Cameras don't have the same dynamic range as the human eye. A fairer test would be giving someone remote control of a Tesla with only access to the video footage.

I think cameras will hit parity with the human eye, but the question is will it happen before or after Lidar becomes more affordable and compact.


Couldn't you just have multiple cameras of different sensitivity? Cameras are pretty cheap.


The eye's dynamic range at any point in time is actually pretty bad. It's easy to beat the range of the retina with a camera sensor. The human eye can also adjust iris size and light sensitivity, and a camera can match it by adjusting iris size and exposure time. You'd probably want a secondary camera for full night vision, but that's not very relevant for driving with headlights anyway.


The best cameras nowadays have more dynamic range than human eyes. Although I'm sure Tesla is not using this in its cars.


Optical plus audio, haptic and inertial sensing. And smell, but maybe we can ignore that one.


I have a Tesla with this and I LOVE it. Everyone I know on the Tesla forums etc. loves it as well. Please do yourself a favor and try it out for a road trip or similar. It's a game changer.


Same. I love reading opinions about how dangerous or bad the Tesla "autopilot" is, or that it doesn't work, from people who have never owned or driven in a Tesla with autopilot.

I've probably gone about 8,000 miles on autopilot in mine (AP1) and it is truly amazing. After a road trip stint of ~200 miles with autopilot engaged 99% of the way, I feel much more energized and less fatigued than I did in previous non-autopilot cars. It really is a game changer. You may think regular driving doesn't require much brain activity, or that the Tesla cruise control and auto steering don't do much, but you don't realize how much your brain is working to handle those things until you can take a step back and let the car do most of the work. Then you can focus on other things on the road that you didn't notice before: the behavior of other drivers, for example. I can watch a car near me with greater focus and see what that person is doing.

Regardless, if you have not driven one, I highly encourage it. You really need to take it out on the highway and drive it for 30+ miles to really understand how amazing it is. I've driven other cars with "autopilot", and just like the car reviews say, they are nowhere close (Mercedes, Cadillac, Volvo, and others with just auto cruise control). It's just one more reason why current Tesla owners are so fanatical about their cars: there is nothing else like it, and most likely won't be until ~2020 (maybe).


>Same. I love reading opinions about how dangerous or bad the Tesla "autopilot" is, or that it doesn't work, from people who have never owned or driven in a Tesla with autopilot.

Would you say the same if you happened to get involved in a serious autopilot accident, though? That's the question.

It's very easy to be all roses before one sees any issues.


So from what I've seen, there have been a few occasions where an owner has used the autopilot as a scapegoat for their own fault, only to later admit they were at fault, or for Tesla and third-party investigators to conclude that autopilot was not even engaged, etc.

For instance, take the one fatality on autopilot in a Tesla to date, where the driver was coming up to the crest of a hill with a white 18-wheeler crossing perpendicular to the highway. Yes, the autopilot misread the 18-wheeler as a sign above the road (this issue is not fixed). But at the same time, the driver completely disregarded Tesla's instructions to keep your eyes on the road at all times. Turns out, he was watching a show on his phone.

But yes, I would still say the same thing if I were using autopilot properly as intended, i.e., not watching movies while in the driver's seat of a moving car (which is against the law regardless). I don't think there are any serious accidents to date where the driver was using it properly and following the rules. As Tesla states, autopilot is in beta (and most likely always will be); that's not to say it is unsafe, but that the driver must stay aware, follow the rules, and know what autopilot is and is not capable of.

I'd say it took me about two weeks of first using autopilot to understand its capabilities.

Also, the best part: it keeps improving in my vehicle through updates. It's pretty impressive how good the updates from Tesla are.


You could say the same about people "feeling safer" when driving themselves, which is what the opposing side clings to. (It sure isn't statistics.)


>You could say the same about people "feeling safer" when driving themselves, which is what the opposing side clings to

Only, people have been driving themselves for a century and have a pretty good idea of how safe it is, including how safe it is for them given their skills, city, etc., as opposed to some "one size fits all" average.

>(sure isn't statistics)

Well, it can't be statistics, because all we've got is the BS "statistics" from the self-driving car companies. Only a sucker for new technology would accept those, as there are tons of differences from regular driving. They take the cars out in optimal conditions (no rain, snow, etc.), they drive specific routes (not all over the US), they often have supplementary humans on board to supervise (do they count the times when the human had to take control and saved the day as "self-driving accidents" or not?), and tons of other things besides.


Just curious - have you driven one in situations that became emergencies in which autopilot disengaged? What was that experience like and how does it compare to emergencies you have experienced without autopilot available?


Not sure exactly what you are asking, as in, an emergency happened while driving with autopilot and then it disengaged randomly? Or I was driving and had an emergency and I had to engage autopilot?

I've never seen autopilot disengage by itself. It will disengage after three warnings to the driver if they don't touch the wheel, but I've never let it get that far.

There was one time, not quite an emergency, when my contact lens fell out of my eye, into my lap somewhere, and because I had autopilot on, I could safely take the few extra seconds needed to look down and find it. Without autopilot, looking away from the road for that long would be considered very dangerous.


I meant a situation where an emergency happened outside your car and autopilot required you to take control (not without warning). Did you feel like you were more or less capable of dealing with an emergency on the road when you had been using autopilot, than you would have had autopilot not been previously engaged?


Other companies are also shipping Level-2 systems. US law says that the driver has responsibility for accidents with a Level 2 system. And Tesla shows constant reminders to drivers.


The law is one thing, but if a system works most of the time, can you really blame a driver for drifting away and doing something else? It's asking a lot for a driver to remain focused on babysitting a robot. I'd rather just drive.


True but if it does reduce the overall accident rate, it might still make sense. Better experience for the driver and better safety are still welcome features. If you'd rather just drive then you're increasing the chance for accidents. Everyone thinks they're a great driver.


I'd rather not drive but if my choices are driving or staring out the window with nothing to do I'd rather just drive to help pass the time. Safety is a factor, but it isn't the only factor. I'd be incredibly safe sitting at home in an empty room staring at the walls, but I choose to do things that are less safe than that so that life isn't miserable.


There was a news story of how somebody lit a grill with the cover closed and it blew up. Yes, I still blame the person.


How well does the Tesla reminder system work for you?


I have to disagree, as an owner/operator of a vehicle with this feature set. I consider it a major improvement on traditional cruise control, a modern iteration. When viewing it from the angle of full self-driving / autonomy (Level 5) I can see how this argument can be framed, but that's not how we mere mortals view it.


I agree, but it's really difficult to draw the line.

I believe automatic transmission (AT) car drivers are more easily distracted than those driving stick shift (manual transmission, MT).

But AT makes life easy for so many people that no one even considers whether AT is making drivers less attentive.

A classic example of this, the worst case I have seen in person, is a woman talking on a mobile phone held in one hand while switching lanes and taking a left turn at a traffic signal. Things like this just wouldn't be possible if that woman were driving a stick shift.

In that driver's case, a Tesla-like Level 3-5 driving system would probably be ideal, making things much safer for the drivers around her.

So, should we go back to MT for everyone or nothing? Just to be under the impression that it's much safer? Or should every vehicle be a level 3 to level 5 autonomous vehicle?

I think people choosing any of the above options will have valid points and research to prove their points. Only time will tell when and how any of those research results continue to hold true.


> Things like this just wouldn't be possible if that woman was driving a stick shift.

FWIW, in the UK where manual is more or less the norm, using a phone whilst driving is still fairly common (the penalties were increased recently due to people ignoring the law). From what I've seen, a lot of people will use the gear stick with the phone in the same hand (i.e. brief break in conversation whilst they switch gear).


I've definitely seen people drive manual AND talk on their phones by holding them between their head and shoulder.


Possible. Hence way more dangerous than autonomous systems.


What about tossing in tests every twenty minutes or so where you are required to "drive" for one minute (but the car remains in charge) and your driving is scored vs the A.I.? Maybe if you fail badly enough, you're scored drunk and have to pull over.


>Requiring the driver to be awake and alert while not requiring them to actually do anything for long stretches of time is a recipe for disaster.

I've heard that this is why Autobahns are built with otherwise unnecessary curves and turns.


There will likely be a shift in the liability approach, and drivers will be made liable for accidents caused by inadequate technology.

This approach has worked well (for some parties) in many other areas, for example in education: now teachers are the only ones left responsible for a structurally failing education system.

Now, if this were the 1930s, with hundreds of independent automakers, perhaps the invisible hand of the market would fix this.


Yeah, really seems terrible. Look at the data rather than theory crafting.

https://www.bloomberg.com/news/articles/2017-01-19/tesla-s-a...


But L2 requires the driver's active attention, as they have to have their hands on the wheel every 60 seconds (or something like that). As long as Tesla doesn't remove that constraint, there shouldn't be any worries.

I don't see how you can get to L5 without L2 and the learning that is going on in these cars today.


>I don't see how you can get to L5 without L2 and the learning that is going on in these cars today.

You can use the automation as a backup safety feature until it's L4. For example, the car has the ability to stop if you try running a red light, but you are otherwise required to drive the vehicle (a rough sketch of this arrangement is below).

Automated cars don't just have to be better than humans; they have to be as good as humans with automated safety features.
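
A minimal sketch of that "automation as backstop" idea, with hypothetical names and a deliberately simplified perception model (nothing here reflects any real vendor's API):

    from dataclasses import dataclass

    @dataclass
    class Perception:
        red_light_ahead: bool
        collision_imminent: bool

    @dataclass
    class HumanInput:
        steering: float   # -1.0 (full left) .. 1.0 (full right)
        throttle: float   # 0.0 .. 1.0
        braking: bool

    def control_step(human: HumanInput, seen: Perception) -> HumanInput:
        """The human drives; automation only overrides on imminent violations."""
        if seen.collision_imminent or (seen.red_light_ahead and not human.braking):
            # Override: hold the lane and brake hard.
            return HumanInput(steering=0.0, throttle=0.0, braking=True)
        return human  # otherwise pass the human's commands straight through

    # Example: driver fails to brake for a red light, so the backstop brakes.
    print(control_step(HumanInput(0.0, 0.4, False), Perception(True, False)))

The point of the arrangement is that the human's vigilance never degrades, because the human is still doing all of the normal driving.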


As a California driver, I always have to monitor motorcycles that are lane splitting and give them space when I see one.

So there is always something to do.


Yes, it is the worst case scenario. Also, it's the best you're gonna get. Toyota already got the nod from the courts to kill people via technology bugs in their cars (see the "unintended acceleration" controversy that killed someone (maybe two? I forget) a few years ago, due to bugs that were absolutely preventable if they'd splashed out a couple of grand on static analysis tools). So if you think companies are going to break with their tried-and-true policy of hiring the cheapest, least experienced people they can find to slap together whatever halfway works, while ignoring every warning from their engineers that they need better tools, that they need more time, that the system needs more testing to be safe, etc., think again.

This isn't building a bridge. If a company builds a bridge and it collapses and kills people, and it turns out they didn't hire qualified structural engineers, or the CEO ignored warnings from the engineers in order to push the project out for a scheduled release window or to keep profits high, the CEO goes to prison for criminal negligence. With self-driving cars, it's a COMPLETELY different story. You're talking about SOFTWARE. No company that has killed people with software has ever been found guilty of criminal negligence. And they won't be for the foreseeable future.

This is how self-driving cars will go. I'll give you whatever you want if I'm wrong; I'm that confident. A company using standard modern business practices will rush to be first to market. (By standard practices I mean doing absolutely every single thing research has shown destroys productivity and ensures the end product will be as unreliable and poorly engineered as possible: open floor plan offices, physical colocation of workers, deadlines enforced based on business goals rather than engineering goals, business school graduates being able to override technical folks, objective evidence made subservient to whoever in the room is most extroverted, aggressive, or loud, etc. You probably work in exactly the kind of company I'm talking about, because it's almost every company in existence: following practices optimized over a century to perform optimally at assembly-line manufacturing, and absolutely antithetical to every aspect of mental work.) Maybe they'll sell a lot, maybe not; that's hard to call. What's NOT hard to call is the inevitable result: someone dies. Doesn't matter if it's unavoidable or not. No technology is perfect, but that doesn't matter either. It won't matter what "disclaimers" the company puts up trying to say it's in the driver's hands. The courts won't care about that either.

But... they will absolutely get away with it. They will not be fined, they will not be forced to change their practices (most likely they will not even be made to REVEAL their development practices at all). You see, if the courts bother to ask what their practices are, their lawyers will point out it doesn't matter. There's no such thing as "industry standard practices" that you could even CLAIM they failed to follow. So their software had bugs. As far as the court is concerned, that's a fact of life, it's unavoidable, and no company can be held responsible for software bugs. Not even if they kill people.

So they'll get away with it in the courts. In the court of public opinion? Nope. You see, even if they made their self-driving cars out of angel bones and captured alien predictive technology and it never so much as disturbed someone's hairdo, they are destined to fail as far as the public is concerned. Because human beings are, shocker, human beings. They have human brains, and human brains have a flaw that we've known about for ages. Well, by "we" I mean psychologists and anyone who's ever cared enough to learn Psych 101 basics about the brain. There is an extremely strong connection between how in-control a person feels and how safe they feel, and feeling safe is stupendously important to humans. This is why people are afraid of flying: if things go wrong, there's nothing they can do. (The same is true when they're driving a car, but people also wrongly think they have some control over whether they have an accident; no evidence suggests they can avoid most accidents.) If the self-driving car goes wrong while they're not paying attention, there's nothing they can do. People will be afraid of them the way they are afraid of flying.

And if you haven't noticed, our society deals poorly with fear. People LOVE it way too much. They obsess over it. They spend almost every waking hour talking about it and patting themselves on the back about what they're doing, or going to do, to fix their fears and the fears that threaten others. Mostly imagined fears, of course, because we're absurdly safe nowadays. So it will be the only thing talked about until unattended-driving laws get a tiny extension to cover the manufacture of any device which claims to make unattended driving safe. It'll pass with maybe one or two nay votes from reps paid for by Uber, but that's it.


"People will be afraid of them as they are of flying."

This is a great analogy, because at the dawn of flight, many people were really, really afraid of flying -- for very good reasons: the airplanes of the day were incredibly dangerous.

Yet people still flew, and flew more and more, despite many very public disasters in which hundreds of people died, and the airline industry grew and flourished.

Now most people don't think twice about flying, as long as they can get a ticket they can afford. Sure, some people are still afraid of flying, but most of even them fly anyway if they have to, and the majority aren't afraid at all or don't think about it.


Sure, planes were introduced at a time when people were willing to step back and tell themselves 'OK, I don't feel great about this but that's just my emotions running away with themselves, I really shouldn't be scared so I should just do it'. Those times are over. Suggesting people should question, much less actively resist, their most primitive impulses is seen as a direct threat to their person. It simply isn't done.

When a mom says "I'm not putting my children in one of those killmobiles" and someone replies "well, actually, ma'am, it's much safer, and you're endangering your child's life significantly by taking the wheel", that person gets punched in the face and lambasted on social media as an insensitive creep. That's just how it goes.


> There's no such thing as "industry standard practices" that you could even CLAIM they failed to follow.

Are you sure? What about IEC 61508 and ISO 26262? The latter especially, as it was derived as a vehicle-specific version of the IEC standard.

It's an industry-wide standard:

https://en.wikipedia.org/wiki/ISO_26262

...geared specifically to ensuring the safety of the electrical, electronic, and software systems in vehicles.

Look it up - you'll find tons of stuff about it on the internet (unfortunately, you pay out the * for the actual spec, if you want to purchase it from ISO - it's the only way to get a copy, despite the power of the internet, afaict).

...and that's just one particular spec; there are tons of others covering all aspects of the automobile manufacturing spectrum to attempt to ensure safe and reliable vehicles.

Are they perfect? No. Will they prevent all problems? No.

But to say there aren't any standards to look to isn't true.


Unless it works better than a human >90-99% of the time.

Mostly, the drivers, the insurance companies, and other people on the road have fewer accidents.


We don't usually switch to a new technology unless it's > 120% better than what exists today. The cool factor will not be enough for practical adoption.


Tons of technology is adopted for smaller marginal gains. No clue where you got this idea from.


I absolutely disagree. It can't just be better than humans; it needs to be flawless. Any problem where people get killed will delay public acceptance of the technology by decades, even if "statistically it's safer than humans". People don't give a damn about statistics; they give a damn about tabloids shouting "self-driving cars kill another innocent person!!!". We literally can't afford that.


If the aim is to have a technology that's 100% fail-safe, we can stop pursuing self-driving cars now. We can in fact stop pursuing any kind of technology, because the economic cost of making anything completely fail-safe is usually prohibitive. Also, isn't history rife with counterexamples to your argument? Planes have crashed, people have died, tabloid headlines were written about it, and people still fly, probably because it's awfully convenient.


Yes, and every time there's a plane crash on autopilot, Boeing/Airbus will order that every single plane of that type be grounded until a fix is found. For airlines it's not a huge deal, since they have insurance for that kind of thing and they can always use other planes they have, or rent them, to substitute in their schedules. Now imagine there are hundreds of thousands of self-driving Teslas (or any other brand) and your national safety regulator orders that they all be disabled until a fix is found after a particularly nasty crash. If you get up in the morning and can't use your car because it has been remotely disabled for whatever reason, you will be furious, or at least I know I would be. I'd sell it and buy a normal manual car the same day.

My point is, that all of this affects the public perception of self-driving cars - and if we want them to succeed, we need to make absolutely sure that that perception is good. We can't have the nonsense Tesla is trying to pull off at the moment, where they call their system "autopilot" but they know the system cannot detect obstacles at around pillar-height and it gets blinded by the sun and can swerve into the oncoming lane before just switching off. These are not theoretical problems - both happen in cars that are out there, right now. And if it happens to a regular Joe Smith, then Mr. Smith will think the technology is crap, and we can't let that happen.


Also, demonstrably safer than driving, by time or by distance.


After a long history of fatalities and fixes.


So you're saying we should let more people die (via human driving) to avoid people getting upset over tabloids ignoring statistics?


It seems like the person you're responding to believes that deaths from low-quality self-driving cars will prevent the technology from coming to fruition, resulting in fewer lives saved in the long run.


We are already "letting more people die" for economic reasons. We could mandate that all cars on the road should have the most modern 15+ airbag systems, but it's too costly. We could mandate that the speed limit should be 30mph and limited in hardware, but it's too costly economically, yet I don't think anyone can argue that it wouldn't save lives.

We make these exact choices all the time. I am saying we should "let more people die" now, so that we can save more later. That's not a novel concept.


Right? Every single economic decision we make could be framed as a 'letting people die' choice.


Then that's our own fault for being too stupid to have self-driving cars.

Ideally, we should embrace them even if they are slightly more dangerous than human drivers, because we are getting the benefit of the time that would otherwise be spent driving.


I think there's real-world evidence that this is not the case. There have already been deaths due to Autopilot, and the reaction you're describing here didn't happen.



