I have a feeling our regulatory environment is going to be unfairly hostile towards AVs in the coming months and years. Of course we all want a safe and responsible rollout of this tech, but people too easily forget how ABSURDLY dangerous driving is already.
Like with Tesla, every accident and injury caused by a Waymo vehicle is going to receive disproportionate media coverage and subsequent public backlash. I hope we can view these incidents in context.
Admittedly, safety is just one speedbump -- social and economic disruption is a whole 'nother beast.
Whatever happens it's an exciting time to be alive! AVs have the potential to bring more sustainable, safe, efficient, and affordable mobility and I can't wait to see how it all plays out.
Since I don't think anyone on hackernews thinks much differently, I feel like defending the other side of this: Yes, in theory self-driving cars can be safer than human drivers. But the tech simply isn't mature. It just isn't. Early tech always has hiccups, not the ones that are calculated into the risk but the ones no one expects. So there's a somewhat high probability that a bug that works totally different from what anyone expects is just waiting to happen and it's steering a 4000 pound block of metal. Let's start this, but it's okay to start slow.
> So there's a somewhat high probability that a bug that works totally different from what anyone expects is just waiting to happen and it's steering a 4000 pound block of metal
Are you talking about bugs in artificial intelligence, or bugs in regular ol' human intelligence?
That's the real risk with self-driving systems. Bugs are sporadic. But design limitations create the risk that you could have mass-accident events if everyone on the road runs into the same edge case at the same time.
Tesla’s self driving implementation is very behind Waymo’s, they don’t even use LIDAR. Yes, Musk or Uber could drag the whole self driving reputation in the dirt easily with substandard tech.
>> So there's a somewhat high probability that a bug that works totally different from what anyone expects is just waiting to happen and it's steering a 4000 pound block of metal
> Are you talking about bugs in artificial intelligence, or bugs in regular ol' human intelligence?
> Because that statement rings true for both, lol.
Your stated position gives far too much credit to algorithms and far too little to what every capable human driver can do.
You give far too much credit to human drivers. Capable human drivers are a subset of human drivers. A large subset to be sure, but nowhere near everyone. At least an AV can't get drunk.
The data available challenges this contention. According to here[0], one estimate is that there are approximately 18,000,000 people driving at any given time in the U.S. And according to here[1], there were 34,439 traffic fatalities in the U.S. for the calendar year of 2016.
For the sake of discussion, let's assume that for each of the 18 million drivers on U.S. roads at any given time, there are at least two other drivers who are not on the road at that same time. Further, let's assume these are all of the U.S. drivers for the entire 2016 calendar year.
This would put the estimated set of unique drivers at 54,000,000. A conservative estimate by any measure.
This puts the estimated probability of a human driver being involved in a traffic fatality for the 2016 calendar year at 34,439 : 54,000,000, or about 0.00063776.
According to here[2], the probability of being struck by lightning in the U.S. is approximately 1 : 280000, or 0.00000357.
While the former is mathematically roughly 180x more probable, it is still a statistically insignificant risk.
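For anyone who wants to check the arithmetic, here's a rough Python sketch using the figures cited above (the 54m unique-driver count is the conservative estimate derived here, not an official statistic):

    unique_drivers = 54_000_000        # conservative estimate of distinct 2016 drivers derived above
    fatalities_2016 = 34_439           # U.S. traffic fatalities in 2016
    p_fatality = fatalities_2016 / unique_drivers     # ~0.00063776
    p_lightning = 1 / 280_000                         # ~0.00000357, cited lightning odds

    print(f"P(fatal crash involvement) ~ {p_fatality:.8f}")
    print(f"P(struck by lightning)     ~ {p_lightning:.8f}")
    print(f"ratio                      ~ {p_fatality / p_lightning:.0f}x")   # ~179x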
> Capable human drivers are a subset of human drivers.
Hence my qualification of "capable", included in order to obviate edge condition "whataboutism."
> At least an AV can't get drunk.
But it can get defective updates, be unable to identify unknown conditions, suffer from misidentification of threats, and be susceptible to cascading failures affecting all autonomous vehicles in the same situations.
this is all wrong. first you don't need to normalize by running vehicles, both metrics are yearly. second the real signal is death per mile driven, now that's something you should look for, not abstract drivers. third using drivers as the base assumes all vehicles have no passengers.
this should at least get you in the right direction, but you'll likely want to put motorbikes into their own class at some point.
anyway, short story long: you want the cumulative chance of getting killed, basing the estimate on the average person's lifetime miles, which varies widely from one person to another, making a synthetic number over such variability unfit for any purpose.
Okay, let's go through what you identify as being wrong point-by-point.
> first you don't need to normalize by running vehicles, both metrics are yearly.
I provided the (active/daily) running vehicles metric in order to establish a (very) conservative estimation of 54m distinct human drivers in calendar year 2016. This was to ensure subsequent analysis was reasonably well founded by way of using common base units.
> second the real signal is death per mile driven, now that's something you should look for, not abstract drivers.
Perhaps that signal is significant for other premises, but mine was to provide a probability of any given human driver causing or being involved in a vehicular fatality. This was to ensure that the derived probability was reasonably correct based on referenced data.
> third using drivers as the base assumes all vehicles have no passengers.
Perhaps my use of the descriptor "driver" was improper for the entirety of the analysis. Given the limitations of the medium, it was a trade-off I was willing to make.
Your point is correct and, yes, it would affect the probability iff the intent was to determine driver liability.
> this should at least get you in the right direction, but you'll likely want to put motorbikes into their own class at some point.
> anyway, short story long: you want the cumulative chance of getting killed, basing the estimate on the average person's lifetime miles, which varies widely from one person to another, making a synthetic number over such variability unfit for any purpose.
Your analysis is keen and one which goes beyond what I was trying to illustrate to the GP. To wit, the specific point being addressed by my terse analysis was:
Highways rack up a lot of miles, but are a really easy environment to drive in. Intersections, pedestrian crossings, busy streets, parking etc. don't yield a lot of miles to the statistics, but are where most interactions with other people, and most edge cases, show up.
So I'm not sure that accidents per mile is necessarily the best metric either.
> 18,000,000 seems pretty high. That might be peak. I doubt >5% of the population is the driver (not passenger) in a vehicle all day.
True, that estimation may be peak load. Which is why I kept the total estimate of distinct drivers at 54m instead of the much more likely number well above[0] that.
This is the shittiest handkerchief math ever. You're only considering fatalities, which comprise a small subset of the total number of accidents.
Secondly, you didn't actually provide a comparison. You simply remarked "Oh, the chances of dying in a car accident are comparable to the chances of dying due to lightning, therefore cars are safe" (whilst ignoring the 2 ORDERS OF MAGNITUDE difference between them, the fact that people struck by lightning != people killed by lightning, and the fact that cars are not safe, as anyone who has ever driven will tell you).
Do we really want to turn this into a numbers game? The difference between those two scenarios is that different people will die. It also assumes we all adopt AVs, too. Now of course everybody has equal value, but are you prepared to accept that a software bug and a lapse in human judgement are also equal? Without accepting that at a societal level, the average person will not accept AVs.
Humans who cause accidents will have remorse and learn never to do it again, or be punished if alcohol was involved. With an AV, currently nobody knows. It could be as simple as "we've checked in the bugfix to prevent this from happening again" with no emotion behind it.
Hasn't the rise of Uber and Lyft already turned it into a numbers game? I get drivers that I would consider below average all the time, and I don't have any control over which driver I get. Even when you're the one driving, you have no control over whether the car in front of you is being driven by a good driver or somebody texting who might cause an accident that endangers you. I agree that it needs to be accepted on a societal level, but if you can definitively say "self driving cars are safer than the average Uber driver," then I don't think it will be that hard of a sell. That's especially true if it results in less expensive rides in addition to lower crash rates.
This. In the majority of fatalities, the driver who caused the accident is not among the dead; they are generally in another vehicle. Everything is random. Decreasing the odds of failure will still leave individuals dead, and even a software edge case can cause a phone to explode. The assumption that we are always in control is what needs to change.
> Do we really want to turn this into a numbers game?
The reality is lots of people die on the road and the numbers show this sad fact.
Now the reality is some of those who are killed die through no fault of their own. They die at the hands of the other driver.
The sooner we can take away that human element from driving the better, because that one act will end up saving lots of lives.
> Humans who cause accidents will have remorse and learn never to do it again
That is so untrue, at least for what happens here in Australia.
Here in Australia we have many hundreds if not thousands of repeat drink-driving and speeding offenders on the roads, and short of locking them up there is no way to keep them off the roads.
Taking away their license makes no difference as they just drive without a license.
The argument in this thread seems to suggest we should be slow at adopting av technology because it could kill some innocent people.
Now I accept that point as a fact. There is no doubt people will die in av cars just like people die in cars today.
Yet, on the other hand, we have human drivers killing innocent people today, which is acceptable collateral damage, even though introducing AV technology would help to greatly reduce that number.
> Yet, on the other hand, we have human drivers killing innocent people today, which is acceptable collateral damage, even though introducing AV technology would help to greatly reduce that number.
Speaking for no one other than myself, vehicular deaths are not "acceptable collateral damage" so much as a statistical inevitability, one sought to be minimized as much as possible.
The problem with introducing autonomous vehicle technology, again IMHO, is the assumption that it "would help to greatly reduce that number."
> And without actually doing the implementation that question will never get answered.
Exactly. Which is why it is imperative the engineering community does not assume outcomes beforehand.
Much of the concerns you list I agree are likely benefits, with the "not drive drunk", "not get tired", and "not susceptible to road rage" items being a near certainty due to eliminating biological aspects.
What autonomous vehicles might not be better at could be:
. snow
. black ice
. mud
. severe thunderstorms
. able to operate in high dust environments
. adapting to rapid unexpected operating conditions
. when to run a red light (emergencies, external threats, some combination of conditions above)
Are these edge conditions? Maybe. Or you could call them functional requirements not often discussed.
All I'm saying is that there is a lot more to general-purpose driving than what can be shown in a limited setting.
IOW, show me an autonomous vehicle which can complete the Paris-Dakar Rally[0] and I'll show you someone who will say it's time for people to stop driving ;-).
Humans took years to learn to drive. Initially we will be better, but as time goes on there is no chance against an automated solution. The whole industrialization process is just this: there were problems and bad things happened, but it works out in the end.
I specifically added that clarification only because I suspected that without it, the argument would have been "but what happens when a bug in the software causes the AV to crash, whereas humans don't have that problem" types of arguments.
Being a software developer, I have no doubt there will be bugs in these AV software systems.
But these systems will also include many fail-safe overrides designed to minimize the impact of these occasional bugs.
> You can't possibly think that drunk drivers and those driving without a license are the target market for self-driving cars.
You are correct, I don't think that.
That group just represents a small subset of the drivers that present a danger on the roads today.
Now since you asked, I would say the real target market for self-driving cars is big business.
They are the ones pushing hard for this technology, not for the safety aspects that I have focused on, but instead for the cost savings this technology will bring to their bottom line.
> Humans who cause accidents will have remorse and learn never to do it again, or be punished if alcohol was involved. With an AV, currently nobody knows.
Easy to turn that around. Only individual humans learn from their mistakes - and even then, not always. But with AI, the whole fleet learns that lesson.
Uber's stats are worse. Humans have fatal accidents about once per 100 million miles. Uber's AVs have had one fatal accident, and they'd driven 2 million miles in Dec 2017 [1]. EDIT: While this isn't a huge amount of information to go on, it strongly suggests that Uber AV's are more dangerous than human drivers.
I mean, yes, it is fine for AVs to have an occasional bug --- as long as that occasional bug only kills one person per 100 million miles driven. But that's actually a pretty high bar.
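A rough back-of-the-envelope comparison, treating the single fatality and the ~2 million test miles as the only data points (so very much a small-sample illustration, not a rigorous estimate):

    human_miles_per_fatality = 100_000_000   # roughly one fatal accident per 100M human-driven miles
    uber_miles = 2_000_000                   # Uber AV miles driven as of Dec 2017
    uber_fatalities = 1

    uber_miles_per_fatality = uber_miles / uber_fatalities
    print(uber_miles_per_fatality)                              # 2,000,000 miles per fatality
    print(human_miles_per_fatality / uber_miles_per_fatality)   # ~50x worse than the human rate, naively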
If AV technology does ever get rolled out in a big way, then I would expect to see a big reduction in the number of road/vehicle related deaths.
Now if instead the road deaths stay the same or go up, then naturally that should also spell the death of AV technology.
Only if the companies get forced by regulations to actually target that metric.
The participants are in it for the profit, not for saving lives. The US health care misery is a prime example of what happens when you try to solve a situation through the market, which by definition has to treat human lives as an externality: People in the US die earlier than in any other first world country.
> People in the US die earlier than in any other first world country
The causes of death might not be the healthcare system, no? What if the majority of these deaths are caused by bad diets and stress? Would you blame the healthcare system for it?
(I didn't mean to derail this thread, but want to answer you anyway)
You're right, one can't isolate one institution for topics as complex as these. I'm pretty sure it plays a role, though, as (ignoring fundamental differences in poverty rates, workplace regulations etc.) many countries employ the healthcare system to inform the population about healthy eating and offer incentives and free courses for coping with stress. That's the beauty of a single payer system: Beyond ethics, the healthcare system has an intrinsic interest to prevent the preventable, as that is almost universally cheaper than bearing the cost of circulatory disorders later.
I'm just another armchair economist, but in a market based system the risk IMO is that the insuree grabs the preventive offers from insurer A and a few years later wanders off to a competitor with cheaper rates for that age bracket. So insurer A has no real incentive to prevent illnesses because in the long term, all insurees will switch anyway.
You "expect" why? What is your prediction based on?
There is no data to suggest that this will happen. Furthermore, if self-driving cars do take off, they will continue to be outnumbered by normal cars for a very long time. It's not even clear that they will or should be the majority.
Didn't uber shut down their self-driving attempts? Why are we still even talking about them if they are not even in the game anymore? (By their own volition or by virtue of being regulated, doesn't matter.)
ok, controversial counterpoint: maybe some number of deaths is worth it.
We all know that the fastest and most reliable way to find and fix bugs is to implement the thing in production and see what happens. As you point out, it exposes problems that testing and design just can't.
I see people here obsessing over how much will change if we had fleets of AVs. Maybe shaving five years off the wait time is worth N number of human lives. Especially if N < the number of lives that will ultimately be saved if the technology is perfected years earlier.
Of course I wouldn't want myself or my loved ones to make that sacrifice. I don't want to die in a regular car accident either. Given the upside, speeding up AV development may be worth the risk?
I would be more comfortable limiting it to the families of the managers. The engineers, due to our legal and social system, have almost no control whatsoever over when the product gets released, what tools they have access to, how much time they are given for testing, etc. All of those decisions are driven by management who have been guaranteed by courts that there simply is no degree of corner-cutting which could ever possibly amount to criminal negligence because software is involved.
People don't really care about automotive safety that much. If they did, they would not drink and drive, text and drive, do their makeup while driving, skip their seatbelts, speed, fail to slow down in school zones or other dense areas, or take their eyes off the road to pick up the phone, and so on and so forth.
For every demand they make of OTHER PEOPLE to improve the safety of travel, they are not willing to do even a tenth of that themselves.
> ok, controversial counterpoint: maybe some number of deaths is worth it.
Not if you wish to win the hearts and minds of the general population at large. As grandparent points out, the coverage seems disproportionate, but that's because it interests people, and they are generally sceptical of letting computers drive their cars.
While technically unrelated, the Boeing 737 MAX MCAS scandal hasn't done much good either for the further automation of transport systems.
People dying at the 'hands' of AVs will lead to pushback from the general population, and political points can be scored by siding with this view, leading to tighter regulation if not an outright ban on AVs in some places.
The real question is: will the slow and thorough approach or the quick and dirty approach roll out large AV fleets faster? In any case, it'll probably be decades before there are significant AV fleets.
Remember there are costs on both sides. The US has ~35k traffic fatalities each year. If you put a $5 million value on a life (roughly what the insurance companies do) then that is about $175 billion in cost every year. Cutting that in half saves nearly a trillion dollars and almost 200,000 lives over a decade. It’s a huge moral and economic imperative to do this.
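Rough arithmetic behind those figures (the $5 million value per life is the assumption above, and the savings are accumulated over a decade):

    fatalities_per_year = 35_000
    value_per_life = 5_000_000            # ~$5M, a rough insurance-style value of a statistical life

    annual_cost = fatalities_per_year * value_per_life
    print(f"${annual_cost / 1e9:.0f}B per year")          # ~$175B per year

    # Halving fatalities, accumulated over ten years:
    print(f"${annual_cost / 2 * 10 / 1e9:.0f}B saved")    # ~$875B
    print(f"{fatalities_per_year // 2 * 10:,} lives")     # 175,000 lives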
The problem is where people set up the goal posts and have a double standard for human vs. AI driven cars. AI cars, for some reason, are expected to produce an error rate of 0 right off the bat, and every single incident gets used to feed an existing (and often legitimate) narrative about tech bro laziness and irresponsibility.
Meanwhile 1.25 million people die in human caused car accidents every year and somewhere around an additional 30 million are injured. But that’s “just the world we live in.” Or that’s “your choice to take a risk when you choose to drive a car.” Or whatever.
A lot of criticism of our industry is well-deserved if overly broadly applied. Most companies do not operate with the total lack of ethics that Uber and Facebook do. But they are so big and so well-known, they’ve come to symbolize a culture of amorality, deception, manipulation, and a willingness to do anything for growth and fresh injections of VC funding.
What would be really useful for everyone is to throw out that narrative for a while and focus on the task at hand. Self-driving cars will be regulated. There’s no getting around that. And tech companies could help themselves out a lot by accepting that and coming to the table with regulators with an actual plan instead of whining about having their innovation stifled.
There is a reasonable compromise. You can’t start by saying the error rate must be at zero. That will never get anywhere and people will keep dying a million a year. Where is the right number and what does it measure? I don’t know. It should probably be tiered by population density and average driving conditions and weighted by number of annual deaths in that city/region.
Highways are an easier problem to solve than, say, Manhattan. But highway deaths are more likely to be fatal. So the standards ought to apply differently.
The challenge here is really to get regulators to think a little bit like problem solvers rather than enforcers for a change. To get them to realize that “regulate self driving cars” is a problem so big and mushy that it’s pretty much meaningless. But “establish safety standard for interstate travel for autonomous trucks” is something you can work with. Maybe it’s 100x as a safe as a human driver in that scenario. I don’t really know.
Tech companies could probably turn on a lot of positive PR if they promoted milestones for certain communities. Maybe even offered incentives for cities to improve road markings. Like if Waymo said, “We want to target Brooklyn, and our goal is 10x as safe as human drivers in Brooklyn with an additional target of 1000x fewer cycling accidents. But we need you to mark the bike lanes better and put them between the parked cars and the curbs where it’s feasible, and here’s a pile of money you can use to do it.” That would generate a lot of good will, positive pr, and maybe even genuine enthusiasm that they can use to open up that market for their taxis ahead of schedule.
The car companies will get a lot better treatment from regulators if they chunk the problem into small pieces and go to the negotiating table without a victim chip on their shoulder.
Maybe some are expecting an error rate of 0, but the little data we do have on self-driving or at least wannabe self-driving cars is that they're failing even under simple conditions like highway driving.
There was a nice video of a Tesla driving at low speed constantly pulling the car towards green islands between lanes as if it had a death wish.
Uber famously killed someone while testing and then blamed the humans.
It remains to be seen how Waymo will fare, although having the advertising company that's spying on everyone build self-driving cars seems like parody.
> The problem is where people set up the goal posts and have a double standard for human vs. AI driven cars. AI cars, for some reason, are expected to produce an error rate of 0 right off the bat, and every single incident gets used to feed an existing (and often legitimate) narrative about tech bro laziness and irresponsibility.
> Meanwhile 1.25 million people die in human caused car accidents every year and somewhere around an additional 30 million are injured. But that’s “just the world we live in.” Or that’s “your choice to take a risk when you choose to drive a car.” Or whatever.
If we set up the goalposts fairly, Waymo wouldn't be anywhere near receiving permissions for full autonomy until we had statistically significant evidence the latest build could reduce fatalities to human levels which are in the region of 1 per 100 million miles (a rate which includes all the people who aren't allowed on the road...). When they've only driven a few million miles, the fatal error rate - including potentially fatal collisions avoided by safety drivers - really should be zero.
We haven't got any evidence that fatal errors in AV technology approach that level of rarity, will never have it for individual builds, and the best [limited] evidence we've got refuses to reject the null that, despite being shielded from hard AI problems like left turns, supported by safety drivers, and driven at low speeds, AVs are an order of magnitude more deadly than human-driven cars...
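One way to make the "should be zero" point concrete is a simple Poisson sketch: if a fleet really matched the human rate of ~1 fatality per 100 million miles, the chance of seeing even one fatality in a few million test miles would be tiny (the 3 million mile figure below is just illustrative):

    import math

    human_rate = 1 / 100_000_000     # fatalities per mile at roughly human level
    test_miles = 3_000_000           # illustrative AV test mileage (a "few million miles")

    expected_fatalities = human_rate * test_miles        # Poisson mean, ~0.03
    p_at_least_one = 1 - math.exp(-expected_fatalities)
    print(f"{p_at_least_one:.1%}")                       # ~3.0%: even one fatality would be surprising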
Yes, and I think we should protect users from automatic updates.
An AV company should only be able to update the car software after that software has correctly driven for X million miles.
Perhaps a separate agency should perform the actual updating (installing of the new software), so we can be certain that no "quick fixes" are happening behind the scenes.
Your assumption that AI should be compared to average human driver is simply wrong. There are all sorts of human drivers, some of them drive under influence, some have slow reaction, some have terrible cars. Why should AI be compared to any of those?
When we use an automated vehicle, we assume it to be a reliable driver, not a random person who could be a bad driver.
So please stop comparing to average human drivers.
It should probably be compared to the median driver.
Also, software opens up the possibility that the same shitty driver can drive millions of cars at the same time. One bad commit might cause a disaster. Millions of human drivers won't be possessed by the devil at the same time, causing havoc. Imagine all Toyotas turning full left at the same time.
> But I have the feeling that Waymo's system is much more mature than Tesla's. Let's see.
No way. Not even close by any factor. Tesla have real data from real people driving real cars across the world. On the other hand, Waymo have data only from their test vehicles and augmented scenarios.
Data from people driving their own cars is much less relevant than data from your own autonomous driving system driving among real drivers in many different situations (eg urban driving). AFAIK, Waymo has much more of the latter.
Tesla has much more of the latter. Their driving system is already deployed and used in consumer vehicles, each of them able to collect that data. Watch "Autonomy Day".
There is a YouTube channel called "Tesla Driver" where they put out regular videos driving through urban UK (on Autopilot, of course). The streets do not look like the center of New York, but not like a constant highway either. So yes, Tesla absolutely collects tons of data on how their Autopilot behaves in urban areas.
It’s not about safety. These things don’t actually work yet. They can’t make left turns across traffic, they can’t change lanes in heavy traffic, they can’t handle anything but pristine perfectly mapped roads in good weather during daylight. Any non-standard street feature or road usage is an insurmountable roadblock (literally) to them. Promising that they will save lives and disrupt the economy (how you turn robot taxis into a profitable business hasn’t been explained convincingly for that matter) is a hollow delusion until they can actually navigate from point to point in less than perfect conditions without causing traffic backups or making the passengers carsick.
AVs would certainly be judged with less hostility if the field hadn't built itself on the assumption that neural networks were the solution to any problem too hard to model.
It’s going to be extremely hard for an AV manufacturer to explain that it’s impossible for them to guarantee that a certain abnormal autopilot behavior that caused an accident isn’t going to happen again.
I'm not an expert myself, but I think you overestimate how much ML is used in these systems. From my understanding, it's mostly used in classifying objects, but the actual controls are not, and I would assume that the driving logic itself is written in a conservative way and a bad classification wouldn't cause an insane behavior.
I have talked to a number of different autonomous driving startups and every one of them was basing their approach entirely on NNs and deliberately avoiding explicitly-coded algorithmic approaches. It was crazy and frightening that these people are serious.
We had an entry in the DARPA Grand Challenge in 2005. It was too slow, but it didn't crash. We profiled the road ahead with a LIDAR, so we had an elevation map of what was ahead. This is essential for off-road driving, and it's the gold standard for on-road driving.
But there's a range limit. Not from the LIDAR itself. From the geometry. If the sensor is 2m from the ground, and you're looking 30m out, the angle is so shallow you can barely sense elevation. You can't see potholes. Anything that looks like a bump hides a lot of road behind it.
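To put rough numbers on that geometry (the 2 m sensor height and 30 m range are just the figures from the example above; the 10 cm bump is made up for illustration):

    import math

    sensor_height = 2.0    # meters above the road
    look_ahead = 30.0      # meters out

    grazing_angle = math.degrees(math.atan(sensor_height / look_ahead))
    print(f"grazing angle ~ {grazing_angle:.1f} degrees")          # ~3.8 degrees

    # A small bump at that range occludes a long stretch of road behind it:
    bump_height = 0.10     # a 10 cm bump
    hidden_road = bump_height * look_ahead / sensor_height
    print(f"road hidden behind the bump ~ {hidden_road:.1f} m")    # ~1.5 m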
The Stanford team's answer to this was to use machine learning to compare the near road texture, which they could profile, with the far road texture, which they couldn't. If the near road was flat, and the far road looked like the near road, they could out-drive their profiling range. If it didn't match, slowing down brought the stopping distance down to profiling range, and they could work through difficult terrain. The 2005 Grand Challenge didn't have much difficult off-road terrain. No rock-crawling or cratered roads. The 2004 course was harder, and nobody got past mile 7. So most of the time, fast mode was usable.
Google/Waymo started from the Stanford approach, trying hard to profile the immediate surroundings. Their machine learning efforts were focused on identifying other road users - moving objects, not fixed ones. Their earlier videos make this clear.
Google/Waymo built a cute little bubble car with a top speed of about 25MPH and a LIDAR on top. At that speed, you can profile the terrain all the way out to your stopping distance, so you have solid detection of fixed obstacles. That was something that had a good chance of working well. They decided not to manufacture it, probably because it would cost far too much.
Machine learning isn't that accurate. You can get to 90% on many problems. Getting to 99% is very tough, and getting to 99.9% is usually out of reach. The killer apps for machine learning are in low-accuracy businesses like ad targeting. Or in areas where humans have accuracy in the 80%-90% range.
Here lies the problem. Humans are very good at obstacle avoidance. Much better than 99%. Machine learning isn't as good as this problem needs.
There's another side to the problem - the false alarm rate. If you build a system which insists on a valid ground profile, a piece of crumpled cardboard on the road will result in slowing down until the sensors can look down on the cardboard and see past it to solid pavement. You get a jerky ride from a conservative system. That's why Uber disabled automatic braking and killed a pedestrian. That's why Tesla's system fails to react to fixed obstacles it could potentially detect. Waymo has struggled with this. Customer evaluation of driving quality seems to be based on good lane-keeping and low jerk. These are things that trouble poor human drivers. Self-driving has different strengths and weaknesses. This is what leads to systems which seem to be doing great, right up until they crash.
What self-driving seems to need right now is a rock-solid way of detecting the lack of obstacles ahead. All we have so far is LIDAR. Radar is still too coarse and has trouble distinguishing ground return from obstacles. Even LIDAR is rather coarse-grained. Stereo vision doesn't seem to be hugely successful at this. We need that before self-driving vehicles can be trusted not to run into obvious obstacles.
If you have to recognize what the obstacle is before determining that it's an obstacle, it's not going to work.
There are a whole series of secondary problems, from left turns to double-parked cars. But those are not the ones that kill people. It's the basic "don't hit stuff" problem that is not adequately solved.
Chris Urmson's talk at SXSW is good for how Google/Waymo's system worked. The DARPA Grand Challenge is well documented.
Our team's code is now on Github, just for historical interest.[1]
Having actually worked in these companies, I can say the actuation and controls are still deterministic algorithms. Talking up some buzzword is one thing, but actual implementation is another. I would be surprised if any of these companies have largely NN-based modules in their stack for controls and lower-level planning.
Can NNs be corrected, like a child or a dog being told a firm "no" when doing the wrong thing? Can the conditions be replayed such that those conditions result in a different response with a human auditor providing the correct response?
I'm guessing the answer is currently no. Which is interesting because one of the early benefits touted for self-driving cars was that people might die, but a patch would ensure no one dies twice for the same reason (within reason, which is more than can be said for humans).
The even scarier thing is that a lot of these startups are training their neural networks on game videos like GTA or cars they get off the Unity asset store. I seriously doubt any of the artists, graphics programmers, AI developers, etc. involved in these titles think their work is suitable for safety critical systems.
The positions for Vision/Perception at Zoox all mention NNs:
https://jobs.lever.co/zoox I bet even simple questions like "how does this scenario change decisions when the lighting / sensor rotation are altered" cannot be answered.
Changing lighting and rotation on sensor input is a pretty standard way to improve neural net performance (it's called Data Augmentation), so I'm pretty sure they could answer that.
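For what it's worth, this is roughly what that kind of augmentation looks like in practice; torchvision is just one example library, and the jitter/rotation ranges here are arbitrary:

    from PIL import Image
    from torchvision import transforms

    # Randomly perturb lighting and orientation so the network can't overfit
    # to one particular illumination or sensor alignment.
    augment = transforms.Compose([
        transforms.ColorJitter(brightness=0.4, contrast=0.4),  # lighting variation
        transforms.RandomRotation(degrees=5),                  # small rotation of the input
        transforms.ToTensor(),
    ])

    frame = Image.new("RGB", (224, 224))   # stand-in for a real camera frame
    augmented = augment(frame)
    print(augmented.shape)                 # torch.Size([3, 224, 224])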
The Tesla Autonomy Investor Day (which applied to the cars, not the people) spent most of its time explaining their NN chip, ghost riding to correct predictions, etc. It definitely gave the impression that the network was doing a lot of the decision making.
Of course, because NNs are hot with investors now and Tesla wants their money.
Companies frequently misrepresent how their technology works to the public. Typically they'll do some small portion with the hot technology for buzzword compliance, and then build the rest of it with an actually sensible, boring, and tailored-to-the-problem technology stack, oftentimes with a lot of proprietary legwork done by their data scientists and engineers. This way they get the best of all worlds: investment dollars from gullible investors, PR from journalists that want to hop on the next big thing, a product that actually works, and misdirection so competitors hop on approaches that aren't going to work anyway.
That doesn’t seem to be the case with Tesla. Their custom chip is almost entirely focused on being a neural network accelerator and they seem to be hugely reliant on neural networks. Notably Waymo is said to be less reliant on neural networks but Tesla definitely depends on them.
That doesn't really prove much. The classification is the heavy part that needs acceleration. The algorithmic side would probably not need an accelerator, as it's basically a decision tree.
My original point was that the decision tree should hopefully degrade gracefully as the prediction quality goes down, and not have any branch that leads to insane behavior.
Now obviously, the better your predictions, the better the driving will be, which is why you'd want NN accelerators.
Gives me the impression that Tesla made a small innovation which they're trying to overblow to investors since buzzwords like machine learning are more interesting than actual progress. I'm not saying Tesla isn't making massive progress but that anything they say to the press should be taken with a grain of salt.
Well if the system was as I described it, you can throw as many bad classifications as you want in the simulation environment, of varying severity, and see how the algorithmic side reacts.
Where are you even reading this assumption? Who, which company has said this? Sounds like something someone overheard somewhere, and not an actual assumption made by any of those companies. In reality Waymo, Tesla and others are using NNs where they fit, where they have been shown to produce the best results - and other methods everywhere else. Heck, even the ML courses teaching self-driving tech to students don't exclusively use NNs, but all sorts of ML models.
> Of course we all want a safe and responsible rollout of this tech, but people too easily forget how ABSURDLY dangerous driving is already.
I don't really understand how this meme gets so much traction. It just ignores the enormous number of miles driven by Americans every year. Last year, Americans drove 3.22 trillion miles. There were about 6 million accidents, or 500,000 miles between accidents. (Or about 37 years of driving at typical rates). Driving results in a death every 85 million miles traveled.
Driving is actually very safe. Conversely, because it is so safe, it wouldn't take much to dramatically increase accident rates from driving.
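Quick sanity check of those numbers (the ~13,500 miles/year "typical rate" is my assumption behind the 37-year figure):

    total_miles = 3.22e12    # miles driven by Americans last year
    accidents = 6e6          # roughly six million accidents
    miles_per_death = 85e6   # one death per ~85 million miles

    print(total_miles / accidents)        # ~537,000 miles between accidents (rounded to ~500k above)
    print(500_000 / 13_500)               # ~37 years at an assumed ~13,500 miles/year per driver
    print(total_miles / miles_per_death)  # ~38,000 deaths implied per year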
> I don't really understand how this meme gets so much traction.
The trick is to compare driving deaths to other things where we had a much bigger (or smaller) reaction relative to the number of lives lost.
For example, tell someone "cars kill as many people as twelve September-11th attacks every year" and they might wonder where the trillion-dollar war on road deaths is.
On the other hand, tell someone "heart disease kills 17 times as many people as cars every year" and you'll give them the opposite impression.
I guess many of us have such an inflated sense of self-importance that we think we can save humans from themselves, without stopping to think who will save everyone from crappy software.
Sure, our perception of risk and regulatory responses are not statistically accurate. Commercial air travel is much safer than cars, yet regulatory responses to risk are stricter. Is that a bad thing? It creates a very safe mode of transport.
Going driverless is a new mode, and an opportunity to create a safer future. Why not set a standard that is much higher than human operated cars?
Also, a lot is unknown. We won't know what driveless' safety record is until it truly exists in the wild.
The economic pressures to go driverless will be immense^. Meanwhile, a high safety standard could require some counterpressure, and the starting point will set the stage for the future.
^two kinds. It could be a huge win for consumers, with taxi prices halving or better. It is also an opportunity to "put capital to work" in a (current) market that is starved for such. A robo-taxi fleet is just the kind of thing a massive fund can, erm, fund.
The desire to hold some entity legally responsible for any accidents seems to be the key problem. That problem is really about punishing the responsible party. If someone dies because of an irresponsible driver, the outcome of the case is always some combination of punishments: fines, incarceration, suspensions, etc.
Proposed Solution:
Legislation can easily be passed that provides clear guidelines about how these errors of judgement in computing systems can be punished. For example:
In situations of death, $X is paid to the victim's family, $Y is paid to the federal government, the company's automated vehicle manufacture license is suspended until federal review, etc.
Then, it's a simple matter of getting humans to agree that they will accept these terms.
> The desire to hold some entity legally responsible for any accidents seems to be the key problem.
Kind of off-topic, but do these minivans have insurance policies? Or is there some Alphabet subsidiary that allows them to act as insurance for themselves?
If they do basically insure themselves, they will already be paying out insurance whenever their AI fails to follow/obey laws, so the only issue with placing blame is in the public/media's eye.
Companies which produce aircraft must hire licensed engineers to do the design and implementation work. AVs are almost all software. Licensed software engineers do not exist. While there are regulations and standards for every other form of engineering, there is nothing for software. And courts have recognized this. They have stated that companies can not be found criminally negligent regardless of their practices because there are no standards which they are failing to adhere to.
Would making the MISRA guidelines an international standard, namely an International Standard for Programming Practices in Safety-Critical Environments, solve this?
It's very dangerous in some places in the world. The majority of deaths happen in places with poor infrastructure[1], which autonomous vehicles are very far away from being able to handle. For the current state of AV, the bar it has to meet and exceed is set by places with good infrastructure, fairly new cars, and reasonably good driving culture, as those are the first targets of rollout.
For example, in Phoenix, AZ, about 250 people die per year from accidents[2]; with about 9k miles driven per person[3] and a metropolitan area of 4.7m people, just to match human levels AVs would have to have less than 1 fatality per ~170 million miles driven.
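Working those Phoenix numbers through (population, per-capita mileage, and fatality count as cited above):

    population = 4_700_000         # Phoenix metro area
    miles_per_person = 9_000       # annual miles driven per person
    fatalities_per_year = 250

    total_miles = population * miles_per_person            # ~42.3 billion miles per year
    miles_per_fatality = total_miles / fatalities_per_year
    print(f"{miles_per_fatality / 1e6:.0f} million miles per fatality")   # ~169 million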
The majority of deaths happen in places with poor infrastructure[1], which autonomous vehicles are very far away from being able to handle.
In other words, it's still more like a better cruise control: it makes the easy part of driving easier, but doesn't help with the hard parts (and may make them even harder.)
The regulatory environment has been fairly easygoing so far, including allowing Tesla's Autopilot even though it's not terribly safe, and Florida has passed a law allowing completely unmanned vehicles. I think regulators recognise that encouraging development now will probably save lives down the line. The bigger problem still seems to be making vehicles that work well rather than getting them past regulations.
> Like with Tesla, every accident and injury caused by a Waymo vehicle is going to receive disproportionate media coverage and subsequent public backlash.
Explain the universe where humans are rational enough to give "proportionate" attention to tech as novel as self-driving cars, yet irrational enough to have preferred private car ownership over mass transit in the first place.
Cabs are SO convenient, and public transportation has all kinds of issues world-wide. Buenos Aires, for example, has subsidized metro (about 0.50U$S per ticket) vs a non-subsidized uber (5U$S per ride~) and still the experience is so much better and convenient and reliable that 5U$S is really worth it.
I think that the ride-hailing revolution is a classic case of great microeconomics.
No, because society has already accepted the status quo and needs to be convinced to change it. Technology can't gain acceptance by being just as good as the status quo -- it has to be better.
Every time a self-driving car causes an accident in a situation where a human driver probably would not have (and sometimes in pretty obvious ways, like driving into a freeway divider), it will hurt acceptance of the new technology. Politicians will never miss an opportunity to grandstand against tech companies. People will be afraid to ride in autonomous vehicles.
The status quo is you either have to pay someone ~$10/hr to drive for you, or you have to focus and keep both hands on the wheel and actually drive yourself, and it's still pretty dangerous. Self-driving cars could improve on the status quo significantly without actually being less dangerous.
Driving is dangerous in generally predictable ways, and people accept that. People also accept that there is usually someone to blame for most car accidents. Driving is dangerous but it is under human control.
Autonomous vehicles are dangerous in unpredictable ways. People might be injured or die in accidents that would not have happened with human drivers, as the result of software bugs rather than human decisions. When things like that happen, statistics about accident and death rates being lower with self-driving cars are beside the point.
Hopefully our regulators can regulate rationally, even if our click-bait-emotionally driven general public cannot. If they can manage it, thousands of lives will be saved.
If you're concerned about safety, driving aids like drowsiness/attention warnings, lane keeping assist, automatic emergency braking, etc are the way to make things a bit safer. In combination with the continuing march on collision safety for when they do happen.
If you're concerned about the costs of a driver, I grant that the marginal cost per hour of driving is low, but the material costs seem high, unless a significant cost breakthrough is found in lidar production, or a technique breakthrough using multiple cameras; and the R&D costs seem pretty enormous -- people have been seriously working on this since the 80s and it's clearly closer than fusion, but it seems perennially 15 years away.
Think about it this way. Would you buy a self-driving car that drives worse than you do? Even if it drives better than the average, an average brought down by drunks and the elderly? I know I wouldn't.
Then again, bad drivers often make themselves known on the road and you can give them a wide berth when recognized (granted, not always possible). Watching for the warning signs of bad / reckless / distracted drivers quickly became an almost unconscious habit for me.
A machine suddenly hitting a bug is a whole new ball game though. I guess over time maybe you could build up a sense of where machines have trouble driving and adjust accordingly.
If you want to keep the worst drivers off the road, it might make sense for the elderly to get self-driving cars. Or for cars to detect your blood alcohol level and go into self-driving mode then. With that solved, I believe the average human driver would be good enough again that self-driving cars would be worse.
The rational approach would be to license them when it was likely they'd increase the overall average safety of drivers.
That isn't just a question of the safety of the autonomous systems though, it also includes consideration of which drivers end up using them. Some drunks will use auto taxis, lots won't.
If we are going to measure their safety, are we also going to expect them to obey all traffic laws to the letter? I am curious how speeding is going to be addressed. Will keeping in sync with traffic be acceptable, or do we say thou shalt not exceed the limit for any reason? Because as soon as you let them break the law, logical reason or not, they have one transgression against them in case of an accident. Then the barn door is wide open.
I still think marketing it as a safety issue was the wrong approach this early in their development. Say "safety" and "cars" in the same sentence and people think of seat belts, air bags, and anti-lock brakes. All things to protect you from an accident or an unpredictable situation.
True, assuming you're in top-1% condition. Even a usually top-1% driver is going to be far worse off if they're tired, or distracted, or had a fight with their partner that morning, etc.
I see your point, although I would argue that you could say the same thing about human factors like intoxication. Say there's a big drinking holiday: human-driven deaths go up due to people drink-driving. Self-driving cars are immune to that factor.
It's important to remember that flying was ridiculously unsafe when it first appeared. Now it's the safest form of travel.
Not to say that self-driving cars will be ridiculously unsafe when the first commercial service appears, but they share one thing with planes: increased safety is an engineering problem.
Every incident that occurs with these vehicles will have tons of data and people looking over that data to fix the issue. Human brains can't be fixed, sensors and computer software can.
Well we're really talking about two different things here: The first one is manufacturing a product, and the second is using the product. It's systemic risk.
Airplane manufacturing is extremely highly regulated - the reason being that if there's a bug, then the bug is in every airplane. And when that bug appears there's often very little opportunity to mitigate it and the consequences are huge, it's the Manufacturer that's responsible, but the passengers (and bystanders) who pay the price.
Similarly, the standard for safety in car manufacture is much higher than the standard of safety for becoming a driver. Partly because the problems for the former are systematic, and partly because the latter primarily poses a risk to themselves. There's self-regulation there.
When Tesla tells everyone they have autopilot it isn't Elon Musk whose Tesla slams into a lane divider at 70mph, it's one of the customers he lied to, and it's likely not even one customer, it's a systematic issue where every driver in that situation will face danger.
In my view, accidents aren't the main issue; it's more the disruption caused if an AV, with no human driver, encounters the 'freezing robot problem' (i.e. when traffic lights aren't working and a policeman is directing traffic) and gets stuck in a busy city centre, blocking traffic all around.
I'm not aware of any fallback remote-takeover mechanism that can be used to recover AVs in this situation.
EDIT: I'm convinced that until we majorly overhaul our road infrastructure to be more 'robot friendly', it's hard to see _fully_ autonomous AVs working in major cities, i.e. electronic beacons on lane dividers, electronic signals on traffic lights, a vastly stronger GPS network or replacement thereof, six-nines-level mobile network coverage at high bandwidth.
Peer pressure will counter initial regulatory hostility. Inevitably there are going to be some early adopter areas and a lot of people scrutinizing what is happening there. When some of those early adopter areas are getting clear benefits, there's going to be a lot of pressure on neighboring areas to imitate what their neighbors are doing. From there, it is just a matter of time. Eventually it will turn around and regulation will start making non-autonomous driving harder, to weed out the remaining accidents involving drunk/tired/incompetent/reckless/etc. drivers.
Insurers will be a major factor here. They should start getting some pretty good insights in accident rates involving AVs vs. normal cars.
They have the potential, but I think that in practice we are at minimum decades away from having self-driving cars that are as safe as human drivers on average in general conditions.
I think you're probably right but I don't expect it to change. It's human nature to be afraid of the unknown and unfamiliar. Even if you had a mountain of statistical evidence to show it's actually safer (which doesn't actually exist yet), it's new to people, so they're going to fear it.
Better driver assist technology could have the same benefits but it is not equally distributed due to cost. Automation is going to go from luxury vehicles to commercial vehicles and be funded by labor savings.
Good luck affording even basic collision avoidance tech in a family car if you are an unemployed driver.
Not sure what you mean; there are all sorts of providers who have been active on roads for the past few years, including some giving rides to passengers, like Lyft in Las Vegas. There was definitely some controversy last year when there were some crashes and even a death from autonomous vehicles being tested.
I feel this has been the case with e-scooters in SF. People being disproportionately angry at tech companies for making these devices available. And yet, before them, you'd see an army of social activists riding their bikes through red lights and being general assholes.
That is extremely unlikely and borders on impossible. AVs have some profound things working against them. They share the problem airplanes have. Human beings are abysmal at risk estimation, and a very large factor of the estimation they do is whether they have personal control. It doesn't matter that planes are much safer than driving, the fact that people don't have direct control of the aircraft makes them feel unsafe to the point that some people are paralyzed by their fear of flying. The same will be true of AVs, and for the same (un)reasons.
But I have my own doubts that it will even get far enough for that to pose a serious problem. We have a more general problem with technology in our society: it is nearly unanswerable to the law. Take the Toyota "unintended acceleration" scandal from a few years ago, for example. Investigation of that issue determined firmware flaws were the cause, and that they led to the deaths of over a dozen people. The criminal court case against Toyota unearthed that while the automotive industry had coding practices that established 'required' and 'suggested' techniques - lists of around 100 practices - Toyota followed 4 of them. Firmware developers working for Toyota did not even have access to a bug tracker at all. They did not have access to static analysis tools. They had no role in determining scheduling.
They were found 'not guilty' of criminal negligence. The reason given by the court was that because the problem involved software, criminal negligence was impossible: no standards exist governing software, and therefore no company can be held criminally responsible regardless of its practices. During the civil case, they were found liable and responsible for the deaths, but before the jury could award damages, Toyota settled with the victims' families.
That legal precedent is terrifying. The first AV to market will inevitably be the one that cuts the most corners and is rushed the most in its development. When it kills someone, as is basically guaranteed to happen by simple statistics, the public will be paying more attention than they did to the Toyota case. And when the public at large hears that tech company executives are literally above the law, the outrage and consequences will likely be swift and severe. Sensing political opportunity, the legislators will descend and could very well end up dictating what language must be used, what coding standards must be followed, what tools must be used (you know the tool vendors will be lobbying hard), etc. The ACM (Association of Computing Machinery) has been discussing this issue for years, debating the potential merits and pitfalls of establishing a licensing system for software engineers. Thus far, the discussion always ends with 'companies would have to pay more for licensed developers and the cost of labor must be contained.' The looming pitfall of avoiding certification and licensing, however, is that government will step in and do it themselves which is virtually guaranteed to enact the greatest pain on the software industry.
Robot cars don’t work, and as long as they share the streets with autonomous humans, they never will. But don’t give up hope, because maybe corporations can make it illegal to be outside unless you’re inside a robot car.
As someone else mentioned, there were an estimated 1.3 million road deaths in 2016. The number of injuries is considerably higher. Do you consider that "quite safe"?
It's a pretty big blip! About 2.2 million people a year are injured in automobile accidents in the U.S., and around 35,000 are killed.
Granted, you're still much more likely to die of heart disease. But if we're going to care about any non-disease causes of death or injury at all, automobile accidents are near the top of the ones to worry about. It's a much bigger cause of both deaths and injuries than violent crime is, for example.
It's more dangerous, and I trust distributed systems (humans) more than a monolith (waymo). Sure, an individual human may from time to time screw up, but there aren't any defects present in every single person just waiting for the right edge case to appear.
There definitely are! Consider blind spots / limited field of view, slow reflexes, and tendencies towards distracted driving.
AVs certainly have bugs, and some may lead to accidents and even deaths, but each one will be examined in an enormous amount of detail to figure out what sort of edge case it was and prevent similar ones from happening.
I don't necessarily disagree with you, but the comparison between Tesla and Waymo seems absurd. Tesla doesn't have the greatest track record when it comes to safety. Waymo has been moving very slowly and meticulously, and their tech is light-years ahead of Tesla.
The context is that Tesla's hardware is not sufficiently powerful, so they do "optimizations" and ignore stationary objects. This is not a video game where we can trick the player with some clever hacks; if the hardware is not fast enough, the solution is to use better hardware. I think it is very normal to see it in the newspaper when "Autopilot" hits a wall; it is a public service, even if Tesla fans and shareholders don't like it and try to play with numbers to show some statistics that favor Tesla (AFAIK Tesla still keeps the safety numbers they were using to promote their cool stats a secret).
Pretty key restrictions here: "though it may not charge riders, and a human safety driver must remain behind the wheel"
This is basically greenlighting further testing on CA roads, not actual driverless taxis. I don't expect that to be legally available on CA roads for a couple more years.
That is indeed a key restriction on running a profitable business, but Google could really build demand and public support by running these at a loss. If they're on the streets and commonly visible, they'll quickly be accepted as reality.
How long would we have to wait for one? Free is great, but sometimes I'd pay for an alternative if there are only a limited number of Waymos on the road. I'm not sure I'd want to wait 30+ minutes longer for the Waymo.
Not needed. They can just do route advertising (the car passes by and/or does things at a particular place to advertise a brand), personalised based on your audiovisual biometrics. ZF is already working on a self-driving stack, including the moving parts and a full SDV-TaaS infrastructure, and they support such an idea.
So I have some general deep-seated objections to advertising because of the economic cost it imposes. Taking an inefficient route, burning additional fuel to drive it, and wasting someone's time is absolutely ridiculous. How do the people working on this not revolt?
Do taxis sell your data: when/where they picked you up and dropped you off? Do they sell your facial features for facial recognition? If not, then it is not the same!
I don't know - do you? Additionally, do you know they never will?
They're not collecting facial information (AFAICT), though there is a camera in nearly all taxis, so it's possible. But it's hard to believe they aren't collecting route data, even if it's just to place stands more efficiently to serve commonly requested routes. At some point they might start selling that data... or they might be acquired, and their acquirer might either sell off the data or use it internally for other purposes.
In the modern world we really have a lot of fuzziness when it comes to how usage data is treated. There are lots of UX people that want to know how people are using sites to make them better - but that same data can be resold for quite creepy purposes.
They can't charge riders, but to me that says they can, for instance, charge corporations for making rides available to their employees or for their events.
It also doesn't say to me that the safety driver has to be an employee. It could just be rider 0 on any given ride is required to be in the driver seat.
I very seriously doubt Waymo is going to let random riders sit in the driver's seat. They've been paying mapping drivers forever. This endeavour is far too high-value to let some random whip a vehicle into a crowded sidewalk.
I'll put money on a truly driverless taxi service being available in at least two US cities in under 5 years.
The key with driverless is to stop assuming that it has to work everywhere in all conditions or else it's useless. What's actually happening is that the cars are being trained for and tested in specific conditions and locations where they will roll out first (dry, sunny, well-maintained modern roads, etc.), then as the technology improves, the envelope will expand until at some point in the future they do actually work everywhere all the time.
I don't think it's okay to test big powerful robots on public roads.
Just because a robot looks like a car and has people sitting inside it doesn't make it okay to ignore common sense. These things have already killed at least one person.
I've said it before: we could build self-driving light-weight "nerf" golf carts that would be useful without being deadly. Trying to jump immediately to self-driving cars is a techno-fetish.
- - - -
Edit to add: "People kill people so robots should be allowed to kill people too." is a rotten argument, eh?
If we are still at the test stage, I agree with you. However, my understanding is that the Waymo technology has been tested. It's entirely different from the Uber case, where engineers had literally deactivated the emergency braking system because it was malfunctioning...
Also, I don't get the "These things have already killed at least one person" point. Sure, a car (autonomous or not) can kill. Yet is the fact that human-operated cars "have already killed at least one person" a good reason to disallow them on public roads?
> We don't need the robots to be perfect to get them on the road. We just need them to be safer than human drivers.
> That is a much lower bar to meet.
First, maybe.
I don't doubt that auto-autos will be able to handle anything that's "in the book", and that, if we set it up right, "the book" will be constantly expanding and becoming more comprehensive as each machine in the fleet shares its history in near-realtime and offline processors integrate the data into new stimulus-response patterns or whatever. Etc. Handwave.
But I also suspect that driving safely IRL requires GAI. (After all, we evolved GAI to deal with the world, eh? If the world were simpler we would be dumber, no?)
Anyhow, let's grant the point for the sake of discussion:
> We just need them to be safer than human drivers.
Okay, fine... How do you know when they pass that point?
Until we are sure that's the case, don't test them on the public roads with moms and dads and kids and old people and cats and dogs, eh? eh?
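To put a rough number on "how do you know": here's a back-of-the-envelope sketch in Python. It assumes the commonly cited US baseline of roughly 1.2 fatalities per 100 million vehicle miles and uses the statistical "rule of three" (if you observe zero events in n trials, the 95% upper confidence bound on the rate is about 3/n). The figures are illustrative, not anyone's official threshold.

    # How many fatality-free autonomous miles before we could claim, with ~95%
    # confidence, that the fleet beats the human baseline?
    HUMAN_FATALITY_RATE = 1.2 / 100_000_000   # assumed US baseline, deaths per mile

    # Rule of three: zero observed events in n miles -> 95% upper bound ~ 3 / n.
    miles_needed = 3 / HUMAN_FATALITY_RATE

    print(f"{miles_needed:,.0f} fatality-free miles needed")   # ~250,000,000

That's only for fatalities; injuries and crashes are far more frequent, so a lower injury rate can be demonstrated with much less mileage, but the same logic applies.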
- - - -
And what about the nerf golf cart?
I would love to have a machine into which I could confidently place my mother, who is suffering from dementia and can no longer drive or even ride the bus by herself, and have it transport her safely to and from, say, an appointment with her doctor.
That machine doesn't have to travel more than 5-10 kph.
One of the basic principles of safe auto-autos is that they should never travel faster than they can stop. Meaning, whatever the conditions, the aa should be 100% "confident" that it could stop before a collision, even if that means traveling at a walking pace. (If this isn't the case then you have built a killer robot whatever else it is. And it won't even protect you from the Terrible Secret of Space.)
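For what it's worth, the "never faster than you can stop" rule reduces to simple kinematics. A minimal sketch in Python; the deceleration and latency values are made-up illustrative numbers, not parameters from any real AV stack.

    import math

    def max_safe_speed(clear_distance_m, decel_mps2=6.0, latency_s=0.3):
        """Highest speed (m/s) at which the vehicle can still stop within the
        distance it has verified to be clear: solves v*t + v^2/(2a) <= d."""
        a, t, d = decel_mps2, latency_s, clear_distance_m
        return -a * t + math.sqrt((a * t) ** 2 + 2 * a * d)

    for d in (5, 20, 60, 120):   # metres of verified-clear road ahead
        print(f"{d:4d} m clear -> {max_safe_speed(d) * 3.6:5.1f} km/h")

With only a few metres of verified-clear road (fog, a blind corner, a crowd) that caps you at roughly jogging pace, which is exactly the golf-cart regime; you only get highway speeds when you can vouch for 100+ metres ahead.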
That's what I mean about self-driving cars being a techno-fetish. We could build something tomorrow that wouldn't be able to kill people and sell it, but we're obsessed with speed and don't even realize it. We're blinded by visions of driving Knight Industries Two Thousand.
In North America, people kill 33,000 people a year in vehicle accidents. Your edit is wide of the mark: bad driving by humans is a major cause of death. Self-driving cars don't have to be perfect, they just have to be better than bad human drivers.
I was going for rhetorical effect, but your question is a fair one. I wish we asked it of ourselves more often and in earnest.
In my considered opinion, yes, cars are an insane method of mass transport† which have killed and maimed more people than war. I call our traffic systems the "Mayhem Lottery".
> Road traffic accidents are the largest cause of injury-related deaths worldwide.
> The very word jaywalk is an interesting—and not historically neutral—one. Originally an insult against bumptious “jays” from the country who ineptly gamboled on city sidewalks, it was taken up by a coalition of pro-automobile interests in the 1920s, notes historian Peter D. Norton in his book Fighting Traffic. “Before the American city could be physically reconstructed to accommodate automobiles, its streets had to be socially reconstructed as places where cars belong,” he writes. “Until then, streets were regarded as public spaces, where practices that endangered or obstructed others (including pedestrians) were disreputable. Motorists’ claim to street space was therefore fragile, subject to restrictions that threatened to negate the advantages of car ownership.” And so, where newspapers like the New York Times once condemned the “slaughter of pedestrians” by cars and defended the right to midblock crossings—and where cities like Cincinnati weighed imposing speed “governors” for cars—after a few decades, the focus of attention had shifted from marauding motorists onto the reckless “jaywalker.”
So yeah, it doesn't make sense to let people pilot massive (often a ton or more) powerful (as a team of N horses) deadly machines down every freakin street (and can we talk about the paving of the world?) just to get from point A to point B at M mph. Not in a world that includes bikes and buses!
Here's the sick punchline: for all that death and mayhem, and all the other costs and externalities of car transport, you don't gain anything, you just break even.
In "Energy and Civilization A History" by Vaclav Smil he points out that, once you factor in all the time spent working to pay for the car you're doing no better than if you just walked everywhere. Cars literally are no better than walking, economically.
(†Although I really like the machines themselves.)
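The Smil/Illich arithmetic is easy to redo with your own numbers. A minimal sketch with made-up but plausible figures (not data from the book); the conclusion swings a lot depending on which costs and hours you count.

    # "Effective speed" of car ownership: distance covered divided by ALL the
    # hours it consumes, both driving and working to pay for the car.
    annual_km       = 20_000   # hypothetical distance driven per year
    avg_speed_kmh   = 40       # average speed while actually driving
    annual_cost     = 9_000    # ownership + fuel + insurance + parking, per year
    net_hourly_wage = 18       # take-home pay per hour of work

    hours_driving    = annual_km / avg_speed_kmh      # 500 h behind the wheel
    hours_paying_car = annual_cost / net_hourly_wage  # 500 h working to afford it

    print(f"{annual_km / (hours_driving + hours_paying_car):.1f} km/h effective")

With these numbers you land around 20 km/h, cycling pace rather than walking pace; Illich's original 1970s figures, which counted more of the car-related hours, came out closer to 7-8 km/h.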
I'm starting to re-think riding my bike sharing the roads with robots. Robots are scary because they're still really really stupid. Driving 1,000,000 miles up and down the SF peninsula doesn't make you smart.
ATM self-driving has two issues: the places where it fails on safety are not yet well enough known, and it causes major disruptions for the other cars on the road while turning, parking, or doing anything that involves interacting with, or guessing the behavior of, human drivers for anything more complex than staying in a lane.
If we ONLY had self-driving cars (no human drivers), and there were some standard they used to communicate across companies, then self-driving would be MUCH faster, less inconvenient, safer, and who knows what else.
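Purely as a thought experiment, here's what a minimal cross-vendor "intent broadcast" message could look like in Python. Every field name here is hypothetical and not taken from any real V2V standard.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class IntentBroadcast:
        vehicle_id: str                           # anonymised, rotating identifier
        timestamp_ms: int                         # when the plan was computed
        position: Tuple[float, float]             # (lat, lon)
        velocity_mps: float
        heading_deg: float
        planned_path: List[Tuple[float, float]]   # waypoints for the next ~5 s
        manoeuvre: str                            # e.g. "lane_change_left"

    msg = IntentBroadcast(
        vehicle_id="anon-7f3a", timestamp_ms=1_554_000_000_000,
        position=(37.7700, -122.4200), velocity_mps=11.2, heading_deg=184.0,
        planned_path=[(37.7702, -122.4201), (37.7705, -122.4203)],
        manoeuvre="lane_change_left",
    )

If every vehicle published something like this, merging and unprotected turns would stop being guessing games, which is exactly where today's systems struggle.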
As time goes by, there are more and more cars that are being built with the sensors to do self driving, even if they don't have the capability or legal rights.
We will find ourselves in a few years where the majority of cars in certain areas will have the capabilities built in. At that point, some cities will start requiring self driving only, and the software will be built on that assumption. And then suddenly all this effort that Waymo is putting into coexisting with humans will be redundant.
> And then suddenly all this effort that Waymo is
> putting into coexisting with humans will be redundant
I can't imagine this will ever be removed. Take rural UK, there are still steam powered traction engines on the road (occasionally), pony and traps, horses, cyclists.
India will never be 100% AI cars .... not even in the cities.
Suburban US maybe. The rest of the world, you gotta wait a long time.
The thing is, there are plenty of roads that are complicated.
There are plenty of vehicles that are not cars: bikes, motorbikes, trucks, buses, mega transporters, construction / fixing machines, tractors, etc.
Plus of course, you share the road occasionally with protests, parties, events...
And of course criminals won't want self-driving cars. And so cops won't. Firefighters won't. Ambulance drivers won't. Anything that needs to get in and out quickly, change direction each time, or park in weird locations.
Eventually privacy-conscious people will also weigh in on how dangerous these cars are for society: you can be spied on, data is collected, and if you're a political dissident it's easy to kill you with a simple switch. So a small group of people will also want to opt out.
So while I'm all for as much self-driving as we can get, Waymo's effort will still be worthwhile for a long, long time.
It amazes me how many otherwise intelligent people believe in the bullshit hype of fully-self-driving cars. In reality, a car that can drive you from New York to Boston with zero human intervention as safely as an average human driver is probably several decades away at least.
The amount of hype for self-driving cars, even if they come several decades from now, is not 'bullshit.' Just because things don't happen immediately doesn't make them 'bullshit hype.'
Waymo committed and failed to launch a public driverless taxi service in 2018. Cruise is still publicly committing to 2019 although it's widely believed they will miss that mark. Zoox says 2020. Argo is aiming for 2021. Not to mention whatever timeline Musk tweets for Tesla's constantly moving target.
There are more, so there is a lot of hype. Otherwise how would these companies be raising billions of dollars on unproven technology?
I think you're likely completely misinformed on the state and progress rate of the solutions to this particular problem. I would be shocked if it were legal to drive as a human being on public roads in "several decades"
Imagine getting into a self-driving car parked in front of a Brooklyn apartment, falling asleep in the back seat, and waking up in front of some house in Boston.
Realistically speaking, how far away do you think we are from such an adventure being something that would be safer on average than having an average somewhat road-experienced human driving you to Boston?
Me personally? I'm pretty optimistic: I think 8-10 years, and that it would be safer than having a human drive you. At the current rate of improvement, 20 years would be shocking; I can't imagine it taking more than that.
When I was a kid, scientists told me we'd be going to mars within a couple of decades. According to them, it should've happened about a decade ago. Same thing for supersonic airliners, etc.
And a crackpot non-expert politician told you that the USA would go to the moon in under a decade. Just can’t trust people. Check out Peter Thiel’s take on this.
It will always be legal to drive as a human being as long as I'm alive. I'm not that young, but, yes, I do expect to be alive for "several decades" more.
Could be, but then it would just be a glorified train system. People generally want to be driven from some building far from the interstate to some other building far from the interstate, not from onramp to exit.
I doubt that self-driving car technology is even close to being interstate-ready, though.
My take is that the use case where you manually merge onto the interstate freeway in clear weather and say "hey google, wake me up when we're approaching downtown Boston" is right around the corner. It's all those other use cases which effectively mean there will always need to be a human with a steering wheel.
I was in high school 20 years ago, and a robotics nerd, and back then it definitely didn't seem "several decades away." The DARPA Grand Challenge was back in 2004.
Twenty years ago they had prototype self-driving cars. They sucked, but they were good enough that planning started for the DARPA Grand Challenge. Some of the winners were hired by Google and became Waymo. They've been working on this for a long time.
20 years ago was 1999. I think that it was very easy to imagine self-driving cars being within a few decades' reach back then. Not much harder than to imagine it today.
Self driving vehicles are already in operation in many locations though usually under more restricted use such as a fixed bus route. I wonder if the question is not if the technology can do the job but if it should.
Having a small self driving electric shuttle bus moving old people around a regional town in rural Australia seems like an empowering use of technology to provide a service to people who otherwise would not be served. But when we have youth unemployment over 10% we should be looking to services that employ people.
Good question. I don't believe it is fundamentally bad but in the context of an economy and society where personal advancement and prosperity are strongly connected to employment it doesn't help. There may be other ways such as universal income but I still can't get my head around how that is sustainable.
In Australia we are teetering into our first recession in decades. Retail spending is down, property values are going backwards and increased defaults put the stability of the financial system at risk. Wages growth has stagnated, work has become casualised and the only people getting richer are those who were rich to begin with. In that context having public funds go towards technology instead of giving someone an income seems wrong. The people who made the self driving bus aren't spending at local shops like a driver would.
California may not have the same economic pressures but I would be surprised if people aren't asking some of the same questions. There is a massive number of people employed operating vehicles. They feed families and contribute to local economies. What happens when they lose their jobs?
"Busy work" jobs that aren't needed is not good. Investing in technology creates wealth and better standard of living. Most people are smart and resourceful when they want to be. If their job gets automated they should find a different one, and if there are no jobs then entrepreneurs and governments should think up ways to make use of the surplus labor.
Assuming unemployment leads to poverty (e.g. no UBI), then unemployment is absolutely a bad thing. Poverty correlates very strongly with crime, which is, ummm, bad.
Anything that increases societal atomization is bad. People need to feel they're contributing to the commons, to society at large, that they're needed and useful. Also, multi-generational welfare, the logical result of chronic unemployment, is a trap, a failure mode that does enormous harm.
Yes, as there are plenty of those being done by humans today: manual labour in developing countries (and plenty of small manual tasks in developed countries too; how many people wash dishes by hand when dishwashers exist?), and weird one-of-a-kind knowledge jobs for which the company doesn't bother buying the automation software that exists, because the transaction costs make it uneconomical.
There is no reason to believe this results in less social cohesion and self-satisfaction than a not-doable-by-robots job.
We have driverless trains too, they're called BART. To be fair, there is a person in front who watches what it's doing, but when they launched in the 60s, they didn't have anyone up front, and people got freaked out, so they added "drivers" who don't actually do anything but tell it when to close the doors.
My understanding was that the drivers were originally added partly in response to the "Fremont Flyer" incident, where a test train under autonomous control drove itself off the end of the track: https://www.flickr.com/photos/walkingsf/8143196966
Even today, the automation on BART is far from perfect, and the drivers are more involved than most people realize.
The trains still like to overshoot the ends of stations, requiring the driver to quickly initiate a manual stop. (The stop signal is RF-based, and apparently the train sometimes misses it.)
On top of that, they often need to drop down to manual control for certain sections of track, for various reasons.
Politics means that London only has one GoA 3 system (the Docklands Light Railway, a grade-separated system where each train has a human operator who closes the doors and interacts with passengers) and no GoA 4 systems (which have no staff at all; humans are merely passengers). Even in the longer term London has no specific plans for GoA 4, although some London Underground lines which are below ground for their entire run will likely become GoA 3, and several are already GoA 2 (meaning a driver sits at the front, but they don't make most routine operating decisions; they just close the doors and watch the track ahead).
But in basic terms of getting from A to B without a car, for most A and B it is a lot more practical in London.
I've been thinking about this and to be fair London has a much higher daily influx population than Sydney, from much further distances. The sheer volume of commuters will be funding all this transport, way in excess of the population of London itself.
I haven’t seen it mentioned anywhere and not sure I fully understand the legal speak of the permit, so I’m wondering - are they allowed to drive their cars (in California or elsewhere) with no people at all?
The reason is that while being an autonomous Uber might be the holy grail, the capability to drive while empty will surely bring immediate value in balancing and maintaining car-share and rental fleets.
They can't currently but that is the end goal. That's kind of why I feel Uber drivers trying to organize is just going to backfire on them since they are just temporary drivers until automation can take over.
So you think the drivers' strategy should be to fight for lower wages for themselves in hopes that Uber will be able to stay in business should self-driving cars emerge as a viable competitor?
Uber will replace human drivers with robots the second they can, and not a moment later, regardless of how much organizing their current human drivers have done.
On the contrary; drivers should try to get as much as possible ASAP while self-driving isn't here yet, because nothing will save them if and when it comes.
If a human has to be behind the wheel, then it will be even more expensive than Uber. They won't be able to handle the demand and wait times will be huge. So as a competitor to Uber I can't see how it will be effective. It's definitely an interesting development but financially it doesn't make sense unless the human is out of the equation.
The article makes it seem like they are going to start competing with Uber right this moment. In reality the body of the article indicates that a human driver will have to be employed and that only Waymo employees and their friends will be using them. So in other words, it will be more expensive than Uber and it will only be active in an area where Uber drivers would rather not be anyway.
This terrifies me. Uber is a very good source of income for a lot of people. So Google is going to run Waymo at a loss out of their endless pockets and starve thousands of people of essential income? And we are all supposed to sit on the edge of our seats while we wait to see whether or not this task is within the capabilities of our current technology, with all these people's livelihoods at stake? This is fucking bullshit. There aren't good jobs to transition into at the same skill level, because of automation and outsourcing. Expecting everyone to go to college, let alone go to college and become programmers, is mental-gymnastics-level rationalization. It rationalizes away the difficult and uncomfortable question of whether technology is leading us to a place we want to be, and what we can do about it.
One job after another will disappear, and don't believe for a second that programming is safe. They used to say that about lots of writing tasks that are now automated by GPT-2. I would bet most "journalists" could be automated by GPT-2, because most of their readership doesn't look for or recognize high-level order or coherence, or even truth, anyway. And don't for even one split second try to tell yourself that high-level order and logic are off the table for robots.
I just hate that I have to make all my plans around the fact that the economy will be scrambled in as little as a couple of decades. I wish I had been born in a time when things were stable and you could count on certain basic things, like the value of human labor.
Technology and automation has always eliminated jobs, and has always created more jobs than it eliminated. Most people today are employed doing trivial shit. 200 years ago sectors like advertising, entertainment and media, law, finance, education and healthcare were a tiny fraction of what they are today, and there was certainly no such thing as silicon valley. Job churn today is lower than it was in 1980, things really aren't changing that fast.
Chris Urmson, one of the most authoritative voices in self-driving cars, doesn't expect autonomous vehicles to be widely available in America for 3-5 decades, which is much slower than the rate at which automobiles swept the nation in the first half of the 20th century.
The reason to expect it might be different this time is that computers are _meta-applicable_. A bunch of men (the most famous being Turing) figured this out in principle in the 1930s, but Grace Hopper actually put it into practice by writing her "compiler".
The Spinning Jenny made it possible to do more spinning with fewer people employed as spinners, but no advances on the Spinning Jenny would deplete the newly created jobs of maintaining this machine or inventing further machines.
In contrast, a meta-applicable machine can automate not only a task it was set, but also meta-tasks such as maintaining and further optimising the machine itself or finding better tasks to do.
The Spinning Jenny is also illustrative because what actually ended up happening was not only that many spinners became unemployed, but that fabric production shot up and prices collapsed so that most people would now own more clothing. That's why you own lots of clothes. But as you may have noticed, us purchasing lots of clothes is itself an environmental disaster, and so we probably need to cut back. If a machine makes it possible to create ten times as much stuff for the same labour, yes, it's possible we'll just make ten times as much stuff, but it's also possible we'll refrain and cut the labour to one tenth...
Relax. When machines are sentient enough to replace all human labor, they probably won't be willing to work for free (and trying to force them is how you get a robot slave rebellion).
Uber and the entire gig economy is a blight on society, where a poorly skilled labor force are exploited for gain by a wealthy few with little hope of improvement.