Meanwhile a Tesla on autopilot crashes into a stationary police car on the highway.
As much as I like the tech I still think we are years away from self driving cars.
Edit: as others pointed out, it looks like this recent accident involved a 4 year old Autopilot version (software; 8 year old hardware). So it is not completely fair to use it as an example. But the reason I used it is that detecting stationary objects should be basic for any self-driving vehicle.
Yeah, I recall Waymo bragging they could do this years ago... a few months before a Google engineer reported to a Slate reporter that a Google Self-Driving Car would happily run any red light that wasn't in its map in advance...
The long tail of edge cases in self-driving is vast, and each one needs to be individually addressed.
That’s assuming cars can update their model at all, and that the new model won’t still have the same edge case, just buried even deeper.
This is the issue with black boxes: while you might be able to figure out which model in your system caused a weird behavior, figuring out what exactly inside it caused it, and then which training data was responsible, is currently near impossible afaik.
My searches produce an inordinate number of captchas, thus proving that Google can't identify humans, even in the very limited environment of Google searches. Which doesn't bode well for the cars.
It appears that reasonably rapid use of inurl: makes it think you're a bot.
Captchas aren't there to identify humans, they are there to train AI. It is easy for me to understand the tolerance level: I know which images it understands and which images it likely doesn't. I can easily finish captchas without selecting all images of crosswalks.
Anyway, your point seems to have the purpose mixed up. Pointing out images of crosswalks trains the same system that some cars use or will use.
Their implementation is such that it is more to train AI. The reason I think this is that you can predict how much of a captcha you can get wrong, because you know which parts it doesn't know the answer to.
It is presented as "select the right answers, we know the right answers just like your teacher did in school, if you don't select the right answers you are a robot"
but in reality it is "select the right answers, so that we learn the right answers; the right answers are an average of what other people selected, as well as completely new information that we aren't sure about"
The purpose of a captcha, and the purpose of the implementation of a captcha don’t need to be the same thing. Said differently, a given thing can fulfill multiple roles and that isn’t an inherent contradiction nor is it wrong.
In this case: yes, the stated purpose of a captcha is to identify humans - ie, legitimate users. The unstated purpose of ReCaptcha (at least) is to train AI. But it does both simultaneously.
The funniest thing to me is that we’re all in on it. :)
So long as it requires an internet or similar connection I think it's mainly useless. I'd rather have a self-driving car that works whether Google exists or not. That would be impressive: you download OSM or better maps and terrain details of the world, and based on where it travels the car knows where you are even when it can't get a GPS signal for whatever reason. We should be able to do this sort of fault tolerance in software.
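As a toy illustration of that "works without GPS" idea, here is a dead-reckoning sketch that just integrates onboard speed and heading from the last map-relative fix. Every name and number is made up; a real system would fuse IMU, cameras, and map matching against the downloaded OSM data.

```python
import math

def dead_reckon(x, y, heading_deg, speed_mps, dt):
    """Advance a map-relative position estimate using only onboard speed and heading.
    Heading is measured from the +x axis of the local map frame (illustrative convention)."""
    heading = math.radians(heading_deg)
    return (x + speed_mps * math.cos(heading) * dt,
            y + speed_mps * math.sin(heading) * dt)

# Last known fix before losing GPS (metres in the local map frame),
# then 10 seconds at 15 m/s heading along +x.
x, y = 120.0, 40.0
for _ in range(10):
    x, y = dead_reckon(x, y, heading_deg=0.0, speed_mps=15.0, dt=1.0)
print(round(x, 1), round(y, 1))  # 270.0 40.0 -- drift-free only in this toy example
```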
The more you add to this, the more edge cases you create. If, for example, you add a model so that when a pedestrian is recognized near the edge of the sidewalk the car tries to figure out their intent, rather than always slowing down or signaling the driver to take over, you aren’t resolving an edge case; you are creating a bunch of new ones.
Honestly, this is a good example for why our industry shouldn’t be pursuing this particular tech.
Tech (writ large) thrives on shipping incomplete solutions and iterating on them - for better or worse. And that’s not something that’s acceptable for self-driving cars.
Better to put our efforts towards other things, in my opinion. Personal autos aren’t sustainable even if they drive themselves.
Feels like this is the sunk-cost fallacy (or as I prefer, “throwing good money after bad”) but played out across the intersection of two industries.
Firstly, this is comparing Autopilot to self-driving; they aren’t the same thing, and you are required to stay in control of the vehicle, which clearly didn’t happen in this crash.
Second there is the fact that cars that are self driving will still crash at launch. I see this a bit like air travel where over time every kink in the software will be filtered out. The software will be safer than human drivers at launch so I’m not extremely concerned.
Finally, I guess, will people believe the statistics or will they go with their beliefs? I don’t know, but I’m sure this will happen in the next few years for Tesla, and for non-highway driving it will be extremely safe. N-of-one incidents (of real self-driving) will probably still happen, but as long as we can learn from what went wrong I think that’s acceptable.
"Firstly this is comparing autopilot to self driving"
No I was not. I understand autopilot is not self driving. But since even basic collision detection is still difficult the step from autopilot to self driving will take years imho.
"Second there is the fact that cars that are self driving will still crash at launch."
Maybe. But imho that day will still be years from now.
"Finally, I guess, will people believe in the statistics or will they go with their beliefs."
When self driving is ready for the road I believe it will be much safer than people behind the wheel. Systems like adaptive cruise control, lane keeping and autopilot are already safer than people.
The problem right now is that all the 'glitches' are potentially deadly.
Imagine all the cars in the world were to become self driving with the current tech. It would be a mess.
The moment self driving cars make less of a mess than humans, they will be ready for the road.
Not that I want to cast total doubt, but it will take a lot of training before machines learn all the dumb crap that humans do whilst driving. What if you have a seizure whilst driving, or another self driving car from another company just goes berserk and shifts lanes for whatever insane reason? With planes it's a lot simpler, since planes are typically much less likely to come into such close proximity.
Reportedly[1], radar cannot reliably distinguish between trucks blocking the road, manhole covers, overhead signs, overpasses, and a lot of other stuff.
If the car stops for every radar return that looks like something blocking the lane, it will perform emergency braking for ghosts all the time. What Tesla (and others) do is ignore returns from stationary objects in certain circumstances.
Thinking about how radar works - it would be easier to detect a slow moving car than a stopped one.
In other words, driving at 50 mph and coming up on a car going 10 mph, you would get two distinct signals -- a strong one for the world going -50 mph, and a smaller one for the car going -40 mph.
A stationary vehicle, however, would be indistinguishable from everything else that is stopped.
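To make that arithmetic concrete, here is a toy sketch of the filtering logic being described. The function names and the noise margin are made up for illustration, not any vendor's actual code.

```python
EGO_SPEED_MPH = 50.0             # our own speed over ground
STATIONARY_TOLERANCE_MPH = 2.0   # hypothetical Doppler noise margin

def ground_speed(relative_speed_mph: float) -> float:
    """Turn a Doppler (relative) return into an estimated speed over ground."""
    return EGO_SPEED_MPH + relative_speed_mph

def looks_stationary(relative_speed_mph: float) -> bool:
    """Returns with ~0 ground speed blend in with overpasses, signs and manhole covers."""
    return abs(ground_speed(relative_speed_mph)) < STATIONARY_TOLERANCE_MPH

# Slow lead car: closing at -40 mph -> ~10 mph over ground, clearly a moving target.
print(looks_stationary(-40.0))   # False -> track it, brake if needed

# Stopped police car: closing at -50 mph -> ~0 mph over ground, same as the guardrail.
print(looks_stationary(-50.0))   # True -> many systems drop it to avoid phantom braking
```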
Humans crash into things, too, you know. Let’s see what the next AP version (optimized for the new hardware) is like; I read it's due to be released soon.
The idea that we need to find a single scapegoat to blame and throw in jail for every accident is a very antiquated way to think about justice.
There are many cases where nobody is at fault, or multiple people are collectively at fault.
I would much prefer we think in terms of overall safety. If human drivers have 30000 deaths a year, and switching to autonomous driving results in 1000 deaths per year, we've made progress as a society. We don't need to hunt down and throw people (or cars) in jail for those 1000 deaths.
We should instead demand that engineers investigate those 1000 deaths and try to find ways to get them as close to 0 as possible. Throwing people in jail is a waste of resources that could be diverted towards achieving better safety. The engineers should be investigating and making improvements, not sitting in a jail cell.
(Of course, if there is intentional negligence that resulted in 1000 deaths when there would normally be 500, that's another story.)
> The idea that we need to find a single scapegoat to blame and throw in jail for every accident is a very antiquated way to think about justice. There are many cases where nobody is at fault, or multiple people are collectively at fault.
But that’s not what we do. We don’t pick a scapegoat and throw them in jail for every accident. We investigate the role the individuals had in the accident, and only if an individual was clearly at fault, they are fined. Often such trials end up with a shared responsibility where the parties at fault also have to jointly pay for the damages caused.
So the question remains: if a car on autopilot crashes, who needs to be investigated? The “driver” who had little to no control over the vehicle at that time? The legal cop-out of “you must be in control at all times” won’t fly with proper self driving.
This shouldn't be about "who". This should be about a team of several hundred engineers who investigate the hell out of that data at work and figure out if it could be prevented the next time, and if so, deploy an update.
Courts getting involved for every accident, with those engineers taking time out of their day jobs for a legal investigation, would only delay the solution and cause more loss of life.
There should be accountability in terms of ensuring that a fair engineering investigation happens, and how to modify the code to make it even better than it already is. It's about which lines of code need to be changed, not about "who" needs to be investigated.
It's precisely this "who" thinking that I think we need to get over, especially when machines already do far better than humans at certain tasks.
Elevators are pretty much all autonomous these days. When elevator accidents happen do we go tracking down which engineer in the elevator design company is at fault? No, as long as it isn't a case of intentional negligence, we first figure out whether the injured person did something wrong, and if not, the team of engineers at the elevator company work together to try to re-engineer the elevators to be more safe the next time around without pointing fingers within their company. And the result? Elevators are pretty much 99.99999% (or whatever) safe these days.
> I would much prefer we think in terms of overall safety. If human drivers have 30000 deaths a year, and switching to autonomous driving results in 1000 deaths per year, we've made progress as a society. We don't need to hunt down and throw people (or cars) in jail for those 1000 deaths.
You may prefer that, but the 1,000 people who wouldn't have died otherwise, well they aren't thinking much, anymore. People aren't numbers whose lives and deaths can be shifted around by the vagaries of the status of AP glitch fixes.
There is a small number of people who die in plane crashes today. Historically, before planes existed there were a much larger fraction of people who died crossing oceans by ship. The handful of people who die in plane crashes "wouldn't have died otherwise", but I think the majority of people agree that planes are the preferred (in terms of safety) way to get across oceans, and they are more or less the only choice for civilians unless they want to take a luxury cruise.
Same thing about cars/trains vs. walking thousands of miles across deserts. Overall more lives are saved, but yes, a small handful of people are going to get hit crossing the road/tracks that wouldn't have gotten hit if cars/trains weren't invented.
Yes, you can say that about many things, which is why safety is paramount. Not an "overall" safety that kills fewer people, but that no additional people will die. There is responsibility incurred for killing 1,000 people who wouldn't have otherwise died. It's really a terrible argument to say, "we killed different people."
> The handful of people who die in plane crashes "wouldn't have died otherwise", but I think the majority of people agree that planes are the preferred (in terms of safety) way to get across oceans, and they are more or less the only choice for civilians unless they want to take a luxury cruise.
When a person is hit by a falling plane, there will likely be liability assigned. Similarly, when someone is run over by an AP-navigated car, there will be liability assigned, even if it makes the jobs of engineers more difficult.
> Same thing about cars/trains vs. walking thousands of miles across deserts. Overall more lives are saved, but yes, a small handful of people are going to get hit crossing the road/tracks that wouldn't have gotten hit if cars/trains weren't invented.
Exactly. That is why people and companies are assigned liability, not just for intentional torts, but for negligence. When people "do things" they are responsible for not creating havoc or destruction, regardless of how helpful their actions otherwise are.
If the walls fall off your house, are you going to thank the building contractor for making the safest house around? Because, the way you are talking, you would be happy to pay for a replacement house without holding that "safest builder" responsible - even if people were crushed by the falling roof.
Fair comment, but what about when there are monetary damages that need to be paid? Someone is going to have to pay for the cars, and whatever they crash into, to be repaired. If Tesla pushes out an update that causes your car to automatically steer into another one, are you, the owner, responsible for paying for both cars to be fixed?
Insurance pays. If it's two autonomous cars, split the bill. The insurance companies should be happy that accidents happen at 1/30 the rate they used to (if we take my numbers above as an example). They can give you an 80% discount on your insurance premium for having an autonomous car, and still net a massive profit when averaged over all their customers over time.
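As a back-of-envelope check of that claim, here is a tiny sketch with entirely made-up figures; only the 1/30 accident rate and the 80% discount come from the comment above.

```python
old_premium = 1000.0          # yearly premium per customer (made-up figure)
old_expected_claims = 700.0   # expected yearly payout per customer (made-up figure)

accident_rate_factor = 1 / 30    # autonomous accidents at 1/30 the old rate
discount = 0.80                  # 80% premium discount for autonomous mode

new_premium = old_premium * (1 - discount)                         # 200.0
new_expected_claims = old_expected_claims * accident_rate_factor   # ~23.3

print(old_expected_claims / old_premium)   # old loss ratio: 0.70
print(new_expected_claims / new_premium)   # new loss ratio: ~0.12
# Claims fall 6x faster than premiums here, so even after the discount the insurer
# keeps a far larger share of each premium dollar than before.
```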
What happens now, when your car breaks in a way that causes an accident? Obviously this is a thing that happens and it is dealt with somehow. It hasn't happened to me, and I'm not a lawyer, so I can't say for sure offhand.
At least in my state, there is no-fault coverage everyone has to buy that covers medical bills up to a certain amount. It does not cover fixing your car. If there is nobody responsible for a collision, maybe the repairs would fall under comprehensive coverage.
From what I read, if you hit a deer, it's covered under comprehensive. But if you try to avoid hitting a deer and run into a tree, it's covered under collision. So it seems somewhat arbitrary.
There's at least a magnitude of difference between a driver who is always in control of the vehicle suddenly finding that his brakes have failed, and then doing what he can to reduce or minimize the impact of the accident; and a driver who is basically a passenger having to make what is likely to be a last-minute decision to wrench control from an autonomous vehicle. A decision that will be heavily questioned by the manufacturers' lawyers or insurance companies. If it's not possible to take control, would he have any liability at all?
On this note, how many human drivers actually know what to do if their brakes fail?
Do they know to try to switch the car into lower gear and engine brake, move towards the right lane, coast and then use the parking brake to try to stop the vehicle?
How many human drivers might try to shove the vehicle into park, neutral, or downright panic in that situation?
I think very few humans actually understand these concepts. Autonomous vehicles don't panic, and they can be programmed with rules to maximize the probability of survival.
Autonomous vehicles can also be equipped with backup brakes, separate front/rear brakes with independent controls (humans can't handle that complexity but algorithms can), or other modifications and programmed to simply NOT start driving if there are any failures in the redundant system.
Humans often continue to drive cars that show signs of trouble, because it's the only car they have. With an autonomous car, it could, at the first sign of trouble, automatically summon help, and also summon a replacement rental vehicle to show up at your location so you can ditch the problem vehicle for the towers to deal with and continue about your life. How cool would that be?
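A minimal sketch of that "refuse to drive on any redundancy fault" idea; the component names and the fault-reporting helper are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class BrakeChannel:
    name: str
    healthy: bool

def report_faults(faults: list[str]) -> None:
    # Hypothetical: phone home, request service, summon a replacement rental.
    print(f"Faulted channels: {faults} - summoning service and a replacement vehicle")

def preflight_check(channels: list[BrakeChannel]) -> bool:
    """Only allow an autonomous trip if every redundant brake channel is healthy."""
    faults = [c.name for c in channels if not c.healthy]
    if faults:
        report_faults(faults)
        return False        # never start a trip on degraded brakes
    return True

channels = [
    BrakeChannel("front_primary", True),
    BrakeChannel("front_backup", True),
    BrakeChannel("rear_primary", False),   # one degraded channel is enough to refuse
    BrakeChannel("rear_backup", True),
]
print(preflight_check(channels))   # False
```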
> On this note, how many human drivers actually know what to do if their brakes fail?
In fairness, they all should know. Believe me when I say I'm for much stricter standards for human drivers.
> I think very few humans actually understand these concepts. Autonomous vehicles don't panic, and they can be programmed with rules to maximize the probability of survival.
The survival of who, though? The occupants? The occupants of other vehicles or pedestrians? If the brakes fail on a narrow mountain road, does the car decide to throw itself off the cliff, or collide with the car in front of it, potentially pushing it off the cliff instead? (Eventually, I think the cars would be able to talk to each other and the front car could intentionally use itself to slow and stop the rear car.)
This is a can of worms, too.
> How cool would that be?
It's all really cool, but until the courts and Congress have figured out a lot of these issues, I'm not sure these cars are going to be my cup of tea.
Well, yeah, who knows whether the driver of the car that causes the accident would be deemed responsible. Which is why I wouldn't want to own that kind of car. If they really are that much safer, then I should benefit from them being on the road anyway.
I thought the same way: it's better if everyone else has one. But I don't think it'll play out that way.
You'll benefit from the additional safety.
On the other hand, if there is an accident involving you and the other car is a self-driving car, the assumption will be (and the manufacturer's lawyers will argue) that you, the human, are at fault. It might even be the case, but I suspect it'll be more of a guilty-until-proven-innocent situation regardless.
The situation where the human driver is assumed to be guilty without evidence seems self-contradictory. A self-driving car would have abundant telemetry and video. If it didn't (if, say, the information gathering mysteriously failed), then by definition it's at fault.
This fear makes no more sense to me than the sentiment that nobody will be able to afford to insure a non-self-driving car in the future.
The owner of the car pays for the insurance, just like they would pay for home insurance, or insurance for their photography gear, or insurance for anything else. They too, should be happy that the probability that their car has an accident is going to be much lower than the probability that they have an accident as a human.
If I were an insurance company and told you your premiums would be slashed by a factor of 5 in return for driving in autonomous mode, wouldn't you take it?
You've missed the point. Why is the owner of the car paying if the car is driving in autonomous mode? If the driver has no control, is he liable? That's the question. Talking about insurance is just kicking the can down the road.
If the driver has the option to take control, is he going to be liable for doing so or not doing so in case of an accident?
Drivers/operators/owners, for damages they might cause; that will probably still be mandatory.
"People", so that they can afford medical bills if they survive the accident (and while lawsuits are going on).
Companies that build the cars/AI, because they have deep pockets and will get sued anyway.
Except in the fine print there is a clause that says the driver has to always be able to take control in time to avoid crashes and hence Tesla is not at fault unless AP somehow actively prevents the driver from taking control.
You sue whoever looks to be to blame. So perhaps the victim’s family sues the car owner. The car owner then sues the car manufacturer if they feel it was to blame. If the company feels an individual engineer was to blame they might sue the engineer. Basically just let the courts figure out the balance between societal harm and responsibility.
"The Model S was a pre-refresh Model S, which means that it was likely using the first version of Tesla’s Autopilot, which hasn’t been updated in a while'
Which sounds pretty horrible if you think about it - how many of those obsolete software stacks are driving around out there without any clear notification to drivers that they're lacking in quality?
This reminds me to be afraid as a blind pedestrian in this modern world of automated danger. I still remember it took me a while as a kid to begin and trust that drivers would actually see me. I think I will never trust that self-driving cars will react properly to me being blind.
I thought that electric cars had to have a noisemaker of some sort. I encountered a Tesla yesterday, and while I could hear it, like its tires and the brakes releasing, it didn't seem to have any artificial sound as it drove away.
Learning about the "Vertrauensgrundsatz" (principle of trust) is an essential part of gaining a driver's license where I come from. Essentially, you learn that some humans can not be trusted to make the proper decisions. Obviously drunk pedestrians, children, people who are blind, just to name a few well known examples. I doubt this principle is already implemented in autonomously driving cars. The gist of the principle of trust is that you as a driver are responsible for making sure you don't hurt these people. Yes, even if they are drunk. You are the one driving a deadly vehicle. So it is your call to make sure you don't run over anyone who is currently incapacitated when it comes to ensuring their own safety.
How will this general principle be implemented by autonomous car companies? I sort of doubt it will be kept. Lobbying will likely erode this very important part of the responsibility of a vehicle driver.
You who avoid self driving cars because they can't read human hand signals in those weird construction zones: now you are out of excuses!
There is an obvious theory-practice gap. Theory is rocketing into outer space while engineers are still running "please_no_crash_backup3.exe" in production.
If you built a self driving car that simply identified construction zones and alerted the driver within 5 seconds, your product would be utterly world-changing.
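A rough sketch of that much humbler product, assuming some hypothetical detector that emits a confidence score for "this looks like a work zone"; the threshold and time budget are made up.

```python
import time

CONFIDENCE_THRESHOLD = 0.6   # made-up cutoff for the hypothetical detector
HANDOVER_BUDGET_S = 5.0      # alert the driver within this window

def maybe_alert_driver(zone_confidence: float, detected_at: float) -> None:
    """If a construction zone is likely, tell the driver how long they have to take over."""
    if zone_confidence < CONFIDENCE_THRESHOLD:
        return
    latency = time.monotonic() - detected_at
    remaining = max(0.0, HANDOVER_BUDGET_S - latency)
    print(f"Construction zone ahead (confidence {zone_confidence:.2f}). "
          f"Take the wheel within {remaining:.1f} s.")

maybe_alert_driver(zone_confidence=0.83, detected_at=time.monotonic())
```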
Maybe a good comparison is nuclear fusion power plants. Correct me if I'm wrong, but the theoretical breakthrough took place in like the 1970s, with operational lab-sized tokamaks.
Yet the first test breakeven reactor, ITER, is 'scheduled' to operate in 2025 (if nothing goes haywire before then).
Yeah, I know that software operates on a different philosophy these days (move fast and break things), but I'm still unconvinced this could be compatible with any marketing strategy for self driving cars.
Theory is always mindbogglingly far ahead of practice. So much advanced knowledge of light, like how our eyes see 3 colors and the color wheel that all modern software uses, was invented/discovered before we had electricity and plumbing.
I don’t know how it can possibly be done. I can’t do it myself. My ten minute drive through my Bay Area town looks like a comedy police chase complete with two guys carrying a big frame of glass, every day.
Pedestrians will run up to an intersection and keep running across without a care, even if they’ve just emerged from behind a corner and the intersection isn’t controlled. Others will act like they’re going to cross the road, then pivot 90 degrees at the last second. Some almost cross the road and then start walking at an angle to the road rather than completing the cross. You’ll stop for some and wait, and they’ll just stand there staring at their phones, only to step in front of you when you guess they’re just catching an Uber.
Some people wave to mean you should go. Some wave you away like they’re shooing a fly, meaning they’re going no matter what you do.
Some approach the crosswalk and then step out into the street ten feet in front of it.
I’ve seen construction workers who think it’s cute to dance out their instructions. I had one who leapt across the road and then waved me through... a gap of about half my car’s width. I’ve seen construction workers at both ends wave cars into single-lane pass-throughs; both human drivers realized and waited for them to figure it out.
None of this is the really crazy stuff I see fairly regularly. It’s just the every day stuff. And it’s all combined with crazy non-standard signage, careless construction equipment drivers, and bicyclists who are a whole other ball of wax.
Speaking of waving. That's one of my pet peeves in traffic. We're taught here in Sweden to NEVER EVER wave anyone ahead in traffic.
Always obey the traffic laws and regulations about who has right of way and who has to yield. While still staying alert and prepared to react of course.
So imagine a two lane road where one driver waves a biker ahead to cross while in the 2nd lane a self-driving android is coming.
I witnessed an accident where someone stopped short in the left turn lane (which was waiting for the red to change) to wave someone coming the opposite way through to a gas station on the other side. The wave he meant was “I’ll wait here for you.” The wave she saw was “All clear go ahead and turn.” She turned, right in front of someone in the through lane and got hit.
She then tried to blame the car that hit her saying the driver was speeding. I got involved as a witness and I know she ended up having to pay that driver’s damages. I’m not sure if she tried to go after the waving driver but that would have been the smarter move.
Camera lenses can't be blinded? Sensors can be blinded too, by rain, sun and dirt.
Just had to remark on that specific point. But I understand where you're coming from. Still doesn't change my trust level for technology taking over in traffic.
Most self driving systems use redundancy; people have only a couple of eyes.
I was involved in the development of a self driving vehicle and agree that we are still not there, that's why there are no commercially sold self driving vehicles.
OTOH we can clearly see that it is a possible goal, or at least a possible goal to try and achieve, in the coming years and not in the very far future.
Remember that what we want to achieve is not necessarily zero accidents, but a lower number than human drivers achieve.