Alternate headline: Some people overstated how simple this problem was because they didn't know what they were talking about, could promote themselves as experts, could get funding for a self-driving startup or some combination of the above.
I guess that isn't as pithy but it's closer to the truth.
When it became clear that Uber's strategy (under Kalanick, anyway) was premised on replacing drivers with AIs before the cash ran out, I couldn't see general self-driving vehicles coming within 20 years. I still say that's true. AI assistance? Sure. But there's an uncanny valley there too, where the AI will be good enough in most circumstances that drivers lose attention, and people will die. You already see this with Tesla Autopilot.
Here's a simple counterexample to the idea that self-driving cars are "just around the corner": in NYC, quite a few buildings have doormen. This is great for residents. Part of this is dealing with deliveries and so forth, but there's also an issue of general security. People can sneak in (and I'm sure do), but just the fact that a human is there acts as a strong (but not complete) deterrent. Just like having a dog is one of the most effective burglary deterrents.
What prevents a lot of bad actions on the roads is actually fear. Fear of what other drivers might do. Fear of road rage by other drivers. That sort of thing. This is just how humans work.
Once a driver knows the car next to them isn't driven by a person it changes their behaviour. They will do things they wouldn't do if it were a human behind the wheel, particularly because they know an AI won't ram them, cut them off, yell at them and whatever. There's no fear there. Even if there's a passenger in the car, it's still (psychologically) different.
How do you program around humans changing their behaviour to take advantage of there being no driver in your car?
>Alternate headline: Some people overstated how simple this problem was because they didn't know what they were talking about, could promote themselves as experts, could get funding for a self-driving startup or some combination of the above.
I jokingly call it management myopia: anything I don't understand can't be that hard.
That sounds more like "any use cases I don't understand can't be that important", i.e. people who use software differently than the way I designed it are wrong.
> Once a driver knows the car next to them isn't driven by a person it changes their behaviour.
If the car next to me isn't driven by a person, but is bristling with high resolution cameras that will immediately upload footage of my face to the authorities if I collide with it, I would indeed change my behavior.
I wear a helmet cam, and yes, as an individual it's basically impossible to get traction, even when you have a clear plate and face shot. But when the self-driving car company has thousands or millions of these incidents on record, they'll quickly find themselves wielding enormous power.
They'll be able to correlate across time and space, similar to how phones running Google Maps are reporting their position and velocity to create up to the minute traffic overlays. They'll be able to approach politicians and police forces with a message like "hey, want to get serious about safety? These are the top one hundred drivers in your area who need to be taken off the road now— click through to our portal to find dozens and dozens of videos of each one speeding, weaving, failing to yield, running stop signs, etc."
Each individual incident may be hard to prosecute, but when you have all of them in a bundle, a few a week for months at a time, it'll become impossible not to act on it. When crashes happen, they'll be able to shame the jurisdiction after the fact by publishing dumps of the incidents leading up to it that were not acted on.
Heh, imagine "vigilante" network justice. Oh, hey, you messed with our cars one too many times. We'll arrange for some cars to line up in front of you and drive 25.
They would just as readily report their own customers. Covering for them is a liability, and they'd probably have another way to tell anyway. I got an automated parking fine from a shared car a few weeks ago - it's real.
Eventually, money. You have to imagine that info about dangerous drivers would be tremendously valuable to insurance companies as well, if it could be shown to predict crashes.
In my experience, I have never been surprised by overly cautious drivers, only drivers who have an excess of confidence. How do you define 'excess caution'?
For the most part I don't really disagree with you, but there are ways drivers could behave unpredictably out of what might be called an excess of caution. For example, my wife just yesterday told me about something that happened to her that day. She was waiting to turn right from a side street, behind another car also turning right. The first car started their turn, she pulled up, checked that there was ample room, and so started to turn in behind them. Then the car in front slammed on their brakes, stopping partway into the lane. It seems they did this because there was a car approaching in the distance on the main street.
The thing is, that car was far enough away that both cars could have easily turned in front with room to spare. My wife had seen it before starting her turn, and realized there was plenty of time. In fact, after a few seconds, the driver in front of my wife realized just how much time they had, and finally made the turn, still before this other car arrived. Because my wife is an attentive driver this wasn't a problem, but you could certainly see a driver in her position starting to make the turn while looking to the left, and rear-ending the car that suddenly stops for no apparent reason. Of course they would be at fault for doing so, but the first driver's excess of caution, if you want to call it that, would also be a contributing factor.
Been surprised plenty by “overly cautious” drivers, mostly having to do with slowing down or stopping unexpectedly.
Folks that hit the brakes crossing a green light. Braking well before the off ramp so they are going 40 on the highway. Slowing down while merging.
The worst ones are those that are scared to go but decide too late. Like stopping in the middle of the intersection while turning left because there is a car approaching in the distance, but deciding so late that they end up blocking the left lane going straight.
As a frequent pedestrian in Seattle, overly cautious/"courteous" drivers are a big nuisance. Particularly drivers who are over-eager to stop at crosswalks, sometimes parking in the middle of the intersection to wait for the crosswalk on the other side. Some drivers around here have a habit of stopping prematurely too, waiting for me to cross before I've even gotten to the intersection.
Sometimes it gets truly absurd. Once I was standing on the sidewalk at an intersection waiting for an Uber when a driver stopped at a green light and started honking at me, furious when I refused to cross the road.
In Seattle all unmarked intersections are considered pedestrian rights-of-way, by law. There have been campaigns to promote this. Ten to twenty years ago it was not uncommon for pedestrians to throw tantrums and shame drivers for not stopping for them, usually in Fremont and Capitol Hill. That may explain some of the behavior; possibly the drivers you encountered were native Seattleites with intersection PTSD.
But I generally agree with you. It's often egregious, and can be extremely dangerous for both the pedestrian and other vehicles when a lone driver stops unexpectedly on a >2 lane road.
In some states, you can and will be ticketed if you don't stop whenever someone steps off the curb. I seem to remember hearing or reading that is the case in Massachusetts, so you may see drivers from out of state being very cautious even elsewhere.
Also, the other day, I was making a left from a one-way street, from the left side of it, and someone tried to drive around me on the shoulder to the left, because they were unaware I was waiting for a jogger pushing a stroller to cross in front of me.
I almost got run over today on Capitol Hill because of this. A driver in one direction was so insistent that I cross in front of them that I failed to be sufficiently attentive about the other direction and almost got hit.
In Portland, OR we have 4-way intersections where the stop signs alternate every 2 blocks. Drivers still tend to stop at every intersection, even when they have no stop sign.
Ha, I don't come to a complete stop, but I'm guilty of slowing down at the intersections without stop signs because I've experienced enough instances of cross traffic blowing through their stop signs. In the city I think it would probably be better if all intersections were four-ways.
>When it became clear that Uber's strategy (under Kalanick anyway) was premised on replacing drivers with AIs before the cash ran out I couldn't see general self-driving vehicles coming within 20 years. I still say that's true.
Even so, this never made sense as a business strategy, unless Uber somehow has a comparative advantage in achieving self-driving cars and then can get a long monopoly on them.
If they don't -- if others have SDCs around the same time, then sure, their costs go way down, but they can also charge a lot less, because the competition has the same "advantage"!
It's like:
"Hey, man, do you really think your grocery store can make those superhuge profits when you sell at the wholesale price?"
'Oh, no, I have a plan. You know those barcodes they're putting on products now? I'm going to invent a way to integrate that with the checkouts, and boom, I don't have to pay for people to put price tags on. Labor costs go way down!'
Same problem: "No, you're not. Someone better at that will, and they'll sell the tech to all grocery stores."
There was never a reason to believe Kalanick's operation as it stood in 2010/11 had that comparative advantage, so that strategy never made sense. And today, not surprisingly, Uber is one of the worst at SDCs, and is correspondingly unlikely to have that advantage.
I think you mean competitive advantage, not comparative.
I got a D in my only economics class, but I think that I've seen comparative advantage used to mean that "X is better at doing Y than doing Z", whereas competitive advantage is "X is better at doing Y than Z is at doing Y".
If you're not using it as jargon, then the fact that it is jargon might be confusing.
If you’re going to insist on proper jargon, you might want to refresh first. Comparative advantage [1] in X means you suffer a lower opportunity cost (forgone profit from what you could have done) to produce X than others.
A canonical example would be the doctor who is a better secretary than their secretary (produces more value per hour, looking only at secretarial value). The secretary still has a comparative advantage in secretarial labor because they forgo $0 of doctor income to work that job, while the doctor would forgo $100/hour they could be earning as a doctor in order to work as a secretary.
This is true even though the doctor has an absolute (or competitive) advantage in secretarial work.
The real test for whether something is financially optimal is whether you have a comparative, not absolute, advantage.
Uber not having a competitive/absolute advantage in SDCs would also be a reason not to do it, because of the details of this case, but comparative advantage is correct as well, though perhaps an unnecessarily strong criterion for the point I needed to make.
"Comparative advantage [1] in X means you suffer a lower opportunity cost"
Yes, however I don't see a relevant difference between that and my phrasing.
It still looks to me as though you were clearly describing a lack of competitive advantage, while using the word comparative. Sure, you can discuss comparative advantage if you want.
Counterpoint: personally I don't avoid doing things on the road from fear of the person behind the wheel but for fear that their reaction will cause an accident (swerving, slamming on the brakes, etc). Self driving cars have the same problem and honestly are probably less predictable.
This might be due to the fact that I don't really care much about road rage. At least where I live, the chance of someone getting out of their car and actually escalating it into a situation that matters is basically 0. When I lived in a major city, I felt the same way but for different reasons (most people in big cities Don't Have Time For That). But I can't imagine it's an isolated opinion.
I had that opinion too, until someone decided to get out of their car and escalate a situation that really shouldn't have been such a big deal. But they were macho in a car oozing with hormones so yeah, it happened. Even so, I'm personally more motivated by financial damage, the headache of dealing with insurance, and the consequences of being found to have broken the law, than I am by the threat of road rage.
> it became clear that Uber's strategy (under Kalanick anyway) was premised on replacing drivers with AIs before the cash ran out
This is kind of nitpicking, but this has always been a backup plan at best for Uber. Look at their expenditures and it is clear that their strategy has never been "all in" on self-driving.
Their main plan was and basically still is a combination of driving competitors out of the market, increasing their efficiency, raising prices, and lowering payments to drivers, before the money runs out. If self-driving cars happen, it will be a tremendous boon to Uber's bottom line, but it is not a necessity.
As usual, the article conflates the (very difficult) problem of self-driving everywhere with various other sub-problems.
The self-driving problem on highways is much more solvable, and can further be improved by convergent infrastructure evolution (forewarning of traffic jams from central control, embedded RFID to aid pathfinding in bad weather, weather reports, wildlife nets/barriers, highly standardized signage, internetworked cars). It would solve and serve a number of functions that are extremely valuable: efficient logistics, safety automation of boring long-haul driving, and better utilization of infrastructure (automated overnight driving when the interstates are much less used).
I don't really care if my trip to Taco bell is automated. Sure it would be nice, but that is so insanely difficult compared to automating a 400 mile trip on an interstate.
When they can fully automate a 400-800 mile interstate trip so that I can sleep in my car while it drives overnight, it will massively disrupt the airline industry in so many positive ways.
But the article is from a sensationalist news source, so what can one expect.
>When they can fully automate a 400-800 mile interstate trip so that I can sleep in my car while it drives overnight, it will massively disrupt the airline industry in so many positive ways.
This could be implemented today with train tracks and automated trains.
Someone will comment that there is not enough profit, but are the roads making a profit?
If you have the money you can travel A class, much more comfortable than sleeping in a car or an airplane.
I am not against cars, so don't reply with some examples of why you need a car, I understand that... my point is that for long-distance transport where you could sleep, rest or do something else, trains could be done today where AI-driven cars would take at least 20 years.
I suspect that part of it is that a lot of people in the tech and tech journalism space live in and around cities. Many of them are mostly focused on being driven around and not owning a car.
Long highway drives may be something they don't do that often. So the ability to own a car that can automate that subset of their driving probably doesn't strike them as all that interesting.
Personally, as I've said elsewhere, this seems like a much more tractable problem and gives me a lot of the total benefit of self-driving.
It's way more difficult from a fleet operational perspective and more importantly, way less profitable. Take LA<->SF as an example. That's about 6h driving, and buses do it for $25-50. That's a measly 7-14 cents per minute. Once you've made the trip, the company has to find someone going the other way. If there's any directional or temporal imbalance, the fleet ends up with a lot of cars sitting idle or driving hours to be useful, further reducing utilization. If the car has an issue halfway, now someone has to pick the customer up hours from a company facility and get them to their destination. All this for pennies. You don't even get an easier problem, because you still have to drive in the city to drop people off.
Compare to city driving: 10 minute drive, $5. That's at least 5x more profitable and takes less capex. Plus, if anything goes wrong, your facilities and backup vehicles are already nearby.
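For what it's worth, a rough sketch of that arithmetic (Python, purely illustrative; the fares and durations are the estimates quoted above, and "5x more profitable" presumably also folds in capex and utilization, which this ignores):

    # Gross revenue per minute, using the rough fares quoted above.
    def cents_per_minute(fare_dollars, minutes):
        return fare_dollars * 100 / minutes

    hwy_low = cents_per_minute(25, 6 * 60)   # ~6.9 c/min (LA<->SF at $25)
    hwy_high = cents_per_minute(50, 6 * 60)  # ~13.9 c/min (LA<->SF at $50)
    city = cents_per_minute(5, 10)           # 50 c/min (10 min hop at $5)

    print(f"highway: {hwy_low:.1f}-{hwy_high:.1f} c/min; city: {city:.0f} c/min")
    print(f"city earns {city / hwy_high:.1f}x-{city / hwy_low:.1f}x the gross rate")

So the city hop grosses roughly 3.6x-7.2x per minute, before even counting the idle-repositioning and rescue costs described above.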
I'm assuming individuals still own cars with autonomous features (or rent them like they do today). Which is sort of my point. This capability doesn't really help those who want (or need) to be driven everywhere. It's a convenience (and safety) feature for car owners. It would be a very nice one. But it doesn't really help eliminate car ownership or the need to drive.
Basically, the vast majority of people take very few 400-800 mile interstate trips, so that doesn't matter [conversely, the trucking industry does, so long-distance trucking would be a strong market]
As you said the Taco Bell trip doesn't matter.
The one area where it does matter is for the longer distance commuter--the unpleasant experience of spending an hour every morning and every night in stop and go traffic.
I always thought Uber’s strategy was to get large enough for the VC and PE guys to be able to offload the whole pile of steaming garbage onto the index funds.
That is an interesting point. I can see that human drivers will be much more willing to drive aggressively (i.e. play chicken) if they know the other driver(s) are AI and will take evasive action.
For example, I avoid doing things on the road that an inattentive other driver might be slow to react to and hit me, such as suddenly cutting across 3 lanes of traffic.
Exactly. It's a form of bullying. Taking a swipe at cars that are under AI control is a form of intimidation just like "rolling coal", flipping the finger, throwing coins at other cars, ICE pickups parking in EV charging spots, etc. So while it makes sense to me that an AI car should do something to indicate it is one to other drivers (e.g. a quick double-pulse of the hazards every 10 seconds, perhaps), that comes with a very real possibility of greatly increasing "road rage" incidents and even potentially undermining what driving assistance is meant to be for.
For anyone else who was confused like me, ICE here = internal combustion engine, not Immigration and Customs Enforcement. I was wondering what Immigration's connection to electric vehicles was.
No, that is reductio ad absurdum. He is simply saying that things which are too predictable are easily gamed, and this changes others' behavior in a game theoretical sense. Irrationality can have utility: see the "Ultimatum game."
Nah. You need to have an ALRU and a vehicle-to-vehicle comm net whereby all the self-driving AIs distribute amongst themselves information on jerk behavior, so the entire macropopulation of AI cars can wolfpack these drivers.
Now please laugh, put down the phone and don't even think about implementing this. It's a terrible idea.
I assume, by then, you'll see massive insurance discounts if you provide the feeds to your insurance company. They'll use these feeds to evaluate your risk and the risk of every car around you. Cars, especially human-driven ones, that are seen causing unsafe conditions will receive an eventual, or maybe immediate, notice: "Your premium has been increased for aggressive driving".
In countries where regulating things is understood as a purpose of government the availability of self-driving will end the "everybody needs to" justification for lax driving standards.
First thing you'll see is Continuing Education. Today for personal drivers testing is once and done. If you can hold in your instinct to aggressively cut off other drivers, pass on the inside and generally be a maniac for the length of the test you're set for life. There are small moves towards more continuing education (e.g. Speed Awareness requirements for people who keep getting tickets) but that'll speed up enormously once self-driving is viable.
Compare the regime for driving an articulated lorry (mandatory refresher courses, licenses automatically expire and you must be re-tested) to my grandfather being legal to drive long after he was in no physical or mental condition to be safe on the road.
Then I think you'll start to see tightening of basic requirements. That incautious fast turn you took becomes a Test Failure not a slap on the wrist. All fatalities result in lifetime disqualification. And when your lawyer says "My client needs a car..." the judge says "This isn't a license for a car, it's a license for driving. Buy a car that drives itself" much more often. Rich footballers who used to get away with this have already started to see judges say well, why don't you just hire a chauffeur? Self-driving tech would push this down to middle earners.
See, I come up with a tongue-in-cheek idea that, with a creative, could make for a great set of movies.
HighwAI or HAIghway or AI-95 or Night on the Interstate, where the plucky action hero has to save the day, with the help of an old car, a grizzled mechanic, and his sweetheart, from a malignant swarm of AI-controlled cars that have terrorized and paralyzed the country by killing any humans they sense near the roadway. Or a small town is terrorized by a swarm of autonomous vehicles that lurk along the stretch of highway between them and the next town, and some disaster necessitates a massive human-piloted convoy, protected by the town sheriff and traffic cops as best they can, until they can eventually make their way to a rendezvous with the State Patrol. Or teenagers having to avoid a gruesome death at the whims of autonomous vehicles on the way to Grandma's house.
Then you all have to bring insurance into it.
Seriously HN. Keep this up, and I don't think we can be friends anymore.
How that works will depend on whether the insurance company cares more about how you drive, or whether you cost them money in payouts. If there are no claims related to your policy, and haven't been for years, then it doesn't really matter how you drive.
In the absence of competition they might use it as a way to gouge you for more money, but there is no lack of competition in the insurance space.
> couldn't see general self-driving vehicles coming within 20 years. I still say that's true.
20 years is a long time; arguably you're already wrong, because self-driving vehicles already exist. But self-driving with the ability to replace Uber drivers will be here by 2030 at the latest.
If you’re talking about level 5 then I disagree with this. None of the technologies I see in use today can work in the same weather conditions that humans currently drive in. Things like snow, fog, and even heavy rain.
If you're talking about anything below level 5, then all we will be doing is making current human drivers worse. Some of the deaths so far have been people who knew the limitations of the technology and still got complacent. If it requires you to maintain constant supervision, then people simply won't do that, and people will die.
People obsess about weather conditions as a limiting factor for level 5 and it's bizarre. Bad weather conditions will favour robots over humans because the robot drivers have sensors that can penetrate fog and rain whereas humans have to rely on mk1 eyeballs that can't.
The real limiting factors for level 5 are less legible to humans. We understand the physical world via years of experience and good priors. Teslas crash straight into concrete lane separators and Ubers hit pedestrians that they know are there. In the latter case the actual reasons for the collision would strike a human as absolutely nuts - things like the algorithm not having a proper sense of object persistence or being terrible at commonsense reasoning. Yes, the Uber system was quite primitive, but the way that it broke gives us some insight into how AI drivers face a totally different set of challenges than what we would think of as "difficult conditions".
My interpretation of the Uber situation is that they were rushing in an incredibly reckless way. I think the Uber system described in the NTSB report is probably far worse than what Uber's more conservative competitors (Waymo, GM Cruise, etc) had at the time.
(Disclosure: I work for Google which is a sibling of Waymo)
Can you link to some of these sensors? As far as I know, anyone who knows what they’re doing is using Lidar at the moment which has significant difficulties with weather.
Obviously you don't have to operate the self-driving vehicles in all conditions in a ride-sharing service, because you have the option of operating human-driven vehicles in the worst conditions. [I also expect you would use human-driven vehicles to help out in peak demand periods--you don't want to spend big money on a self-driving car unless it is going to be heavily used.]
Aggressive driving will be captured and recorded by multiple cameras (potentially by multiple vehicles), making most court cases open-and-shut. I think that serves to put the fear in people.
Troll driving. Imagine testing/probing AI cars with the intent of getting them into an accident with you so you can sue for uber-millions and AI hysteria social media points.
You have to wonder, when you see these stories, whether the engineers in charge of these programs spent much time driving around, trying to see the road like an AI might. In my experience, driving in the city, it's rare that I can go more than a few miles before some exception comes up that requires human judgment. Here's a few of the things I saw just last week:
* Car double parked. Do I cross the center line to go around them?
* Stop sign that's been bent and it's no longer obvious which street it refers to.
* Semi with its hazards on in a left turn lane. Do I make a left turn around it?
* Cop directing traffic by hand at an intersection.
Most city driving is a series of continual exceptions to the rules, or situations that are one-off. Who thought this would be easy?
I honestly can't picture a self-driving AI that doesn't just amount to straight-up General Intelligence. Not only does driving involve predicting the trajectory of various moving objects (already very hard), it involves predicting the behavior of various other intelligent agents as well as communicating with them.
Especially that last one -- nobody seems to consider how much we communicate while driving. Several types of communication require having a body: flagging someone on, waving "thank you", eye contact, LACK of eye contact ("they don't see me, I better [prepare to] brake/swerve"). Some involve subtle movements of the car: rolling forward after a stop sign ("I intend to go next"), or hugging one side of lane. Even something as apparently binary as beeping the horn can indicate probably a dozen or more different things depending on physical context and length of the beep, not to mention local culture.
IMHO cars are for getting between cities, or between places in non-cities; within a city, the economical long-term solution is to just replace the roads (save for cargo corridors) with some kind of pod rail network, i.e. self-driving four-seater trolleys. The exceptions become far fewer if there's nothing on the "road" but other rail-bound vehicles.
In such a future, if you lived in a city but owned your own real car, there'd be no road to use to drive it from your house to the city limits; and so it'd have to live at the city limits in a garage—just as people with personal planes have to keep them in a rented hangar at an airfield. You'd take this fancy pod-transit to the City Common Car Park, summon your car down from storage, and then get right onto the highway.
> some kind of pod rail network, i.e. self-driving four-seater trolleys
If you go that far, the problem is already trivial right? Just making roads specifically for self-driving vehicles would make the problem trivial. What the companies are trying to accomplish is self driving cars in current conditions.
• cities can afford to "change the problem" to fit the solution (as they already have huge transit-system and other public-works budgets, and extensions to cities are mostly centrally-planned.) Some cities in Europe have already entirely banned cars.
• everywhere other than cities—e.g. small rural towns only connected via roads—doesn't have the resources to "change the problem." (But also, roads are optimal in this resource-constrained context anyway: you can spend very little laying down massively long stretches of road, pushing the costs onto the people who want to drive on them in the form of vehicle maintenance.) But "everywhere other than cities" also doesn't pose nearly as complex a problem in the first place as cities do (at least in AI-judgement terms, not in sensor-requirements terms.)
Thus, it makes sense to choose a hybrid solution, where cities gradually de-car themselves, while self-driving cars get licensed to work everywhere except cities.
Cities have the problem (congestion, limited parking) in the first place while rural areas generally don’t. Many cities are limited on resources so can’t change the problem as much as they want to, so generalized self driving cars would be a huge economic boon in that case. It is the cheap solution to an expensive urban problem, but getting there requires software that is smarter than what we have right now.
Rural areas simply don’t need self driving cars, so the hybrid solution is a non-starter, economically speaking.
I don't see why. Certainly, there's room for cars, and far more people outside of cities own cars and have room for them—but these are just questions of the capacity for car ownership, not the capacity for driving.
Consider: being unable to drive yourself places in the country is a far worse problem than being unable to drive yourself places in a city. Every job is far away and expects you to commute to it; there’s little-to-no public transit; and far fewer, far more expensive taxi/ridesharing services. Given the distances involved, walking or bicycling are impractical.
The single real solution to this problem, for the class of people involved (people too young to drive, people old enough their faculties have failed them, teenagers living in suburbs who want part-time jobs in the city, disabled people currently relying on privately-operated minibus service) is personal or family-owned self-driving cars. It’s essentially the middle-class equivalent of the accessibility advantage granted by having a dedicated chauffeur.
——
Also, I feel obligated to mention that specifically in the Rust Belt in the US, there are a lot of people who have lost their licenses because disaffection drove them to alcoholism, which led to a series of DUIs. These people want to work, but they’ll never again be trusted to drive—so how will these people ever get another job?
Cheap driverless cars—ones that don’t have to be smart enough to drive in complex city conditions, only along country roads to about town limits—are clear winners here. (The current attempted semi-solution to these problems is electric bicycles, but they just don’t have the range if you don’t already live pretty close to town. Most such people end up having to move, usually away from their families, which takes away even more of their support-network.)
Just like “why not just give homeless people a house”, a very simple solution to persistent joblessness in these areas is to give the people without driver’s licenses a car that doesn’t require a driver’s license (because it drives itself.)
And the cheaper such a car is, the more of them you can afford to give away on a constrained budget; so you’d better constrain the problem domain as tightly as possible, and ship as MVP-like a product as possible. (Analogy: did the OLPC need to be a powerful computer? No, not for anything it needed to do. So you can cheap out on hardware, and thus make more of them.)
Or rather, SDVs are a solution looking for a suitable problem space - which may turn out to be imaginary (or, more charitably, significantly more constrained, e.g. "long-distance freeway travel")
You are only thinking about the USA, but there are places outside of the USA where SDVs can be transformative. Specifically, huge Chinese mega cities with really really crappy traffic and pollution problems.
Am not, but your point has merit, if you mean a networked system without physical rails. As a drop-in replacement - I don't see how a human vs. robot driver, on the same vehicle footprint, helps congestion.
Imagine 10 cars all in a line. Each car can only see the car in front of it, and has a delay of 3 seconds between seeing the car in front of it move and accelerating.
Given these numbers, we would expect car #10 to have to wait 30 seconds or so before they start moving.
As congestion is a feedback loop (lack of car movement results in other cars being unable to move), self-driving cars with a coordination mechanism and microsecond-level control would reduce the problem exponentially.
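A toy illustration of that serial-delay effect (Python; the 3-second reaction time is the figure assumed above, and the ~1 ms coordinated latency is my own placeholder):

    # Time at which each car in a stopped queue starts moving, assuming
    # each driver reacts only to the car directly ahead of it.
    def start_times(n_cars, reaction_delay_s):
        return [i * reaction_delay_s for i in range(n_cars)]

    human = start_times(10, 3.0)        # 3 s human reaction per car
    networked = start_times(10, 0.001)  # ~1 ms coordinated reaction

    print(f"car #10, human drivers: starts after {human[-1]:.0f} s")   # 27 s
    print(f"car #10, networked:     starts after {networked[-1]:.3f} s")

With coordination, the whole queue can pull away almost simultaneously, which is where the claimed feedback-loop win would come from.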
There usually is “enough” space on the road, when you account for all the unused space on the road in the opposite flow direction to rush-hour traffic. Cars just need to quickly take you to work, then slowly drive back the other way. Maybe even borrow a lane of the highway for parking, which shouldn’t worry traffic controllers given that they could send a command to all the cars to get them going again as soon as traffic is going to pick up.
In other words, this will generate pointless traffic - unless your system has strictly unidirectional flows at any given time, you have doubled the problem in an attempt to eliminate it (plus incidentally doubled the energy consumption, which translates to cost).
At a talk a few years back, MIT prof John Leonard showed video he took on his commute back and forth to Brookline, highlighting all the points that are going to be really hard for self-driving to handle. (He was very skeptical about getting to full autonomy before it was fashionable to be so.)
It's not just cities but spend a few hours walking around Boston or Manhattan and just make a note of all the crazy things drivers do. (Sometimes by necessity.)
(Unprotected lefts in busy traffic are one challenging area. There's a certain social aspect to it, and the right decision in one set of circumstances is, in another, either way too aggressive or so conservative that you'll never make the turn.)
There's a standardized way for an officer to do this...alas, it will likely vary between states, and half of the officers will use some ad-hoc method anyway.
A civilian trying to direct traffic around their broken down car...now that would be harder.
> and half of the officers will use some ad-hoc method anyway.
A quick tilt/shake of the head is one I've seen often. Easy enough for humans to read that sort of body language, but probably a harder problem for computers.
For a mundane situation that's surprisingly difficult: toll booths.
The highway goes from N lanes to N*5 (or more!) lanes and then back into N lanes, usually with the lane markings being completely absent in the process. You have to pick a lane to get into, with your choice being potentially impacted by such things as "are you a truck?", "exact change only", or "I have an EZ-Pass", not to mention the potential for some lanes to be outright closed. The signage is likely to be somewhat unique and potentially counterintuitive to normal meanings--the lane under the big yellow light is the kind of lane I usually want to take, especially since it's usually higher speed than the lane under the big green light. Drivers are going to be more erratic than usual ("oops, I don't have EZ-Pass" *SWERVE*). And they usually occur smack-dab in the middle of limited-access highways where people might want to use self-driving cars.
I'm a self-driving skeptic but the toll booth problem doesn't seem logistically all that daunting, since there are so few of them compared to overall road miles. It seems like they could be modeled individually and directly.
* Typical school pickup/dropoff situations where cars will line up, often in the parking lane, to get to the pickup/dropoff spot. Sometimes the line goes around a corner.
And the amazing part is there are millions of drivers on the road right now that have absolutely no idea how to handle those cases, and do an exceedingly bad job of it.
From the very new drivers, to people who simply don't drive often, to people who got their license in another country where the rules are entirely different (me!), to the very elderly who are losing their sight, reflexes and judgement and people playing with their cell phone or otherwise seriously distracted.
It's a wonder only 35,000 people are killed on US roads every year.
> It's a wonder only 35,000 people are killed on US roads every year.
vs
> do an exceedingly bad job of it.
No, they're doing an amazingly good job of it. Given the cumulative number of miles driven and the conditions they're driven in, it should impress you rather than lead you to conclude they're doing a very bad job. If the current crop of AI were unleashed in those same conditions, the carnage would be unbelievable. Humans are very well suited to adapting to changing environments; it is the one thing that we have over the rest of the animal kingdom. From hanging from tree limbs to riding on the autobahn with the same software.
Respectfully, I think you're wrong, and I say that with about 30 years of driving experience and some years (fortunately long behind me) of crazy mileage; that experience includes Canada (including winter), a very large portion of the United States, Europe (both East and West) and Latin America.
People are great drivers if you take into consideration the conditions that they drive in, and that traffic deaths include all vehicle types, not just cars.
Yes, cars are safer than they've ever been. But there is also much more traffic than there ever was (both because there are more people, people are more affluent and some countries have been more or less designed around vehicle ownership), and infrastructure has not always kept up, and - again, in some countries - the car as a status symbol translates into 'right of way for the best protected', which leads to unnecessary accidents.
The number of deaths is - indeed - not the right metric to evaluate driver quality, instead, the number of deaths per total number of miles driven is much better, and further breakdowns into occupants, pedestrians, motorcycles and so on should be made before drawing conclusions. And on that scale - again, taking into account the conditions that people drive in - they are doing very well indeed, some countries excepted.
My own best tricks to avoid getting into accidents: don't drive when there is ice / snow / excessive wind, never drive when you're tired, stay away from countries where driving is a bloodsport, keep your car in excellent shape and never ever drive impaired, and that impairment includes cell phone usage.
FWIW I'd be totally for a law that punishes cell phone usage while driving with immediate vehicle confiscation when spotted, as well as instant revocation of the driving license of the driver.
>don't drive when there is ice / snow / excessive wind
For me, that's been one of the big wins coming from working from home at least some of the time for almost 20 years at this point. I'm pretty much fully WFH or traveling these days but even prior to that I rarely needed to drive in if conditions were bad.
Because of various circumstances, I've had to do a few fairly long drives in bad weather for personal reasons the past couple of years and it's not something I miss at all. (And I came way closer to having an accident because of ice last weekend than I am happy about.)
TBH, having driven in hailstorms, snowstorms and rainstorms, I'd expect SDVs to refuse to drive in such conditions - and that would have been an entirely sane decision. The conditions were literally insane for driving - yet most drivers continued, albeit slightly slower.
In other words, human driving is far more dangerous than we care to admit; that's not a technical issue, that's pure denial.
You can't always decide when you have to drive. If you're on the road and conditions worsen then it may be safer to drive until you can get off the road safely. I've been in rain squalls so heavy that you couldn't see anymore and that is when it gets tricky: you know you should stop driving but that risks being rear-ended by someone who disagrees with you. A self driving car would end up being rear ended just the same.
Stopping traffic only works if everybody starts out with the same playbook in mind.
But to start driving in those conditions (or even if those conditions are likely to occur) that's madness, unless you are a first responder.
Also, IIRC, most drivers are actually really good. A majority of accidents are caused by a small number of drivers. In other words, the median is far better than the average.
Not long ago there was a company that failed to perform the nearly impossible themselves, and when they ceased operations their message was clearly _we could not automate that which we could not do that well manually_.
I think there's an abundance of drivers that are almost always unsuitable for the most challenging situations, or for whom the roadway or conditions encountered are completely unsuitable themselves.
Collisions are everywhere and are dwarfed by near-misses. Fatalities are statistically limited by hazard mitigation, safety measures, and a seemingly larger portion of miraculous good fortune.
Some drivers are completely out of control, and some even like it that way. There is a much smaller number of mechanical or health failures.
These all can be the riskiest due to unpredictability.
We will have to admit that each out-of-control driver can exhibit perhaps completely unexpected and unique behavior from each other, because that's what they've always done.
That's not a very high bar for different out-of-control AI events by comparison.
Tragedies occurring from individual human driving deficiencies have been largely attributed to social effects, because the underlying mechanical engineering performance has been so overwhelmingly predictable by comparison.
With AI, different unforeseen tragedies are to be expected, and they will not be due to any recognized or imagined individual human driver deficiency. Attribution will fall squarely on engineering-team deficiencies in the programmable electronics that were overlaid onto the fundamentally predictable, 20th-century-proven mechanical platform.
There could be deaths that cannot be dealt with using natural human social constructs.
Being killed by an out-of-control robot is always going to be something that's supposed to be nearly impossible to occur, compared to being killed by an out-of-control human.
Anyway, with emerging programming technologies sometimes actually achieving highly suitable goals which were only reachable when combined with ideal support, and a sales/advertising mentality that can make up the difference when the incredible engineering effort has done all it can and the goal is not quite met nor suitable, everything's looking better than ever so what could go wrong?
I ran into this on my commute just last night. Cop car in the left shoulder, lights flashing. Cop on the right side of the road, waving a flashlight. Oncoming traffic crossing over into my lane and blocking my way. Finally I figured out that the cop was directing traffic and I had to wait until something changed. Eventually the oncoming traffic dwindled, and the cop turned around so I had a harder time seeing the flashlight - I assumed that meant it was my turn to go. I approached very slowly.
I never figured out what the problem was, this was in the middle of the road and not at an intersection. I was more concerned with doing the right thing than trying to determine the root cause.
It should be possible for current technology to (a) detect a cop in the middle of an intersection; (b) stop safely before the intersection; and (c) delegate control to a human - possibly someone in the car, possibly a remote one.
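A minimal sketch of that (a)/(b)/(c) flow, assuming hypothetical perception and teleoperation interfaces (every name here is a placeholder, not any real vendor's API):

    # Sketch of the fallback flow described above: detect, stop, delegate.
    def handle_manual_traffic_control(perception, vehicle, teleop):
        if not perception.sees_traffic_director():   # (a) no cop detected
            return                                   # carry on as normal
        vehicle.stop_safely(before="intersection")   # (b) stop short of it
        operator = teleop.request_operator(timeout_s=30)
        if operator is not None:
            operator.take_control(vehicle)           # (c) remote human drives
        else:
            vehicle.ask_onboard_human_to_drive()     # or hand off to a passenger

The hard part, of course, is making (a) reliable; (b) and (c) are comparatively well-understood engineering.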
I think you can reasonably expect an FSD system to perform on the level of a human. Asking it to go above and beyond that is too much. When it comes to situations that give humans trouble, you would rely on the same extra tools and processes that humans use.
* Car double parked. Do I cross the center line to go around them?
* Semi with its hazards on in a left turn lane. Do I make a left turn around it?
You would err on the side of caution, but ultimately you would go with the help of 5G remote assistance so that the passenger can stay napping.
* Stop sign that's been bent and it's no longer obvious which street it refers to.
How would humans handle this? They would rely on previous knowledge of the area, aka maps could help.
* Cop directing traffic by hand at an intersection.
Not challenging at all imo, this should be trainable.
When I was starting my engineering career, it was right around when the first DARPA challenges had started. The hype was beginning, and my optimism towards technology was strong. I thought the predictions and timelines would be correct, and I still feel strongly that self-driving will be safer than humans in the long term.
Recently, I bought a newer Subaru, with EyeSight. It has adaptive cruise and lane keep assist. The LKA is fine - it'll beep if you sway outside of a lane, and automatically adjusts the steering, but it won't keep you centered. It's more of a safety thing, and it works well from that perspective.
The adaptive cruise is really good. It's camera based, and I have had zero problems with it. It works well at night and in pouring rain. It'll even stay pretty close to the car ahead of you if you turn the "tolerance" all the way down. I'm always impressed.
Since I've had this car, I've thought a lot more about the practical implementation details of actual self-driving. I more often notice situations when driving that are seriously complex.
The more I think about it while I'm driving, the more I realize how fucking hard self-driving would be.
I tried out a relative's Subaru on a 6h drive over the holidays. I really liked the adaptive cruise control for following behind folks who were not keeping a consistent speed. I just set it to a reasonable value at the maximum following distance and stopped worrying about my speedometer.
However, at one point the guy in front of me turned off onto a small side road. It was at night, and I don't think the car realized he had moved into a turn-off lane. It slammed on the brakes. I probably went from 90kph to 40kph before it realized I was not going to hit that car.
I completely failed to react to the situation. I was worried my erratic braking would cause an accident behind me, but in the moment, I didn't know how to stop it. That was not a type of emergency I had considered or prepared for.
Yeah, this is interesting. My Tesla M3 behaves similarly, so I'm often ready to punch the accelerator. In the Tesla, this is how you solve that problem. The driver's push on the accelerator contraindicates the AI's decision to slow down, and so the car follows the driver's direction.
Where it gets dicey is the scenario where the "imminent collision" (hazards on, seatbelts tightened) detection is triggered, and the driver continues to push hard on the accelerator. Tesla has a fairly lengthy statement in the manual about this scenario. The bottom line is there are all kinds of heuristics at play that may or may not result in an override depending on the specific sequence of events.
I'm amazed by people like you. You're a programmer; you know what your code looks like. Worse still, you have seen other people's code and how they fail to account for corner cases, and you've seen so many articles on HN's front page about security bugs found by fuzzing. Yet you trust your LIFE to "heuristics"? Do you really trust that when the proverbial black swan flies in front of your car, the software won't swerve you into oncoming traffic?
The "collision imminent" scenario would occur whether or not the car is in self-driving. If the car manages to avoid a collision that is the amazing part to me. And there's plenty of evidence that the Teslas do, in fact, avoid quite a lot of collisions. It would be foolish, however, to drive like it's going to resolve all your collisions for you.
I view these as assistance to driving. It's a comfort that the steering and brakes are not overridable. And honestly, if the system messes up so badly as to go flying into a barrier, well, that's not so different from a tire popping, another car careening across into yours, or other catastrophic and unlikely events that do happen. We have seatbelts, crumple zones, airbags, pre-tensioners, cargo hold-downs, and emergency services to help us survive what even 40 years ago would be unsurvivable accidents.
If the car avoids 99% of crashes, but crashes happen 1% of the time, and it causes crashes 1% of the time, then it's making you less safe.
Those are just arbitrary numbers and a simplistic framework, but the point is, you can have a huge increase in safety by the numbers, and a very small increase in problems due to the safety system that cancels it out, because the prior underlying rate of crashes was pretty small.
I think this is an abstract pattern that comes up in other contexts and it doesn't seem to be intuitive.
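Concretely, with the toy numbers above (Python, purely illustrative):

    # Net effect of a system that prevents most baseline crashes but
    # occasionally causes its own. Numbers are the toy ones from above.
    p_baseline = 0.01   # trips that end in a crash without the system
    avoided = 0.99      # fraction of those crashes the system prevents
    p_caused = 0.01     # trips where the system itself causes a crash

    p_with = p_baseline * (1 - avoided) + p_caused
    print(f"without: {p_baseline:.4f}  with: {p_with:.4f}")
    # without: 0.0100  with: 0.0101 -> very slightly *less* safe overall

When the baseline rate is already small, even a rare failure mode introduced by the safety system can swamp all the crashes it prevents.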
Same thing, first time I used the Subaru EyeSight system. Now I know to pay attention for that particular failure mode, and override with the gas pedal and a little steering.
Definitely surprised the heck out of me though the first time the car slowed way down on the interstate because the car ahead of me pulled onto the off-ramp.
I see the current Tesla system in a similar light: it does very well in everyday common situations, and some of what it does is damn good; driving down country roads with curves and such is exhilarating but still safe.
Currently the Tesla system does give you a much clearer idea of what the car sees around it, but still no option to see everything it records; there are means to get this footage, but it's not something every driver can do.
My TM3 goes in at the end of the month for the hardware 3.0 upgrade, which will allow it to process more of what it sees and also relay that to me. What others have shown of just what the car relays back to the driver exposes how much information has to be processed.
Then comes the simple fact: the real issue is that the hard decisions are ones we make all day, driving by exception. We make so many choices that are exceptions to the rule that we are numb to it; it is nearly subconscious.
Then the other issue: other drivers. Not just people who drive badly, but those who will go out of their way to cause self-driving cars problems. With the number of people on the road, you will find them with too much regularity. More might pop up if regulation comes down that demands self-driving or semi-autonomous cars obey all traffic laws, especially speed limits. On some roads I drive, just obeying the limit is enough to provoke rage in other drivers.
I'm inclined to think that improving systems up to full autonomy on many highways, in many weather conditions, is a fairly realistic 5-10 year plan. That would actually be pretty nice, and potentially a big win for safety.
The problem with widespread L4/5 is that you need to get to a car that can literally drive itself between 2 points on a map with a high degree of reliability, in a wide range of weather conditions, on roads of varying conditions, with unexpected/unmapped obstacles that may require doing something technically illegal to get around, without human help. And that, as you say, seems really hard.
Ultimately, the right place for most carmakers to focus at this point is situational awareness and safety features, gradually expanding the set of situations where the car can prevent a crash.
Put it this way: If a driverless car would be safer than human drivers, then that would imply that all the necessary technology would already exist to allow humans to be the driver while the car still keeps them out of deadly situations. If such tech is not possible to develop, then it seems unlikely that true driverless tech (which would need to combine that safety tech with a lot of other technology) will happen.
Earlier today I was driving on the highway with autopilot (I have a Model 3) and came to a section where the road is angled in such a way and the pavement is old enough that there is a fair amount of standing water. Driving manually, I steer to the right or left slightly to avoid the ruts filled with an inch or two of standing water. Autopilot, on the other hand, was perfectly happy to blast right through it.
That's the kind of weird edge case that makes me think we're farther from real self-driving than most people want to admit. I'd be hard pressed to define exactly how I'd tell the computer to avoid that. Maybe the answer is that it can't deal with that until it results in hydroplaning, and then it reacts however it can.
That's a particularly insidious circumstance since standing water can conceal hazards from any vision system these cars or humans have. I would expect self-driving cars to refuse to drive through water in any circumstance. There could be a large pothole in the puddle that would ruin your car.
Worse than a mere car-destroying pothole, what if the flooded portion of the road no longer existed at all? That's a common enough occurrence that student drivers are generally warned about it specifically, warned to never drive across flooded sections of roadways because your car might fall into 10 feet of water without warning. If a self-driving car doesn't avoid a scenario we teach teenagers to be wary of, I don't think it deserves to be called self-driving.
I guess no one on my dead end street will be getting a self driving car then. There's a low spot near the main road that causes a large puddle all the way across the street whenever it rains.
There's a few other places in town that often flood, including one on a main road that doesn't really have any alternative route. There's also a section of the road along the coast where high surf sometimes hits the sea wall and splashes up and over it on to the street. It's quite a sight, but I wonder what a self driving car would make of _that_.
You have local information that cars don't yet know (but they probably will someday -- cars can send detailed road conditions to a central database, or they can communicate with other cars, so the car in front of you can say "watch out, there's a big pothole 8 inches from the left lane line" and your car will try to avoid it).
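To make the idea concrete, a hazard broadcast might look something like this (a made-up format for illustration; real V2V efforts such as the SAE J2735 message set define their own):

    # A hypothetical car-to-car hazard report. Field names and the
    # example values are invented, not from any real standard.
    from dataclasses import dataclass

    @dataclass
    class HazardReport:
        kind: str             # e.g. "pothole", "standing_water"
        lat: float            # hazard position (WGS84)
        lon: float
        lane_offset_m: float  # lateral offset from the left lane line
        reported_at: float    # unix timestamp of the observation

    # "big pothole 8 inches from the left lane line" is ~0.2 m:
    msg = HazardReport("pothole", 47.61, -122.33, 0.2, 1546300800.0)

Receiving cars would fuse reports like this into their local map, weighted by age and by how many vehicles confirmed them.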
What do human drivers do there when they are unfamiliar with the road? Seems like the auto pilot should be able to do at least no worse than human drivers.
> What do human drivers do there when they are unfamiliar with the road?
I assume the same as me -- I avoid water-filled ruts whether I'm familiar with a particular road or not.
It may well get advanced enough some day to pick up the difference between wet pavement and water rut, and car-to-car communication could potentially help, but that's a level of technology improvement that feels considerably farther away than just a few years.
>Recently, I bought a newer Subaru, with EyeSight. It has adaptive cruise and lane keep assist. The LKA is fine - it'll beep if you sway outside of a lane, and automatically adjusts the steering, but it won't keep you centered. It's more of a safety thing, and it works well from that perspective.
I have a 2020 Subaru and it has lane centering on top of that. On the highway, with clear lane markings, it comes very close to driving itself. It won't slow down to handle curves on its own, though.
Today saying cars have "self driving capabilities" is like saying you're fluent in 3 words of a language. They have advanced driver assists but the insistence on the "self driving" terminology tricks enthusiasts and less tech savvy people alike into a false sense of confidence in the tech. Sometimes all the way to their deaths.
Indeed. The only way to call today's cars "self driving" is to narrow down the conditions of driving to the point where it's just as ridiculous as my "3 words" example, and then realize they're still not narrow enough.
> the majority of the miles
The "miles" metric that's not very relevant, not all miles are created equal. Driving in a straight line with almost zero challenges is nothing like driving as a whole. Would a power supply that's only able to take idle loads (majority of the time) be considered any good? How about a phone that can only make calls only most of the day?
Even a "dumb" car can be considered self driving by this definition. All you need is cruise control or if you want to get fancy, ACC and LKA. This would allow you to drive for hours on end or hundreds of Km on a highway with little (fractions of a second at a time, maybe a total of 1Km with hands on wheel) or no human intervention.
> Would a power supply that can only take idle loads (the majority of the time) be considered any good? How about a phone that can only make calls most of the day?
When the models without those downsides require constant, strong attention and nearly unbroken eyes on the road? Yes, sell me those models now. I'll switch between manual mode and somewhat-limited-feature mode when necessary.
You may have missed the point and there’s not much room to dumb it down. If you have to narrow the definition of driving so much then it’s not actually driving.
Many cars spend a lot of time idling, especially in crowded cities. Would you call them self driving because they can perform unattended this very narrow (but long) slice of the whole activity of driving?
As for the “yes, sign me up” I call your bluff. There are plenty of situations in daily life where you need to have constant supervision. You wouldn’t let your child operate in them under the assumption that “there’s a good chance they won’t die”. You will take the constant supervision over the device that only works for 1% of the features you need and even then might kill.
Take an iron that can iron by itself but only small, cotton clothes, and once in a while it burns down the house. Do you leave it unattended? Do you even call it self ironing?
> If you have to narrow the definition of driving so much then it’s not actually driving.
I don't care what you call it. I just want to be able to ignore the road until the car beeps at me. (with at least 10 seconds of warning)
> Many cars spend a lot of time idling
But a car can't be idling while I use it. A computer can have idle power levels while I browse the web, and a phone that works in a certain range of hours is still very useful. If you wanted to make an analogy for "useless", that didn't come across.
> You wouldn’t let your child operate in them under the assumption that “there’s a good chance they won’t die”.
> once in a while it burns down the house
Who said anything about 'a good chance'? You said those products worked for specific uses. You didn't say they would also fail at random, even inside those limits. This is a different scenario now.
If the car can handle "a straight line with almost zero challenges", sign me up for that easy highway driving. It's only when you remove the word 'almost' that it becomes a cruise control missile / deathtrap.
Self driving systems today are the nurse but you're still the doctor. And you wouldn't call the nurse a doctor just because they can do the more simple or common tasks.
> I just want to be able to ignore the road until the car beeps at me. (with at least 10 seconds of warning)
You may want it but you are not getting that from any "self driving" system in use now. It's even illegal for you to do so. And that isn't self driving anyway, it's driver assists like ACC and LKA assisting you in very particular conditions as long as you're still in control and alert.
> But a car can't be idling while I use it.
If you're in traffic and the car isn't moving it's idling. In a traffic jam you may even idle more than you move, and many highways and crowded cities give you exactly that.
> You didn't say they would also fail at random
It was an analogy to self driving cars killing people even with those very narrow specific uses so I didn't think I had to spell it out. That was my whole point. Consider it spelled out now.
> If the car can handle "a straight line with almost zero challenges" [...] It's only when you remove the word 'almost' that it becomes a cruise control missile / deathtrap
If a system left to its own devices turns a degraded lane marker into a major life threatening challenge then it just makes my point that you're the driver and it can only assist.
To be a driver you are tested in all kinds of conditions. But you consider a car to be "a driver" because it can drive in a straight line most of the time?
Self driving cars today are Silicon Valley's "not hotdog" app. That was meant as a joke but it fits the current situation of self driving to a T. Great if your needs are ultra specific relative to the whole spectrum of the task at hand and you're willing to take the risk of getting killed in the process.
For a narrow enough set of conditions you can define anything almost any way you want and be correct.
First off: I don't think current cars are good enough, and I didn't say they were.
> If you're in traffic and the car isn't moving it's idling.
Let me rephrase. A car does not idle continuously while being useful, it only idles for a minute or two at a time. A computer running at "idle power levels" is more like a self-driving car that only goes up to 15mph. Which would actually be very useful if you're in stop-and-go traffic.
> If a system left to its own devices turns a degraded lane marker into a major life threatening challenge then it just makes my point that you're the driver and it can only assist.
Depends on what you mean by "left to its own devices".
If you mean that I'm zoned out watching a movie, and the car crashes all by itself, then that car doesn't meet the standard I laid out.
If the car beeps at me, and I have to intentionally choose to ignore it to get into a crash, that's acceptable. Because I won't ignore it, and won't crash, even though the car can take over for hour(s) at a time. Go ahead and call it "driver assist" if you want.
> Self driving cars today
Are level 2, and I want a level 3 or 4.
Level 3 is the minimum I described, and I don't care if we call it "self driving" or "driver assist", it's useful.
Level 4 is just level 3 plus the ability to pull over when confused, but I would say it's unambiguously "self-driving" at that point.
That's a great analogy! For English (and autonomy levels), it works in orders of magnitude: ten words, no good; ten times ten, good for easy things; with a thousand words you can get things done and sound normal; a vocabulary of ten thousand words provides an adequate command of the language for discussing complex concepts; 10^5 or any higher exponent is capability exceeding that of the majority of users.
I do agree that we're still at level 2: it's relatively amazing (i.e. given what can be done with the limited toolset available), but not really that great: outside of predictable, menial tasks, the rough edges become acutely limiting.
> The adaptive cruise is really good. It's camera based, and I have had zero problems with it. It works well at night and in pouring rain. It'll even stay pretty close to the car ahead of you if you turn the "tolerance" all the way down. I'm always impressed.
I checked -- I had assumed it actually used radar -- and you're right. They seem to use a stereo camera system. Neat.
It's not impervious to bad weather, but pretty resilient. I'd say in the 2 years we've had ours the system has shut off maybe 3-4 times due to one of: a) very low and direct sun angle, b) very heavy rain, c) dense fog. Which, to be fair, are all difficult conditions for a human to drive in as well.
But I agree with the parent, the suite of driver assistance features is very good, but a long way from "self driving".
Disclaimer up front: I work for GM, I don't work on SDC.
As I see it, self-driving is a cursed[0][1] problem. If you could choose a different problem space, or choose to ignore some complications, self-driving would merely be very hard. But the requirement to handle ANY external behavior under ANY external conditions and navigate to ANY destination, all while maintaining safety, is impossible* to satisfy.
One cursed corner of the problem space is ML itself: ML is amazing in that it enables emergent behavior [3], but ML is terrible in that it gives rise to emergent behavior. The traditional engineering mindset wants a map of inputs to outputs, but you don't get to choose your inputs in the SDC world, and you can't specify all of your outputs.
Another cursed corner is the Always/Never[2] problem. You want safety features like Automated Emergency Braking to Always kick in when there is a problem, and you Never want them to kick in when there isn't a problem.
I really don't know how any of this gets fixed. I do think that sensor fusion and advances in AI can reduce the size of some of the cursed area of the problem, but the problem is also meta-cursed in the definition of Self-Driving: the solution to normal cursed problems is to reduce or change the solution scope, but if you reduce or change the Self-Driving solution scope, then "it's not real self driving".
This is spot on. The Always/Never problem is well articulated; it is exactly why I got rid of a vehicle with auto braking. It braked twice when it wasn't supposed to, and that was more dangerous than if it had never triggered at all. And yet, in an actual imminent crash it would probably have responded a lot quicker than I ever could. Still, I'm not going to be scared twice in a month by my car doing stuff 'on its own' to my possible detriment.
Could you share which company's auto-braking had the false positives? What is the acceptable false-positive threshold for you to use auto-braking again? Also, did your system tell you why it decided to brake (albeit incorrectly)?
Sensor fusion? Absolutely required. Also, I think "AI Fusion" is needed, where different AI approaches evaluate a situation. Maybe a supervisor DNN that looks at the output from the AIs below, inferring patterns.
DNNs (alone) are incapable of Level 5. OTOH, it's conceivable that in this limited domain, a "Society of AIs" might yield 'acceptable' performance (1K deaths per year?).
I would also like to see a given driving system rated at maximum speed for autonomous use. If you're moving at 5 MPH and your stopping distance (reaction time + coming to a stop) is 5 feet, then your 'Zone of Despair' is the 5 feet in front of you (and some on either side). If you absolutely know this Zone is clear, you can proceed with confidence.
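That rating is easy to put numbers on. A minimal sketch of the stopping-distance arithmetic, using rough textbook constants rather than anything from a real rating scheme:

    # Back-of-the-envelope "Zone of Despair": reaction distance plus
    # braking distance. Constants are rough textbook values, not from
    # any real autonomous-driving rating system.

    def stopping_distance_m(speed_mph: float,
                            reaction_time_s: float = 0.5,
                            decel_mps2: float = 7.0) -> float:
        v = speed_mph * 0.44704          # mph -> m/s
        reaction = v * reaction_time_s   # distance covered before braking starts
        braking = v * v / (2 * decel_mps2)
        return reaction + braking

    for mph in (5, 25, 45, 70):
        d = stopping_distance_m(mph)
        print(f"{mph:3d} mph -> zone of despair ~ {d:5.1f} m ({d * 3.28:5.1f} ft)")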
It's a hellishly difficult problem; but I don't think it requires AGI to solve 'adequately'.
Waymo and Tesla have both solved the problem, in different ways, by gating the feature.
Why would they need to force Level 5? They can operate with the promise of L5 in the future, until the moment the statistics show it’s safe to turn it on.
Between now and then they can provide a steadily increasing supply of value to their customers. And steadily ramp the data they have to improve the system.
I have no idea if it will be 2 years or 10 years or 100 years before they can turn on Level 5 without a geofence.... but what difference does that number make? It’s academic. It’s not an existential question for either company’s self driving value prop.
And consider that different jurisdictions will set different dates. All it takes is one government to OK L5 in some area, and Tesla’s ass is covered. And every jurisdiction that signs on will put more pressure on other jurisdictions to follow suit.
To me, that’s the brilliance of Tesla’s strategy. They have decoupled their forcing functions from the L4/L5 legislative question. Governments can drag their feet as long as they want but it won’t hurt Tesla’s ability to sell the tech or collect the data they need to improve the tech. They are already selling autonomy features, and they can make that value prop incrementally better year over year for as long as it takes.
I've been beating this drum for years (but I'm a nobody, so it's like pissing in the wind):
1. Anything less than 100% FULL automation is MORE dangerous than manual driving, because the "driver" will almost certainly lack any situational awareness. When the need for manual intervention happens, it will be at the moments where you need maximum awareness and split-second reflexes.
2. There are SO many edge cases and never-seen-before situations that happen when driving "at scale" that the automation features will fail unexpectedly and in strange ways.
3. G and Cruise might be exceptions, but most of the companies in this space are cowboys with reckless disregard for public safety and terrible "iterate quickly" coding practices.
4. At some point there will be an accident that kills a photogenic "middle America" person or people and at that point the government will crush this industry with regulation, with the financial backing of automakers, UAW, and other people who benefit from the status quo.
The only way 100% fully self driving cars will ever happen is for the infrastructure itself to be built to accommodate them. Mixing regular cars, parking, trucks, bicycles, scooters, pedestrians, dog walkers, hoverboards, etc all together on the same roads ensures that the problem is unsolvable.
>Anything less than 100% FULL automation is MORE dangerous than manual driving, because the "driver" will almost certainly lack any situational awareness.
Something I worry about is that if SD became normal, then people would never get the experience of thousands of hours of driving a car in countless situations that is needed to develop good judgement, much less quick reflexes. And so when a rare situation arises when they need to take over, they won't be able to do it well.
We've already seen this in aviation actually. AF447 is a good example of flying a plane into the ground because of reliance on automation and lack of experience hand flying.
There's a really great YouTube video called "Children of the magenta" that's part of a lecture the chief training pilot for AA gave about 20 years ago or so as part of their continuing education. He goes over incidents and situations and the essential conclusion is that pilots are getting too used to turning dials and flipping switches when in many situations they need to just take control and fly the plane.
> 1. Anything less than 100% FULL automation is MORE dangerous than manual driving, because the "driver" will almost certainly lack any situational awareness. When the need for manual intervention happens, it will be at the moments where you need maximum awareness and split-second reflexes.
Airline pilots already face this -- it's hard to stay engaged in flying when the plane can fly itself. By the time something bad happens and the plane gives up and hands control back to the pilot, the pilot lacks the full situational awareness he would have had if he had been flying the whole time.
It's too expensive to build the infrastructure to accommodate them -- current infrastructure spending isn't even enough to eliminate potholes and other bad pavement conditions.
> Anything less than 100% FULL automation is MORE dangerous than manual driving, because the "driver" will almost certainly lack any situational awareness.
With enough reliable bandwidth a decent workaround is to have the system fall back to human control by someone other than the local driver. I'm imagining a Car Traffic Control Center where your onboard robot driver sees a situation it doesn't understand and throws control to a remote driver wearing a VR rig with your car's video feeds as input. The remote human driver assesses the situation, steers you carefully past the weird obstacle/issue then returns control to your robot.
A system where robots drive automatically, say, 95% of the time while human remote drivers handle special cases 5% of the time still seems like a big improvement over the status quo - there's a market for that.
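For illustration, the handoff decision could be as simple as the sketch below. The mode names, confidence threshold, and latency budget are all assumptions I've made up, not anyone's actual design:

    # Sketch of handoff logic for a hypothetical "Car Traffic Control
    # Center". All names and thresholds are invented for illustration.
    from enum import Enum, auto

    class Mode(Enum):
        AUTONOMOUS = auto()
        REMOTE_HUMAN = auto()
        PULL_OVER = auto()

    def next_mode(confidence: float, remote_available: bool,
                  link_latency_ms: float) -> Mode:
        """Decide who drives for the next control interval."""
        if confidence >= 0.95:
            return Mode.AUTONOMOUS
        # The robot is unsure: hand off to a remote driver, but only if
        # the link is good enough that their inputs arrive in time.
        if remote_available and link_latency_ms < 100.0:
            return Mode.REMOTE_HUMAN
        # No safe fallback: degrade gracefully instead of guessing.
        return Mode.PULL_OVER

Note the last branch: "pull over" as the fallback of last resort, which (as pointed out below) is itself not always an option.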
Throwing a remote driver into a dangerous situation with no context sounds like a terrible solution to me. See also: Cpt Dubois from AF447. Doing that deliberately and repeatedly just multiplies the chances of catastrophic error.
And the VR driver job would be so stressful that there may not be many takers. Who would take responsibility if they made a bad call and caused a crash?
I can possibly see an OnStar-like backup VR driver role at some point, once there's full self-driving and there needs to be some backup of last resort for when a car with no human in it freaks out. (Or if there's just a child inside, etc.)
But there has to be an assumption that this is a rare event and that it takes place in a context where a VR driver has time to establish some situational awareness. (Oh, and in a lot of situations, there is no "just pull over" option. I've gotten into some bad weather situations where there are only sub-optimal options at that point. Pulling over can also be dangerous or may not even be possible.)
The solution would need some massive breakthroughs in reducing latency...and time travel. "We need a decision within 6 seconds...4...2...hey human, watch the inevitable crash!" (This already happened btw)
> Throwing a remote driver into a dangerous situation with no context sounds like a terrible solution to me.
We might be thinking of different situations. I'm mostly imagining a car or truck that does great on the freeway but poorly on surface streets or poorly on particular KINDS of surface streets or even particular KINDS of weather...and we KNOW this and can recognize those situations. The remote driver typically jumps in BEFORE the part that is actually dangerous.
This isn't a new problem - consider a big ship that delegates harbor navigation decisions to a harbormaster and/or tugboat, or a big plane that delegates final runway approach decisions or parking at the gate decisions to a control tower and/or local guy driving a tow vehicle or waving directional flags. You could slice the world up into "regions we can reliably navigate without help" versus "regions where we still need a little help", with the latter group shrinking over time as technology advances and maps get better and edge cases are better handled.
The initial product offering might be for long-haul truckers - the truck drives itself for hours on separated freeways and then throws to a handler when it needs to navigate unfamiliar local surface streets for a delivery. But once you've GOT that sort of infrastructure - basically a map with geofenced areas where remote drivers step in - it's a logical next step to make the help areas dynamic and mark slowdowns or detours due to an accident or a landslide or a cow on the road for similar handling.
95% of that job would not be stressful. I'd be more worried about it being boring...but then, so is normal in-person driving.
> 4. At some point there will be an accident that kills a photogenic "middle America" person or people
I'm ready to guess what it looks like: Car on autopilot going through a residential neighborhood, playground ball bounces out into the street from between two parked cars, car does not brake in anticipation of a 6-year-old that is not currently visible.
I have some confidence we'll get there on limited access highways because there are a lot fewer of the things that you describe there. That doesn't really help the people who want to be driven around everywhere. But it would actually be a really nice feature for the majority of people who own and drive cars.
To your first point, I'm not convinced that's true. People augment their lives in lots of ways that don't seem to reduce safety. A few examples off the top of my head: simple "dumb" cruise control hasn't led to more accidents. Parachutes have auto-deploy features if the cord isn't pulled by a certain height. Scuba divers use dive computers that basically eliminate the need to learn dive tables (and beep at you when you're doing something dumb). Apparently passenger jets are highly automated (I'm out of my depth on that one). These are all on the spectrum towards automation and have only been helpful. Do you think the problem occurs as you approach 100%? Like an uncanny valley in the 99 to 99.99% range?
The thing with other activities (diving and flying are great examples) is that when a problem occurs, you generally have minutes to analyze what's happening and decide on a solution. If my dive computer goes on the fritz, I can decide to immediately start an ascent, go off the physical dive tables, or make an extra safety stop at, say, 20 feet just to be sure.
When you're going 45 in a curve driving along PCH, and a sudden fog bank obscures your cameras and LIDAR and the computer says "your controls, good luck!" you have maybe 2 seconds to react, if you're lucky. It might be a lot less.
Humans make really dumb decisions sometimes, but we are also outstandingly capable of reacting to novelty.
Not OP, but to your last question: I think there's proven danger in the "too familiar, too easy" zone. Most car accidents, for instance, tend to happen in places you know pretty well -- hence you may get surprised when things happen out of the ordinary.
Whether it's an illusion of safety or a letdown of attention, the general idea is that humans should never trust that things will go well when there's a real probability that they won't. As you explained well, I think the danger is not in the amount of automation, but in whether the system focuses users on the critical parts they should watch out for -- and there, automation clearly helps remove the unimportant from the equation and makes us more responsive and more accurate on the important parts. But that takes a lot of great UX, a field where e.g. the military is usually great but commercial companies are abysmal if they can get away with it (read: sell enough to justify not spending a dime on more quality). That's worrying when safety is involved, but it hasn't proven a moral or ethical problem for most industries absent regulation (forced ethics, ha!), so... I think OP's concern is valid.
As for passenger jets, the Airbus A320 (late 1980s) was the first commercial plane to have a "full" autopilot; all systems were electrical[1] (manoeuvring, thrust control, etc.), which allowed the computer to integrate and manage it all. :)
It was tested a number of times by pilots for fun, from takeoff to landing entirely on autopilot -- of course they're standing right there, ready to take over if anything goes wrong, but I've seen it firsthand many times. We're talking commercial flights with passengers; it's 100% safe and actually quite "smooth" because the computer is so accurate.
Honestly, the problem is much, much, much easier for planes: with a good GPS it becomes quite a closed problem, and obviously 100% of autopiloted planes are simultaneously piloted by real humans... ready to take over. Yet a plane could technically land itself just fine if the pilots were incapacitated; it really could, and I suspect it has more often than we know, for many reasons. And when flying by instruments (meaning you can't see s__t), an autopilot is basically just a computer reading the same data a human would, only faster (and an autopilot doing the grunt work of stick-holding incidentally gives you more time to double- and triple-check everything).
_____
[1]: Note that all systems are also doubled (even tripled) with mechanical (hydraulic, etc.) failovers, because obviously you can lose electricity in catastrophic situations -- hence why it always seemed crazy to me that a plane would require software to fly properly instead of plain old good physics and mechanics.
> The only way 100% fully self driving cars will ever happen is for the infrastructure itself to be built to accommodate them.
I think we'll see 100% automation for freeways within a short time. But I think the only way we'll see 100% automation within 50 years for an arbitrary point A to point B that a standard human could "safely" drive is if we get flying transport.
I agree with you. I think freeway travel can be 99% automated in the very near future. I also think that's where the MAJOR wins will come from. Long haul trucking can flow 24/7 at that point, with the drivers keeping their jobs and doing last mile delivery.
How do you define 'near future'? If I want to reliably autopilot any significant distance, I do it by camping in the middle lane. Even then, AP isn't any better now than it was a year ago, it still ping pongs, turns late, cuts off merging traffic, etc. The freeway certainly seems like a good first candidate for automation, but I don't feel like we're anywhere near 99% automation of it.
I am a traffic engineer, and one of the things that I do regularly is write 10 year traffic plans for small to medium sized cities, think 60,000 people max. When a lot of the self driving hype was really kicking off, I was told to make sure to incorporate them into the plans. This mostly consisted of a few sentences about self driving and that was it. When modeling future traffic growth, the client would always say, "these numbers are pretty high, do you think self driving cars will lower them?" To which I would respond that it is doubtful. I am in the rural west where cars are a large part of life. Ride sharing and public transportation are rarely a thing outside of the core of towns. The idea of a self driving car that one doesn't own would be very odd to a lot of people in this area.
On top of the social side of things, the roads are not in great shape in northern climates and many of the visual cues that we use to drive can be missing or very hard to see for many miles. Striping delineating the edge of the road often gets worn away over the course of a few years and doesn't get re-painted for a few more. Some major roads connecting two towns may not even have a paved shoulder, just 24 feet of asphalt with a stripe down the middle. (For reference, 12 foot lanes with 4 foot shoulders are the general norm for this part of the US.) All of this and I have still yet to touch on weather.
I look forward to self driving cars. However, I don't think that they are going to solve many of our traffic issues outside of urban cores. For me, the incremental steps to reach self driving will result in fewer injuries and fatalities on our roadways, and that is a win.
I'm always baffled by folks who assume self-driving cars would reduce traffic. No matter what, even if they are privately owned, truly self-driving vehicles could only increase traffic. If they can be dispatched without a human driver to pick up groceries or take-out, for example, there'll be a lot more trips. And sharing self-driving cars rather than owning them would surely be more likely to increase the number of cars on the road since they have to make a trip to pick you up and another one after they drop you off.
The rationale I have heard is that a single car could make multiple chained trips instead of multiple cars making multiple trips. This is the same idea behind ride sharing. I feel this breaks down with privately owned vehicles. I do see an increase in traffic in areas with expensive parking. Specifically, I would send my car to a cheaper lot, and that would result in an additional trip. Even worse, it would be a zero-occupancy trip.
I'm also pretty sure it will increase commutes, and this applies even if self-driving isn't fully door to door. There are probably a lot of people who would hesitate to do an hour car commute today but would be a lot more open to a 60 or even 90 minute commute if something else is doing the driving.
Yeah, just collision detection and lane guidance are already helpful in modern cars. After driving in snow in Colorado, I have to wonder how self driving cars perform under such conditions. I couldn't make out where the lines were, and no one else really can. A four lane road can easily become a two lane road under such conditions, as people cut a worn path through the snow.
I really think self driving could improve public transportation in urban cores by tracking preset paths. I could also imagine buses that are mostly autonomous, but where a remote driver could override the controls for exceptions.
A lot of people are incredibly bad at driving in snow. I would not be surprised at all if self driving cars vastly reduce the accidents attributable to snow on the road. Some of that might be due to the car deciding something is too dangerous to attempt, and frankly that seems fine to me. If your problem is “the car can’t handle x condition”, there’s really no issue so long as that’s because the car refuses to try, rather than trying and failing with non-trivial probability.
> "these numbers a pretty high, do you thing self driving cars will lower them?"
My answer would be no: the opposite, if self driving cars work out, people will probably just drive more! The only traffic gains would come from 100% self driving cars that could then be optimized somewhat globally.
It will also restore mobility for many people who cannot drive yet or not anymore. Where I live, Taxis and Ubers are still too expensive and not ubiquitous enough to replace an owned car.
This is one of the most important use cases I can see for self driving cars. As it is, on-demand transit is expensive and, in rural areas, not very frequent. For example, the northern portion of Montana has a very low population spread out across a large distance. For many elderly people to get medical care, they must travel 100-200 miles to the nearest hospital. When they have to make this trip every week, it can be a massive burden on their loved ones. On-demand transit is an option but may only run a couple times a week due to limited drivers. A self driving vehicle would help this situation a lot.
I'm counting on self-driving cars to be viable by the time I'm too old to drive -- it was very hard for my grandmother to give up her car and it took a lot of coaxing (and a little sabotage of her car to make her think it needed expensive repairs) to get her to stop driving. But it was necessary because she was becoming a dangerous driver.
hey Takk309, could I reach out to you to ask a few questions? I am researching what it takes to validate the safety of these cars on the street in relation to other cars and pedestrians, and your background would be immensely helpful! Feel free to email me at lingxiao@seas.upenn.edu
That is going to be a tough nut to crack. Crash statistics are tricky because crashes are either underreported or very infrequent. Two good resources for just about anything traffic related are ITE [1] and TRB [2].
For sure, that's why it's exciting and super important from a technical standpoint! I'm interested in learning more about how city planners think about traffic in general, which is why learning from your perspective would be super helpful. And I'm obviously happy to share my perspective on the field; most of my friends work at one of these storied companies, so I have a slightly different vantage point.
It amazes me how Tesla can continue to just outright lie in their sales material about this. Right now you can purchase a brand new Model S with a "full self driving" package, even though no such thing exists, nor is there any timeline at all to when it will (if ever). Autopilot as it stands is nothing more than an advanced driver assist system with blindspot monitoring and automatic lane changing. Not even close to a level 3 system, let alone full autonomous.
This meme of shock and surprise that self-driving cars haven't appeared overnight is just media BS. Yes, all technology development is a gradual process and we may not know how long it takes. Companies working on it have pitch decks and investors so they give their best or most optimistic estimates.
Go back a couple of years and look on just about any thread about self-driving on a forum such as this one and you'll find no shortage of people arguing that they're just around the corner. Because, after all, that's what the SV hypesters were saying and they'd never lie.
> you'll find no shortage of people arguing that they're just around the corner
Ya, but no one thought "around the corner" meant 2020. The strawman is always to read "around the corner" as tomorrow, but many people just mean in a decade or two.
I've been watching Lex Fridman's youtube podcast and there is a recent interview with Jim Keller [1]. Keller is a chip designer famous for his involvement in multiple chips at Intel, AMD, Apple and he was co-author of the x86-64 instruction set. He also worked for Tesla.
There is a point in the conversation where Lex and Jim clearly disagree about how "easy" self-driving AI should be. Lex is clearly pessimistic and Jim is clearly optimistic. I have to admit I was more swayed by Lex's points than by Jim's, but it is hard to discount someone so clearly (extraordinarily) expert and working directly in the field.
My mistake - I should have checked his bio rather than assume based on the content of the discussion. I've updated my comment to change his association with Tesla to the past tense. Thank you.
In the debate over self driving, there are two schools of thought. One says that you can't solve it without machine learning because it's impossible to hand-engineer a system to cover all edge cases. The other says that you can't solve it with machine learning because any solution must have zero glitches. Both sides are correct.
>some researchers have argued we won’t have widespread self-driving cars until we’ve made major changes to our streets to make it easier to communicate information to those cars.
It feels to me like those working on the concept are expecting that if they keep adding sensors and twiddling with AI they can avoid that.
I get why. It's a vast and expensive undertaking that is out of their control and they want to sell their product asap. But if we started with major city streets and highways it could be a quicker and safer route to get it to market.
Years ago (in the late '90s) I worked on an "Intelligent Traffic Systems" project for the city of Branson, Missouri. The 3M Corporation demonstrated a magnetic tape for street lines and a snow plow truck outfitted with sensors that could detect the lines, connected to vibrators on each side of the driver's seat. When the truck got too close to a line, the seat would vibrate on that side. I got to ride in the truck for a demo of the tech, and it worked well.
We also had street cameras that detected autos and could estimate speed and traffic congestion. These sent video and data to the local 911 center. I created a "traffic congestion map" that ran on a web server using that data and worked pretty much the same way Google Maps shows congestion.
We need "smart streets" to really make this work. Without adding that to mix corporations could be banging their heads against the wall and spending billions of dollars to try and never make the last mile.
It’s really just a matter of time. Those who started out optimistic (when it was hyped) but now say they don’t see how self driving cars can handle the complex situations we humans supposedly can should have some patience. As the tech improves and more data is used to train models, it’ll surely surpass humans in driving ability.
I have this argument constantly. Humans make mistakes too, so we will eventually have tech that makes fewer mistakes than humans. The problem is that humans have a unique ability to make sense of situations they've never specifically encountered before. No amount of training data will be able to make up for that, because there will always be situations so rare that they never made it into the data.
But what really matters is the overall statistics:
The self-driving doesn't know how to handle this one in a 10,000 situation.
But in the other 9,999 of 10,000 situations the self-driving vehicle is equal or better.
Thus the self-driving vehicle averages out to be a lot safer.
Everybody says this, but they simply take it as a matter of faith. I don't see any statistics to back it up. The article itself points out that there simply aren't enough self-driving miles at this point to make a valid comparison.
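The arithmetic shows why the averaging claim can't be taken on faith: whether the average favors the machine depends entirely on how bad the rare case is. A toy calculation with entirely made-up rates:

    # Toy expected-risk arithmetic with made-up numbers. The point is to
    # show the shape of the averaging argument, not to claim real rates.
    p_rare = 1 / 10_000            # share of situations the AV mishandles
    human_risk_common = 1e-6       # human crash risk in a routine situation
    av_risk_common = 0.5e-6        # AV is 2x better in the routine 9,999
    av_risk_rare = 1e-2            # but much worse in the rare case

    human_avg = human_risk_common  # humans roughly uniform across situations
    av_avg = (1 - p_rare) * av_risk_common + p_rare * av_risk_rare

    print(f"human avg risk: {human_avg:.2e}")
    print(f"AV avg risk:    {av_avg:.2e}")
    # With these numbers the AV averages ~1.5x WORSE than the human,
    # despite being better 9,999 times out of 10,000 -- the rare case
    # dominates. Different made-up numbers flip the conclusion.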
This is what makes hype cycles so dangerous. Most people are too simple-minded to understand nuance, uncertainty, or cautious optimism. So you end up with either "this is terrible and worthless and impossible" or "this is incredible and game-changing and imminent". I've seen it happen with everything from SDCs to politicians that my social milieu pins its wildest dreams on, who end up disappointing by being decent, competent and non-messianic.
It's probably worth ignoring what most people think about topics like this, except to the extent that their insane thrashing affects the situation (votes, funding, etc).
It still feels kind of like we're in the "Apple Newton" phase of self driving, with the iPhone in the indeterminate future.
One day we'll look back and say "aww, they tried so hard with the limited tech they had and got so close but what they needed to make it good just didn't exist yet".
Than expected? I am pretty sure none of the people who actually work in the related industry expect self-driving cars to come out anytime soon. The hype is created by media and investors, with intentions other than creating a viable product.
Driving is simple but still logical. It may not involve processing as abstract as what language requires, but that doesn't mean it is just looking at the road ahead and making turns. Say an accident happens and the road conditions are a mess, so a one-way road is temporarily switched to alternate one direction and then the other -- how would an AI understand this? It can't.
Current NN-based models have huge problems guaranteeing robustness, while humans can be incredibly resilient against adversarial scenarios, because we are super-fast few-shot learners.
I've said it multiple times on HN, and I'll say it again: self-driving cars are impossible until we have artificial general intelligence (and not the limited AI that exists now).
The only exception might be specially instrumented road tracks, with limited access (like train rails).
And yes, I fully expect such a thing on interstates. It will never happen inside cities.
Want to make money? Design a vehicle agnostic system, primarily aimed at long haul trucks. Install it in a ton of cars, but don't switch it on.
Then instrument some highways, and convince governments to allow only these cars on those lanes. Because the system is vendor agnostic, anyone with a car could get it.
It will require deep pockets, and maybe you would need government to mandate this (some kind of open standard).
Self-driving stops being a (difficult) AI problem if you force all cars to be self-driving and network with each other. It converts the behavioral/theory-of-mind problem into a distributed computing problem. We could have had self-driving in the 80s with this approach. Imagine where we would be today if we had decades of 24/7 trucks and ubiquitous robotaxis. It would be a completely different world. It's a shame humans are so bad at cooperating. But still, this is not out of the question today. Obviously the only way to pull this off would be with massive government subsidies. It would be worth experimenting in a small country like Luxembourg.
Relying on networked cars is monstrously fragile. One critical error in the system will tank everyone; disconnecting from the system drastically reduces the safety of both the individual and the system, which now has to deal with rogue elements; and the potential for malicious attacks, like terrorism, or simply some freak natural event is hugely problematic.
I think the direction of the thought is reasonable though, you should just take it a few steps farther. If networked infrastructure is a good idea, then maybe cars are not a good idea. We already have driverless, well defined, organised modes of transportation, they're called trains.
modern subway systems already pretty much drive themselves, they also come with the added bonus of not having everyone carry two tons of steel around.
Those concerns are legitimate. My proposal is not a purely centralized hub-and-spoke network topology where one data center drives a million cars. Yes, you can have the central brain too, but I'm thinking more of a mesh network among nearby cars on the road. The network can be fully connected so that one nefarious car cannot take down the whole mesh. There are a lot of cryptographic safeguards to prevent bad actors too. Each car is able to drive on its own, or pull over, only relying on networking for a stream of sensor output from nearby cars. The upstream servers can go down without doing anything worse than making all its client cars pull over and stop. All the steering is done on the client side.
What if terrorists spoof phantom cars or rewrite maps to send people off cliffs? Assume they stole the master signing keys, have root on the central servers, exploited 0-days on the client car software, etc. One car misbehaves somewhere, triggering sensors of nearby cars which tell every other car in an N kilometer radius to pull over en masse. If anything, it is more robust to hacking/terrorism than the independently-self-driving Tesla or Waymo approach because those do not have the benefit of the herd. One gazelle in a herd who gets tackled can yell out to save the rest of the herd.
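A minimal sketch of the signed-alert idea, using Ed25519 signatures from the pyca/cryptography package. The message format and trust model are invented, and the key-distribution PKI is waved away entirely:

    # Sketch of a signed mesh alert. Message format and trust model are
    # invented for illustration; key distribution/revocation is elided.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Each car holds a private key; peers know each other's public keys
    # (in reality distributed and revoked through some PKI).
    car_key = Ed25519PrivateKey.generate()
    car_pub = car_key.public_key()

    alert = b"PULL_OVER radius_km=5 reason=erratic_vehicle lat=34.05 lon=-118.24"
    signature = car_key.sign(alert)

    # A receiving car only acts on alerts that verify against a known
    # key, so one compromised box can't forge herd-wide messages.
    try:
        car_pub.verify(signature, alert)
        print("alert verified -- slowing down and pulling over")
    except InvalidSignature:
        print("bad signature -- ignoring alert")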
> I think the direction of the thought is reasonable though, you should just take it a few steps farther. If networked infrastructure is a good idea, then maybe cars are not a good idea. We already have driverless, well defined, organised modes of transportation, they're called trains.
How do I take a train/subway from my apartment to the front door of a McDonald's? People go from building to building. It's not practical to do this without cars or buses outside of maybe 5 cities in the world like HK or NYC.
Also, there is an ungodly number of cars in the world. It's a lot cheaper and more efficient to retrofit cars with self-driving modules than to recycle all that metal into trains and subways.
>How do I take a train/subway from my apartment to the front door of a McDonald's? People go from building to building. It's not practical to do this without cars or buses outside of maybe 5 cities in the world like HK or NYC.
Mostly by taking the subway to the station nearest the McDonald's and walking. I've lived in more than 5 cities without ever owning a car. Walkable cities are the norm on this planet, not the exception; the US is very skewed in that regard because it built most of its cities around the car, but that is only a fraction of the world population.
Much more important for the future is to ask what all the places that still have the decision to make will do: whether they expand their usage of cars, like the African continent and much of Asia, or invest in mass transit and build their cities around alternative modes of transport.
Outside the vacuum of technology (can it be done?), it seems to me that the role that policy plays when it comes to the actual rollout of self-driving cars (should it be done and how?) is vastly underrated. This seems to be the real bottleneck in mass adoption - improvements in technology will have diminishing returns after a certain point.
For example, things like getting companies to agree to a unified standard at a government/industry level & determining frameworks for liability all seem to be as important (and perhaps difficult) as eking out another 0.00001% increase in safety.
Yes, harder than hype and deluded optimism expected. Plenty of people have seen through the hype and delusion for years now. Level 5 requires AGI. AGI is probably nowhere close to being realized - humans may well be centuries away from reaching the necessary technological level. Humanity might never get there at all. In any case, if humanity ever developed AGI, it would revolutionize the world. Self-driving cars would be one of the most boring and mundane applications.
It's only the end of a wave. The 50s saw some ideas, the 80s a few more, and this one was a massive push [0]. I honestly don't think the road to SDVs is long and winding; this is just the natural pushback of a bubble bursting, for now.
[0] I'm not even a fan of the idea anymore, btw; I'm just trying to assess the technical hurdles. Sensors have become numerous and cheaper, and compute power is immense. It won't take more investment to make new steps in 20-30 years.
There are problems where we know how difficult the problem is. These problems, you can just throw money at them to make them work and the skill is in figuring out how to do it cheaply. There are other problems where we don't even know what we're missing. I see people, all the time, try to extrapolate current progress on problems where we don't know what puzzle pieces we're missing. Nobody knew what it was going to take to build self-driving cars, nor do we know what it will take today, but the hype machine that sucks in funding and grants and produces nothing pretended like this was a problem we understood and just had to throw money at it. I've found that asking "do we know what we don't know yet?" has been a surprisingly good way to cut through the bullshit over the years. I'd say "I told you so" if anyone knew who I was or had any reason to listen to me.
If you could create instructions efficient enough for a computer to drive a car, it always seemed like you just created the best instructions for a human to drive a car too, and a human who had similar instructions would always be a better driver of the two. "Pay attention to this, prioritize these things, brake when this occurs, turn when this occurs, safest speed is this given these parameters." and so on. You would think silicon valley would be obsessed with learning how to drive really well given the amount of time so many engineers have spent on automating it. We should have our own F1/WRC team by now sponsored by these companies.
The more self driving advances in the lab, the more safety features for regular cars are invented, which in turn makes it harder for fully self driving systems to compete with human+safety features and justify their value...
That's an excellent point. Why hand over complete control to a computer when we can have both? Each has different strengths, so an almost-SDC with a human at the wheel will surely be better than a true SDC. I think that's a much more likely future.
Lived in Russia and have some knowledge about programming. Unless some breakthrough happens in general purpose artificial intelligence research, I do not believe self-driving cars that can drive here are possible at all.
>One study attempting to estimate the effects of self-driving cars on car use behavior simulated a family having a self-driving car by paying for them to have a chauffeur for a week, and telling them to treat the chauffeur service the way they’d treat having a car that could drive itself.
>The result? They went on a lot more car trips.
That's kind of a pointless 'study'. Of course I will take a lot more trips for some time after getting a chauffeur/self driving car, just for the novelty of it. One family for one week does not really tell us anything.
As the article said, we're in that awkward transition period. The problem is that it puts people's lives at risk, even though most deaths could be avoided if the drivers hadn't been so negligent -- watching their smartphones while the car AI operated the car, sleeping, etc. It's a turbulent transition, but the tech is here to stay, and it'll not only keep improving, but the advancements in AI due to self driving research will surely benefit several other areas.
It's hard to fully understand the challenges of SDCs because most of these players are extremely secretive about their approaches. We rely on disengagement data from California as a proxy, but that's just not great data when so many of the players are doing thousands of miles in other states. It's sad; I wish we knew more about the inner workings of these systems, and I would love to see them collaborate for the benefit of society.
I have a feeling that self-driving will be one of those things that everyone will be opted into. Self-driving becomes much easier if everyone is playing by the same ground rules (with either the same set of cars or a common protocol through which cars communicate). Trying to engineer away the entire problem space of driving seems intractable (think: you have to engineer for drunk drivers doing damn well nearly anything on the road)
Even if everybody in every vehicle (including motorcycles?) is forced to opt in, you still have pedestrians, bicycles, scooters, animals, and who knows what else on the roads. This feels like an impossible problem.
What I think is interesting is that none of these articles talk about one specific implication of self-driving cars.
What happens to car-insurance companies?
Seems like with self driving cars the form of car insurance we have now wouldn't really be necessary. I expect car-insurance profits to decline. Isn't there an incentive (read: lobbying) for car insurance companies to discourage self-driving cars?
I think car companies try to rely way too much on machine learning. You get some promising results fast, but it is all inside a black box; both verifying the correctness and changing what is wrong are almost impossible jobs.
Maybe machine learning can be used to tell the difference between a dog and a plastic bag, but you'll need some hard code to describe how to react to either.
> but you'll need some hard code to describe how to react to either
My understanding is that's largely how it's done. The ML part is mostly about recognizing objects. But the car doesn't "learn" how to drive. It's told how to drive depending on what's happening in its field of view.
Which is why there's probably misperceptions about the importance of miles on the road. It uncovers un-programmed situations but it's not like the car runs over someone and reinforcement learning leads to it not doing that next time.
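As a caricature of that split, something like the sketch below: a learned model (stubbed out here) supplies labels, and plain hand-written rules -- not learning -- decide the reaction. The labels and rules are invented for illustration:

    # Caricature of the perception/planning split described above: an ML
    # model (stubbed out) labels objects, and explicit hand-written rules
    # decide how the car reacts. Labels and rules are invented.

    def classify(image) -> str:
        """Stand-in for the learned perception model."""
        ...  # in reality: a neural net returning a label and confidence
        return "plastic_bag"

    # The driving policy is explicit code, reviewable and testable.
    REACTIONS = {
        "plastic_bag": "ignore",
        "dog": "brake",
        "pedestrian": "brake",
        "unknown": "slow_and_alert_driver",
    }

    def react(label: str) -> str:
        return REACTIONS.get(label, REACTIONS["unknown"])

    print(react(classify(None)))  # -> 'ignore'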
I've become increasingly jaded about the tech hype cycle. We had 3D Printing, VR/AR, crypto, and self driving cars hyped to no extent in 2-4 year cycles. We were told that they were going to change our world in the next few years.
Turns out that the underlying technology was far from maturity in each case and the use cases limited to a handful of enthusiasts.
I worked at a car marketplace company. The number of times people told us back in 2014-2016 that Americans would no longer own cars in “2-3 years” and that consequently our market would vanish overnight ... btw, this group of people included very respectable folks, many of whom have a lot of fans here on HN :)
That's trivially stupid though; if a perfect SDC came out tomorrow, it would take more than 2-4 years to turn over the vast amount of capital out there, and probably another decade or two to shake people (and thus policy, in a democracy) out of their status quo bias, no matter how terrible the status quo is.
Who would have thought that it might be tricky to replicate the capabilities of an organism exquisitely designed by evolution over more than 10 million years to dash through the jungle canopy with breakneck speed, split-second coordination and decision making?
We could make driving fully automatic by going the train route, i.e. filling roads with tracks and letting the cars move on those tracks only, fully automated.
But that's not really sexy, is it? It just doesn't sell as well as fully automated AI driving does.
There are lots of edge cases that humans can adapt to easily but that would confuse machines. The risk is that when such an edge case comes up, the car goes full batshit crazy and kills someone, and that will bring down the hammer.
Where did they find people who thought this was going to be easy, and more importantly why were they given billions to screw around with after grotesquely underestimating the complexity of the task at hand?
I have a feeling the adoption of AI will be like the early years of the internet: massive hype, then a massive crash, and then slowly it finds its way into everything, fulfilling most of what was promised.
Harder than the pundits expected. Nothing in the news cycle ever warranted such optimism. I've been a skeptic of the current tech long enough. Self-driving cars will come eventually, but the companies haven't yet even started to define the boundaries of the problem domain; they're just piling on heuristics, and people have literally died because of corner cases that weren't covered.
Well, unless you want to toe the line of "that wasn't true autopilot" -- to which I say: yeah, that's exactly the point, none of them is doing it.
My impression (not in the field) was that Uber wanted to believe in this out of wishful thinking, and then everyone scrambled because they thought they were going to be left behind.
And they are just getting started in developed countries.
The real test for a self driving car would be on roads in third world countries.
That would be a real test of capability.
Meanwhile, I just wish I could drive my car without moving my legs and hands so often. Why can't somebody make a car that can be driven by, say, a simple joystick-like thingy?
As long as software is based on clumsy "if then else when for" statements and computation relies on binary switches, self driving cars will remain impossible.
To anyone with a detailed understanding of the current limits of machine learning, this should be no surprise; unsupervised learning is far from solved and ML in its current state will always be plagued by the cat and mouse game of edge cases. The reality is the industry has decided to go this way anyways because it has a good profit outlook if you can get it to work, which is the only thing funding the endeavor. Consider two ways of going about automated transportation:
1) AI. The car independently makes decisions and drives itself.
2) Networks. The car communicates with a grid to make decisions.
Why did we go for #1? Well, that's easy: capitalism. Consider:
AI:
- Company gets to own the intellectual property to form a temporary monopoly.
- Easier to sidestep governments involvement.
- No need to build large infrastructure
Networks:
- Shared, less opportunity for monopoly formation.
- Will need the government to cooperate. Governments are slow.
If I had to guess, we will eventually go the network route. The research used for AI will drive safety features and failsafes, but not the meat of it. Anyways, why accelerate a line of stopped cars one at a time with autonomous vehicles when you could accelerate the entire line simultaneously with a networked setup?
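A toy comparison makes that last point concrete: with human drivers, start-up delays accumulate down the line; with a networked "go" broadcast they don't. The numbers are illustrative only:

    # Toy comparison: time until the LAST of N stopped cars starts moving.
    # Human chain: each driver reacts ~1.5 s after the car ahead moves.
    # Networked: all cars receive a "go" broadcast and start together.
    # All numbers are illustrative, not measured.

    N = 20
    human_reaction_s = 1.5
    network_latency_s = 0.1

    chain_start = N * human_reaction_s   # reaction delays accumulate
    networked_start = network_latency_s  # everyone starts at once

    print(f"last car starts after {chain_start:.1f} s (human chain)")
    print(f"last car starts after {networked_start:.1f} s (networked)")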
Oh, far worse. 90% of the effort will get you 90% of the way there. Another 90% of the effort might get you another 9% of the way there. But that last 1% of "getting there" is going to take many more 90%s of effort.
You mean the output of VC firm and Silly Con Valley marketing centipedes isn't strictly true? Perish the thought! Next thing you know, you'll be telling me the advocates of strong-AI, 3-d printing, Virtual Reality and Drones as revolutionary technology might not be exactly accurate.
I do wonder how much of the self-driving narrative at the peak of inflated expectations was the result of
1. Wild optimism fueled by how rapid the progress of ML had been in certain domains over the course of a few years, combined with a lot of hype and general SV techno-optimism. (And the fact that a certain demographic so desperately wanted a robo-chauffeur to drive them around.)
vs.
2. Hypesters and scammers who knew it was mostly smoke and mirrors but it didn't matter so long as they got their payday.
In the case of 1, I'm certain a whole bunch of it is people getting high on their own supply; aka pay a marketer to hype your thing, your competitor does the same, then you become afraid of all the progress your competitor has made (which is imaginary marketing hype). I've actually seen this dynamic at work in "AI" land; it's gotten me work.
I've even seen it happen within the same company: marketing dude talks to engineer, gets it all wrong and exaggerates capabilities; then CEO demands to know why the capabilities in his sales literature don't exist in the product.
I call it "human informational centipede."
Hypesters and scammers are always around; except in Theranos-type cases, they mostly don't really move the needle. Pretty sure Irene Aldridge didn't change perceptions of HFT much, for example.
You definitely get into feedback loops. And if "everyone" is saying self-driving is right around the corner you also begin to doubt your own skepticism especially if you think others should be in a better position to know the reality than you are.
To your first point, I agree there was something of a big game of topper going on for a while. If anyone came out and said that they weren't going to have production self-driving for 10 years (much less 20 or 30), a lot of people, including on boards like this, would nod their heads sadly about how far behind $COMPANY was compared to a certain other car company that was already supposedly selling self-driving-capable vehicles.
Turns out you can't make ridiculously complicated autonomous tech that interacts with the real world in an incredibly nuanced and varied way out of pure VC hype.
There’s a huge bubble in the autonomous vehicle space. I foresee Waymo being able to do it, only because I know some of the crazy smart people that work there. It won’t be until the end of the decade though.