I think it would happen this way: firstly you would automate things that go on a track, like trains, which I think we already have. Then secondly you could have certain lanes or roads that are designed for specific self-driving vehicles. This could help buses and trucks (in certain areas) get around. Then you would keep adding more of those until many roads have them and very few manual lanes remain. It wouldn't immediately work in a big established city, but some existing highways or interstates can be fitted easily. Anywhere that has an HOV/carpool/EV lane can be refitted as an automated driving lane. Think of it like the cash-only vs. E-ZPass toll lanes. At first there were 1-2 E-ZPass lanes, then the cash lanes were down to 1-2; now we automate them by reading license plates.
So to sum up, I think that self-driving will start in very specific areas, and then those specific areas will be expanded until they are the only areas. Especially if those roads carry automated taxis (Uber, Lyft, etc.) which you can call with your mobile device. The main reason a lot of people drive is that public transportation in their area is junk and you have to go by its schedule and route. A point-to-point self-driving option is way more convenient. It's like the subway, but it doesn't need tracks.
Seriously, please automate highway driving first. You get all the benefits for shipping, efficiency, and logistics, so the money is there. The problem is so much simpler, and there can be convergent infrastructure (embedded sensors, mesh network updates, higher standards of road quality).
On long trips on highways it just kills me that I have to stare at the same image for hours basically, and that a program couldn't do that more safely than me.
And if this happens, the airlines, with all their fees, cramped seating, security lines, and bad customer service, can be brutally competed against for any trip under 800 miles, and probably longer.
Even a one-hour, 500-mile trip in a plane is really 3-4 hours door to door, and you don't get a car at your destination, while driving there is about 5-6 hours. If I could surf the web that whole time, or nap, then I'll drive.
If I can sleep overnight, then 8-10 hour drives become far preferable to flying.
If I have friends along the way to visit, or interesting places to vacation in, two- or three-day trips are preferable to 1500-2000 mile flights, especially if you have a family and it is way, way cheaper.
Airlines have been reorienting themselves to shorter hops over the last couple decades. Self driving on highways will decimate that business, and only long haul/overseas will remain.
It seems there is a solution for your problem already: buses.
Although not strictly automated, they are externalized. That is, you are not the one doing the driving, and therefore you are free to sleep, read, work, etc., while you wait to get to your destination.
The beauty of buses is that they exist. There are no unsolved technology problems with buses, and they can be (and most likely already are) implemented with minimal or no additional infrastructure in your area.
If buses are overcrowded, your local government can simply buy more, so they scale really well. If they become congested, we also have a solution called trains. Trains also solve your problems with driving and, as a bonus, get you to your destination far quicker than driving.
Many people want privacy and control over the vehicle they're in, and will pay a premium for it. When you take a bus you're sharing it with random people, can't stop to take a break whenever you want, and can't go directly door to door.
Trains can be a superior alternative depending on what country you're in. In the U.S. we're far behind some other countries in high speed rail infrastructure, so over long distances trains are usually slower than driving.
If there is political will to fund the research needed and create the infrastructure for autonomous highway driving lanes, surely politicians in the US can find the incentives to fund the infrastructure needed for extra buses (including luxury lines with first-class seating, for those willing to pay the premium) and train lines.
Edit: As an example of political will for the established and proven solution of mass public transit over the unproven, non-existent solution of autonomous highway driving lanes, look at the steam (pun intended) that the idea of establishing high-speed rail between Portland, OR and Vancouver, BC is getting from the public and politicians alike.
> There are no unsolved technology problems with buses.
They have similar physics problems as cars.
1. They can't go very fast nor carry a lot of weight before they become uneconomical or unsafe.
2. They especially can't go very fast for very long if they are electrical.
3. They need a driver which makes them expensive to run often or with few passengers.
If you get to choose, what you want is something like:
A. Medium-speed trains. Cheaper than high-speed ones, but still twice as fast as cars, with more comfort and no attention required. But they need good infrastructure so they can sustain that speed consistently.
B. Local buses with fixed routes.
C. Better golf carts for local transport.
Eventually you would automate all of them. That would be relatively easy because the fast trains run on tracks, the buses follow known routes, and the cars, which are the most complex to automate, go slowly. Low speed also wouldn't matter for, say, automated deliveries or repositioning over longer distances.
Also, even before automation, since the cars would be "underbuilt" relative to today, they are cheap. So you don't get the sunk cost of a car. And since they don't go outside the local area, each municipality can choose its own infrastructure more freely.
Of course I don't see it happening as things are today, but this is in my opinion more in line with what should be discussed, since the physics isn't likely to change quickly.
Indeed, buses are by no means the optimal technology for transporting people. But they often are the cheapest and easiest, and therefore often the smartest choice for urban planners. However, once buses start to become congested (Seattle, I'm looking at you), or are foreseen to become so, they really are a terrible option next to some sort of train (light or heavy, underground, elevated, or at grade).
Buses could probably work in urban areas if there were fewer cars. Today cars come from far and wide and "bunch up" in cities. Much of the reason trains are better in urban areas today is their dedicated space and being electric. But trains don't really get to use their speed, easier environment, or carrying capacity much in urban areas.
With fewer cars in cities, buses could be competitive with trains since they are at street level and can go in different directions, but they would probably still have to be electric and automated for that to be true. (Of course you would still need subways and commuter trains anyway, but you wouldn't be as dependent on them.) Trams could probably also be an option, but I am not entirely sure about the future of self-driving trams.
Your comment is how things "should work", but really not how they do. At least in the U.S.
The drive from Las Vegas to Los Angeles is about 3.5 hours. It's a 90 minute flight.
If you try to do it by bus it takes 12 hours (often for about half the price of the plane ticket), and for some reason a train with NO STOPS took 16 hours. That train was discontinued just a couple of years ago because no one was using it.
Maybe someday we will get a high-speed maglev train for this commute, because mass transit has completely failed it.
> Your comment is how things "should work", but really not how they do. At least in the U.S.
Since this is a comment on a thread about the possibility of building the infrastructure required for highway lanes reserved for autonomous cars, i.e. a thread about how things "could work," I see no problem with giving myself the same leeway for the established technology as the parent does for any future technology.
I think the highway freight problem is also a good place for humans to act as a backup, since you more or less know when they will be needed (the beginning and the end). Also, a human driver on their 7th hour of driving is often not as alert as they normally would be, so the bar of comparison is easier for AI to meet.
Unfortunately, what the people paying for this want to achieve is the ability to not have to pay for a human driver, and if you need a human for the beginning and the end of the drive, you have to pay them for the whole time. So, while this would be an excellent application, I fear the business incentives are not aligned for it to happen first like it ought to.
I wonder if "launch zones" will be one of the first "modify the environment" solutions for public roads: a staging area where a driver can leave a truck that is about to go on the highway, some place very near the highway (even better if it is as simple to enter and exit as a weigh station).
If there were enough of them, and someone like Elon Musk with lots of eyes on the road via Teslas could identify when the vehicles need human drivers (wrecks/construction/weather), then trucks approaching the area where humans are needed could pull off at one of the staging areas for a human to take over. It would help with the "human from beginning to end" problem that would otherwise require level 5 automation.
You might not even need to team up for eyes on the ground if you have lightweight drones regularly flying the routes back and forth.
One problem might be finding enough people to take over at a moment's notice. Another might be drivers who only get periodic experience handling the truck. But it seems like a semi-feasible way to get a step closer to highway freight automation.
As someone who rides a motorcycle, I'm deathly scared of self-driving cars and the tech companies that will try to automate highway driving, especially with their agile "move fast, break things," "ask forgiveness, not permission" mentality.
I think your prediction will turn out to be as close or closer than any other. Certainly more on the money than the breathless “won’t be a problem in a few years when self-driving...”
But as I read your probably-on-the-money prediction, I was thinking “here we go again, dumping money for the two-ton wheelchairs while alternative transportation goes begging again.” Where’s my fucking barrier-protected bike lane? Or any bike lane at all?
Frankly, if that’s how it goes, I wonder if it happens at all. “Separate lanes for wealthy tech workers” is how that’s going to go. You know why we can’t even get bike lanes? Because people bitch and moan because they don’t personally use it. Expand that to an empty lane of moving traffic while the plebs sit in traffic.
> Frankly, if that’s how it goes, I wonder if it happens at all. “Separate lanes for wealthy tech workers” is how that’s going to go. You know why we can’t even get bike lanes? Because people bitch and moan because they don’t personally use it. Expand that to an empty lane of moving traffic while the plebs sit in traffic.
It’s bad policy to spend scarce resources (in this case road space) on something few people use. Just 0.6% of people bike to work at least once a week. Bike lanes are a phenomenon completely out of proportion to how many people use them.[1]
Self driving cars will be that way at first, of course. But the big difference is that self driving technology has the possibility of becoming mainstream. Biking, by contrast, will never become mainstream. The average American commutes 16 miles one way. That’s an hour of biking each way (in weather that, in most of the country, is too hot or too cold most of the year).
> It’s bad policy to spend scarce resources (in this case road space) on something few people use.
You mean like self-driving cars? How about we spend "scarce resources" on things people actually do use right now, like bicycles, scooters, electric skateboards, whatever.
You're also arguing, "let's not spend anything on infrastructure for $ACTIVITY, then when no one does $ACTIVITY, we can say, 'yeah, but nobody does that'."
We're already doing that in my neck of the woods with toll express lanes on the interstates. The moneyed skip the traffic at the expense of widening roads that would help all. These express lanes will logically be the first to allow automation, but at least in that case the affluent will be paying to prove out the technology, which will eventually benefit everyone.
A problem with this is that I don't think there are very many places with available, extra lanes.
That prevents a smooth, gradual transition. Essentially you're proposing a massive public transport project that will progressively take over and replace existing transportation infrastructure.
I guess that's possible, but we could have done that without autonomous automobiles and it hasn't happened yet (well, not on a pervasive scale). Actually, if we could do that, I'm not sure we'd actually choose autonomous vehicles, as generally envisioned, as the mode of transportation.
Put another way: I think the whole lure of autonomous vehicles is the idea that they can adapt to existing infrastructure rather than the other way around. That's exactly what seems to make it possible for them to catch on in a big, general way, and eventually replace human-driven cars.
This has been my line of thought as well: autonomous vehicles will slowly emerge over time. We are not just waiting on 1 or 2 big tech breakthroughs, but also on a lot of infrastructure and regulatory changes, which take a long time to enact.
I would wager we are at least a decade away from self-driving cars being a common part of daily life, in the sense that the average driver would see them on the road somewhat regularly. And probably 2 decades away from self-driving cars being an "everymans" vehicle and/or delivering on the concept of not having to own a vehicle and instead summoning one when needed, or putting your autonomous vehicle to work for you while you are not using it.
> Then secondly you could have certain lanes or roads that are designed for specific self-driving vehicles
I have a friend in mapping at Uber ATG. Uber Eats is only available in areas she has mapped which makes me think that they aren't waiting for dedicated infrastructure and relying on their mapping database for now, at least for early development. Granted, Uber Eats isn't using AVs, but that makes me think the GIS crew is doing some heavy lifting there. Pure speculation on my part though.
The issue with the intermediate "specific lanes" step is threefold. 1) Those buses and trucks will still require a human driver for most of their journey to deal with the unmarked sections, so there's not much gained because you're still paying someone the whole time. 2) The infrastructure is useless without the vehicles, and the added capability in the vehicles is useless without the infrastructure, so there's a chicken-and-egg problem getting either started. 3) On top of that, unless the lanes are completely segregated, the only real thing adding hardware to lanes buys you is easier lane keeping. You still need all the accident-avoidance cameras/lidar on the car, the same as on a more "free range" auto.
I don't deny that this is a thing that is possible to do, but what's the value add of the cars being self driving - why wouldn't we invest public money more rationally than that? The only political way this could happen is if people in powerful political positions were being paid by the manufacturers or patent-holders of this technology.
I would expect full self-driving to come through remote operation capability - at first a single person can remotely operate one vehicle. After some improvements it would be possible for a single operator to control 2 vehicles and so on.
Your vision is idealistic, but I doubt it's going to work like that in advanced capitalism America. We don't even really have automated trains in the USA, though admittedly the business-case for those is much lower given the high worker:rider ratio. It stands to reason, in our very greedy and polarized nation, anything that requires infrastructure and policy changes is going to be VERY slowly rolled out. The most likely scenario is tech companies bust out advancing features in regulation-light states/municipalities in the USA.
Look at Uber, Lyft, Bird, AirBnB, and many other companies who just implement whatever they feel like and dare cities to challenge them. The path of corporate-profit-seeking SDC's seems far more likely than your vision of limitation and safety.
In fact, if we actually look what's happening on the ground, Waymo is advanced in Arizona, a regulation-light state. Tesla is pushing advancing features, hardware, and software across markets, though there are some exceptions made. Still, these companies are going to push for autonomy-everywhere faster than most would deem comfortable.
Traffic lights are automated. They detect how many cars are waiting to know when to change, they adjust their cycle length depending on time of day, etc., they synchronise along long roads so people generally get a string of green lights.
Only a small proportion of traffic light signals are adaptive and/or connected today. Most rely on manual signal timing studies to set a timing schedule. The up-front cost is typically prohibitive. Full disclosure: I am the founder of a startup changing the economics of this.
Most of the ones I regularly go through here (Christchurch, New Zealand) seem adaptive, as you can see the detector positions (often separate ones for cyclists as well). The lights clearly respond to them. The behaviour along long busy roads here may well just be timing to line up greens.
In the USA the latest statistics indicate less than 3000 of these adaptive lights are in operation (“less than 1%”). I am curious what New Zealand did to fund the six figure cost per intersection.
Hm. Maybe we're talking about different things. The detectors I mean are fairly simple: inductive loops in rectangles cut into the road, that detect the metal body of a car (or metal frame of a bike). They don't seem technologically advanced or expensive (but I don't know the cost). This article claims they're common: https://auto.howstuffworks.com/car-driving-safety/safety-reg...
Sorry, now I see what you mean! Yes, underground inductor loops are already pretty common. They make it possible to do things like preventing a change of lights if nobody is waiting on the red. The limitation is that they can only detect the car physically on top of the loop. They can also be less than ideal in some situations, e.g. someone on a side road turning right who triggers a change of lights, needlessly stopping the main road. It’s typically a great improvement over nothing at all, but there is massive room for additional improvement.
I thought you were initially referring to Adaptive Traffic Signal Controllers.[1] They detect vehicle congestion in real time and adapt routing strategies to maximize throughflow. They also coordinate directly with other traffic lights instead of relying on manual timings, which are often determined through lengthy and data-intensive “signal studies.” These have been shown to provide, for example, substantial time savings, fuel and emission savings, and measurable positive effects on safety.
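For intuition, here is a toy sketch of the simplest form of vehicle actuation: a green phase that extends per detected vehicle up to a cap. All names and numbers are invented for illustration; real adaptive controllers optimize across whole corridors and are far more sophisticated.

```python
# Toy sketch of vehicle-actuated signal timing. All parameters are
# invented for illustration, not taken from any real controller.

def green_time(queue_len, min_green=10, max_green=60, per_vehicle=2.5):
    """Extend the green phase by a fixed number of seconds per queued
    vehicle, clamped between a minimum and maximum green time."""
    return min(max_green, max(min_green, queue_len * per_vehicle))

# An empty approach gets only the minimum green...
print(green_time(0))   # 10
# ...while a long queue is capped at the maximum.
print(green_time(50))  # 60
print(green_time(8))   # 20.0
```

Even this crude rule avoids the main failure mode of fixed timing plans: holding a green for an approach where nobody is waiting.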
One of the things I do when I'm the only car waiting at an otherwise empty intersection is to think about all the ways it could be better automated rather than timed.
A good many people stay off the road anyhow during poor weather, and those who do go out usually stay on familiar routes, such as to work or their favorite grocery store.
If self-driving cars/buses stick to a subset of streets and don't go out in big storms, they can still be widely useful. Set up (require) a universal network/database of road conditions on the main streets that all self-driving vehicles can tap into. If a problem arises, it then only directly affects the first bot-vehicle that encounters it instead of all of them that use the road. It's internet-like packet switching.
Thus, bots may get "confused" more easily than humans, but they can also take advantage of automation to work around confusing areas. The upsides of automation thus counter the downsides.
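The packet-switching-like idea can be sketched as a shared hazard registry: the first vehicle to hit a problem reports it, and every later vehicle checks the registry before using that segment. Class and method names here are invented for illustration:

```python
# Toy sketch of a shared road-condition registry (names invented).
# Only the first bot-vehicle encounters a problem; the rest route
# around it, much like packet switching routes around a dead link.

class RoadConditions:
    def __init__(self):
        self.blocked = set()

    def report_problem(self, segment):
        """First vehicle to encounter an obstruction flags the segment."""
        self.blocked.add(segment)

    def is_passable(self, segment):
        """Later vehicles check before committing to a route."""
        return segment not in self.blocked

net = RoadConditions()
net.report_problem("Main St: 4th-5th")      # first bot hits the obstruction
print(net.is_passable("Main St: 4th-5th"))  # False -> later bots reroute
print(net.is_passable("Oak Ave: 1st-2nd"))  # True
```

A real system would need timestamps, expiry, and trust/verification of reports, but the core idea is just this shared lookup.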
The biggest cost of taxis and buses is the driver. If you remove that, then "hitching a ride" is a lot more affordable to those who can't or don't drive.
By admitting this fact you dramatically reduce the value of self driving cars and thus the inflated valuations of companies trying to create them. It is an uncomfortable truth.
Once their quantity reaches a threshold, it will quickly become the primary mode of transportation, and expand onto more streets.
Many youngsters don't even want to drive these days, I've noticed. They'd rather sit in a bot car/tram and surf social media. The desire for the product will grow.
> A good many people stay off the road anyhow during poor weather, and those who do go out usually stay on familiar routes, such as to work or their favorite grocery store.
How many people? Folks in Atlanta will happily drive 55 on 75/85 in torrential downpours where you can’t even see the lane markings.
Should everyone own a self-driving car and also a normal car they can drive in situations the self-driving car cannot handle? Keep in mind that public transit is nonexistent in many of the places that have weather conditions self-driving cars can't handle.
For me personally, if a self-driving car cannot handle 100% of the driving and I have to own a second car, I might as well just make that second car my only car and save a bunch of money. In that sense, it is all or nothing.
I don't really think treating self-driving cars as a ridesharing service is a good way around this. People don't want to share cars with strangers who trash the interiors and vomit in them after a night at the bar. People want to keep stuff in the trunk, go to Home Depot and buy a bunch of stuff, transport their pets, etc. -- all things that are difficult to do with a ridesharing service.
Bot-cars in snowy places may be better equipped for bad weather. They could be more like snowmobiles than cars, or the bot company could have a fleet of snowmobile-like cars for the winter (or winter wheel-train add-ons).
This has been obvious all along to anyone who has ever driven or used a computer before and thought about the reality for more than a couple of minutes. But the press and VCs bought into the hype from the likes of Uber who needed to keep generating huge new investment to stay afloat and Tesla which is run by a delusional snake oil salesman who had a single hit with the Model S.
The shocking examples of crazy unexpected behavior in the article like street sweepers that do exactly what they are supposed to be doing and cyclists who don’t follow traffic rules blow my mind. Next we’ll learn that some streets have poorly painted lines or that road construction exists or that most human drivers exceed posted speed limits or that there is weather other than sunny and clear.
> Tesla which is run by a delusional snake oil salesman who had a single hit with the Model S
Being that dismissive and willfully ignorant discredits your entire argument. A snake oil salesman produces nothing and hoodwinks people. I drive my "snake oil" every day and not only is it the best car I've ever driven, it's the coolest thing I've ever owned. I routinely watch his "snake oil" launch huge payloads to orbit and land the booster(s) autonomously for cheap re-use. Hate the guy personally if you want, but slander like yours is simply holding back progress.
Nobody said the snake oil salesman doesn't also offer aspirin and stuff that is actually valuable... in fact they'd be a shitty caricature of a snake-oil salesperson if they didn't.
I agree it's overly strong language and I admire Teslas for what they are (fancy toys, his words) and SpaceX more than I can adequately communicate... however, his claims about Hyperloop, the Boring Company, Neuralink, and yes, even those about the future of Tesla self-driving could all be considered hyperbolic claims that fall into the "fake it till you make it" category, just adjacent to the actual frauds.
People challenging these claims and holding hyped up individuals to account are not holding back progress, it's the blind faith people put into hyped individuals and their claims that is holding back progress. Elizabeth Holmes has done far more to hinder progress than the people who were naysaying her claims and calling her a snake-oil salesperson before it was commonly known to be true.
It means that when you defraud investors over outlandish claims, even if you believe you can do what you are claiming, then you have destroyed trust and made it harder for everyone else in the industry (who isn't making shit up) to get the funding to do the real work that creates real progress.
You seem to have skipped/ignored the whole paragraph where I give Elon lots of credit, and then the part where I describe how people were treating Holmes before it was common knowledge that she was making shit up.
I don't believe she set out to defraud investors; she just believed her own hype and was willing to lie more than most to maintain it. If you don't think these exact conditions affect Elon, or the people who feed him information, then I think maybe you're suffering from some of the negative effects of excessive fandom.
I just think it is disingenuous on your part to act like their output puts them in the same boat. You are lumping them together and I find that to be a completely inaccurate comparison. How you can compare someone who literally made nothing with someone who took his millions of dollars of worth and risked them on a space company and a car company ... it just blows my mind.
As far as self-driving is concerned, I do think we might be suffering from early-Apple-iPhone-prototype syndrome: at best we might be a couple of decades away from a solution to this problem. However, if you take the time to listen to some of the presentations about how Tesla is actually trying to solve self-driving, their approach sounds about as sound as I could expect it to be.
Please note that fraud applies to people who intentionally deceive. Of the two, I would say that only Elizabeth falls in that boat: she went as far as attempting to alter her voice, among other things. When it comes to Elon, he is just too eager to see results happen. I have difficulty faulting him for that.
I lump them together in one dimension, the one where they can extract millions from investors based on hype.
Yes, Elon deserves the hype more because he has delivered in the past and he certainly has more skin in the game, but it doesn't mean he isn't still playing the "fake it till you make it" game to some extent, and because failures are more impactful than wins, I think it's a dangerous game to play with his reputation.
I want Elon to cut the hype BS because I think it will hurt him in the long run. It's all about trust for me, it's a resource that is being depleted at a rapid rate and it's extremely important to a functioning civilization.
Until the relatively recent backlash, Tesla’s Autopilot page was carefully worded to trick the average reader into thinking that Tesla was far closer to autonomous driving than it really was.
That’s snake oil salesman 101.
> All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver.
It's hard to judge Tesla at this point; most automotive companies really can't be judged until their vehicles have been on the road heavily for over a decade. I've said this before, but I can't imagine a Tesla on the road today still being on the road in thirty years. Hopefully I'm wrong. Otherwise we're going to have a lot of 'good for the environment' vehicles in landfills, like cellphones, while the gas-guzzling 30+ year old Toyota chugs along.
Tesla Roadsters have been on the road for over a decade. I have a 2012 Model S and a 2018 Model 3. Best cars I've ever owned, and my almost seven year old S keeps getting better with software updates. I totally get people hating on Elon, but it must largely be jealousy of what he has accomplished.
Well yes. Most people's current car is the "best they've ever owned" because people generally amass more wealth and drive nicer cars as time goes on (people who begrudgingly buy minivans notwithstanding).
I'm not gonna declare Tesla a "mature automaker" until I see scrappers strapping stolen I-beams to the roof of 20-year-old Teslas.
SpaceX is the more mature and financially stable company by far IMO.
There are Teslas with 900,000 km. That's more than twice the distance from the Earth to the Moon, and a lot of 30-year-old cars have fewer km than that.
The first Model S was released in 2012, about 370 weeks ago. 900 Mm would be 2,432 kilometres per week (1,511 mi/wk), approximately 24 hours of highway-speed driving (100 km/h) per week.
Do you have any additional information on this claimed 900000 km figure? Or did you mean 90000 kilometres, which is just standard for such a car type.
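The parent's per-week figures are easy to reproduce; as a quick sanity check (370 weeks approximates mid-2012 to 2019):

```python
# Reproduce the back-of-envelope numbers for a 900,000 km odometer
# accumulated over roughly 370 weeks since the 2012 Model S launch.
total_km = 900_000
weeks = 370

km_per_week = total_km / weeks
hours_per_week = km_per_week / 100  # at 100 km/h highway speed

print(round(km_per_week))     # 2432
print(round(hours_per_week))  # 24
```

So the claim implies about a full day of highway driving every week since launch, which is why it only fits a commercial or very unusual use case.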
A lot of 30 year old cars with less km than that? I think that's a pretty big assumption unless you're only considering non daily drivers or garage cars.
I also imagine most buyers who own a Model S today probably aren't going to keep that vehicle for 30 years because they're technically savvy and will want a newer vehicle. Hopefully I'm wrong but what I'm getting at is I don't want a new brand of vehicle that is treated like a cellphone.
Cycling cars every few years for hardware improvements cannot be good for the environment. Tesla also has a pretty bad reputation with self repair and rehabbing damaged vehicles which I imagine will deter the used market quite a bit.
21.5k km per year is the average driving distance in US: https://www.fhwa.dot.gov/ohim/onh00/bar8.htm
Multiply it for 30 years and you get 645k km.
It seems to me that it’s well below 900k km.
Do you have any official data that contradicts the US department for transportation to prove your assumption?
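The lifetime figure above is straightforward to reproduce from the FHWA average:

```python
# 30 years of driving at the US average annual distance (~21,500 km,
# per the FHWA statistics linked above).
km_per_year = 21_500
years = 30

lifetime_km = km_per_year * years
print(lifetime_km)            # 645000
print(lifetime_km < 900_000)  # True: well below the claimed 900,000 km
```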
If you don't think Elon Musk is a fraudster with this robotaxi bullshit, sir, you might have drunk the Kool-Aid.
Repeatedly pretending that full self-driving was anything more than a pipe dream is dangerous. Everyone knows that except the people who enable his scamming.
I don't think he's a fraudster. I think he's a visionary who sometimes sees things more the way he imagines them than the way they really are.
Things aren't as black and white as a bunch of people on here make them out to be. It's possible to deliver great products while dreaming too big and failing on others. The quality of being able to take risks and fail repeatedly until finally succeeding is one that most entrepreneurs possess.
I don't know why everyone gives this guy such a hard time. And I don't know why he has to be either a genius or a crackpot but not both at the same time. He can be enormously successful - like getting a new car company off the ground, making electric cars mainstream, putting unprecedented driving aids in the hands of consumers, rocket launches at an incredibly affordable cost - and not hit a home run with other ambitious projects like hyperloop, underground tunnels, robotaxis, etc.
Personally, I think the robotaxi idea is stupid. I get that there's a market for autonomous fleet vehicles but which individual car buyers are asking for a taxi service? I didn't buy a 70k car so that it could drive a bunch of strangers around and make me a couple bucks on the side. And, assuming they pull it off (which I doubt), I'd be pretty pissed if the cost of new Teslas went up because of the availability of some service I had no interest in to begin with. It would alienate me as a customer.
That said, I love my Tesla. I use Autopilot every day and it has been life changing. It's not perfect, but it works well enough for 90% of my driving. I rode in one of those self driving Lyft cars recently. It required 2 operators, the driver had to repeatedly take over and it was very similar to my Tesla in terms of ability. The difference is the Tesla is in the hands of consumers. It would be very difficult for me to go back to a regular gas car and I don't think I could go back to daily driving without advanced driver aids like Autopilot.
I have my doubts about city self driving. When I'm coming up to a light and it's obstructed by a big truck that's in front of me, I wonder how they will solve that problem. Or railroad tracks, pedestrians, animals running into the road, etc. Those seem like insurmountable problems to me. I'm willing to wait and see though. They've already achieved more than I thought I would see in my lifetime. I won't begrudge them the occasional failure.
>I don't know why everyone gives this guy such a hard time
I literally wrote that he's advertising the dangerous idea of any kind of self-driving.
The other reply to me even used "autopilot" in their comment. If you don't think branding lane assist as "Autopilot" is dangerous and still defend Elon, then alright. Have fun writing walls of text worshipping techbro Jesus?
>repeatedly pretending that full self driving was anything more than a pipe dream is dangerous. everyone knows that except people that enable his scamming
Drove my Model 3 on autopilot today. Entered a roadway that had recently been repaved, and lacking lane markers. The Model 3 couldn’t handle it and wouldn’t let me engage autopilot. There aren’t robotaxis until that trivial problem is solved.
It’s surprisingly good at 99.9% of scenarios. Unfortunately I encounter one of those 0.1% events each day, and they’re always different. For example, yesterday there was a tractor driving down the road in the lane next to me. It had axles which stuck out almost two feet from the wheel. Suspect that scenario isn’t going to be handled properly as the appropriate bounding box is non-standard.
I know this is far out there. But I believe the problem of fully autonomous anything via software is fundamentally impossible. The divide between analog and digital is just too big.
You are basically saying AI is fundamentally impossible. Could you explain what kind of magic in humans, besides our simulatable physics, is fundamentally impossible to imitate? I understand we might be a long way off from understanding how our minds work, and there is no guarantee we can massively simplify those processes. But even an inefficient imitation of a human mind, as seen in earlier sci-fi, could bring lots of benefits. Saying fundamentally impossible seems like... well, wishful thinking, to say it nicely.
Rephrasing it as "fundamentally impractical with Silicon transistor-based computers" might make the statement quite a bit easier to defend without substantially changing the meaning. There's no magic in the human brain, but we're nowhere near capable of building one in the lab and certainly not simulating one in a computer. We're so far away from doing either of those things, that we really have no frame of reference to even talk about whether AGI is possible or practical.
So how hard are you willing to short the entire software industry over the next 10 years? I definitely wouldn't bet against automation in the next few decades.
You get the feeling that they're solving the wrong problem though.
No one can predict what other people are going to do, and what other drivers do is also affected by what you do. Most drivers aren't even paying that much attention. Realistically you can't model that to any useful level of precision.
I think maybe vendors are trying to build a more deterministic system that can justify its actions, but to make autonomous driving actually work, I suspect you have to make it drive like a human: taking actions "confidently" (i.e. slightly recklessly) assuming the world will roughly follow a rational model, while also driving "defensively" to cope with the general unpredictability of reality.
The price to pay for this strategy is familiar to human drivers: the occasional fender bender.
The 'wrong problem' might include trying to achieve everything with smart-as-an-ant cars on dumb roads. It may be possible to make any important road smarter inexpensively, so that the car's sensors could 'read' the road. Then the car has external, deterministic 'input'.
By reading the road, pinging dumb targets, the car has input to help it decide if it can drive safely. If the targets go missing, it quits safely. The dumb targets should ping thru snow. At certain intervals, the road is smarter, with powered, networked targets that can update the car about 'danger, Will Robinson' conditions in real-time.
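A minimal sketch of that fail-safe logic, with entirely hypothetical names and thresholds (no real vendor API implied): the car drives autonomously only while enough healthy road targets are visible, and quits safely otherwise.

```python
# Hypothetical sketch of 'read the road or quit safely' logic.
# RoadTarget stands in for a passive marker the car pings; all
# thresholds are made-up illustrative numbers.
from dataclasses import dataclass

@dataclass
class RoadTarget:
    position_m: float   # distance along the road
    healthy: bool       # responded to the ping

def can_drive_autonomously(targets, min_targets=3, max_gap_m=50.0):
    """Drive only while enough healthy targets are visible with no large gaps."""
    healthy = sorted(t.position_m for t in targets if t.healthy)
    if len(healthy) < min_targets:
        return False
    gaps = (b - a for a, b in zip(healthy, healthy[1:]))
    return all(g <= max_gap_m for g in gaps)

# If targets go missing (snow damage, vandalism), the car quits safely.
targets = [RoadTarget(0, True), RoadTarget(40, True), RoadTarget(75, True)]
print(can_drive_autonomously(targets))   # True: 3 healthy targets, gaps <= 50 m

targets[1].healthy = False
print(can_drive_autonomously(targets))   # False: only 2 healthy targets remain
```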
> Tesla which is run by a delusional snake oil salesman
Dude, you may not like Musk personally (which is glaringly obvious), but at least stick to the facts - he triggered a much-needed car industry transformation more than anybody else in recent times. The industry wouldn't have changed this rapidly on its own even if the planet were burning; that's obvious. And mankind gravely needs it now. Plus the small detail of revolutionizing the whole commercial space industry in an extremely effective way.
Mankind now gravely needs more of these 'snake oil salesmen', even with their missteps.
Ah yes, because it's not like the Prius demonstrated that there was a market for green cars, or the CA or federal green car credits and HOV access provided incentives to people to buy said green cars, or federal regulations regarding fleet emissions standards created the market for trading emissions credits that has singlehandedly kept Tesla afloat every time it's come close to running out of money.
It's silly to shun emissions credits that are desperately needed to electrify transportation considering the trillions of dollars of subsidies petroleum has received and the severity of climate change. The market for an underpowered hybrid like the Prius is not the same as a market for an EV with no compromises (how does one compare a Model 3 to a Prius?).
Any automaker gets ZEV credits and federal tax credits for their EVs sold, Tesla is the only automaker selling EVs people want to buy (in quantity). Tesla has already sold more Model 3s in a quarter than Chevy has sold Bolts ever. So why is Tesla the one selling hundreds of thousands of EVs per year and no one else is?
"How dare Tesla take advantage of these regulatory and market advantages anyone else could be taking advantage of!" /s
Sorry to hear that's your opinion! Besides waiting a bit longer to Supercharge vs getting a tank of gas (which is rare, only when traveling, I charge at home every night), my experience has been much better with a Tesla than any internal combustion vehicle I've owned.
That's not really the only compromise Tesla has. It's got terrible ergonomics, it lacks features that these days aren't even considered particularly luxurious, mediocre build quality, repairs that can take months, etc. But it's got fart jokes! So it all evens out. /s
For my personal usage, when I am traveling (which usually means something like driving 2000 miles with a single long stop), filling/charging time makes all the difference between getting there the way I want to or not, too.
You might have a point if there wasn't any demand, but Tesla is shipping almost 360k vehicles a year (and Gigafactory 3 in China is about to turn up). Someone likes the cars they build.
Very, very few people use their car as you describe (2000 miles in a single sprint). If you must perform such a trip, most will fly or rent a car just for that trip.
Oh, of course there is demand, that's quite obvious. (Well, there was demand for Juicero, too, for a while)
Not too many people might use the car exactly like that, and I do it only when I actually need to get the car there. Otherwise flying is more pleasant. But there are quite a few use cases where recharge/refuel time matters, usually for people who need their cars to make a living.
Personally, I just will not, ever, buy anything from Musk, because I think that he is a terrible (even by SV standards) person, but yeah, sure, Teslas obviously work for some people. But saying that they are the bestest, uncompromiziest, never before had the world seen anything as awesome super-vehicles is just ridiculous.
If anything, driving across Southern US, even if you are on a leisurely road trip, in a car with cooled seats (a $30K Hyundai works) is far more pleasant than in a car with fart jokes.
> It's silly to shun emissions credits that are desperately needed to electrify transportation considering the trillions of dollars of subsides petroleum has received and the severity of climate change.
It's disingenuous to claim that subsidies for petroleum are remotely the same thing as subsidies for EVs. You're comparing corn to apples here. The proper comparison would be subsidies for renewable energy like solar and wind to subsidies for petroleum, since in both cases the subsidies are indirect to the market that we're talking about: automobiles.
It's also disingenuous to claim that subsidies stretching out over a century, and partially rooted in global geopolitics, are the same thing as subsidies that have been around for about a decade.
Finally, it's disingenuous to cite the "trillions" of subsidies worldwide for petroleum while leaving out the hundreds of billions of subsidies that green power like solar, wind, geothermal, hydropower, and nuclear have received worldwide.
> Tesla has already sold more Model 3s in a quarter than Chevy has sold Bolts ever....So why is Tesla the one selling hundreds of thousands of EVs per year and no one else is?
The Chevy Bolt is supply-constrained; total global production is 30,000 vehicles a year. There are currently waitlists at every dealership selling a Bolt. Competing EVs like the iPace are similarly supply-constrained and also have months-long waiting lists. Unlike Tesla, other automakers launch models slowly and scale up production as demand proves itself and production hiccups reveal themselves and are addressed. Last I checked, there's no months-long waiting period to get a Chevy Bolt fixed because (a) they don't need fixing straight out of the factory like so many Teslas do and (b) the supply of repair parts is readily accessible due to Chevy's mastery of basic automobile logistics...
Tesla made it cool. Prius was never primarily known for being eco-friendly, it was known as the car that dorks and hippies drove. Ignoring the cultural impact of Tesla is why plenty of green vehicles failed in the past.
Outside of SV, Teslas aren't viewed as any cooler than Priuses. They're actually seen as much worse--elitist vehicles--since Priuses are affordable for most families and Teslas are not, plus require lots of expensive infrastructure just for basic use.
I know a few people who drive Teslas. Not one of them is someone who would even be remotely described as cool. I know a lot of people who drive Priuses, and they run the gamut from dorky to cool.
> Ignoring the cultural impact of Tesla is why plenty of green vehicles failed in the past.
What cultural impact? Tesla's influence operates largely in a self-made echo chamber. Outside of the echo chamber, it's had literally no impact on car sales or car culture.
Green cars failed in the past because they were (a) super expensive and (b) had no marketing spend. Tesla's innovation was the same innovation that Toyota made a decade earlier with the Prius--green cars will sell if you market them to customers. (And despite Elon's claims that Tesla spends $0 on marketing, Tesla spends roughly $100m/year or more on marketing, per their SEC filings.)
The Prius succeeded in spite of itself. They are ugly as hell, didn't offer plug in capability for years despite customers begging for it, so slow you can barely merge onto the highway and not the most ideal family car. It's the kind of car you put up with because you want to be green or you have a shitty commute and you're tired of paying $4/gallon for gas. Nobody buys a Prius for any reason other than that it's a hybrid with great fuel economy. Nobody. Then there was the Honda Insight which was worse in every way.
Teslas appeal to both the customers who want to be green and the ones who don't give a rat's ass about the environment. They look nice, they go fast, they hardly require any maintenance and they have a badass infotainment system which no other manufacturer has been able to get right. The first Model S really popped and people were buying it in spite of the uncertainty around it being electric. Then they started loading them up with tech and driver aids.
I get that it's not everyone's cup of tea, but I don't know how anyone who's into cars can look at one and not find something cool about it.
As for "expensive infrastructure": you do realize the first car manufacturers didn't start building gas cars because there was already gas station infrastructure in place, right? Electric cars are here to stay. Sooner or later somebody had to start building charging stations, just like somebody had to start building gas stations.
> Nobody buys a Prius for any reason other than that it's a hybrid with great fuel economy. Nobody. Then there was the Honda Insight which was worse in every way.
I don't understand this comparison. Yes, it's a commuter appliance. Any car you spend 90k on is going to look nicer and go faster than a Prius. So will a 7 series, who cares?
What the Prius did is mainstream the idea of hybrids and "green" vehicles in general, and they've sold a zillion of them. Go ahead and call it boring (it's super boring), but so is essentially every other commuter car it's competing against. The Model S on the other hand has only "popped" among people who can buy luxury sports cars to begin with. It's fast, it's impressive, and it's a niche luxury product whose entire fleet is a rounding error in Prius sales figures.
What did he trigger exactly that wasn't triggered by regulation? He just exploited some regulations to keep his structurally bankrupt company afloat long enough. Tried the same with SolarCity and failed, BTW.
No other automaker is building EVs at the scale Tesla is (nor has built out global EV charging infrastructure to support those vehicles being built and sold). No EV being sold today meets the standards set by Tesla's vehicles sold in 2013.
I'm not agreeing at all. BMW just lost their CEO because they've lost so many sales to Tesla. Other automakers are desperately trying to play catchup to them.
I have never had people come up to any other car I've owned (Chevrolet, Mercedes, Jeep, Toyota, Lexus, Infiniti) besides our Model S and Model X with questions, tell us the car is beautiful (in an Aldi parking lot no less), or want to go for an impromptu ride along. I've never had kids run up to any of our cars besides our Teslas and go "omg it's a Tesla!!".
The transformation was making electric cars better than internal combustion vehicles in every way, sexy, and desirable. Mission accomplished.
We don't need self-driving cars that can go anywhere (ie. level 5) for them to be widely useful. A car that only worked in bright sunshine on a preprogrammed set of roads would be immediately useful to me.
Semi trucks, solely on well done roads (Interstates in the US) seem more likely to happen sooner. There's an obvious strategy for "ports", dealing with exceptions, etc.
Tesla frequently talks about platooning when touting their Semi. "Self-driving" by almost blindly following a human-driven lead vehicle doesn't seem that far off. But I'm not looking forward to the day when I see a "train" of semi trucks speeding down the interstate only a foot or two apart from each other!
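That "blindly following" behavior is basically a gap-keeping controller. A toy sketch, with made-up gains and gaps rather than anything Tesla has published:

```python
# Toy constant-gap platoon follower (proportional control).
# All gains and the target gap are illustrative assumptions,
# not parameters from any real system.

def follower_accel(gap_m, rel_speed_mps, target_gap_m=0.6,
                   kp_gap=0.5, kp_speed=1.2):
    """Commanded acceleration for a following truck.

    gap_m: current distance to the lead vehicle.
    rel_speed_mps: lead speed minus follower speed (positive = falling behind).
    """
    gap_error = gap_m - target_gap_m
    return kp_gap * gap_error + kp_speed * rel_speed_mps

# Too far behind and slower than the leader -> accelerate (positive command).
print(follower_accel(gap_m=5.0, rel_speed_mps=2.0) > 0)   # True

# Too close and closing fast -> brake (negative command).
print(follower_accel(gap_m=0.3, rel_speed_mps=-1.0) < 0)  # True
```

A real platoon controller would add damping, string-stability constraints, and vehicle-to-vehicle links, but the core loop is this simple.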
That could work, for the platoon, while on the interstate.
It might not work for other vehicles on the interstate, though. If there's not room to pull in between them, then you have to pass all of them at one time. If you're a semi that wants to drive 1 or 2 MPH faster than them, that's going to be a very long, slow, pass. It will be even worse if it's in hilly country, where the semi attempting to pass may have higher speed on flat ground, but less power for climbing. That could block both lanes for a really long time. Human drivers in cars (some of whom want to drive faster than semis, and have more horsepower per ton at their disposal) will be very annoyed.
Then you get one "train" trying to pass another, and things get even worse.
Then the train gets off the interstate. They hit a traffic light. Less than all of the train makes it through the light in one cycle. Now what?
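A back-of-the-envelope sketch of how slow that pass would be, using illustrative numbers for platoon size and clearance (none of these figures are from the thread):

```python
# How long does a 1 mph overtake of a truck platoon take?
# Truck count, lengths, and clearance distances are illustrative guesses.

MPH_TO_MPS = 0.44704

platoon_trucks = 5
truck_len_m = 20.0        # tractor plus trailer, roughly
gap_m = 1.0               # "a foot or two apart", rounded up
clearance_m = 60.0        # room to pull out before and merge back in after

platoon_len = platoon_trucks * truck_len_m + (platoon_trucks - 1) * gap_m
distance = platoon_len + clearance_m
closing_speed = 1.0 * MPH_TO_MPS   # passing just 1 mph faster

minutes = distance / closing_speed / 60
print(round(minutes, 1))  # roughly 6 minutes stuck in the passing lane
```

And that assumes the passer can hold that 1 mph advantage the whole way, which the hill-climbing point above suggests it often can't.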
Yeah, good luck making your exit with those on your right. I can't believe people think this would be allowed on the road. You thought one jackknifed truck on the road was bad? Wait until the human driver at the head of a platoon makes a mistake.
But it's a far cry from the initial promise. For instance, Uber can't replace their drivers with self-driving cars if they need to bring all their drivers back when it rains.
At that point, autonomous driving is a convenience/safety features for autos/trucks that are more or less as we know them today. You can't build transportation systems and ownership models around vehicles that only work in some places in some conditions. (With some exceptions, but they're relatively narrow.)
This. An unoccupied car should be able to find a parking spot at the mall or in a parking garage, or go from the airport terminal to the "economy" or "cell phone" lot by itself. And come back and pick me up. If it gets stuck, the app on my phone should alert me to come to the rescue.
The following will happen before self-driving cars:
- Robotic trash cans that perfectly sort recyclables and compostables without any contamination
- Drones that fly around and kill invasive plants
- Landscapers that show up with robotic lawnmowers
- A dishwasher that can load and unload itself automatically
- My Tesla can keep in its lane when it passes an on ramp
(etc.)
The problem, IMO, with self-driving cars is that there are other AI problems that are easier to solve. Until we start seeing more "consumer AI" in lower-risk products, self-driving cars will always be something coming in the future.
I wish there was much more research in that area. It’s a very hard problem and the benefit would be enormous. All the brain power that goes into ever better surveillance and selling ads would be much better used there.
Good points. Sorting waste and other stuff in a controlled environment is a problem domain that will provide huge increase in productivity and material efficiency when it becomes common. There are companies working on it.
You need to have an economic payoff to justify the AI development/training. Sorting trash perfectly in trash cans? That's a consumer $50 appliance with robotic arms, sensors, and software? No way
Drones flying around killing invasive plants? Capitalistic economics doesn't give a shit about the environment, because bank accounts are rarely affected by environmental problems, and certainly not in the next fiscal quarter. There's no funding for that.
- Landscaper robots need to beat immigrant labor in the US. That's tough.
- Dishwasher unloading to cabinets with robotic arms? The gain here is maybe 20 minutes a day. Payoff isn't as good as with driving which can be hours on a commute, or much more on long trips, and there isn't the safety improvement since dishwashers don't kill you in their use.
You definitely wouldn't use robotic arms for the dishwasher unloading.
You would make it the base of the cabinet. After the dishes were cleaned, a dumbwaiter system could correctly route dishes, cups, and silverware to their correct compartments directly above the dishwasher.
But you would need to buy dishes that were designed specifically to be a part of your dishwasher package.
And I'm not sure how a robot could help you load a dishwasher. Unless you are talking about some sort of advanced roomba following you around picking up after you. That would actually be great for my fat pig of a roommate. I hate that son of a bitch.
It is phenomenally simpler to have autonomous driving in an area where the only things on the road are autonomous and there is a standard for interacting cross manufacturer.
Many new cars, and in some countries all new cars (by law), have enough sensors to be made self driving.
Standards are already being worked on. A basic level of autonomy - good enough for an area without human drivers - is being worked on by every car manufacturer. At some point, the vast majority of cars on the road (at least in cities where leasing and turnover is higher) will have the capability to be autonomous.
At that point I expect the car manufacturers to work with the larger cities to make central city areas autonomous only. Software updates will be pushed to all those existing cars, and overnight there will be self-driving on a level which is sustainable.
Once that happens, it will push people to upgrade to newer cars, expanding the number of autonomous-capable cars, and by extension allowing for the expansion of autonomous only areas.
As autonomous becomes more standard and accepted in the public consciousness, solutions that will look obvious in hindsight will deal with many areas that we consider fringe now.
I read that Fedex originally claimed there would be areas they would never service. Even if they were right, it is less of a deal than they thought it would be. Same thing will happen with autonomous cars.
I can see self driving cars and trucks on long haul freeways, but to me it seems like self driving in city centers is going to be the hardest, not first problem that is solved. Even if other drivers are taken out of the equation, there are still so many variables in a city to account for. Pedestrians, bicyclists, Lime scooters (I guess these will be self driving too?), dogs, construction, etc...
The only way I see self driving cars dealing with all this in the near future is if the roads were totally fenced off from everything else so that only self driving cars were on them more like trains. However, this seems like such a big infrastructure investment (and just a big change overall) that I just don't see it happening within the next few years.
Our cities (at least in America) can't even put in bike lanes, or provide reliable trains, which is a 200 year old technology, so how are we to expect them to completely overhaul our infrastructure to help a technology that barely even exists?
Nothing makes a huge group of us cringe more, than people saying it's just around the corner. The worst part is it also comes from people who should know better.
>Waymo’s CEO, John Krafcik, has admitted that a self-driving car that can drive in any condition, on any road, without ever needing a human to take control—usually called a “level five” autonomous vehicle—will basically never exist. At the Wall Street Journal’s D.Live conference, Krafcik said that “autonomy will always have constraints.” It will take decades for self-driving cars to become common on roads. Even then, they will not be able to drive at certain times of the year or in all weather conditions. In short, sensors on autonomous vehicles don’t work well in snow or rain—and that may never change.
Well said. Autonomous navigation is still a huge open research problem, in the sense that we don't even know how it _could_ work, there is no AI work capable of integrating all of the necessary pieces. Heck, we don't even know how simple bird or lizard intelligence works, much less be able to replicate human cognition that goes into operating heavy machinery in a busy multi-agent social context.
So promising that it will be not just solved, but productized into a polished consumer gadget, in 5 or so years, was an astonishing, unbelievably foolish promise.
And worst of all, yes, it came from people who should have known better.
And as a quick aside: there's much talk about "ethical AI", but here's a more common AI ethics failure: taking money from funders, shareholders, and customers, with promises of imminent deliverables, while knowing full well that the AI advances required for that haven't happened yet, and there's no evidence for when they will, if at all.
Indeed, it's unclear if anyone has yet to develop a path planner that can overcome "the freezing robot problem". Even with perfect perception performance, which doesn't exist, the path planning to deal with congested environments is an unsolved problem, and the path planning to deal with ambiguous multi-modal intent might be an unsolvable problem.
To add insult to injury, the manner in use today of modeling humans as essentially just another kind of dynamic object breaks down extremely quickly once humans are normalized to the presence of autonomous robots. The humans change their own behavior model to achieve their own objective function. They're not dynamic objects, they're learning objects, and now your AV models (which will include human interaction and reaction behaviors on purpose if you're doing it right and by accident if you're not) break down along an entirely new longitudinal axis.
I think the AV market fundamentally broke when GM acquired Cruise for $1B seemingly out of nowhere.
1) Because the reality is that the DARPA Urban Challenge result was not assuredly generalizable, and it may be the case that there's still basic science to be done before the domain is just better, cheaper sensors & high-performance compute away from being productionizable.
2) It's not at all clear that deep learning is really iterating toward a solution to this problem either, but the improvements to methods and hardware have produced the ability to make very compelling demos, that even the purveyors themselves might believe in, and show iterative improvements on the previous result, thus fueling more investment into something that isn't even known to be possible.
3) But, none of that would matter if GM hadn't decided to make a power move in throwing $1B at an acquisition and in one fell swoop turned an early stage unproven scientific research market into a super, super frothy capital market where it looked like anybody at any moment could be a unicorn without any clear reason or any fundamentals.
Investors went insane and the expectations and promises followed them.
We have a multiplayer poker breakthrough, so it seems possible. I don't think planning is the core issue; I think that's solvable in time. It's the hardware and reliable general perception that are the blockers.
In poker, the computer knows that none of the players will suddenly start playing chess instead of poker.
On the road, the number of possible situations is way way way bigger.
Not just bigger. Bigger can be fixed with more CPU time. It is a fundamentally different problem. Nobody has a good theoretical answer even with infinite CPU time.
I say this while on a car ferry. Getting on this boat required me to navigate several strange road markings and obey a half-dozen hand gestures from staff, including several that were contrary to the painted lines. No AI is even contemplating car ferries.
Right. And, sure, that's an edge case for most people. But even if such edge cases only crop up every now and then, you're now at the difference between a reliable automated door-to-door system and one that can drive you around most of the time but every now and then forces a hopefully sober/licensed/competent driver to get behind the wheel to take some actions.
That's a huge difference. Maybe you can address it with some sort of remote OnStar-like system but now you're forcing a remote operator to jump into an unknown context and take actions that were too hard for the AI.
Computer integration in poker works because we simultaneously upgraded the infrastructure to support computer players while upgrading the computer players.
I don't think we will ever change road infrastructure in the same manner on any appreciable scale. (Hell, there's a pothole in front of my street that's been there for 5 years).
> much less be able to replicate human cognition that goes into operating heavy machinery in a busy multi-agent social context
This holds if you expect self-driving cars to be able to work like humans in picking up "body-language" cues around other drivers. But we can also approach the problem the other way around and set better fixed rules for how driving needs to be done and adjust the road and car infrastructure to make the problem simpler. Human drivers get away with breaking so many rules that the problem becomes much harder than it needs to be.
>Someone should redo the old "Spam Solutions" form except for self-driving cars.
Automating being dismissive of discussion does sound like a great idea...
> Requires immediate total cooperation from everybody at once
It requires setting stricter traffic rules and having them be followed. Something that's done everywhere in the world every year. It can be done as gradually and as locally-specific as needed.
> Asshats
We already have those on the road causing accidents. Traffic rules are there to punish this exact behavior. The fact that we don't enforce a few of them makes self-driving harder. I'm not "failing to account for asshats", I'm specifically targeting them.
> Jurisdictional problems
We already have different jurisdictions where different driving requirements exist. Having self-driving cars that are only allowed to drive in country X would be nothing new.
> Technically illiterate politicians
My suggestions was specifically to make the road rules stricter and enforced. This is not a technical issue, it's the boring old issue of what the road rules should be and who should follow them.
> Countermeasures must work if phased in gradually
Enforcing road rules works fine even if done gradually. It also makes self-driving gradually simpler.
>> Requires immediate total cooperation from everybody at once
> It requires setting stricter traffic rules and having them be followed. Something that's done everywhere in the world every year.
But you mentioned two somethings: "Setting stricter traffic rules" and "having them be followed." The very first examples in the TFA were from Argo's testing in Pittsburgh:
> Recently, one of the company’s cars encountered a bicyclist riding the wrong way down a busy street between other vehicles. Another Argo test car came across a street sweeper that suddenly turned a giant circle in an intersection, touching all four corners and crossing lanes of traffic that had the green light.
Setting stricter traffic rules is certainly done everywhere in the world every year. People failing to follow traffic rules is also done everywhere in the world every day, and that's the big problem. Fully autonomous driving requires the ability to make snap decisions that may have little to no precedent in your past experience. This may be a solvable problem for AI, but it's a really, really hard problem that self-driving aficionados seem to consistently underestimate.
What I'm saying is that it helps, not that it's a silver bullet. But we seem to be stuck in a mindset that self-driving needs to be perfect within the current practice on the road. The more I drive the more it's obvious how human drivers are horrible and often in ways that are extremely easy to police automatically (e.g., tailgating can be checked for with the current toll infrastructure in some of the roads I use). If we want self-driving sooner we may very well have to attack it across the whole system and work on regulation, enforcement, road-design, etc, in parallel with working on the flashy technology bits. If instead we decide that self-driving has to work in the complete mess that are roads today then AGI is quite possibly a requirement.
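The tailgating check mentioned above really is simple with toll-gantry data; a hypothetical sketch (the record layout and the two-second threshold are my assumptions, not any agency's actual system):

```python
# Hypothetical tailgating check from toll-gantry passage logs.
# A vehicle is flagged if it crosses the gantry less than min_headway_s
# after the vehicle ahead of it in the same lane.

def flag_tailgaters(passages, min_headway_s=2.0):
    """passages: iterable of (timestamp_s, lane, plate) tuples, any order."""
    flagged = []
    by_lane = {}
    for ts, lane, plate in sorted(passages):
        by_lane.setdefault(lane, []).append((ts, plate))
    for lane, events in by_lane.items():
        # Compare each crossing with the one immediately before it.
        for (t_ahead, _), (t_behind, plate) in zip(events, events[1:]):
            if t_behind - t_ahead < min_headway_s:
                flagged.append(plate)
    return flagged

log = [(100.0, 1, "AAA111"), (101.2, 1, "BBB222"), (110.0, 1, "CCC333")]
print(flag_tailgaters(log))   # ['BBB222'] - crossed 1.2 s behind the car ahead
```

A real deployment would also need the gantry speed measurement (headway in seconds only maps to a safe distance at a known speed), but the data is already being collected.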
Back in the late '90s and early '00s, the Segway was released to much the same breathless prognostications that we hear today about self-driving cars. Doerr said it would be more important than the internet. Kamen claimed it would restructure global transportation.
I don't need to go into everything that was said; you can google it. There was one thing said, however, that I think would be appropriate to touch on here. Kamen, just like you, claimed that traffic laws and the streets could be restructured to better accommodate the Segway. That was the instant I knew the Segway would not be taking off for a long, long time. If you need people to restructure their laws and cities solely to accommodate your new technology, you should probably work a little bit harder on perfecting your technology, because restructuring society's laws and transportation infrastructure simply to use your technology is probably not going to happen. The only time you get a radical restructuring of that nature is when the invention frees you from needing the infrastructure at all.
What I'm describing isn't reshaping your city to fit in self-driving cars. It's doing the things that we should be doing anyway to make road fatalities less embarrassingly huge and with that helping self-driving be an easier problem to tackle. Human drivers should be keeping much larger following distances and indicating properly. Road markings should be much clearer. Signage should be unambiguous. We should do all those things even if we don't care about self-driving cars just because they avoid accidents in general.
Yeah, but now electric scooters are all the rage, and he was basically right (the execution was wrong). Cities are even thinking about restructuring roadways to accommodate them (and more bikes).
Electric scooters are barely a blip in a city's overall transportation scheme. The biggest regulatory question going on is how to keep them from blocking the sidewalk.
Even restructuring a city's roadways for bicycles, a long-established and proven good technology, has been incredibly slow going. And that's with something governments are actively trying to improve.
It will be decades until there is substantial enough change that self driving cars are viable as described in the great-grand-parent. And that's if it moves quickly.
My point is humans are already behaving extremely poorly as drivers and that already causes accidents. There are ways to help that be less of an issue that also helps self-driving become possible. But instead we accept those ridiculous risks when driving ourselves and yet expect to hold self-driving to a much higher standard while also complaining if it's not aggressive enough in traffic. That may very well turn self-driving into AGI but that's a choice not a characteristic of the problem.
You're attacking a strawman. I'm not saying the solution to self-driving is to change the environment completely to make the problem trivial (i.e., walls on sidewalks). I'm saying that some of the poorly defined and even worse enforced rules of driving could be worked on and that would help self-driving as well. How much would be enough to bring it out of being AGI I'm not sure, but it would definitely help.
I personally don't think that enforcing rules around roundabouts, so that I don't have the near-death experience I had the other day, is "a road to China". I actually love driving myself and don't particularly care about self-driving technology. But as far as I can tell it's not even possible for me, a reasonably fit human, to drive in a way that prevents those risks. If we are not willing to tackle that, and at the same time expect airliner reliability from self-driving cars, then we've defined the problem as impossible.
The rule already exists but is not enforced. If tickets were issued in the cases where the rule is routinely ignored at low speed, it would be less likely to happen at high speed with risk to life. All the infrastructure and manpower, as well as most of the rules, are already in place. We just choose to ignore them routinely, and then it's no surprise AGI is required to do self-driving within that mess.
By writing tickets to offenders in the cases where an officer witnessed the offence? I thought that part was implicit. These are not cases where we need more policing or resources, just rules that have been chosen not to be enforced and a few others that should be written. You don't need a police state to have better rule-following; just actually enforce the rules in the cases you catch, and behavior changes much more broadly.
Right. Except we are talking about corner cases where pre-programmed computers on wheels kill people. Merely better rule-following does next to nothing, and that's expected. The idea that we can follow rules to make computers on wheels "work" is false, unless one wants to go the China route where near-everything is monitored and punishment is extracted automatically (and even then... it still won't work).
My whole point is that if you make the environment more predictable you make the problem easier, not that the only thing you need to do is that. You seem to be attacking a strawman where somehow the whole environment is tailored towards self-driving cars by creating a police state and then the software is very simple. What I'm describing is using all the resources we already have to design and maintain roads to also help with solving the problem, together with all the technology that still has a long way to go. And if we can do that by doing things that also lower risks in normal driving I don't really see the downside. We already see that in the world. There are countries where it is much safer to drive because there's been a continuous focus on solving exactly this type of issues.
Nope. As I just said, even going the police-state route _won't work_. Your premise, that we can make the environment more predictable to make pre-programmed cars easier to program, is wrong. That would be attacking a tiny minority of the real-world problem. I'm entirely comfortable with that prediction. You disagree; that's fine. I suspect we will both be around to see.
If you want to instead talk about making it safer for human drivers, fine, that's a different subject.
It's not just making the programming easier, it's making the problem actually possible. We have a much higher tolerance for fatal accidents with human drivers than we ever will with automated ones. So if you have an environment where many thousands of people are killed today and then expect self-driving to work in the exact same context with airliner level reliability you've defined the problem as practically impossible. If someone builds a great self-driving technology that applied to the total US fleet only kills 20 thousand people a year I doubt it will ever be accepted. And yet that would be half the fatalities that currently exist.
I had never been in an accident in 22 years of driving until several months ago, when I was rear-ended while sitting at a red light in bright, mid-day, dry conditions. My car was the only one at the light, even. This extremely poor (or distracted?) driver is presumably still out there on the roads somewhere.
That's a stereotypical engineering solution and fails to account for how humans actually act in the real world. Humans will always break some rules. As a taxpayer I don't think huge infrastructure changes or lots of increased traffic enforcement are a good use of limited public resources.
I can think of far worse uses for that money than trying to make our roads safer. The current system costs us 40k in lives and millions of serious injuries per year - there is a lot of headroom for improvements.
https://www.nsc.org/road-safety/safety-topics/fatality-estim...
With unspoken assumptions about how much an increase is needed things become arbitrary.
IMO, if allowing self-driving cars on highways costs less than $100k per mile, it's a rather trivial expense at $16 billion in the US. Extending that to every road would be much harder to justify. Similarly, developing something like a set of clearer hand gestures for directing traffic would not be a major issue.
Honestly, the need for expensive changes seems unlikely, though some changes such as paint choices for lane markings would probably increase safety or efficiency.
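The arithmetic behind that $16 billion figure seems to be per-mile cost times total highway mileage; my assumption here is that the mileage in question is roughly the ~160,000 miles of the US National Highway System:

```python
# Back-of-envelope for the $16B figure above: $100k per highway mile
# times roughly the US National Highway System (~160,000 miles is an
# assumed round number, not an official count).
cost_per_mile = 100_000          # dollars per equipped highway mile
highway_miles = 160_000          # assumed US highway mileage
total = cost_per_mile * highway_miles
print(f"${total / 1e9:.0f} billion")  # $16 billion
```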
Here's Krafcik reacting to how what he said there was quoted and interpreted:
"Yeah some context missing here, but it did make for a fun headline.( said the same thing about my own driving.) The point is that autonomous driving, like human driving, will always have constraints."
In other words, most people are not aware how dangerous certain conditions are under which they are occasionally driving (snow, fog, etc.). And that most of the time the only reason they didn't have a crash is luck. It might even be socially unacceptable to refuse driving to work because of fog.
Perhaps. But how often are such conditions an issue? Heck, humans, if they venture out, don't do well in snow. Accidents also increase in rain.
To move the needle - towards a true tipping point? - autonomous vehicles don't have to be everything to all people all the time. Certainly for a significant number of driver miles it's moderate distances at moderate speeds.
Finally. Something as simple as a vehicle being able to deliver itself (from some central hub) to your door and you drive it (non-autonomously) from there would be significant. That changes the ownership model. It changes parking requirements, etc.
Yeah, the holy grail might be a ways off. But there's plenty of (pardon me) disruption between now and level five.
Agreed. Navigating the world is basically an AI-complete problem; we don't let 12-year-old kids drive, and these current enhanced cruise-control systems are nowhere near even that level of general intelligence.
If a human can do it, I see no reason why an autonomous system can't, eventually, do it. It might take a very long time, but "never" seems short-sighted.
Imagine you're driving down a street when a kid playing on the sidewalk runs behind a truck and disappears from your view.
If the kid appears in the road from behind the truck, the computer can handle slamming on the brakes very easily - probably faster than a human can - without understanding the kid any better than a group of LIDAR points or a rectangle of pixels labelled 'obstacle' by a neural network.
But if you want to brake before the kid appears from behind the truck? For that, a fully attentive human driver will be making a bunch of estimates about what the kid is doing, whether they seemed to have noticed the car, how old they were, and so on. In other words, applying a theory of mind.
Needless to say, if the latter is a must-have feature, that's a pretty hairy problem.
Of course, it's possible the decrease in deaths from being fast on the brakes in simple situations will outweigh the increase in deaths from lacking a theory of mind in complex situations. If that's the case, the self-driving car programmer's job would be a good deal simpler!
There's no need for a theory of mind - the self-driving car can identify the kid as a pedestrian, recognize that it started moving on a possible collision course, then disappeared, thus prompting it to slow down to a non-fatal speed until the truck has been passed.
Children playing in the street are a common occurrence in residential areas; I see no reason why you would not develop a set of rules and heuristics to handle them. Identifying a pedestrian as a child, and knowing whether it is running or playing, is well within the capabilities of modern computer vision.
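The heuristic described above could be sketched roughly like this; every name, field, and threshold here is a made-up assumption, not a real AV stack API:

```python
# Sketch of the occlusion heuristic: if a tracked pedestrian on a possible
# collision course disappears behind an obstruction, cap speed at a
# survivable level until the occlusion clears. 20 mph is an assumed
# "non-fatal" figure; real pedestrian-injury curves are more nuanced.

NON_FATAL_MPH = 20

def target_speed(current_mph, tracks):
    """tracks: list of perception outputs (hypothetical shape), e.g.
       {"kind": "pedestrian", "converging": True, "occluded": True}"""
    for t in tracks:
        if t["kind"] == "pedestrian" and t["converging"] and t["occluded"]:
            return min(current_mph, NON_FATAL_MPH)
    return current_mph

print(target_speed(35, [{"kind": "pedestrian", "converging": True,
                         "occluded": True}]))   # 20
print(target_speed(35, [{"kind": "pedestrian", "converging": False,
                         "occluded": False}]))  # 35
```

Note the rule never needs to know *why* the kid disappeared, which is the whole point of the argument: conservative behavior around occlusions substitutes for a theory of mind.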
It's worse than that. That small thing heading toward the road... was that a leaf blown by the wind, or was it a ball? If it's a ball, you'd better be already braking, because a kid is likely coming right behind it, and paying attention to the ball rather than the road. If it's a leaf, though... you don't want to hit the brakes for every blowing leaf.
> There's no need for a theory of mind - the self driving car can identify the kid as a pedestrian, recognize that it started moving in a possible collision course, then disappeared, thus prompting either slowing down to a non-fatal speed until the truck has been passed.
Shouldn't it be possible to train for these scenarios using imitation learning and expert demonstrations? For example, Tesla seems to save replays of when its vehicles would have behaved significantly differently compared to how the driver actually behaved, and this data is supposed to be quite useful.
Is this type of crowdsourced driving data a crucial part of achieving Level 4+ self-driving? If so, it seems that driving around the same six city blocks of SF or cruising down Central Expressway in MV is going to produce diminishing returns in terms of producing measurable progress.
> For example, Tesla seems to save replays of when its vehicles would have behaved significantly differently compared to how the driver actually behaved, and this data is supposed to be quite useful.
I don't think such a system would catch a false negative like the above, where the human would slow down cautiously but the self-driving system would do nothing. That situation is indistinguishable from a human slowing down to read house numbers.
To realize the problem, the system would need a full model of "what would the car be doing if not for the human input" in order to find a later point of alarming divergence.
Humans can't do it. When your stopping distance (including reaction time) exceeds your visibility there's no way to drive safely, but humans do it anyways.
> When your stopping distance (including reaction time) exceeds your visibility there's no way to drive safely, but humans do it anyways.
this is bad, but not quite as bad as it sounds. most of the time, you only need to stop as fast as the car in front of you. it's pretty uncommon to encounter a stationary object in the middle of the travel lane. in fact, outside of driving on surface streets in the city, I can't remember the last time I had to avoid a stationary object in my lane.
I'm much more worried by how closely people follow the car in front of them, regardless of visibility. many leave barely enough room to react at all.
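To put numbers on "stopping distance exceeds visibility": total stopping distance is reaction distance plus braking distance, d = v·t_react + v²/(2·μ·g). A quick sketch, with the reaction time and friction coefficient being assumed typical values:

```python
# Total stopping distance = reaction distance + braking distance:
#   d = v * t_react + v^2 / (2 * mu * g)
# t_react = 1.5 s and mu = 0.7 (dry pavement) are assumed typical values.

G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance_m(speed_kmh, t_react_s=1.5, mu=0.7):
    v = speed_kmh / 3.6  # convert km/h to m/s
    return v * t_react_s + v ** 2 / (2 * mu * G)

# At 100 km/h this comes to roughly 98 m, so visibility (or following
# distance) below that means you cannot stop for what you can't yet see.
print(round(stopping_distance_m(100)))
```

The braking term grows with the square of speed, which is why "I'll just slow down a little in fog" rarely restores the safety margin.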
They typically leave enough room to stop based on their estimate of how long it would take the car in front of them to stop, plus the distance to that car. That estimate obviously does NOT assume the car in front will stop instantaneously.
While this usually will be fine, there are definitely issues when something stationary does pop up.
While caravanning to Yellowstone with 3 vehicles, all traveling in the center lane of the freeway, we encountered a small car with a passed-out passenger in the middle lane. My bro-in-law swerved with some room to spare; immediately behind him, I swerved with basically zero room to spare; and my father behind me (luckily for them in a Suburban, but bad for the man in the VW) had no chance. I was immediately looking in my rearview mirror, knowing what I was going to see.
There is a decent likelihood that a machine could have swerved in time, but nearly zero that a typical human could/would have in our typical following patterns.
Humans route around inefficient practices. Just as human driving speeds are typically unaffected by posted traffic speeds, they will optimize for their typical experience over written codes for how they drive.
Step outside of the US and Western Europe and things can become quite interesting pretty damn fast. For example, I've had to avoid cows nonchalantly walking down the road, like in this YT video [1], at least once every year for the last 3 or 4 years (mostly when I visit my brother in the countryside). Horse-drawn carts are also still a thing in these parts of Europe, and they're basically stationary objects (fun when you end up behind one just before a blind curve, preferably with a lorry driving up just behind you). Just like other people have mentioned here, driving is an AI-complete problem.
I've had a similar experience driving around rural Mexico. The laws are far more lax and you run into all kinds of unexpected obstacles, but as a whole it actually works out pretty well because people understand and adapt to the situations.
One example is that there are a number of small towns with two way streets that are parked on but only the width of two cars. That creates bottlenecks where cars can only travel in one direction at a time. This might seem like a disaster and it certainly wouldn't work on a busy city street, but in these locations everyone adjusts and when two cars approach a choke point from opposite directions drivers are really good about being cautious and pulling to the side to allow the other party to pass.
There are probably thousands of these local quirks around the world. Handling all of these situations effectively in a fully self-driving car will take an advanced AGI.
Accident rates in the US on the highway are on the order of one per million miles. And in a million miles of driving you will encounter quite a few stationary or otherwise unexpected objects in the middle of the travel lane.
Objects, sure, but generally they're not really an issue. Simply avoiding or driving over broken tire bits etc. is a non-event.
An object would need to be substantial enough to cause an accident and then roll into or fall onto a highway. That's far from a one-per-million-miles situation. Remember, something falling off a truck would also take a while to slow down.
But human drivers hit deer all the time, and it's often unavoidable. An autonomous driver is probably more likely to miss a deer than a human, due to substantially better reflex time.
And one would think infrared vision could be useful (except possibly when the temperature is a magic number somewhere between 98.6 and 104 degrees).
To my eyes, every brown mailbox with a white reflector could be a deer coming toward the road; to infrared, with a larger lens, it should be possible to tell the difference a lot better.
Which is why I keep saying autonomous driving doesn't need to be perfect, it just needs to be better than humans. And that's a much lower bar, because we are lousy drivers.
I vote for neither. We don't need to drive, and we certainly don't need computers to do it for us. There are other, much more easily automated, systems that are several orders of magnitude more efficient than driving and, if implemented sufficiently, almost always faster.
Autonomous driving doesn't need to be perfect because we don't need it. With sufficient alternative systems, the only reasons for driving will be hobby (and we don't want that automated anyway) and heavy-load work (like agriculture, mining, or logging), which is already heavily automated.
We need transportation. A world of sitting on our butts in front of a computer isn't a solution to much of anything. Maybe we don't need to actually operate the vehicle, but we need the vehicle.
I'm not walking halfway across the country just to visit my mother.
Whoa. Misunderstanding here. I'm not talking about eliminating transportation; that would be stupid. Alternatives to driving include: buses, trains, bikes, walking, trolleys, ski-lifts, airplanes, taxis, boats, rollerskates, escalators, etc.
The sum of these alternatives will almost always outweigh driving in terms of benefits, with the notable exception of convenience. So if you are willing to sacrifice convenience when you want to visit your mother, you will almost certainly get there faster and more economically (with the right systems in place) than by driving.
Note that I'm giving myself all of the advantages of all of the alternatives. And I'm also painting this scenario in a world where all of these alternatives have all of the required infrastructure in place[1].
With that said, yes, there are faster ways (albeit still less convenient) of getting you outside of said city. You might have to change your mode of transportation a couple of times (I said it was less convenient), but it will still be faster with the right systems and infrastructure in place.
1: This is not an unfair scenario because this is already almost the case for all of the non-alternatives.
But this is where HN is blind; a not-insignificant portion of America doesn't have access to a reliable automobile. So there are already tens of millions of Americans (and Europeans) that transport luggage and a baby just fine.
Actually, no. I know people who are too poor for a car. I've been people too poor for a car.
Kiss an extra two hours of your day goodbye just to get to and from work (assuming you have a job). Going to the doctor is a nightmare (assuming you have medical insurance). It sucks a lot, unless you're living in a city so dense that cars are impractical.
From your situation, it seems like the systems in place that provide alternatives to driving could benefit from being expanded, increased, and optimized.
I hope your local politicians agree with me that money going into these alternatives is better spent than waiting for the technology to dedicate highway lanes to autonomous vehicles.
That is, I hope they agree that your situation of not being able to get to the doctor within a reasonable amount of time takes precedence over people who want to sleep during their 8-hour highway trip but are unwilling to take the bus for some reason.
Will a computer ever write War and Peace or compose the 9th Symphony too? How about solve an unsolved math proof? No reason it can't be automated, right?
People were saying this in the seventies before the long AI winter.
It's always just around the corner, and has been since the dawn of AI. Maybe we will get there some day (I never say never), but cautious realism has never been the AI field's strong suit.
There's an emotional component to art, and I don't see nearly as much work on artificial emotions as I do on task-oriented artificial intelligence.
A lot of people are working to create a car smart enough to drive humans anywhere. How many people are working to create a car smart enough to refuse to drive because it's feeling inspired to write music instead?
It is shocking to me how we (people) CONSISTENTLY fail to understand where the bar is for accepting autonomous vehicles.
They do NOT need to be perfect. They need to be BETTER than humans. That means fewer accidents and, by extension, fewer deaths and injuries resulting from automobile errors.
You'd be surprised how well the lidar systems on these actually do see in the rain. Snow and ice are another story, and hydroplaning as well, but again it's not the sensors hampering it here; it's the logic. It's much more complicated than it first appeared, and right now the AI for detection and recognition is light years beyond where the self-driving car logic is. You can watch a few YouTube videos of people training networks to drive cars (and also look at how well every company is currently doing) to see how difficult it is to get an AI solution for driving behavior.
They work well when they are working. Things are different when something blocks the lens, when a mirror bounces the laser back on itself, or when the laser's view is obstructed by something (usually the road) that isn't up to code. Then the concept of lidar as reliable, or even useful, comes into question. Forget the math of driving; it is beyond today's lidar to tell whether there is frost on its own lens, a very basic safety procedure.
>Nothing makes a huge group of us cringe more, than people saying it's just around the corner.
I thought you were referencing the media here. Build it up, tear it down. They quoted and amplified so many people who suggested it was possible, only to print articles later pointing out that it just ain't possible.
> Waymo’s CEO, John Krafcik, has admitted that a self-driving car that can drive in any condition, on any road, without ever needing a human to take control—usually called a “level five” autonomous vehicle—will basically never exist.
Cold fusion has a 100% chance of realization. It's a sound technology. There's no doubt that it will be realized and is achievable; the only doubts are about the timescale. Those who work in the field often say it's always "30 years away", but some recent research seems promising enough to make it much closer than that. I'd say level 5 and cold fusion are both equally achievable, though level 5 automobiles may be airborne, as then they'd have fewer obstacles (just birds, trees, and other airborne vehicles), and pedestrians, cats, dogs, and bicycles would be off the table.
Super AIs, I think, are achievable and will be developed; I just don't know which will come first, level 5 or cold fusion. That's a toss-up. If everything has sensors, including bikes, pedestrians, crosswalks, roads, etc., that can detect what is on the road and send out a warning signal to cars traveling toward that intersection, then that also takes out some of the problems, but it requires a very smart grid.
On a long enough timeline, though, there's probably not much humankind can't create. It could be 10 years, could be 50.
> Waymo’s CEO, John Krafcik, has admitted that a self-driving car that can drive in any condition, on any road, without ever needing a human to take control—usually called a “level five” autonomous vehicle—will basically never exist.
“Never” is kind of a strong word in this context... My personal belief is that we could very well have level five autonomy within the decade, using only two video cameras and two microphones behind a windscreen. But this would of course require major progress in A.I., of the sort that may also not happen in a hundred years.
Getting to level 5 will require "V2X" technology - vehicles communicating with other vehicles and the road/traffic infrastructure itself.
We will need a forward-thinking government to start building sensors and communication tech into roads, stop lights, parking spots, and a secure/interoperable internet for all these sensors to communicate autonomously.
Currently the cost of implementing such a system is way higher than the return to drivers who can...what, work on more powerpoints or facetime while they're in the car instead of driving? Not to mention the huge security and safety and liability risks this would open up.
That's why we solved this problem with public transportation in the past - hire one person to be the "V2X sensor" AKA bus driver.
Not really more expensive. If we have x deaths per year (let's say 10k for easy math), that's 100k people dead over a decade. If self-driving cars can bring that to zero, that's 100k more people per decade alive to pay taxes, and 100k families that didn't lose their primary breadwinner and don't need to live off government assistance. So there's economic value in fewer people dying, plus lower medical costs: imagine what the bill for Medicare for All would be if there were zero traffic accidents across the entire USA. It could also ease congestion and change how, and how often, road construction is done, by managing speed and traffic flow to make roads last longer. And there would be less need for police to monitor traffic, though with that sort of grid I'm sure we'd also have a lot more big-brother surveillance baked in to make police jobs easier. Which is good or bad depending on which side of the privacy ethics you're on, and on how honorable our government is at the time (hopefully more honorable, and less like China, than it currently is).
When read literally, there is no way that any system ever, human or automatic, will be able to drive on "any road" with "perfect safety". Crazy freak accidents happen, and human drivers regularly aren't able to handle them. On top of that, most people just can't safely drive in thick fog, icy conditions, or pouring rain. No amount of intelligence will allow a car to drive through a sufficiently flooded street.
Just like with human cars, the important metric is just whether they can drive acceptably safely in the environments that they attempt to go. There's enough low-hanging fruit from faster reaction times and 360 sensing to make that feasible without needing to solve the AI-complete problem. Nobody's asking them to be able to drive at 70 MPH through a fog bank in Alaska with ice on the road, even if that is technically part of the requirements for Level 5 that even humans can't obtain.
If the standard of self-driving becomes something akin to how airplanes fly, then weather will be a delay, bypass, or do not fly precondition to the execution of the flight plan.
All this stuff about "it's right around the corner" vs "it'll never happen" is all unspoken assumptions around what it means to take a "safe" "trip" "in a car" "driven" "autonomously".
Safe? Compared to humans that are alert, humans with smartphones, drunk drivers? Airplanes?
Trip? Distance? Rural vs Urban? Highway? Speed?
"in a car"? Smart fortwo? Motorcycle? RV? Sedan? SUV?
"driven"? All by software all the time and no windshield? implicit backup if uncertainties are too great and can be manually overridden?
I think what needs to happen is that you need programs tailored to specific routes/roads. As you drive any distance, you download/cache the programs for the routes and execute them.
You aren't going to be carrying around a "general driving AI", except as emergency backup to specific route downloads.
Humans work this exact way. You have the idiot tourist drivers versus people that commute on a route. Commuters know how different parts of the road's concrete sound differently, how fast they can take curves if they had to, which lane to be in to anticipate merge backups.
Those commuters know how to drive those routes in winter or summer as well, deer season or not, rain or shine. So that implies conditions-specific programs as well.
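The download/cache idea could look something like the following sketch. Everything here, the function names, the profile contents, the segment IDs, is hypothetical; the point is just that per-route, per-conditions knowledge is fetched once and reused, like a commuter's learned familiarity:

```python
# Sketch of a per-route "driving program" cache. Profiles are keyed by
# (road segment, conditions) and fetched on first use, then reused.

def fetch_profile(segment_id, conditions):
    # Stand-in for a network download of a route- and conditions-specific
    # driving profile (contents are invented for illustration).
    return {"segment": segment_id, "conditions": conditions,
            "max_curve_speed_mph": 45 if conditions == "dry" else 30}

class RouteCache:
    def __init__(self):
        self._cache = {}

    def profile(self, segment_id, conditions):
        key = (segment_id, conditions)
        if key not in self._cache:
            self._cache[key] = fetch_profile(segment_id, conditions)
        return self._cache[key]

cache = RouteCache()
print(cache.profile("I-80:exit-12-to-14", "rain")["max_curve_speed_mph"])  # 30
```

A real system would also need versioning, signatures, and a fallback when a profile is missing, which is exactly the "general driving AI as emergency backup" role described above.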
> I think what needs to happen is that you need programs tailored to specific routes/roads. As you drive any distance, you download/cache the programs for the routes and execute them.
Wow. That's so obvious in retrospect. I wonder why I haven't considered it before, nor why I haven't read of it before either.
I imagine having stationary "traffic controllers", semi- or completely automated, that keep real-time information about conditions on the segments of the roads they monitor, and which continuously assign "travel plans" to cars. An autonomous car wouldn't have to recognize weather conditions or static obstacles in a fraction of a second, because there would be a static sensor network and processing centres responsible for this. All a car would have to do is follow its assigned route at assigned speeds, and monitor its environment for dynamic obstacles.
This makes more sense than trying to pack all the intelligence into the car, and doubly more sense than having the car hooked up to the cloud. Unfortunately, I feel companies of today may find it difficult to coordinate on designing such a system.
"Wow. That's so obvious in retrospect. I wonder why I haven't considered it before, nor why I haven't read of it before either."
Get ready to have your mind blown because the next leap in this line of logic is to physically fix "tracks" to the road and run the cars on these "rails" - possibly on a schedule.
It's a future star-trek world we in the United States can only dream of ...
Trains work incredibly well, but they don't solve the last mile problem. Hi-rail trucks are pretty common in railway maintenance. There is probably some sort of steam-punk past that could have happened where we're all driving hi-rail cars on almost all highways, and just using cars for the few last mile trips.
Am I the only one who's terrified of the security implications of such a system? You're talking about running (from the car's perspective) untrusted code on a fleet of cars, dynamically and in real time. Even if it's cryptographically signed or whatever, you're one or a handful of exploits away from remote attackers having the capability to crash every autonomous car, simultaneously, at least in a geographical region and possibly (inter)nationally.
If security matters today, it's going to matter orders of magnitude more in a world with autonomous cars.
I read the other day about geofencing being used to enforce speed limits for rental ebikes and scooters in some jurisdictions. It's not hard to imagine it progressing from speed limits to top-down traffic control programs. This probably is the future we're hurtling towards, as unprepared as ever.
Who would be responsible for maintaining static sensor networks and processing centers? The current transport authorities who do such a great job maintaining dumb roads?
A lot of the problems with AVs are not just technological, but political and social in nature.
The responsibility for these systems obviously falls to the transportation authorities.
Scaling this capability up may not be easy but it's absolutely possible. As we expand our fleet of autonomous vehicles that can respond to these inputs it will become more and more useful and necessary.
The "safe" part is a huge one. We need a definition of "safe" on the formal level of ISO 26262 which is also feasible. From what I have heard, the validation would require billions if not trillions of miles driven. If you retrain the neural net, you start from 0. Currently, it is not feasible to sell a safe self-driving car.
We keep forgetting that in the 1920s they said -all- cars would be flying in the year 2000, and it never happened... Heck, some could say we still don't have cars that can fly realistically, we only have flawed prototypes, and they aren't safe. It's not being pessimistic, it's being realistic. Most of the hype surrounding the self driving car discussion is generated and promoted by people and companies on the profit receiving end of the discussion.
Right now it's primarily based on cameras watching painted lines on roads, which is also not a solid practice. If someone wanted to cause accidents, all they'd have to do is repaint the lines to lead off the road on a section where speeds exceed 60 MPH.
Another problem is maintenance of these vehicles. Accidents are caused a lot by owners not properly maintaining the car's subsystems like brakes, fluids, and even keeping sensors and windshields clean. Too much performance unpredictability is introduced into the equation by things like this to make self driving cars a reality, and makers cannot reliably answer who will own up to responsibility in case of an accident.
There's also the concept of "free will", i.e. how will these cars work around human drivers, what about everyone relinquishing their personal rights to own cars, will drivers be able to go off the maps and radar routes, etc. And none of those questions can be answered. For planes, boats, large haulers and buses maybe, provided they stay in designated lanes, but for cars? I don't see it happening any time soon unless all of the questions can be answered acceptably.
Faster reaction times are "low-hanging fruit" until the reaction is wrong...
Even taking Alaskan fog banks out of the equation, a machine needs to be pretty near AI-complete to avoid being fatally wrong about how to react more often than once every hundred million miles or so, which is the human benchmark (including inexperienced, tired, drunk and stupid drivers), if it's driving in normal road conditions without a human failsafe. Or the road environments need to be very different, or the 360 sensing and autobraking need to be primarily driver aids, which are the real low-hanging fruit for all that investment in AI processes to understand roads and control vehicles.
That’s an impressively difficult number, but I wonder where you got it from? What is a “time”? A second in dangerous conditions? A trip? How do you get a hundred million iterations of anything at human driving scale?
The rate of fatal car accidents in the US is about 1 per 1e8 miles traveled. Of course, that's a terrible measure of driver error rate: Less than 1% of all collisions are fatal and with maximally safe cars that could ensure all occupants will survive any possible collision at any possible operating speed, the worst possible driving system would receive perfect marks.
In safety-critical systems, failures are usually measured in (severity) x (probability) (and sometimes including a 'detectability' measure).
So a resulting 'acceptable' metric could factor in those less severe cases even if they occur at a higher probability. Scores outside this range would then trigger a redesign to bring it within acceptable boundaries.
I think the difficulty will be in 1) getting a consensus on what the resultant score should be and 2) getting enough information to estimate it in a statistically significant sense.
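The (severity) x (probability) scoring described above can be sketched in a few lines. All severities, rates, and the acceptance threshold below are made-up numbers for illustration, not values from any real safety standard or dataset:

```python
# Illustrative risk-matrix sketch: score each failure mode as
# severity x probability, the way functional-safety standards weight hazards.
# All figures here are hypothetical.

# (severity on a 1-10 scale, expected occurrences per 1e8 miles)
failure_modes = {
    "fatal collision":      (10, 1.0),
    "injury collision":     (6, 30.0),
    "property-damage only": (2, 300.0),
}

ACCEPTABLE_SCORE = 700  # hypothetical threshold that triggers a redesign

def risk_score(severity, probability):
    return severity * probability

total = sum(risk_score(s, p) for s, p in failure_modes.values())
for name, (s, p) in failure_modes.items():
    print(f"{name}: {risk_score(s, p):.1f}")
print(f"total expected-harm score: {total:.1f}")
print("redesign needed" if total > ACCEPTABLE_SCORE else "within acceptable bounds")
```

Note how the frequent-but-minor cases can dominate the total, which is exactly why a fatality-only metric like "1 per 1e8 miles" is a poor proxy for overall driver error rate.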
It's actually not a very useful quote because, as you say, taken literally it's an impossibly high bar. What is required to be useful is either
1.) Demonstrably better than human (whatever that means exactly) for a well-defined subset of roads and conditions such as interstate highways under some subset of weather or
2.) Demonstrably better than human for any roads and conditions that a typical adult human would typically be able to navigate door-to-door safely.
1. is a very useful driver assist system, and likely a big win for safety, but you still need a sober adult driver available to take over with reasonable notice. 2. is what you need for robo-taxis to be practical.
There are open questions about whether or not intermediate levels of automation might actually be more dangerous because they lull human drivers into a false sense of security, so I don't even know that 1 is necessarily sufficient.
My assumption with 1 is in the vein of "we're approaching an exit in two miles. Please be ready to take over." i.e. planned disengagement. Yes, any automation system that may fail by going "OMG. Do something now!" is worse than useless. I'm not sure the question is even open.
>> Nobody's asking them to be able to drive at 70 MPH through a fog bank in Alaska with ice on the road, even if that is technically part of the requirements for Level 5 that even humans can't obtain.
What is required is a vehicle that can decide not to drive at 70 MPH when it's going through a fog bank in Alaska. Humans can drive that fast in such low visibility, but we (often) have enough brains not to.
> Humans can drive that fast in such low visibility, but we (often) have enough brains not to.
I think your "(often)" is doing a lot of work in this sentence.
There are many unsafe conditions that don't require an icy road plus fog plus highway speeds: any bad thunderstorm is likely to combine poor visibility and the risk of puddles/hydroplaning, for example. But the highways don't clear out during summer thunderstorms; people seem (from their actions) content to take the risk.
If self-driving cars become common and maintain a high safety standard, I think we'll also need to see a culture shift. People will have to become comfortable saying (and hearing) "the roads aren't safe right now, so I'll not be there on time."
>But the highways don't clear out during summer thunderstorms;
Part of this is that people (and I include myself in this) tend to have a mindset of slow down but power on. (Although in my experience not enough people slow down enough.) But it's often also a reality that pulling over isn't really safe. And, even if you can get to an exit, in the case of something like a snowstorm you may have a long cold night in your car if you decide to wait it out.
>> If self-driving cars become common and maintain a high safety standard, I think we'll also need to see a culture shift.
Good idea. In fact, at the moment self-driving cars aren't anywhere near as safe as humans (even when we leave our brains at home), so I think we could benefit from such a "safety culture" tremendously already.
This is the key thing here - Uber for instance doesn't need to solve L5 self driving. They need to figure out what routes are frequent in their network and what routes can be driven by their self driving car and route a ride appropriately.
It will never be a case of Uber flicking a switch and replacing all their human drivers. It will be a very gradual transition over to self-driving that may never reach more than 25-30% of all trips.
Is that worth the billions of dollars they're pouring into self-driving? https://www.uber.com/newsroom/company-info/ says they complete 14 million trips a day. That is 5.1 billion trips a year (and growing). 10% of that is 500 million trips a year. Assuming the average trip is $13, that would be $6.5 billion per year in revenue; I think that's more than enough to turn a significant profit even accounting for R&D, capital expenses to maintain the fleet, etc.
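The back-of-envelope math above checks out (the inputs are the comment's own figures: 14M trips/day, a 10% automated share, and a $13 average fare):

```python
# Back-of-envelope check of the revenue estimate in the comment above.
trips_per_day = 14_000_000
trips_per_year = trips_per_day * 365      # ~5.11 billion trips/year
automated_share = 0.10                    # assumed fraction of trips self-driven
avg_fare = 13                             # dollars, assumed average trip price

automated_trips = trips_per_year * automated_share
revenue = automated_trips * avg_fare
print(f"{trips_per_year / 1e9:.2f}B trips/yr, "
      f"{automated_trips / 1e6:.0f}M automated, "
      f"${revenue / 1e9:.2f}B annual revenue")
```

The exact figure comes to about $6.6B; the comment's $6.5B rounds the 511M automated trips down to 500M.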
The reason self-driving will be a reality is purely economic. No one's developing it to change the world or reduce the number of people dying in accidents or whatever spiel they cook up.
IFF autonomous cars can drive in certain conditions and on certain roads substantially more safely and cheaply, I think that'll cause roads and commuting patterns to adapt to the cars.
So even if it's a long way from being able to replace the majority of cars in the way we use them today, the roads and the way we use cars in the future might be different.
I'm not saying it will be. It's just something I rarely hear discussed.
The important things that self-driving cars can't do aren't freak occurrences. They're elementary things like responding correctly to hand signals from someone directing traffic, or reading and obeying street signs written in plain English.
> It’s much more difficult to prepare self-driving cars for unusual circumstances — pedestrians crossing the road when cars have the green light, cars making illegal turns. Researchers call these “corner cases,” although in city traffic they occur often.
These are difficult to deal with as human drivers too. This article is pretty light on details and basically pins the lack of progress on a couple of Tesla crashes. The thing is, regular people crash cars every day. I'd much rather share the road with self-driving cars than cars driven by humans. Especially with the advent of smart phones, not a day goes by that I don't see multiple people heads down messing with Instagram while driving.
This technology is a lot closer to reality than the article makes it seem, I think.
It needs slightly more than that: when it fails, it needs to be able to demonstrate that no human would’ve succeeded in its place.
This isn’t a moral or a technical issue, just public relations. If a system reduces road deaths from ~30,000 per year to 365 per year, but one of those deaths was it mistaking a grey skirt for an open road [1], there will still be calls for its use to be outlawed — basically what’s happening now with antivaxxers, and they’re hard enough to deal with when all the evidence is against them.
> The added insurance costs for luddites will price them out of the market.
What added insurance costs? It's not like adding self-driving cars to the road will make manual driving more dangerous.
No safety technology could make insuring self-driving cars any cheaper than free. Would free car insurance act as enough inducement to drive the fearful away from their steering wheels?
When people realize the self-driving cars are better, the damage claims from human accidents can go up. Right now if you kill a kid with your car you get 7 months in jail and your insurance company pays $150,000 (these are real numbers from the case of a 3-year-old I was really close to). If self-driving cars are really that much better, courts will start handing out much larger judgments.
Because corporate profit is maximized by their accuracy, actuaries are the factual island in a sea of marketing proclamations, out-of-control media and silly regulations.
And if there are no examples of machine-only failure? Then they will be invented, good ol' fake news.
Because our anxiety about AI driving isn't about whether or not the tech will work.
edit: Our fear of autonomous driving is akin to fear of flying. Flying is, by any measure, far safer than driving, but people are terrified of flying in a way far different than how they feel about driving.
Sure, people are scared of lots of things that they shouldn’t be scared of — I’m more worried about politicians being given a high visibility reason to outlaw something which is strictly an improvement, with emphasis on “high visibility“, and thanks to the availability heuristic one per day is all you need.
I would be immensely happy even if it was just on interstates/major highways in clear weather. That's at least 80% of the worst and most mind-numbing driving right there. The rest I don't have a problem doing myself.
To be honest, 80% of what I want is a self-driving car that can drive itself on the highways during regular weather conditions. Getting it on and off the highway, I don't mind doing that manually. I would love to be able to read or work during the hour that I spend on the highway everyday. I would love to take a overnight road trip where I can go to sleep in SF and wake up the next morning in LA. I'm aware that accidents/bugs will happen, and as long as the car's safety record is better than that of humans, that's fine with me. I've never spent more than $20k on a car before, but for a car that can drive itself on the highways, I'd pay a whole lot more.
Could someone explain to me why any average Joe with minimal intellect can drive a car using only a pair of eyes but it's difficult to build a fully self-driving car even with 8+ cameras, GPS, LIDAR and other sensors us humans don't have?
Human brains are just context gathering pattern matching machines. We can't even count without all kinds of context being automatically bubbled up into our consciousness.
When we see other drivers, we can take all kinds of subtle hints about their behavior. We can easily tell if they have an unsecured load (not just if it's currently shaking). We can see the way someone is looking at the road to know if they are going to go, if they're hesitating, if they're high. We can know where the road is even though it's snowed out because of the approximate distance from the ditch you remember being 15 feet out alongside it.
There are zillions of cases like this in the long tail. Self driving cars leapt forward by being able to answer the vastly important contextual question of "what is this in my sensor?" -- but to do the rest, it's hard to overstate how much a computer would have to "be human". To be able to apply past experience, psychology and complicated inferences about "why is this in my sensor? what can I do about it?"
In the end, self driving cars will thrive, just in an environment that poses these kinds of questions as little as possible.
Or we'll actually get a breakthrough and be able to create NNs that allow machines to learn and apply a vast breadth of learned context to sensory input. Teach them vast amounts of unrelated things just as every human learns in the 16 years before they drive (and then some), and then effectively put extra-sensory humans on the road.
We're using a lot of sensory data to compensate for the fact that the "AI" driving the car is really dumb. It lacks any meta-level understanding of the problem domain and is essentially a stimulus-response system similar to a very simple insect or even a single celled organism.
We can drive a car with much more limited sensory input because we have a very high level cognitive model of what we are doing. Better model equals better decision making on far less information.
Slight tangent:
Having kids was an interesting chance to observe how far we are from "real AI." You can show a 3 year old one example of an object and after a few seconds they can subsequently identify that object in any lighting condition or from any angle. They can identify it with one eye closed. They can identify variations of it, pictures of it, and line drawings of it. One example, seconds. The human brain absolutely destroys any AI/ML we have.
not quite true. when you show a kid something new you are literally exposing the kid to the thing thousands/millions of times. different angles, rotation, heck even if you’re just looking without touching from a fixed position your eyes will sample it a lot (saccades of your eye). you also connect/map the object to your prior knowledge. with an AI/ML model you’re starting from scratch. what makes learning “easy” for humans is the incremental nature of it + millions of years of evolution
also: humans can drive a car better than a machine because the roads are designed for humans and the other drivers are humans. the machines would destroy us at driving if the roads were optimized for machines.
Because it's not just the sensors, there's a lot of purely human interaction going on on the road. Looking at the other driver at the intersection and trying to understand what to expect from them; same for someone crossing the road in front of you - is the guy drunk? Eye contact - nope, he looks fine, he will give way. Etc. etc
Come try to drive in Paris or Rome at peak hours. No way on Earth AI can handle this kind of traffic.
I always say autonomous cars will require infrastructural change, similar to the one happened when we transitioned from horses to cars. How exactly - I don't know yet, but with the current infrastructure Level 5 seems impossible.
Because we really don't know how to emulate "thinking." Some overconfident people think they can, but it's a much harder problem than making an ACID-compliant relational database.
Also, because we tend to forget that technological innovation that takes a few decades, or a century, is extremely fast in a historical context.
Our descendants aren't going to care if we have self-driving cars in 2020, 2080, or 2120.
Because any average Joe with minimal intellect nevertheless has an intellect. It's impossible for a computer to actually understand or comprehend anything. Pattern matching, what computers are good at, has its limits.
Pointing a camera at something isn't vision. Making something recognize what they're looking at, and making it create a reasoned response to it, is wickedly hard.
hmm. i have to disagree. driving a car is a walk in the park compared to AGI. they’re not even in the same ballpark. for driving we at least have an idea how to make it happen and we need better tech, more training data and maybe alter the infrastructure to solve some edge cases. AGI? we’re guessing at this point (at least what’s public)
In the average situation, not the edge-case. The way you should think about it is, "If I had a black-box oracle that can drive a car exactly like a human, could I use that to simulate an artificial general intelligence?"
The answer is probably yes. For example, in order to "ask" the AGI a yes-or-no question X, you could contrive that the car find itself at a fork in the road with a roadsign that says "If the answer to X is 'yes', then the road to the right is closed. Otherwise, the road to the left is closed."
what does drive a car exactly like a human mean?
i am going to assert that you don't need AGI for self-driving cars. in your example: that's not how driving works. the driver - even a human one - is not expected to answer random questions as they are driving.
>the driver - even a human one - is not expected to answer random questions as they are driving.
In the same way, the C++ compiler was not expected to be able to emulate arbitrary programs (i.e. to be Turing complete), but some ingenious people found a way to use it to do just that. The way they did it was by writing some extremely unusual and edge-casey code, but you can't just wave that aside and say "C++ isn't really Turing complete, because the proof that it's Turing complete involves C++ code that nobody would really write!"
A lot of those sensors and cameras are less accurate during the night or in weather or with sun glare.
I think one of the biggest issues is that people will not accept self-driving cars that are merely as safe as human drivers. They want perfect driving. If someone hits a pedestrian that jumps out from between two cars unexpectedly, it might get written off as an accident, and the insurance company may not even have to pay if it was clearly the pedestrian's fault. If it was a "robot" car that hits someone, there is going to be a suit for $500M because "Google killed my son!!!!!!". This also leads to people trying to get non-lethally hit by robot cars just to get money. The "robot" car has a big trolley problem that isn't easily solvable.
> The "robot" car has a big trolley problem that isn't easily solvable.
I really think this is overblown. it's not hard to think of a solution to these issues because we already have it in traffic laws for humans. there are rules for accessing the right of way. if you follow the rules and make a good faith effort to account for people who don't, it is pretty hard to be found at fault for an accident, or even "cause" one to begin with. rear ending someone starts with too close of a following distance (or too high a speed) for the conditions. tbones and head on collisions can only happen when one or more people are moving without the right of way. pedestrian strikes on the street can only happen when the pedestrian or the driver is not observing the other's legal right of way.
sideswipes are kinda ambiguous. it's possible for two vehicles to check that there is space in the middle lane simultaneously, then merge into each other. I don't think it's legally required, but I never merge into the middle lane when there is a car to the far left/right.
> A lot of those sensors and cameras are less accurate during the night or in weather or with sun glare.
But for relatively minimal cost, those sensors can easily be 10x better than humans, whether through visible light or infrared, microphones outside the car, or networking with other vehicles and highway cameras.
> there is going to be a suit for $500M because "Google killed my son!!!!!!"
I'm pretty sure that Google, Apple, Uber, Ford, GM, Chrysler, and all of their supply chains could lobby for the tort reform they need, especially with much of our current "representation".
The average Joe is the descendant of millions/billions of years of evolution. He is designed specifically to navigate this planet.
The fact that humans will be teaching computers to do the same kinds of navigation, with just a few decades of work, speaks to how impressive the above-average Joe is.
Humans actually aren't that great at driving - tens of thousands die on the road every year. Autonomous cars do great in ideal conditions (the suburbs of Phoenix AZ for example) but in conditions where the model lacks training data (i.e. bad weather) they have the same struggles as human drivers.
A) We can't put a brain-equivalent computer in the car, and can't program it to be as good as a brain at driving
B) We have higher standards for autonomous cars. The people saying it can never happen are saying it will never be perfect in all conditions, not that it will never be better than humans.
Having seen the evolution of AI in fits and starts from the early days to now I don't understand why the skeptical stance is still default. We have made enormous progress in the last five years and I don't see why it's not possible to get to human-like performance in the next decade. I argue it would be surprising if we don't get there relatively soon.
The state of the art is discouraging for the money and years poured into it.
Recognizing solid obstacles is still unreliable. On the one side, there's Tesla running into stationary objects multiple times, and Uber running down a pedestrian. On the other side, there's a false alarm rate which causes sudden stops.
Low-speed self-driving vehicles ought to work reliably by now, but don't. Google's cute little bubble car, top speed 25MPH, was discontinued. Voyage has some cars in a retirement community. Some, as in 3. With safety drivers.[1] Local Motors has been issuing press releases for years, but not much is on the road. There are some self-driving shuttle buses, but they all have "safety drivers". EasyMile has real autonomous shuttle buses, but they had to drop the speed to about 10 MPH.
Worse, all these systems have a huge engineer to passenger ratio. Nothing is close to being financially realistic. That's not a permanent problem; in the early days of the Internet, it was said that the ratio of PhDs to packets was too high. But this is a long way from profitability.
The Uber did recognise Elaine Herzberg as a pedestrian in the last 2 seconds before hitting her, after failing to do so for the 4 previous seconds [1]. It could have activated its brakes and she may have had a chance to survive (though probably not unscathed).
However, the car's auto-braking had been disabled because it was considered too conservative. So the only agent who could have reacted in time was the woman driving the car, who, as we know, was on her phone.
Much as I find the hype around self-driving cars brain-dead, in this case, the car's AI was not at fault. Even if it could have made the decision to stop in time, the agency to act upon this decision was removed from it.
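The timing argument above can be sanity-checked with rough kinematics. All numbers here are assumptions for illustration (a ~40 MPH travel speed, full braking at ~7 m/s² on dry pavement, and the 2-second recognition window from the comment), not figures from the actual investigation:

```python
# Rough physics check: if the car recognised the pedestrian ~2 s before
# impact while travelling ~40 mph, how much speed could full braking
# have shed before the collision point? (All inputs are assumptions.)
v0 = 40 * 0.44704   # 40 mph converted to m/s (~17.9 m/s)
decel = 7.0         # m/s^2, assumed full braking on dry pavement
t_brake = 2.0       # seconds between recognition and impact

v_impact = max(0.0, v0 - decel * t_brake)
print(f"initial speed: {v0:.1f} m/s, "
      f"speed at impact with full braking: {v_impact:.1f} m/s")
```

Under these assumptions, impact speed drops from roughly 18 m/s to under 4 m/s, which is why the disabled auto-braking matters so much to this story.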
> Much as I find the hype around self-driving cars brain-dead, in this case, the car's AI was not at fault. Even if it could have made the decision to stop in time, the agency to act upon this decision was removed from it.
The agency to react was removed because its reactions are crap. If the AI is braking due to false positives all the time, and the only way to fix it is to disable its ability to react, then I would say that indeed, it is the car's AI at fault, albeit indirectly.
Like I say, _in this case_ the car's AI was not at fault.
I don't know how good or bad Uber's car AI is. If by "crap" you mean that image recognition in general is brittle when exposed to real-world conditions, as opposed to the controlled experimental conditions in published results, then I agree.
It still takes a team of highly paid engineers to build and maintain an authentication page. Seriously - I guarantee you there is a 100+ person team at Google that handles login. Self driving cars are a very, very long way away. The problem space has unquantifiable and insurmountable complexity.
At the same time, we're able to launch huge rockets into space with people in them. We build massive buildings that withstand 9.0 earthquakes. We build and maintain massive systems of roads, the power grid, and deliver clean water to every home in the country. It's a problem of resources and infrastructure - we could rebuild every highway with a self-driving lane that'd be much safer than not.
You're right - the distance to being able to navigate just about any area whether built for it or not is a long way away, and the complexity may end up insurmountable. But we have the tech to move us incrementally towards that.
I can't help but notice that the hyper-enthusiastic media talk about self driving cars died down right around the time of Uber and Lyft IPOs.
Like, if you want to sell something, you want to create a sense of urgency, right? "If you don't invest into U/L you're missing out on the self-driving revolution, and it's going to be huge! Just look at all the media going crazy!".
The US does actually have railroads, but it's so huge that it's utterly unreasonable to build enough railways to be a major contender for general transit.
At least, that's my perspective as a US citizen who commutes by train daily.
The US actually has OK railways ... for freight. Commuter trains are admittedly underdeveloped, even in regions that have a similar size and population density to European countries.
How would that be different from railroads? Aren't those already dedicated to moving products and large amounts of people? And wouldn't they be much easier for machines to navigate?
Strong but brittle is how you could describe most automated systems. They are superhuman in the right context but fail comically if you alter the context just slightly outside what the system can handle. I think the best way forward is to set up infrastructure on roads to help the partially autonomous systems. It's important not to let great get in the way of good.
Every time I read about autonomous cars, I wonder if these points have ever been discussed:
- Yielding to an emergency vehicle with sirens on.
- Moving backwards to a safe and large enough spot when the route is too narrow to fit self-driving car and oncoming huge lorry (and there is no line marking the limit between road and ravine).
- Upon instructions from authority, recognizing that the highway is closed due to an accident and, no matter what the driving code says, you actually have to make a U-turn on the highway and follow the crowd. Alternatively, just take that route (yes, the one with the large no-entry sign at the beginning) or that narrow path in the wood (yes, it exists, even if Google Maps isn't aware of it). At the bare minimum, park yourself off the road and let the others move on.
- Verifying whether a queue is forming behind you. Listen to the honkers; they may be right. When you are an obstacle to most of the traffic, moving to the side and letting others pass from time to time is sincerely appreciated.
- a broadcast radio signal and microphones can detect sirens. That seems easy.
- that's just flow analysis and basic perceptions of the environment. At a minimum, just tell the driver to take over.
- highways will benefit from convergent infrastructure, and alerts like this will be part and parcel of highway driving automation. Flying does this already, basically.
- sensors have 360 degree vision, and mesh networking between cars would help solve this, and automatically call towing/traffic control emergency vehicles to help with disabled cars
I don't worry about any of those situations, those can be readily incorporated into certification and testing of the algorithms/systems.
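The siren point above ("microphones can detect sirens - that seems easy") is at least plausible to prototype. Here is a toy sketch, not a production detector: it looks for a strong spectral peak in an assumed 500-1500 Hz siren band, and the test signal is synthetic, not a recorded siren:

```python
import numpy as np

def siren_band_ratio(samples, sample_rate, band=(500.0, 1500.0)):
    """Fraction of total signal power falling inside the assumed siren band."""
    power = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return power[in_band].sum() / power.sum()

# Synthetic test: a 1 kHz "siren" tone buried in white noise.
rate = 16000
t = np.arange(rate) / rate
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 1000 * t) + 0.3 * rng.standard_normal(rate)
print(f"energy fraction in siren band: {siren_band_ratio(signal, rate):.2f}")
```

A real system would need to track the characteristic frequency sweep over time and cope with Doppler shift and reflections, so "easy" is doing some work here, but the basic signal-processing building blocks do exist.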
I can't read the article, so I'm only going off the headline, but how does the conjecture that self-driving cars are far off square with the fact that, as I understand it, they are already deployed in production in Phoenix, AZ? And are being used regularly in the Bay Area in testing?
Obviously it's incremental. I fully expect in the next 24 months I will be able to call an autonomous car via an app on my phone in the bay area under certain known conditions, like good weather and common, low-risk routes. Or, specifically, that I'll be able to direct my model 3 to drive me to the San Jose airport (a 10 minute drive) without the need for me to actually take any manual interventions along the way. (Though I will still be asked, by Tesla, to be ready to take control.) If true, that's progress, and I expect the progress to continue until eventually this technology is widespread and covers the vast majority of routes and conditions.
Detractors of self-driving cars are always eager to lap up a story like this as proof that it will never happen or will take decades. "Years" doesn't mean decades; even 10 years is a relatively short time - I owned my last car for about 13 years.
Some proponents of self-driving cars might have gotten too far ahead of the hype train, but nevertheless, self-driving is here and will soon be commonplace.
It would obviously take some time to turn over the current inventory of non-self-driving cars but even that would happen faster than most people anticipate. Americans already replace their cars quite often and it is hard to think of a more compelling nudge to replace your car than the prospect of not having to actively drive it.
As for the outstanding technical challenges, I am often baffled when people point to that as a reason for skepticism...it is the whole point of the endeavor, to solve those problems.
The question of "how much and how fast" matters more than you give it credit for, I think.
Take drones - drones are "here" in some sense. You can buy one, they work. But there's a big question about how common drones will be and what they will be used for. How much we care about drone regulations, safety, tracking, etc, will depend a lot on how often we see them. Right now they're rare-ish, but if delivery-by-drone works they could be very common.
The same is true of self-driving cars. Of course we will have some number of self-driving cars, and self-driving features will probably exist on most new cars eventually. But how many there are and how much they can do matters a lot! I think people point out existing problems to suggest that the magnitude and universality of the technology are still unknown, and that the answer matters.
> As for the outstanding technical challenges, I am often baffled when people point to that as a reason for skepticism...it is the whole point of the endeavor, to solve those problems.
Right, but that's an absolutely massive issue. I could say that I am embarking on a project to enable me to flap my arms and fly, you might say that's not possible, but I say that's the whole point of the endeavor, to solve these problems!
The article outlines many very real problems that do not have answers. We can hand-wave and say "well, we'll solve them" but until we have any idea how, and the time it'll take to do them, "years" absolutely does mean decades, if not longer.
> self-driving is here and soon would be common place.
Ehh, "soon" and "commonplace" are both hedge words large enough to represent any future. Heck, you could say it already is commonplace—you can see them all over the place in SF.
My instinct is that this entire self driving hype is to distract from how direly we need to invest in public transit.
Instead of individual cars we need a highly dense network of rail cars that can merge and split and allow you to travel from any one block in the city to another. We should be able to ship moving trucks via this system, which would deliver them near the destination address, from where an operator, autonomous or otherwise, takes over.
I fully agree. If everyone knows that self-driving cars are the future, why would we not just retrofit our roads to use some fixed transportation scheme like trams or streetcars? Self-driving cars seem extremely wasteful, like a bad imitation of public transit.
There's a professor at Oklahoma State with a popular Youtube channel who has a proposal to create an Autonomous Truck Corridor as a way to fix a lot of our infrastructure problems. His proposal works with the autonomous driving technology available today--not level 5 autonomous driving that may be available sometime in the future.
I'd be quite happy if they could just drive up and down the motorway themselves in reasonable conditions. Seems like we're pretty close to that already, and that hasn't even required wiring them all up to talk to each other.
I believe that self driving cars are here whenever they can take a standard driving test in whatever conditions we feel are normal for the legislative area they are employed in.
Until then any kind of self driving should be given the same penalties that we would impose on a party operating a vehicle without the appropriate license. This is such an obvious thing to me I really wonder why it has not been implemented.
After all, if 'AI' is allowed to drive why aren't kids aged 12 allowed to drive, they are obviously more intelligent. It's because they have not passed a driving test and won't be eligible until they are of the right age.
After Musk's announcement that Tesla will stop selling cars to consumers after cracking Level 5 autonomy, I've stopped even looking forward to it because he's right. Why would a manufacturer sell a self-driving car for $50,000 when they can make $300,000 off of it as a taxi over its lifetime? Why would anyone make a non-self-driving car when for a couple thousand dollars in parts they can make money off of it for a decade?
Level 5 self-driving is the death of private car ownership. That may sound great to people living in a city but for those of us to whom visiting family means dirt roads it's stomach-churning.
I don't understand why independent full automated self-driving is the goal. This just leads to competing "standards" so to speak. I think auto manufacturers should be working with governments to create smarter roadways and define an automated driving standard. You'd have smarter cars that would be able to link up in trains on the roadways. They'd be more efficient aerodynamically. It would be safer. You'd have "road ranger" style e-vehicles that could link up and charge you without ever stopping. And you could help fund it all using tax payer money.
The ugly part of this is that if drivers get used to self driving in reasonable weather then, in the one percent of the time (or less) when conditions get really horrible, their driving skills won't be up to the challenge.
However, as self-driving becomes common, investment will be made to make roads easier for self-driving technology and to extend the technology's usefulness.
Even now, in snow storms when temperatures are just below freezing, human drivers have a hard time coping. Self-driving just has to do better than that. It doesn't have to be perfect.
I wonder if this is partially because the people working on AI/ML intentionally don't put enough effort into it. I'm one of those people, and while I'm happy to increase the accuracy of an existing ML model by 0.3% and collect a fat paycheck, I'd never release any really novel ideas if I happened to come up with some. The reason is that our society isn't ready for AI: it's bent by greed and doesn't even recognize this as a problem. Bezos makes billions out of what is practically slave labor and is cool with that. The US ships containers of firearms to the Middle East and Africa for profit. First-world people are OK with slave labor in Africa and China as long as it makes the next iPhone cheaper. China and the UK are proud of making fast progress toward the surveillance state. Australian billionaires open new coal mines and don't care about the rest of the world as long as they have clean air where they live. The American healthcare mafia extorts outrageous prices for basic meds and services and would rather let millions die than cut its profits. What can AI give to this world? Superpowers for the tiny ruling elite? Private armies that can't possibly be challenged by common people? I don't want to live in that world.
I'd take this article with a large pinch of salt, considering their previous predictions about technology.
Case in point below:
"Hence, if it requires, say, a thousand years to fit for easy flight a bird which started with rudimentary wings, or ten thousand for one which started with no wings at all and had to sprout them ab initio, it might be assumed that the flying machine which will really fly might be evolved by the combined and continuous efforts of mathematicians and mechanicians in from one million to ten million years--provided, of course, we can meanwhile eliminate such little drawbacks and embarrassments as the existing relation between weight and strength in inorganic materials. "[Emphasis added.]
The New York Times, Oct 9, 1903, p. 6.
That last point about materials is interesting. Commercial extraction of aluminium was only discovered in the late 1880s and Duralumin (an early alloy) wouldn't be invented until 1909. Thanks to these discoveries and the Great War the prediction was way off.
Well, if you look at the link I posted, there are 11 examples from various points in the past. I was just making the point that their track record with technology predictions isn't stellar.
On-highway is where it's at. City corner cases are going to be impossible for a long time, but caravans of self-driving trucks could easily happen in the next couple of years. It already happens with a person in the front vehicle, followed closely in a train by robotic vehicles.
And when a moose runs out on the road, what then? No matter how fast the data is passed down the line, following closer than the braking distance will not be safe.
And there's black ice, puddles, etc.
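The arithmetic behind this objection is easy to sketch. A rough back-of-envelope, where the speed, deceleration, and signal latency are all assumed round figures rather than measured ones:

```python
# Rough back-of-envelope: stopping distance for a truck at highway speed,
# assuming (hypothetically) 100 km/h, 0.7 g braking, and a 0.1 s
# brake-signal latency relayed electronically down the platoon.
v = 100 / 3.6            # speed in m/s (~27.8 m/s)
mu_g = 0.7 * 9.81        # assumed achievable deceleration, m/s^2
latency = 0.1            # assumed vehicle-to-vehicle signal delay, s

braking_distance = v**2 / (2 * mu_g)   # distance to stop once braking starts
latency_gap = v * latency              # distance covered before follower reacts

print(round(braking_distance, 1))  # ~56.2 m
print(round(latency_gap, 1))       # ~2.8 m
```

If every vehicle in the platoon brakes identically, only the ~3 m latency gap matters, which is why tight platooning can work at all. But if the lead truck is stopped abruptly by an obstacle like a moose, the follower needs the full ~56 m, which is exactly the objection above.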
I honestly think that even when we do get full self driving cars, you won’t be able to own one. They will be rideshare only. The reality is that autonomous vehicles are so complex that you need advanced mapping of very well known conditions to make this feasible. Companies will make these sort of maps for major cities where they run their ride service but anywhere else cars will just not be advanced enough to handle unexpected conditions. Self driving cars will only ever work under controlled circumstances such as well known cities and suburbs.
It is also the case that the downtown area of cities is often the place where parking your own car is the most problematic, so being able to use a self-driving rideshare would be really helpful for that reason also.
I'm going to be the contrarian here and say that they quoted people and events from some of the laggards or less advanced companies (like Uber's crash, which was totally preventable). I see much more optimism from the leaders in the space, although it's not like they are planning 100% self-driving taxis next year.
Also, keep in mind that the fair comparison is whether the cars outperform the accident rate of the average human driver. It doesn't have to be perfect.
How about just programming human driven cars to obey speed limits (downrated in bad weather or visibility conditions). Many accidents are caused by people driving at an unsafe speed (who has not gone into a turn a little bit too fast). No AI needed, just a speed limit overlay onto existing GPS maps, plus a bit of logic to reduce speeds in certain conditions (night, low visibility, etc).
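A minimal sketch of what that overlay logic might look like. The condition factors here are made-up placeholders, not calibrated values:

```python
# Hypothetical speed-cap logic: take the posted limit from a GPS map
# overlay and downrate it by simple condition factors. The factor
# values are illustrative placeholders only.
CONDITION_FACTORS = {
    "night": 0.9,
    "rain": 0.8,
    "snow": 0.6,
    "low_visibility": 0.7,
}

def capped_speed(posted_limit_mph, conditions):
    """Return the maximum allowed speed, applying the most
    restrictive factor among the active conditions."""
    factor = 1.0
    for c in conditions:
        factor = min(factor, CONDITION_FACTORS.get(c, 1.0))
    return posted_limit_mph * factor

print(capped_speed(60, []))                 # 60.0
print(capped_speed(60, ["night", "rain"]))  # 48.0
```

Taking the minimum factor (rather than multiplying them) keeps the cap conservative without compounding penalties when several conditions overlap.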
Unexpected (by the driver) braking, possibly while making a turn, or controls failing to respond (the accelerator, say) seems more dangerous than the status quo.
Plus you'd have to roll it out everywhere at once, or my merge onto a 60mph limit highway where flow-of-traffic is closer to 75, in my software-crippled car that thinks I shouldn't exceed or maybe even match the speed limit in the on-ramp, is gonna be way more dangerous than having no such feature.
Then there's GPS often not quite knowing where you are. The route-finding apps just guess a lot: watch what happens when you make a wrong turn and it takes a few seconds to correct, because the app assumes the difference between where you are and the road you're supposed to be on is an error, since errors that large are common.
The problem is that the mistakes even a much-better-statistically-than-human driverless car makes are going to be different types of mistakes than a human would make.
A driverless car could kill half as many people per mile as a normal driver, BUT if the ones it does kill die in ways that people feel sure a human driver would have avoided, it'll still be a problem.
We need to regulate corner cases just like we did with trains, regular cars, medicine, etc.
Someone proposed dedicated lanes - that is good thinking; dedicated intersections are the next step. No pedestrians allowed, but no lights whatsoever. Then, a section of city where regular cars cannot drive, and where pedestrian areas are automatically driven at reasonably low speeds.
Sure, it is an empty highway. I guess that is what you mean by "the level at which autonomous vehicles operate today". Still, people surely were thinking about Waymo-like performance.
Level 5 AV will never exist, because there isn't even a Level 5 human.
How about we just see if an AV can pass a license test to operate in a jurisdiction? They probably can already - but we do have higher expectations of AVs than we have of humans, I think. Which is good.
>Waymo’s CEO, John Krafcik, has admitted that a self-driving car that can drive in any condition, on any road, without ever needing a human to take control—usually called a “level five” autonomous vehicle—will basically never exist. At the Wall Street Journal’s D.Live conference, Krafcik said that “autonomy will always have constraints.” It will take decades for self-driving cars to become common on roads. Even then, they will not be able to drive at certain times of the year or in all weather conditions. In short, sensors on autonomous vehicles don’t work well in snow or rain—and that may never change.
This reminds me of the time Bill Gates allegedly said "640K ought to be enough for anybody", talking about RAM.
Why would you say it will never happen? That is just silly. It may be decades out, today sensors may have a long way to go. But it's just silly to say it will NEVER happen.
> In short, sensors on autonomous vehicles don’t work well in snow or rain
This is a curious line of reasoning for me. Why are humans allowed to drive in snow or rain though? Your sensors work considerably worse in snow or rain, and there is abundant evidence over the entire history of cars that humans butcher each other with motor vehicles in snow or rain constantly.
The bar is simply that an autonomous car has to be significantly less likely to cause an injury in snow or rain than a human is.
The problem is that with current technology (as I understand it), it doesn't just make a self-driving car worse in the rain or snow, it makes it so it can't drive at all.
It seems like the easy answer is to kick human drivers off the road, right? That will eliminate a great deal of the unpredictability. Start with a small section of a city, and if the results are good, people will want to expand it.
Yeah, no. I will become one of those people who beats up robots. I live in San Francisco, and the reason I put up with the cost and garbage and stupidity in large part is because I don't need a car.
Cars already own non-urban everything, and can stay there. We need to be moving in the other direction. How's that ban on cars on Market St. going?
The really interesting thing (to me) is that Waymo operated self-driving cars on public streets in Arizona without safety drivers during November 2017 but stopped within a month. (google waymo "november 7 2017") Why did they stop? I'm sure it was partially because they realized there were some situations it wasn't handling properly, but my conjecture is mostly that expectations have changed.
I bet that Waymo cars are massively safer than human drivers in many situations, and that they're not as safe in a few others, and that there are a bunch of situations they don't handle well and "freeze up" or otherwise behave unpredictably. Waymo has probably realized that the bar isn't "as good as a human" but "significantly better than humans".
Just give me platooning on highways and approximately 95% of my personal use cases for self driving cars are satisfactorily fulfilled. And the problem is vastly simpler to solve.
The only thing I see in this thread is hundreds and hundreds of people who have never had the opportunity to drive a car with Tesla Autopilot or a Comma.ai EON (this article included).
Those two products are literally the best purchases I've made in years, and have freed up hours of mental energy in my life from the confines of soul-crushing traffic in major American cities.
Remember that Slashdot comment about iPods being lame?
People will look at articles like this, and threads like this, the same way in 5 years...
Can Tesla's autopilot really navigate "major American city" areas with no human interaction? I thought it was just a slightly smarter cruise control for highways only and the driver still needs to be somewhat alert?
I have been saying this for years. Even if self-driving cars handle 99% of situations correctly, they don't become feasible until they can do that last 1%, which is 100x more difficult than the previous 99%. The reality is, if the car drives itself, people will be doing other things in the car and will be unable to respond to emergencies.
If they get in a situation they cannot handle, the car can stop and pull over to the side of the road before asking the human to take over. And it can refuse to navigate to areas where it wouldn't be able to do that. (And if you're in an area where an emergency stop is unsafe, it's unsafe for human drivers too. Getting rear-ended for an emergency stop is the fault of the following car for that reason.)
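That fallback policy is simple enough to sketch as decision logic. The states and checks here are illustrative assumptions, not any vendor's actual behavior:

```python
# Minimal sketch of the fallback policy described above: if the
# system can't handle the situation, make a minimal-risk stop on
# the shoulder and hand off to the human. Routes with no safe
# stopping zone would be refused at planning time, so the last
# branch should rarely be reachable in practice.
def handle_situation(can_handle, shoulder_available):
    if can_handle:
        return "continue"
    if shoulder_available:
        return "pull_over_and_request_human"
    # No safe stop possible: this is why such areas should be
    # excluded up front rather than handled live.
    return "emergency_stop_in_lane"

print(handle_situation(True, True))    # continue
print(handle_situation(False, True))   # pull_over_and_request_human
print(handle_situation(False, False))  # emergency_stop_in_lane
```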
Or you'll need to call a tow truck when it gets disabled, or you'll need a level 5 vehicle rather than level 4, or you'll have to subscribe to the emergency low-speed teleoperation service that has been hypothesized.
You will never define a complete and consistent axiomatization of arithmetic. You will never exactly measure both the momentum and position of a particle. These aren't brutally hard problems; they're not possible. We know that there is a set of things that are impossible. Why is recreating human intelligence in silicon beyond that consideration?
Things are impossible till they aren’t. It’s not like we solved Go the board game, but we did build some killer AI which has now surpassed all human players. And that was within a matter of years: first it was a fluid game of human intelligence impossible for a machine, then it was obvious machines could be better.. but look at this other thing humans are still better at.
Google's AlphaZero AI went from not knowing chess at all to beating the best current software (Stockfish) in mere hours. It's a whole new world out there.
Stockfish isn't an AI, just an expert system. It was designed from the ground up to play chess.
AlphaZero is generalized AI. It was designed to learn how to do things - like play chess. The fact that it easily beat a custom-designed expert system with only a few hours of learning time is incredible. It's a different order of complexity altogether. It may not be human intelligence, but it's a great deal closer to how humans function than how machines (like Stockfish) function.
mhermher's examples are 1) proven impossible, and 2) impossible until we get a completely different theory of physics, respectively. "Until they aren't" makes a nice glib dismissal, but it fails to address the actual impossibility that mhermher has pointed out.
Now, neither of those examples is actually relevant to the topic at hand, namely, AIs driving cars. The question then is, what category are AI's driving cars in: The go-and-chess category, or the logically-or-physically-impossible category, or something in between? My guess is "something in between". But "until they aren't", while it may apply to that category, may still be "longer than your lifetime".
That assumption may be technically correct (I don't think it is, but that's a different conversation), but it's very likely to be practically wrong in the "galactic algorithm" sense. By that I mean that the complexity is so many magnitudes off that it's intractable with traditional binary computing. The pre-programmed car would melt its way through the concrete.
But we actually understand so little about how the brain works, let alone how intelligence emerges from it.
What I am saying is that AGI may be impossible, but people are so determined that it's just around the corner with enough hardware and clever enough software.
It's just around the corner, and we don't even know whether it is possible.
Perhaps it is, perhaps it isn't. But we can now equal and sometimes beat human perception on some standard machine learning datasets. Perception is one very large piece of the problem. The other large part shows some signs of cracking: consider the recent successes in Go and poker, where AI can now beat professional players in both games. Agreed, these are a different class of problem from real-time learning in a dynamic environment, but there is interesting progress here. I'd wager that we're less than 20 years away from a useful general intelligence that could, say, earn money on Mechanical Turk. And the recent success with transformer networks understanding text is incredible.
I think that's a misinterpretation of the quote. Gates didn't say "more than 640K of RAM may never be possible"; he simply argued that more wasn't practical at the time.
I think there is no real need for driverless cars. Technological difficulties aside, we should question whether there is really a need for self-driving cars. Life is not that bad without autonomous cars; I don't hear anybody in real life complain about not having them. Basically a driverless car is the same thing as having a private chauffeur, and I know quite a few people who don't want to use their private chauffeurs a lot of the time.
There's no real need for anything in life other than food and shelter. If driverless cars work and don't cost much, some people will use them and enjoy them and that's enough.
There are levels above food and shelter. Having a car is a real need, and it solves a problem for lots of people. A self-driving car may solve some people's problems, but those people don't seem to be a big part of the population. I think that makes the development of self-driving cars slower.
I complain about it. I've lost good friends to bad drivers. I can't figure out how to improve humans so I'll take self driving cars when they get better than humans.
Even if it becomes technically possible, and I'm pretty sure some day that will be solved, that's the easiest part.
The biggest hurdle for self-driving cars will be ethics, such as 'the car brakes for a child suddenly crossing the street but in doing so kills the old woman', or perhaps even making decisions based upon social status. Lots of options here.
Those ethics have to be agreed upon, and they will likely differ per culture, state, country, etc. It's a political and sociological challenge. It will be interesting crossing borders.
And then those ethics need to be implemented in software.
I would refrain from saying it's never going to happen, but I doubt very much this will happen in our lifetime.
I believe that this scenario is very over-blown. If you're in a situation where you have to choose between swerving into another car or hitting a child that darts out onto the street then the question really is: why were you in that situation in the first place? At 20mph pedestrian collisions are almost invariably non-fatal and emergency stops are almost instantaneous. So if you're on a street where a child can appear instantly, why are you going faster than 20mph in the first place?
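A quick back-of-envelope supports the 20mph figure. The reaction time and deceleration below are assumed round numbers, not measured values:

```python
# Total stopping distance (reaction + braking) at two speeds,
# using an assumed 1.0 s reaction time and 7.0 m/s^2 deceleration.
def stopping_distance_m(speed_mph, reaction_s=1.0, decel=7.0):
    v = speed_mph * 0.44704           # mph -> m/s
    return v * reaction_s + v**2 / (2 * decel)

print(round(stopping_distance_m(20), 1))  # ~14.7 m
print(round(stopping_distance_m(35), 1))  # ~33.1 m
```

At 20mph the car stops in roughly a house-frontage of distance; at 35mph the stopping distance more than doubles, because the braking term grows with the square of speed. That nonlinearity is why the speed you choose on a residential street matters so much.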
a child can appear instantly on any street. they can even run into the highway if their parent pulls onto the shoulder for a bathroom break or something.
of course, it's much more likely to happen in a neighborhood than on the highway, so we drive slower in the neighborhood. but anything that can happen eventually will, so there ought to be some policy.
that said, I think there is a certain segment of the population that gets overly excited about mapping the trolley problem onto autonomous driving. I expect that little will change other than the liable party (the car driver vs the company) and it will mostly work out.
Sure, but I don't think the whole highway is obligated to slow down for a stopped car on the right shoulder, only the lane next to the shoulder (and I would argue that this lane suddenly slowing all the way down to 20mph would create a worse hazard). Theoretically the child could make it all the way to the left before being struck. I'm not saying this is likely, just that there is no speed where you can guarantee you won't hit a pedestrian who is behaving erratically. The best you can do is make it very unlikely in most cases.
No, the Trolley Problem will not be the biggest (or even a significant) hurdle until AI is so good that it's a moot point anyway. Self driving cars will do what humans do - they'll brake as hard as possible while trying to steer away from anything they're going to hit. If we're lucky they may distinguish between moving and non-moving objects and prefer to hit the latter on the assumption that the former are more important.
The only ethical decision a self driving car has to make is to not actively kill people. What you are proposing sounds to me to be merely a way to stealthily kill people you don't like with self driving vehicles and then use the "ethical decision engine" to gain plausible deniability. To make decisions about "social status" you need a database of people and their "social status". So where are you going to find a database of people with low "social status"? Well every prison and police station has one and oh and for good measure we can also add some skin color detection because that type of analysis can be done in real time.
Or we could just build better self driving cars and accept that some people will be killed but in exchange they maintain the agency to improve their chances of survival instead of being randomly killed on the sidewalk because some stupid idiot wanted to shave off 3 seconds by jaywalking. Just do stupid shit, the car will kill someone else anyway.
The idea that we should encourage a moral hazard like this strikes me as incredibly disgusting.
We don’t currently have those ethics codified now and it doesn’t seem like too big a problem. I think very few situations actually arise where someone has the time to decide between killing two different individuals.
I've always wondered why more attention isn't paid to this. A driver's ability to make a decision to run over a possible assailant is a huge deterrent that is not fully appreciated. To an autonomous vehicle all people will look the same and they will have to stop.
I'm most curious about the self-driving trucks. If they are driving across the country through sparsely populated areas, what is to stop a group of robbers from forming a chain across the road that forces the autonomous truck to stop, and then offloading the cargo? Legally, this would not be considered "robbery" since there is no person involved. As anybody who has driven in the USA knows, time and distance would be on your side in the crime, as there are tons of backroads and different routes to make a clean getaway...
All those sensors on the truck will be very useful to police investigators afterwards. Sure they make off with cargo, but theft is illegal even if it isn't robbery. The police are generally pretty good (despite the real cases of bad apples). In most cases "the cops say they know who but there is no proof" (obscure Stan Rogers reference), and the truck will upload the proof they need in real time.
Could be they're saying that to defeat competition: don't bother trying, it's decades away! And Ford is notoriously behind the rest of the pack, so I'd believe even less of what they say. I don't listen to executives with a stake in the game, regardless of which side they are playing.
The barriers to this seem mostly man-made: regulations, restrictions, the taxi lobby, trucker unions, etc. A lot of it depends on where the line is drawn for safety or versatility. Does one vehicle need to operate in all climates with 0 accidents a year? Simply being safer than humans is easy - we are already there.
I'd be happy to even see more and more driver assist to the point where you can't crash even if you jerk the wheel and try to.
I think the bar has been set incredibly high, almost impossibly out of reach, by people interested in things staying the same.
Waymo alone was driving over 1 million miles a month in 2018. How many people have they killed? The fact that an autonomous vehicle or Tesla "Autopilot" fatality makes national news is telling. It's not commonplace.
A million miles is not a lot. For comparison, the people of LA had driven a million miles by 5 a.m. this morning. The drivers in SoCal will drive over 100 million miles just today.
The fact that many accidents involving human drivers still make the news is telling. It's not commonplace.
This is a weird hill to want to die on. Self-driving is currently statistically safer than human driving. Yes, caveats, for only the domains on which self-driving has been well-trained.
My point was that the statistics aren't even remotely comparable. Out of literally billions of miles driven daily across the US in all conditions, there are approximately 100 deaths nationwide. Each day, US drivers drive literally tens of thousands of times more miles than all self-driving cars combined have driven to date. There are at least 6 deaths attributable to SD cars so far in the US, not including smaller accidents, despite their driving only in optimal conditions. Scaled up per mile, self-driving cars are currently several orders of magnitude more dangerous than human-driven cars.
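Making that per-mile comparison explicit, with very approximate numbers: the human figures are the round ones from above, and the cumulative self-driving mileage is an assumed placeholder, since no exact total is given:

```python
# Rough per-mile fatality comparison. All inputs are approximate;
# sd_miles_total in particular is an assumed placeholder, not a
# reported figure.
human_deaths_per_day = 100
human_miles_per_day = 3e9          # "billions of miles driven daily"
sd_deaths_total = 6
sd_miles_total = 20e6              # assumed cumulative SD miles

human_rate = human_deaths_per_day / human_miles_per_day   # deaths/mile
sd_rate = sd_deaths_total / sd_miles_total                # deaths/mile

print(round(sd_rate / human_rate))  # rough per-mile ratio, SD vs human
```

The exact ratio swings wildly with the assumed mileage total, which is really the point: the self-driving sample is still far too small for fatality statistics to settle the question either way.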
>Self-driving is currently statistically safer than human driving.
I don't know of any statistics saying so. I've seen plenty of stats like 'self-driving in easy conditions is safer than human-driving in typical conditions'
So, self-driving is safer for those specific circumstances where self-driving is safer? Well, sure. Not much surprise there. That's not very persuasive about self-driving being (or ever being) safer than humans at real world driving in general conditions.
One could even argue that adding FSD to the mix for those ideal conditions will "condition", as it were, humans to suck even more at driving exactly when human input will be required.
It's my understanding that these cars can only drive without any human intervention in certain, somewhat ideal scenarios. I've never seen a direct comparison to humans driving in the same limited set of conditions. Are the AVs still statistically safer in those circumstances?
It's not sufficient for these systems to just be "better" than humans in terms of accidents/mile/vehicle, they have to be FAR BETTER than humans.
Today, when accidents occur, blame and liability goes to the responsible parties. We're talking about millions of accidents and millions of drivers.
If you suddenly have a car that drives itself, the blame and liability for all accidents gets focused like a giant magnifying glass onto ONE (or a few) companies which have made the self-driving car.
I don't know how many accidents/mile/vehicle is the "magic number" but I expect it is going to have to be far far far lower than what it is now. Otherwise, these self-driving car vendors won't be able to survive the legal onslaught.
Not really. The cars only need to be safe enough to get over the potential fear of users to activate them. This may even be less safe than human drivers, we just don't know until they are deployed. For example, it could turn out that present day autopilot in Tesla cars is less safe than human drivers, but yet people still use it. Humans aren't strictly rational based upon risk when it comes to transportation. If autocars are convenient, and in every day experience work successfully, people will use them (especially those who are young, who have the marginal cost of having to instead learn how to drive, and may just opt-out of learning to drive altogether if they can get through their daily lives by using autonomous vehicles.)
The legal issue you mention seems secondary to the human factors one. If people get addicted to self-driving, and some are dependent upon it, the laws will bend to the culture not the other way around, as they always do. Look at the current regime for an example of this: we gloss over the 30k fatalities that happen regularly due to cars, and hand out licenses to drive these death machines to people who really should have no right to do so given their poor driving skills or physical impediments -- we do so because to deny someone the ability to drive is so averse to culture and freedom that we err on the side of enabling more people to drive despite the fact that it surely increases the risk for everyone on the road.
Why can't we accept a little better now and keep striving for far better? Seems stupid to have an all or nothing approach, especially when plenty of luxury vehicles have pieces of automated driving in them already. Liability is for insurance leeches to figure out
Because the proper comparison isn't X accidents vs Y accidents. It's X accidents and Z billion dollars spent vs Y accidents, where that Z billion dollars could have gone to other efforts. If the delta between X and Y is minor, Z is wasted.
Two weeks ago I could have reached one of my life goals of ramming a waymo robocar if I were making a U-turn rather than a left so they still have quite a ways to go before they're ready for prime time. Maybe next time...
Presumably that average is taken over all conditions, implying that the average is well below 1.25 fatalities per 100 million miles driven in the conditions waymo is driving in.
True, and drunk drivers, and unlicensed drivers, and people driving while very sick or tired. Really we will need a couple billion miles driven to really start to get into fatalities as a metric compared to humans. Fortunately most car accidents aren't fatal, so those will be a better starting point.