Hacker News
Tesla's self driving algorithm's overlay [video] (tesla.com)
669 points by belltaco on Feb 3, 2020 | 641 comments



Compare Google's system from five years ago.[1] (at 7:42)

This new video is at n00b level compared to that.

[1] https://embed.ted.com/talks/chris_urmson_how_a_driverless_ca...


The camera is bird's-eye in the Google presentation. And it is a recording.

So jitter/flicker can be cleaned up/smoothed out and the data can be massaged in ways that a real time system may not be able to do.

This is a presentation. I'd be /very/ surprised if this animation was from RAW data as-is.

Also there seems to be Lidar data (point clouds) which Tesla doesn't have.

So while this means bounding boxes may have less detail in Tesla's system, this is not an issue as long as they are not smaller than the physical object.

Having worked in the automotive space in the last five years and seen lots of those, I'd not say one is less impressive than the other.


Those scenarios from Google were obviously selected for the presentation, but very much represent state of the art in _real time_ processing for autonomous driving systems from the time of the presentation.

I’m not sure what is implied by saying it's a recording - both the Google and Tesla presentations are “recordings” with equal opportunity to pick best-case examples, but I would bet strongly that both show raw, “real time” output for their respective compute platforms.

The top down viewpoint helps show off the quality (still by no means perfect) of the world representation. If you projected Tesla’s model into 3D you would see far more jitter than in the video overlay for a variety of reasons.

That said, I think comparing them directly on specific technical components is a bit of a sidebar. They are taking two very different paths along the way to a still ambiguous problem. Both are leading their respective approaches, but have fundamentally different and unproven assumptions.

Also worth looking not just at how accurately objects are detected but what the visualizations show about the intent of other road users. The Google video shows predicted trajectories for important objects in a number of scenes. We don’t get to see any of that clearly from Tesla, and that is by no means a small part of the problem. Not sure if it is there and not shown, just highlighting there is a lot more downstream even once you are finding objects reliably in the sensors.


It diminishes the comparison when just the lidar for a waymo car costs more than the entire tesla vehicle.

So, not only can you afford a tesla vehicle, you can go out and buy one today.


According to [1] Waymo produces its own lidars. In 2017 it cost $7500 without being mass produced.

Edit: wording.

[1] https://www.theverge.com/2017/1/8/14206084/google-waymo-self...


I guess I CAN afford a Tesla...


I was just answering the claim "when just the lidar for a waymo car costs more than the entire tesla vehicle". Which is clearly wrong.

The tesla you can afford still does not have self driving capabilities either btw.


I made this joke as I found the disparity between the message and its reply humorous. If you disagree, feel free to ignore my comment. That aside, it is quite cool how Waymo cut costs by 90%: https://arstechnica.com/cars/2017/01/googles-waymo-invests-i... (perhaps there are newer developments of which I am not aware). The old numbers indeed exceed the price of most teslas...


I see, I missed the joke I think. Reading it again, it was quite obvious, sorry about that.


All the Waymo vans I've seen have had multiple lidar sensors, I think 4?


Can you go out and buy a fully self driving Tesla today? Because that would be the relevant comparison.


Can you buy any sort of waymo at all?


LIDAR will get cheaper and smaller though. I think Tesla’s argument that cameras+ML+tons of data>LIDAR is more compelling.


The real question is cameras + ML + tons of data vs LIDAR + cameras + ML + tons of data


It will still be power hungry and mess up aerodynamics even if it is smaller. Tesla is looking at the entire picture of autonomous + EV which IMO is a better strategy.


> The camera is bird's-eye in the Google presentation

The car is aware of where it is in the world, so a bird's-eye view is more representative of what the Waymo car actually sees; it’s just one of the many ways the Waymo car has better data to work with.


Your comment makes no sense. Besides the fact that this is showing Lidar scans with depth and 3D textures (does look cooler), the relevant part, which is feature detection, is a lot poorer. It doesn’t seem to detect lane markers or boundaries, sidewalks, traffic signs, or show path estimates like you can clearly see in the Tesla video. Plus the texturing implies the route is pre-mapped while Autopilot is doing all the work in real time.


Your comment just doesn't make any sense in context of the linked video. I can only assume you did not watch the video. I have uploaded the most important scenes to imgur so you don't have to waste your time finding the key scenes. The results will surprise you because they show exactly all the features that you claimed that google didn't have. https://imgur.com/a/6zHt8vP


In the Waymo video there are narrow vector lines added on top of the broken white lane line in the image view, there is also a vector boundary for the center line and the road edges, and 3D boxes for the cones. You are clearly not looking closely enough, or might not want to.


But still - there are Teslas driving everywhere, including ones with autopilot enabled, and I have yet to see the first Waymo car on the road...

The technology might be/seem superior; commercially Tesla has clearly won the race..


>Tesla has clearly won the race..

But Waymo is pilot-testing autonomous taxis and Teslas have a "greater than zero chance" of making it to your destination without intervention.


First Mover Advantage is still pretty important though.

Tesla is selling cars today, Waymo is not. Tesla's cars are good enough for most buyers, so Waymo's technical superiority has to be much, much better to warrant the attention and a change in public perspective.

One very important thing is Tesla has established that your car will get better with software updates over time. Other cars have nice adaptive cruise control and self-driving tech, but the thought that the car you bought today can in a few months drive better is a massive psychological advantage imo. I don't think Waymo can compete on self-driving features anymore. They have to make the better car and driving experience as a holistic package to stick.


>Tesla is selling cars today, Waymo is not.

Tesla sells cars for which FSD doesn't exist, Waymo is using FSD on cars they don't build.

The business models are different. Why is the natural assumption that everyone will own their FSD car?


> Why is the natural assumption that everyone will own their FSD car?

Great point.

Tesla exists for people who want to own/drive their cars with cool tech and the dream of FSD down the line via a magical update

Waymo's launched Waymo One as a taxi service but it's only available in one region. My understanding has been that they want to buy a fleet of cars, outfit them with their FSD tech for a taxi service? (So competing with Ridesharing services as an autonomous alternative)

There is certainly plenty of room in the market for both models.


Wow. This is what I’d expect from a large company. Full 3D representations with super smooth object recognition. Tesla’s looks like a student project compared to that.

Although I wonder which is more truthful. I bet Google’s demo takes significantly more computing power, was cleaned up for the demo, etc


This is sort of a natural consequence of LIDAR.

Turning point clouds into objects is child's play compared to plucking objects out of cameras.


Waymo is at least 5 years ahead of Tesla. They are running driverless self driving cars already on city streets. My Model 3 can’t even navigate a parking lot half the time I use the summon feature.


Anecdote: this evening in Mountain View, the driver of one of Waymo's test vans had to take over (I watched her grab the wheel).

The van had come to a complete stop with room to pass a USPS truck (it appeared safe to me from 25 feet away), and didn't begin moving again for a few seconds. She then took control.


The waymo cars have always been timid. That's why most of the collisions involve the waymo car getting rear ended. They don't move when other drivers expect them to.

Of course, this is annoying, and being so timid that you get rear ended is unsafe, but the Tesla approach would be the delivery truck is detected as a stationary object and ignored, so the vehicle would accelerate to the set speed in 3 seconds and slam into it. (See the many reports of Teslas running into stopped emergency vehicles). Of course, Tesla will tell you that their car isn't really self-driving with one side of their mouth, while telling you your car is equipped for self-driving out of the other side.


> Of course, Tesla will tell you that their car isn't really self-driving with one side of their mouth, while telling you your car is equipped for self-driving out of the other side.

This is my biggest worry with the way this is promoted. It’s false advertising and misleading to consumers. I wish more companies (in all industries) would be held accountable to accurately portray what they are selling.


Tesla promotes autopilot as safer than a human driver but blames the driver when there is an accident. How can autopilot be safer than a human if the only way to use it safely is with a human monitoring it?


It's hard to extrapolate anything from this, as they may have intentionally been testing some new variation on an algorithm that made it more hesitant. Besides, it doesn't sound like the car was unsafe, just timid. A crash would be way worse.


In one very small geo, waymo is maybe 1-2 years ahead.


Tesla hasn't even started the certification process for self driving in any state. Without seeing disengagement reports, and looking at how long it took Summon to get released to market after its initial preview (3 years), 5 years is being optimistic for Tesla. A realist would say they'll never be self driving with the current sensor setup. The cars that are farthest along, judging by miles driven between disengagements, are all lidar-based.


Quick question: if you summon Model 3 and car gets in an accident, would Tesla pay for the repairs?


No. You agree that you are in control of the car.


How do you control the car if you're not even in it?


If you release the summon button, the car comes to a stop. You are supposed to be able to see the car visually at all times so you can release the summon button and avoid problems.


In addition to stopping when releasing the button, it also shows you the path it plans to take on a satellite map on your phone. So you know where it is going.

Some people have problems with it, but it works great for me. It's pretty awesome to have the car pull up to me on a rainy day. Part of it is parking in a smart place on the way in.


This already happened several times after they first rolled out the feature a few months ago (minor accidents). No, Tesla did not pay (nor would I expect them to).


I wonder how courts decide on that...


Of course the software running in the current Model 3 can't compete with Waymo, that's an idiotic demand. That is just arguing in bad faith. We have no clue how far they are as that software is internal.


How much of the Google video is eyecandy versus actual processed imagery?


Fake it until you make it ;-)

That reminds me of all the "AI power" we already use which is backed by low-wage contract workers who listen in.


Google search was curated by humans for a long time. The last guy I met who was doing it now works for an SEO firm. Things like house insurance, medical insurance etc are highly hand tweaked.


Please understand, Tesla works with only cameras & radar (at L2 level), whereas Google's Waymo works with pretty expensive and largely redundant sensors, i.e. 360-degree LIDAR, cameras & radar (at L4 level). Tesla sells vehicles in volume, Waymo doesn't (as it is exorbitant). Hence the comparison is not apples to apples. I do agree the visualisation in the Google video is more appealing. I assume Tesla also has state-of-the-art visualisations for their algo development.


They're only redundant if the system is actually working and they're not necessary. Nothing is actually working right now in terms of actually self driving the car, so it's a little early to say redundant.


What are these levels you are referring to? We drive just fine with two cameras.



That doesn’t have anything to do with the technology used to implement it.

I might have misunderstood the OP’s point with their parenthetical remarks.


And two actuators in every eye ball and two big actuators between the head and the main body.


Because our field of view stinks.


Waymo may be planning to develop overpriced self driving cars to gather data automatically for cheap self driving cars while simultaneously experimenting.


Note that Google's system operates in pre-mapped areas and uses lidar.


So it operates under the conditions that we expect the commercially available product to operate under? Yeah cool thanks for the caveat.


No, it operates in an infinitesimally small test area.


So where can I buy a Google car today?


I have mixed feelings about this. I realize the number and types of sensors that exist in Waymo's cars provide far more data to the driverless car. But then when I really think about it, humans have been driving with only 2 sensors (eyes) for a long time with relatively good success. With the improvement of computer algorithms to help the cars make better decisions than humans, I'm not 100% sure if the long-term solution will require all the extra sensors.

My guess is the added sensors may get Waymo to market sooner, but if Tesla can commoditize this technology for far cheaper they will win in the end.


>humans have been driving with only 2 sensors (eyes) for a long time with relatively good success.

1.25 million die a year in road accidents. We should try to do better than human.


But also keeping speed high enough. You won't have many fatal road accidents with speed limited to 5 km/h. But it won't be very useful transportation. Those timid cars might be safer but people might prefer more aggressive style to reduce travel time.


Sure, but if we can match that number of fatalities with self driving cars it will inarguably be a success from a utilitarian standpoint.


Humans also have 20+ years of experience living in human society. You need to develop AI as strong as humans to match that.


I don't think that the videos contain the information necessary to make this judgment.


Plus Tesla still does not report disengagements to the CA dmv, right? It seems so strange to me that their HQ is there and they do not report any testing there.


Either Tesla: (1) Calls it a 'driver assist system' and carries that definition beyond all reasonable bounds. I've read the regulation, I don't think that defense would stand up in court. (2) Does physical testing somewhere other than California. The testing regulation is fairly onerous, and I could see them saying 'nah', we'll do this in Arizona, Nevada, etc.

You're right they don't report disengagements, but they still maintain an autonomous test vehicle permit in the state, according to the DMV website. Very strange indeed. I think (2) is probably right.


It's (1).

For Reporting Year 2018, Tesla did not test any vehicles on public roads in California in autonomous mode or operate any autonomous vehicles, as defined by California law. As such, the Company did not experience any autonomous mode disengagements as part of the Autonomous Vehicle Tester Program in California.

https://electrek.co/2019/02/13/tesla-autonomous-mileage-cali...


They actively avoid reporting test results in CA.

Nothing about that changes due to semantics (your #1) or if they test elsewhere (your #2).


Why is that strange? Does Tesla strike you as conventional?


It strikes me as evasive.


After seeing reviews of the Tesla Autopilot vs Openpilot it became clear to me it is way too early to trust Tesla's Autopilot (and Openpilot).

They both use visuals only and are easily confused when situations are a little different than 'normal'.

I even think Openpilot performed a little better than Autopilot but because Openpilot only has a forward looking camera it fails often on tight corners.

Google on the other hand is aware of the complete 3D surrounding. So even if road marks are gone or are unclear it still can estimate where the vehicle should be on the road.


Tesla’s system can still estimate where the car should be without markings, just like humans do with their vision.


Well, if you look at some videos you can see that there are times Tesla thinks repair lines are road markings, especially when the sun is reflected by the lines.

When the car has no real idea of its position in 3D space this gives problems.


Same as humans, except it has GPS, radar and other sensors to help.


"Inside Waymo's Secret World for Training Self-Driving Cars" -The Atlantic

https://www.theatlantic.com/technology/archive/2017/08/insid...

This was a really really good read, especially into how Waymo simulations could generate data like seen in this video. It's from 2017 but still an extraordinary article for the quality of engineering insight.


I was waiting to see if someone would post this. And that was 5 years ago.


On the contrary. When you know the Tesla system is running in real time on a hardware accelerated card IN the car, it becomes way more impressive than a google presentation made with supercomputers after a long time of processing.


A simulation vs actual?


Well, there's no harm for Google in publishing more since the project will inevitably be "sunset" and is an uneconomical research prototype, while Tesla's is a production version that people actually use.


The amount of jitter in the estimates makes me nervous, especially when the model thinks something is present in one frame and not there in the next.


This is a problem even in production autopilot in Teslas. When stopped at a stop light, you can see cars "dancing" and rotating randomly in a jittery fashion. Today during an auto lane change, the system blinked a truck from two lanes across in and out of my target lane, causing the car to cancel going into an empty lane after entering it a quarter of the way, twice.


The dancing cars were fixed several months ago in an update. The computer used to just recognize cars, and then align them according to the lanes it sees. At a stoplight, when it had trouble seeing the lanes clearly, the cars would rapidly change orientation.

Since they updated the neural net to also recognize the vehicle orientations the dancing has stopped.

I have seen a lane change cancel recently, a couple weeks ago though.

You can tell the car is a 'nervous' driver. It plays it way too safe, but I guess that's a good thing at this point.


Model 3 owner here: the dance where cars spun around and landed on top of you is gone, but detected cars are still quite jittery. I notice that when I'm stopped and also when I'm driving. This is quite noticeable in the transition between different regions around the car (presumably when the vehicle is handed over between different cameras or sensors). Even something as relatively simple as the traffic aware cruise control will sometimes slow down for no apparent reason or simply turn itself off in the rain. Given the combination of the visualizations and the performance of cruise control and autopilot I think Tesla is very far away from fully autonomous driving under all conditions. But they'll probably keep getting better at the semi-autonomous/good conditions/freeway "self-driving"/"augmented driving"...


I'm pretty sure that they "fixed" the dancing cars problem by applying a low pass filter to the data before sending it to the visualization, just so people would stop complaining about it. I think there's still a lot of jitter in the underlying data.
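For what it's worth, the kind of cosmetic smoothing being described could be as simple as a per-object exponential moving average on the displayed pose, applied after detection and before rendering. A minimal sketch (my own illustration, not anything Tesla has published):

    def smooth(prev_display, new_detection, alpha=0.2):
        # alpha near 0 = heavy smoothing (pretty but laggy),
        # alpha near 1 = mostly raw detections (current but jittery).
        # prev_display / new_detection: e.g. [x, y, heading] for one object.
        return [(1 - alpha) * p + alpha * n
                for p, n in zip(prev_display, new_detection)]

The underlying detections stay untouched; only what the driver sees gets filtered, which would be consistent with jitter persisting in the raw data.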


Seems like it would make more sense to model the inertia. Cars don't randomly accelerate at 100,000m/s/s in some direction they aren't pointed. Though they should have a model for detecting obstacles in the view regardless of inertia, because sometimes something really does appear in front of you in a thirteenth of a second.

You could probably model inertia with n prior frames of probability fields.


> Cars don't randomly accelerate at 100,000m/s/s in some direction they aren't pointed

What if they are hit by a truck? Maybe not 100,000 m/s^2 but if you assume that cars can't accelerate in directions they aren't pointed, you will be wrong at the worst possible time.


That's why I elaborated, and why I chose that number. If something appears to accelerate like that, it's an error one way or another.


A threshold that high will be useless as it will miss most errors. A threshold low enough to catch most errors will reject some valid data. A naive approach like that will not work.

A better approach would be to include temporal data in the inputs to the neural net so it can learn how to do the prediction and filtering itself using all the context available in the input imagery, instead of processing each frame completely independently and feeding low-dimensional symbolic results into some other system. But you'd need a very large dataset and a very large neural net.
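As a sketch of what "temporal data in the inputs" could mean in practice (one common recipe, not a claim about Tesla's network): stack the last few frames along the channel axis so the very first convolution can pick up motion cues, rather than classifying each frame in isolation. The names and sizes below are made up for illustration:

    import torch
    import torch.nn as nn

    K = 4  # hypothetical number of consecutive frames fed to the net

    class TemporalDetectorStub(nn.Module):
        def __init__(self):
            super().__init__()
            # first conv sees 3*K channels instead of 3
            self.backbone = nn.Sequential(
                nn.Conv2d(3 * K, 64, kernel_size=7, stride=2, padding=3),
                nn.ReLU(),
                # ...detection head omitted
            )

        def forward(self, frames):
            # frames: (batch, K, 3, H, W) -> (batch, 3*K, H, W)
            b, k, c, h, w = frames.shape
            return self.backbone(frames.reshape(b, k * c, h, w))

Recurrent or attention-based variants do the same job with longer memory; frame stacking is just the simplest way to stop the net from processing each frame independently.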


Would you not be implicitly assuming that the prior position was more accurate than the current one? If you have a sequence of consistent prior positions, then perhaps something like a Kalman filter would be appropriate, but I would guess that with something suddenly being revealed by a change in either party's position or that of a third party, you don't always have that.
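For reference, here is roughly what the Kalman filter idea looks like for a single tracked coordinate, with a constant-velocity motion model standing in for the "cars have inertia" prior. This is a toy sketch of my own, not anyone's production tracker; a real system would track full 2D/3D pose, handle data association, and spawn new tracks for objects that suddenly appear:

    import numpy as np

    dt = 1 / 13.0                      # frame period (~13 fps, as in the video)
    F = np.array([[1, dt], [0, 1]])    # constant-velocity model over [pos, vel]
    H = np.array([[1.0, 0.0]])         # we only measure position
    Q = np.diag([0.05, 0.5])           # process noise: how much the state may drift
    R = np.array([[1.0]])              # measurement noise of the detector

    x = np.zeros(2)                    # state estimate [position, velocity]
    P = np.eye(2) * 10.0               # state covariance (initially very unsure)

    def step(z):
        """Advance one frame given a noisy detected position z."""
        global x, P
        # predict: coast forward under inertia
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # update: blend the prediction with the new detection
        innovation = z - H @ x_pred
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x_pred + K @ innovation
        P = (np.eye(2) - K @ H) @ P_pred
        return x[0]                              # smoothed position estimate

The gain K is what answers the question above: it weights old versus new evidence by their covariances, so a consistent history pulls harder than one noisy frame, but a confident new measurement can still move the estimate quickly.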


Modelling inertia seems like a special case of a low pass filter? A very useful and physically plausible special case, of course.


That's a good fix, but they should apply the filter to the data used in the driving logic also.


If the filter adds significant latency that could go poorly


To elaborate, the filter also may not improve the accuracy, just the perceived accuracy.

To be correct but one second late is to be completely inaccurate. The system is trying to estimate the current position of the car, but also predict future positions.

So a little bit of imprecision is fine since it improves accuracy related to predicting the future positions of the cars. A slight move in one direction may indicate a lane change, so it is always useful to be aware of that so as not to accelerate past a car whose measurement appears to be more inaccurate, since they actually might be moving. If you did the same thing with a human's "sixth sense" perception of the positions of the cars, you'd definitely find that they move a lot compared to their actual positions when the head is turned since our ability to merge our vision and our inertial sense is not very good for the most part.

The same issue arises with AR/VR, it's useless to know a more accurate position of the user if it's not the present position, because then that will definitely lead to motion sickness.


"The dancing cars were fixed several months ago in an update."

On the latest software and with HW 2.5, this is not true. It's still very much there.


Agreed, with HW3 it's the same, there's still lots of jitter and dancing cars. As of today, with software updated about three days ago.


Dancing isn’t completely gone - especially with large trucks.


I'm pretty sure the truck thing is because images aren't stitched yet. That's coming in an update.


Do you know if, when a vehicle disappears, the system assumes the vehicle continues moving as it was when last spotted?


I'm pretty sure that the visualization is only showing highly confident classifications (not sure about the SUV/Pickup thing). Under the hood the algorithm is locating all kinds of objects that could be displayed on the screen as some kind of unknown box but are not. Probably the reason Tesla isn't showing this is because the location and size of objects are uncertain and people would freak out if they saw all that traffic (some of it quite close) jumping around.


If it’s a debug visualisation, why would it not be displaying everything? Of course, it’s in a public release so it’s probably, er, ‘tidied up’ a bit.


That's not unusual for computer vision (or any kind of sensor really), that kind of data is normally filtered and smoothed after that, and merged with previous frames or other sensors.

What would be worrying is the model misclassifying an object, not detecting it at all, or having the bounding box consistently off.


> That's not unusual for computer vision (or any kind of sensor really), [...]

Including human vision. The raw sensory data is pretty messy, and with some ingenious experiments some researchers can get a glimpse of exactly how messy.


Oh definitely. The selective attention test [0] really shows how humans can perceive certain things in a way that's different from how computers perceive them.

The research on GANs also shows how computers can be fooled by things that wouldn't confuse humans.

[0] https://www.youtube.com/watch?v=vJG698U2Mvo


Agreed. It also said it was running at 13fps. Not stoked about a vehicle going 70mph updating at 13fps.


Tesla says their "Hardware 3", which is what you get if you buy it now, can process all cameras at 60fps.


I find it odd that they would publish a video showing performance numbers from out-of-date hardware. I mean, I believe you - I watched the presentation on their custom processor and it’s quite impressive. Just weird that they’re showing old performance numbers. Perhaps this video is old.


There is speculation going around that the "big rewrite" Elon mentioned last week is actually porting the code to run natively on the new hardware. Speculation says it's just been running in an emulation layer, but now they're about to unleash the full potential of the hardware.

If true, it makes sense the video would also have been captured using this emulation layer, explaining why it's not latest-and-greatest-fast.


If that is true you should call into question the integrity of a company that would run life-critical software on a non-RTOS.


Realtime has nothing to do with absolute performance. It's about meeting your deadlines.

And, of course, often real-time systems have much worse throughput performance than a non-real-time system on the same hardware. After all, latency guarantees are not free.


Yup. I think we agree on that.

It doesn't matter if you have a highly performant pipeline for detecting other cars if you have a random 200 ms VM pause as another car blows a stop sign in front of you.


I get your point. But to nitpick: I think you could make 200ms pauses work, even if they are random. Just adjust your deadlines, and drive like a defensive human driver. Humans have worse reaction times.

The bigger problem is that what you might actually be getting is (almost) arbitrarily long pauses with a long tailed distribution. So sometimes 200ms, rarely a second, and every once in a while perhaps two seconds, etc; and no guarantees on the longest pause.


...but if it starts braking hard in 250ms that's still a very superhuman reaction time.


As humans we know our reaction times - you naturally slow down as you get to a junction or navigate crowded or tricky traffic conditions to give yourself as much of a chance as possible to react.

This is the thinking behind many types of speed control street layouts. You should also know where to look to anticipate where danger is most likely to come from, and be ready with some kind of action. This is why we do hazard identification tests as part of the driving test - looking in the right directions at the right times is crucial for operating a vehicle safely.

250ms is a fairly average reaction time for something visual that you are ready for - but you should really be giving yourself as much time as possible - if somebody bombs past a traffic light at 70mph, even if it is green, most people would agree that it was an unsafe move. This goes doubly for an autonomous car, that is unable to play the positioning negotiation game that humans are masters of as a result of being social creatures.


It's very easy to configure a program to slow down before intersections. Also if we want to be realistic here, you're going to be able to see that the other car isn't slowing properly for much longer than 200ms. You'd either already be braking when you hit that pause, or you'd be so early on that the delay doesn't make a real difference.

And if you're not hovering your foot over the brake pedal, you're not getting 250ms.


Yeah, until we're above 500Hz I'd rather have a significantly faster non-realtime main system, with a small realtime watchdog/backup.


Even a guaranteed 10Hz, ie 100ms reaction time, is better than what humans routinely do.

And yes, in practice you will probably combine a small real-time core with most of the code (by volume) running non-real time. Lots of one-time setup at the beginning of the ride, or longer-range route replanning, doesn't need to finish within tight deadlines. The human analogue is your co-pilot reading the map; or when you are fiddling with the aircon or radio dial.


Nobody said anything about whether or not there is an RTOS involved.


Ah that would make some sense. Certainly I’m expecting extremely good performance from the new computer once everything is running natively.


seems weird to me that it's not taking any visual data at higher frequency than that.


It’s probably running on a desktop computer on pre-recorded video, not on the actual car hardware.


(TM3 owner) While in motion, what is presented to the driver has little to no jitter. Where you get it mostly is when stopped, and the car seems to be adjusting between cameras to determine where an adjacent object truly is. Sometimes there is no jitter and other times it's a bit odd.

In motion the car drives just fine, with the caveat they have not enabled signal recognition. I use TACC and at times full AP on my daily commute, which includes road speeds from 35 to 55. I particularly like it on rainy days. I treat it like having a high school kid as chauffeur... I am a back seat driver who just happens to be in the driver's seat.

As for visual representations like in the video or Waymo's demo videos, like many other things in life, when you see how the sausage is made it is a wonder how we all survive it. The key difference between Tesla and Waymo is Tesla is not geofenced; same with Cadillac's Super Cruise, which is not available except on interstates.

who has the best solution, I am not willing to place a bet on that yet


Why can't they just whack it with some kind of Bayesian latent space model? A big jump should have to require more evidence than the history of the previous posterior.


"Just". Pro tip, these guys work on these problems all day every day. If you think you've solved one of their major problems after 18 seconds of consideration then you're probably missing a large amount of context.


I was inviting you to tell me why


Some care is needed when choosing priors in a hierarchical Bayesian model, particularly for scale variables at higher levels of the hierarchy.

The usual priors such as the Jeffreys prior [1] often do not work, because the posterior distribution will not be normalizable and estimates made by minimizing the expected loss will be inadmissible.

[1] In Bayesian probability, the Jeffreys prior is a non-informative (objective) prior distribution for a parameter space; it is proportional to the square root of the determinant of the Fisher information matrix.
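Written out (the standard definition, added here for reference; \(\mathcal{I}(\theta)\) is the Fisher information matrix):

    p(\theta) \propto \sqrt{\det \mathcal{I}(\theta)},
    \qquad
    \mathcal{I}(\theta)_{ij} = -\operatorname{E}\!\left[ \frac{\partial^2}{\partial\theta_i \, \partial\theta_j} \log f(X \mid \theta) \right]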

Why is this of relevance?

It has the key feature that it is invariant under a change of coordinates for the parameter vector. That is, the relative probability assigned to a volume of a probability space using a Jeffreys prior will be the same regardless of the parameterization used to define the Jeffreys prior. This makes it of special interest for use with scale parameters.

Why is this an issue?

Accordingly, the Jeffreys prior, and hence the inferences made using it, may be different for two experiments involving the same theta parameter even when the likelihood functions for the two experiments are the same—a violation of the strong likelihood principle.


But we have a lot of statistical information and can use reasonable priors about the world.

Ie objects don't spring into existence or disappear or fly around at 400 mph or change direction at 2000 gees.


I do appreciate your intent, but I learned that to get the intended response you need to add way more humility to your provocative question. Especially in written communication amongst strangers, where there's a lot of context from body language etc. missing.


Presumably that sort of thing does exist at some other layer. But the design goal here is, obviously, very much NOT to reduce the false positive rate via clever filtering, it's to reduce the rate of collisions with REAL OBJECTS IN THE ENVIRONMENT. Tolerating some false positives, and the phantom braking incidents that go with them, is going to be needed.

Basically, no, you don't just throw it at some Bayesian math or a Kalman filter to make it look prettier. Yikes.


Well, there's some trade-off between false positives and false negatives. Otherwise the optimal would be never to start the car.

But yeah, the scales are heavily tipped.


Your human eyes+brain do the same thing.


Our brains are purpose-built to really see only the very core center of our vision; the brain then creates an approximate model of the surrounding space, but a lot of what is in that model is influenced by what the brain "expects" to see. So I think yes, we can also suffer from some similar inaccuracies when the objects in question are in our periphery.

However the main difference is that, when we are consciously looking directly at something, we can almost always tell with 100% certainty what we're looking at, up to a considerable distance. I can see a car pulled over to the side of the highway a solid half mile ahead sometimes, and have plenty of time to respond. Computer vision doesn't have this additional strength.

As always though, the strength that computer vision has over us is it never gets tired or distracted, and it never operates in "default mode" where sensory inputs don't get full (or even much at all) conscious attention.


Nope. You can “see” all kinds of things that aren’t there, because your brain has yet to notice anything forcefully telling you otherwise. This happens all the time and you’d have no way to notice it.

Even when some new information forcefully comes into play, your brain is often able to adjust your memory so you believe you knew it all along, so long as the initial percept is fresh enough and had enough uncertainty.

All of this feels to you like a perfect unbroken stream of direct seeing but it is an illusion. You don’t see anything directly, you get fuzzy spurts of probability and turn it into your world in your mind. A world that’s likely to be unrecognizable to the next person.


> we can almost always tell with 100% certainty

That's just your brain again. You might mistake a bike for a lamp post, and switch between beliefs several times, before you figure it out, then convince yourself you knew it the whole time.


Humans and many other animals have a persistent world model. My eyes+brain reckons things don't disappear instantly even when I don't have a direct line of sight.


Kind of like a human?


I kind of giggle because the jittery-ness of some of the graphics makes it like I'm watching some robot version of Home Movies or Dr Katz.

I bring that up because my hypothetical question, were I working on a system like this, is about 'smooth' decision making. Objects and lines are jittery, falling in and out of recognition, but the actual control inputs to the car are smooth.

I know, good conditions and all that, but I always find it quite remarkable watching any sort of automaton make relatively fuzzy decisions. I'm very curious to know more about how this system 'thinks' about the things it 'sees'.


The raw sensory inputs to a human brain must also be jittery. Eyes apparently see highest definition detail only in a tiny area of the visual field, for example. The brain stitches it together, interpolates, tags features, and guesses. The preprocessing miraculously creates the impression of a big clear image out of a dirty data feed from two jello cameras that are swiveling around all the time. There's a lot a smoothing going on.

Imagine if you could see the raw input from an eye - it would be a big field of view, but mostly blurry, mostly not in full color, with a blind spot hole near the center, and the whole image jittering around violently.

One trait of practical real-world intelligence is ignoring 99.9% of everything. It usually works.


There's smoothing, but maybe more importantly there is an expectation of continuity and a bias towards likely interpretations. The brain is trying to detect what it already thinks should be there, and not starting from scratch every millisecond. I'm not sure if anyone is doing that with neural nets, but I think we'll keep failing until we do that.


Yes, mostly. Your brain also cheats and adjusts your memory retrospectively to a certain degree.


I believe this was released as part of Tesla's Autopilot hiring announcement. More details here:

https://www.tesla.com/autopilotAI

Definitely someone's dream job ;) I particularly like the applicant query: "Tell us, what extraordinary work you have done?"


"Tell us, what extraordinary work you have done?"

I wrote code that worked....

I like boring things.


You may be trying to sound snarky, but that is truly extraordinary in today's world!


Too right! I seem to be encountering garbage software all the time, either in use, or by expanding an existing project (including widely available, popular OS projects).

Sometimes it's my own code.


I like to check for ranges and limits


Big fan of treating all inputs and outputs to and from functions as hostile. So parameters and return values as well. Always interesting how fast the bugs turn up once you start enforcing that.



partial evaluation much


This is starting to sound like the worst dating site ever.


would that more languages let you do this proactively.


While an interesting video, I really want to see the entire front arc stitched together. How is it judging it is safe to go through the stop sign? Situations like that fascinate me the most.

One issue not discussed enough is all this push for automation really needs road marking guidelines pushed down from the Federal level. While the feds can hold domain over the interstate system, it can be maddening how much everything from right-of-way rules to simple markings differs at the state level.


>> One issue not discussed enough is all this push for automation really needs road marking guidelines pushed down from the Federal level.

That would be a mistake. You don't take a safety critical system and rely on a nationwide bureaucracy getting all the details right to make it safe, or even effective. We've had the discussion here before, and the only way for full level 5 autonomy is a general AI capable of most everything a human is. Until people realize that, it will be an endless stream of "we just need to fix these corner cases" or put more constraints on the physical world because the cars aren't smart enough.


Isn't that exactly what the federal government has done to extraordinary success with the interstate system?

The interstates follow very strict and powerful standards that make us all so much safer.

https://en.wikipedia.org/wiki/American_Association_of_State_...


I drove across the US 5 times in the summer of 2019, almost exclusively on interstates (the 40, 70, 80 end to end).

I can assure you, there are hundreds of times that "strange" things happened to the road. Odd or incorrect lane markings, no lane markings, lanes beginning from the right and left, lanes ending suddenly on the right and the left, lanes marked as merge that were not, lanes not marked as merge that were, etc. etc. etc.

Every time I thought I could just stay in my lane with cruise set I found I was severely mistaken.


The vast majority of the US interstate system involves two lanes each way with a wide median separating opposite traffic flow. There is always ample signage around exits, interchanges and whatnot. There is also a stretch that is straight and level enough to be used as a runway in wartime. I think most inclines are restricted to a max 6%-6.5% gradient (for reference, I grew up on the side of a mountain and the first climb was around 14%; you couldn't stand on that in icy conditions, you would slide down if you attempted to). Most mountain passes on the interstate also have "runaway truck" pullouts in case a semi's brakes fail.

Most of the weirdness on the interstate system is usually caused by cities having to adapt to preexisting conditions. As an example, I offer up Chicago. I-290 is a mess. The Austin and Harlem exits are both left-handed exits. Worse yet, oncoming traffic at both is given protected turns from the ramps. A lot of people don't follow the law and turn into the nearest lane. A lot of people drift, and a situation like this just breeds accidents. This is in Oak Park, a near west suburb. The Edens expressway (I90) also had express lanes, which causes confusion, too. Not always open at the same time or direction.


If it worked so well, why isn't it good enough for self driving cars?


Or people, for that matter, who break the speed limits like 99% of the time and regularly smash their cars into each other?


Five over is still an acceptable rate, right?

Okay, good.


I drive I88 outside of Chicago. The typical speed is in the 80-85 mph range, while the limit is 60 (I drive opposite prevailing traffic). Prevailing traffic is a parking lot for my entire 15 mile commute.


Isn't that what we do with building codes, fire safety codes, airline industry regulation, electrical safety regulation, gas regulation, food hygiene, product labelling, site safety symbols, and many more?

Collectively decide what needs to be done for something to be considered safe enough, then regulate that it must be done that way for it to be allowed. Maybe this could be "self-driving car approved roads" and self-driving cars must check their navigation systems and only travel down approved roads. Drivers then get to lobby their local or state councils to make more roads follow the self-driving car markings and signage, and when they have, cars become permitted to drive down them next time they update.

> Until people realize that, it will be an endless stream of "we just need to fix these corner cases" or put more constraints on the physical world because the cars aren't smart enough.

That is one way of considering trains and trams, and they're good enough to be useful with extreme constraints on what they can do.


> One issue not discussed enough is all this push for automation really needs road marking guidelines pushed down from the Federal level

Who will fund that in a way that ensures a timely rollout (as in "sometime this century")? Who will pay to maintain it?

Unfortunately, self driving cars are just gonna have to deal with infrastructure as-built.


> One issue not discussed enough is all this push for automation really needs road marking guidelines pushed down from the Federal level

Yup especially because every country is USA.


Two issues. First: I thought fully autonomous driving was meant to be done by now? Second, don't Tesla & SpaceX have a reputation for being terrible places to work, with Musk expecting everyone to work as hard as (or harder than) him, and firing people in weird and capricious ways.


>I thought fully autonomous driving was meant to be done by now?

Musk was bullshitting when he made that prediction. Maybe he did it because he had bought his own bullshit. Personally I think what's more likely is that he was cynically conning people.

In reality, I think no-one is anywhere close to fully self driving cars. I would be surprised if we saw fully self driving cars any time in the next 50 years.

The whole field of fully self driving cars has been astonishingly full of empty hype, for some reason often believed by otherwise quite smart people.


> for some reason often believed by otherwise quite smart people

I don't understand why so many smart "tech people" fall for the hype.

Maybe it is because "machine learning" is just abstract enough that even the most jaded developer thinks they can treat it like a black box where if you pour enough videos, photos and LIDAR readings for training into the top it will somehow spit out a fully autonomous self driving car at the bottom.

I do find it funny though. On the one hand Alexa only manages to turn on the lights successfully 50% of the time yet somehow we will magically have self driving cars capable of safely navigating the roads any day now. I mean for fsck sake, we don't even have a thing that can wash and fold clothes automatically but somehow self driving cars will be on the market any day.

Like, if we cannot even get voice recognition to work right, how on earth will you tell this magical car which street parking spot to take in a busy city? How will you tell it to pull over to pick up a friend? Hell, how will you tell it to go through a drive-through at McDonalds? A touchscreen?


As to "tech people" falling for the hype, it all depends which particular hype but:

"surprised if we saw fully self driving cars any time in the next 50 years."

a couple of years after there have been self driving cars driving around Arizona https://youtu.be/aaOB-ErYq6Y?t=93 reminds me of quotes like

"The aeroplane will never fly." — Lord Haldane, 1907

a while after the Wright brothers had done many flights. You could argue both the Wright brothers' planes and the Arizona Waymos were a bit rubbish, but these things tend to improve rapidly.


Rather expensive self-driving cars, with safety drivers have been driving around a very small part of Arizona, right?

That's not really that close to an affordable self-driving car that I can take overnight door to door from the bay area to the Los Angeles area while I sleep.

That may not be 50 years away, but it sure doesn't seem any nearer than 20 years. Progress towards something usable has been pretty slow, and the edge cases are going to be tricky.


A percentage of the trips have been without a safety driver for several months. As for affordable, they are comparable to Uber and Lyft, and you can actually get a Waymo trip using the Lyft app, so that seems pretty affordable to me.


If you click the video link it's one driving without a safety driver.


> How will you tell it to pull over to pick up a friend?

You can’t imagine a way to do this, therefore it’s impossible?


The way you suggest the future of automotive transport rests on a fast food chain drive through suggests a kind of astroturfing social media advert. If burgers were that important, the burger company would partner with the self driving car company and find a way.

None of those things are physically complex - the car will be able to see or lidar if a space is big enough for it to park in, pulling over to pick up a friend is no different to pulling over to let you out at your destination, and a drive-through is moving forwards slowly and steering a bit like all low speed driving is. None of them require the dexterity of folding soft wet fabric, and the market for $30,000+ vehicles is much larger than the market for $30,000+ washing machines.

One simple answer is: you don't. A self-driving car owned by <car company> takes you from A to B and you get out. You don't choose the parking spot anymore than you chose the lane or the highway exit, you don't pick up a friend (they take their own separate ride), and you don't use the drive-through, you use <car company>'s food delivery service.


> None of those things are physically complex

I don't care about how physically complex something is. You didn't tell me how I'd tell the self driving car which spot to park in or which person on the corner is my friend. If you think this detail is minor, trivial or doesn't matter.... you are sadly mistaken.

> If burgers were that important, the burger company would partner with the self driving car company and find a way.

So you are telling me your $70,000 self driving car can't take me through one of the hundreds of thousands of drive thru's out there? That sounds pretty shitty.

> You don't choose the parking spot anymore than you chose the lane or the highway exit

You are telling me that this self-driving taxi will just stop at whatever place it feels like? How will I tell it precisely where to let me out? Remember it is pouring down rain / I am physically disabled and cannot walk very far / I am not entirely sure where I need to be dropped off until I get there.

So seriously, how will I command this self-driving car? Nobody seems to be able to answer this or even thought about it in any amount of detail at all. Probably because we are so far away from an actual self driving car that a realistic answer doesn't really matter at all.

And that is the problem with self-driving hype. Every time you ask about a specific detail it gets hand waved away as not important or some kind of edge case that doesn't matter. Guess what... edge cases matter. Your edge case is my important case. You add up all these edge cases that "don't matter" and you've excluded almost your entire market.


> I don't understand why so many smart "tech people" fall for the hype.

Maybe it's just that some of these "tech people" are not that smart at all?

> And that is the problem with self-driving hype. Every time you ask about a specific detail it gets hand waved away as not important or some kind of edge case that doesn't matter.

I think you are spot-on with that analysis. Way too much hand-waving going on.

But that is a constant problem in tech, the entire area is susceptible to hypes and fads, with hands waving and waving at record speed. Like I said, maybe it's just not all geniuses and such...


> > I don't understand why so many smart "tech people" fall for the hype. Maybe it's just that some of these "tech people" are not that smart at all?

The crux of the problem is you have people who, although they may well be capable of driving to work--they are by no means capable of understanding driving at its essence. They will never, not ever, be able to create a car that can also interact with human drivers because they don't understand driving. Period... end of story.


A comment which is nothing but posturing. Why would you expect everybody discussing self-driving cars, in every situation to, be a genius, and to be using their maximum effort even on trivia questions about what UI you would use to tell a car that doesn't exist yet how to drive to McDonalds, under the questionable assumptions that a) it must work exactly the same way as a traditional car and you won't accept otherwise, and b) there will be only one self-driving car full stop, so there will be one answer and it must be perfect for every possible use case first time?


That's the easiest problem to solve... if the car is capable of driving itself then you have both hands free to operate your phone or a touchscreen.

You can enter the location to stop at, and in this imaginary world where the car is capable of driving safely, then it should also be able to find the nearest safe location to stop at.

The actual part where the car is able to drive itself is clearly nothing but unfounded hype though.


Honestly, I think I'm fine piloting a Tesla through a drive-through if it means I can eat on the road. I'll take 80% of possible performance / [aptitude?] now (heck, even 50-60%, as long as it's safe enough) versus 100% of possible scenarios being covered at some far-off point, especially if my enjoying 50-60% of capability actively gets the remaining % closer.


Too bad you are judgment impaired cause you are drunk. Always remember, you have to assume people in these self driving cars are drunk / high.


Always remember, you have to shift the goalposts every time someone answers, so you can claim they didn't answer.


I did answer. Quite possibly you won't have control. Your entire complaint is a "you can't precisely manage memory in a garbage collected language? Then nobody will use them." style complaint. And that's demonstrated wrong by history over and over - take people's choice away and do it for them, people like it.

Self-driving car, is the hard bit. "Who will design the smartphone app which lets me tap "go to the nearest mcdonalds drive thru"?" is the easy bit. Almost anyone on any $5/hour microconsulting web developer site could do that bit.

"Oh but who would buy an iPhone if it can't let me control the filesystem? You can't answer, nobody can tell me, it's impossible, the tech can't exist, everybody is stupid" - nope, it sold by the hundreds of millions.

Where will it stop? It will stop wherever there is a space. Maybe you can mash your finger on your app for "closest space right now" or maybe you'll get bored of doing that because it already does "closest available space to the chosen destination" because that makes sense.

"One disabled person will get wet in the rain so the tech can't exist" - most tech ignores disabled people completely, and sells in large numbers.

"I can't voice control tell it to go through a drive through so nobody will use it" - if it matters that much to people in general, mctakeout will partner with car company, let you drive up in your fancy car, and they will walk the food out to you. I'm more worried that the cars will come with a "take me to the nearest McDonalds" physical dashboard button, than that no cars have any way to instruct them to go anywhere off-route ever.

"your $70,000 self driving car" - I don't think most self-driving cars will be owned by individuals, but what does price have to do with it? People buy supercars which can't drive through narrow, bumpy, steep, city areas, there is a market for them. Why do you think there will be only a single self-driving car design which has a binary appeal to everybody or nobody, and therefore must be a single answer to how you will control it, and that there's no room for iteration so it must be conclusively decided and locked in several years before any such car even exists or its capabilities are understood and settled?

"So seriously, how will I command this self-driving car?" - so, seriously, driving two tons of machine around a busy unpredictable space is something you're fine with, but having some buttons is what you think the impossibly dealbreaker is? Touchscreens are an appalling human interface, and having almost every control in a touchscreen hasn't stopped Tesla customers.

"Nobody seems to be able to answer this or even thought about it in any amount of detail at all. Probably because we are so far away from an actual self driving car that a realistic answer doesn't really matter at all." - Yes, good point. Nobody has settled on the trivia, because the trivia is not a dealbreaker compared to whether the car can exist at all. It's like you're obsessing over what font the Dragon space capsule will use on its readout displays, and saying that human spaceflight is all hype because "nobody has thought about it in detail".

But no, surely everyone involved is just stupid for not prioritising shovelling burgers into your face over getting you from A to B without killing you or anyone around you.

"Every time you ask about a specific detail it gets hand waved away as not important" - you asked about details, I gave plausible answers (you won't have the level of control you demand) - you completely ignored my answers because you don't want that (you can't answer! ya boo sucks you can't answer!) and then whine that nobody will give you answers.

"You add up all these edge cases that "don't matter" and you've excluded almost your entire market." - I am still boggled that you are putting "can drive through a drive thru" as if that was a thing people actually do enough to influence their choice of car purchase. Can commute to work? Is affordable? Is safe? Is reliable? Has reasonable maintenance and insurance costs? Is big enough for all the people? Can carry enough luggage? Looks OK? All way more important after the dominating "can actually self drive".


This has been my take too. The whole hackathon and hiring effort screams to me of "we have finally concluded that a LIDAR-less implementation is impossible within our lifetimes. We hope one of you has at least a Hail Mary that we can burn money on so we can at least pretend that it may still be done. Anything to keep us from having to refund millions, dealing with a huge class action, seeing the stock halve, and Elon having to admit that he's something of a charlatan."

And that's just on the decision to not use LIDAR because "it's dumb", not the whole concept of self-driving as it has been marketed being decades away.


Tesla's lawyers are as good as their engineers. They know how to make you think they are selling self driving cars without actually saying it.


Another commenter here said that Tesla has been selling "fully self driving" as an option for years now.


I think 50 years is being overly pessimistic. Self-driving in temperate climates is probably pretty close (<10 years). Waymo has pilot programs currently underway in California and Arizona[1][2].

[1]: https://techcrunch.com/2019/09/16/waymos-robotaxi-pilot-surp...

[2]: https://www.theverge.com/2019/10/10/20907901/waymo-driverles...


The thing is, I see no way to have full self driving without AGI. And I don't think humanity is anywhere close to developing AGI. Without AGI you can have level 4++ self driving, maybe, but not level 5.


> The thing is, I see no way to have full self driving without AGI.

Why? AGI seems like a significantly harder problem than self driving cars (itself a hard problem admittedly).

What I personally think will happen is we'll meet somewhere in the middle: we can redesign roads/cars to make solving the problem of self driving easier.


Because of all the edge cases that humans can handle a good amount of the time because we have a lot of intelligence compared to any computer.

But I agree with you that if we meet in the middle, that could actually work well.


How much difference does 4++ vs 5 make for the predicted consequences of self-driving cars though? That's the important thing.


I would say that I would feel comfortable falling asleep in a 5 while it is driving nontrivial distance and/or road conditions, but I would not feel comfortable doing so in a 4++.


Why? Plain old 4 is supposed to be the "safe to sleep" level.


How often do you have drives when you need to fall asleep though?


No, Elon constantly buys his own bullshit. This has been true for years and years and years, long before he became famous from Tesla. His bullshit does eventually become true, though, unless it's canceled entirely. It just never happens when he says it will happen.


Publicly bullshitting (without explicitly stating as such) about your company's capabilities while CEO of a publicly traded company is a potentially huge issue with the SEC.


I don't understand why "otherwise smart people" feel the need to diminish the potential and successes of ML. When chess was basically solved, it was dismissed as brute force, with a sarcastic challenge to do Go, and we now know the fate of that challenge.

There used to be, on slashdot and I believe in the early days of HN, this running complaint about the new-at-the-time CSI-style shows, specifically the "enhance" trope: "you can't reconstruct a license plate from a bad frame in a video. That information is just lost. It's not there anymore", they would say, usually with all the aura of letting you in on a secret only a very smart mind could glean, although there were five others in the same thread making this point.

Today we have superresolution algorithms that can reconstruct license plates from low-res images. Turns out the information wasn't really lost, at least not in the sense applicable to the situation (i. e. you are allowed to train on other data).

Many tech people dismiss such progress as "just statistics", but I haven't seen much of an attempt to find a definition of intelligence that is meaningfully different from "just statistics". In fact I doubt it's possible within the realm of science, i. e. without resorting to mind-body dualism.

As to driverless cars: they exist, right now. Google does thousands of miles without any need of human intervention. Yes, maybe it doesn't yet work well enough in a hailstorm. But predicting that these problems will endure for 50 years plus, against a combination of restricting these cars to certain situations, improving the models, and/or improving maps, seems at least as overconfident in your ability to make predictions as those made by self-driving optimists.


> When chess was basically solved, it was dismissed brute force

Deep Blue was brute force. There are different ways to solve a problem, and only some of them are impressive in certain ways, even if they're impressive in other ways.

> Today we have superresolution algorithms that can reconstruct license plates from low-res images.

You need to take another look at CSI. You can reconstruct a little bit, usually with the help of multiple frames or exact font data. You can't do this: https://www.youtube.com/watch?v=I_8ZH1Ggjk0 https://www.youtube.com/watch?v=3uoM5kfZIQ0

> Many tech people dismiss such progress as "just statistics", but I haven't seen much of an attempt to find a definition of intelligence that is meaningful different from "just statistics". In fact I doubt it's possible within the realm of science, i. e. without resorting to mind-body dualism.

That's about as useful as saying everything is 'just chemistry'. It's true but not helpful. And the same way that your brain and a science fair volcano are both just chemistry, demonstrating a simple reaction isn't impressive.


They exist now as what amounts to fancy amusement park rides. Wake me up when I can get sloppy drunk and catch one home. Only I was drunk and didn’t provide the exactly correct drop off address and need to somehow communicate to the robo-Uber where exactly it should drop me off... (how will I do that, by the way? talk to it? What if it doesn’t understand my accent?)

Also, I’m at a wedding and the pickup is in a grass field in front of the venue. Can’t pick me up on pavement... it’s a two-lane 50 mph road with no shoulder to pull over on. Also the entrance to the venue is a poorly marked dirt road. Also, the “pin” I told the robo-Uber to pick me up at has it at the street ‘cause that is the address.

I’ll wait for the robo-Uber to call me on how the hell it should pull into the lot. I did mention cell sucks here, so hope you’ve got that all offline, right?

And I fully expect my use case to be waved away as “edge case” and “doesn’t matter”. Except it does. Every trip that isn’t a pre-planned, set route amusement park ride is an edge case. Until you can meet my use case, a common one for an Uber driver, self driving cars are empty hype.


> Until you can meet my use case, a common one for an Uber driver, self driving cars are empty hype.

For you. The semi-autonomy we have now- not "this vehicle will get me from point A to point B without need for intervention- but "this vehicle can take out some amount of the area between points A and B" is incredibly useful. I don't know how hard it is for you to keep going straight while fumbling about with a hamburger- I'm not that great at it- but if I can get a bot to help me steer in a relatively straight path and keep me in my lane while I take my jacket off because I'm beginning to sweat, send a message or place a call (having something make that easier is also useful), fire at an armed robber, or let me give more attention to a child who needs it, I will be safer.


Nice for you until the “self driving” car gets into an accident and kills the other driver. You don’t get 50% attention with driving. It’s either 100% or 0%. Anything other than that will put everybody else on the road at risk.

If you are driving around in your Tesla using its adaptive cruise control and not paying attention to your driving... I can only hope the accident you will eventually cause hurts nobody but yourself.


I have easily driven 12,000 miles under autopilot and regularly do the kind of tasks the parent described.

Maybe I will have an accident and comments like this one will age poorly. Or maybe we will find that the scary technology wasn't as scary as first thought.


Your phone knows where your home is.

What if the human doesn't understand your accent!

> Every trip that isn’t a pre-planned, set route amusement park ride is an edge case.

That is not even remotely true.

And you could have a fleet that's 80% or 90% robot. It's not hard. $5 discount if your end points aren't all wonky and robot-confusing.


- I thought fully autonomous driving was meant to be done by now?

One thing that Elon did is gathering lots of video data instead of perfecting LIDAR, and at this point it seems that he was right: until detecting objects in 3D from video data is completely solved, self driving can't work. After it's solved, LIDAR is not needed.

- Tesla & SpaceX have a reputation for being a terrible place to work

When I'm looking for jobs, my main criteria is market cap / number of employees, and Tesla (as most startups/small companies) was very bad before the stock price went up. Now with high stock prices Tesla can afford to pay market rates (even if it's in stock).


> When I'm looking for jobs, my main criteria is market cap / number of employees, and Tesla (as most startups/small companies) was very bad before the stock price went up. Now with high stock prices Tesla can afford to pay market rates (even if it's in stock).

All big companies grant stock based on its monetary value at the time of the grant. If you join company X after their stock has gone up 2x, your grant will contain 2x fewer shares.

You can get market rate from a historically underperforming company only if you hold grants that have since appreciated; don't expect new grants to improve your pay rate.


> When I'm looking for jobs, my main criteria is market cap / number of employees, and Tesla (as most startups/small companies) was very bad before the stock price went up. Now with high stock prices Tesla can afford to pay market rates (even if it's in stock).

You don't care about the work culture at all?


Of course I care, it's just in my experience it's much easier for a company to provide any kind of benefits when the company has money for employees.

Before 2008 when I was working at Google, we were just getting more and more fun stuff all the time, and it seemed like it's never ending. I was together with about 200 people in the Zurich office.

After the stock price went down, although officially we only had a hiring freeze, the food got worse and the bonuses got smaller. Then every new year more and more people came and the stock went up, but the benefits got worse every year, to the point where you have to stand in a queue to go to the toilet if you are male.

I think Google is still a great place to work at, but very far from what it was 13 years ago.


Sure, but having worked in a number of different environments and having a family, a work culture that is willing to support me is very important. Tesla and SpaceX don't seem to have that at all.


Sure, I'm also too old to go to Tesla, but if I wanted a great career, really wanted to change the world (not just talk about it), and didn't care about having a private life, I would seriously consider going there.

At the same time Zurich is small, and many more men go to work there than women, so dating was so hard that I didn't have a real private life anyways.


> One thing that Elon did is gathering lots of video data instead of perfecting LIDAR

I really don't understand this thought. Every self-driving car company has more raw data than they can realistically use. I don't see how Tesla has some advantage


It's pretty clear at this point what you're signing up for at Tesla and SpaceX. They're intentionally trying to filter for people who will dedicate much of their life to the mission of electrifying transport and becoming interplanetary. If you want to do that, there are few better options. If you don't, avoid.

As for firing, well, who knows... The story about Elon's assistant was apparently rife with misinformation, as these things often are.


> I thought fully autonomous driving was meant to be done by now?

That is not what Elon predicted at Autonomy Day last April. He thought they would be feature-complete by the end of 2019, which is to say, that all the basic code paths necessary for best case city driving would be functional.

If you've done software development, you understand that getting to "feature-complete" is different from "code-complete", which is also different from "Beta", which is different from "GA". Those are each progressively later stages in the software development / release cycle.

Feature complete is merely the day that you can say that there is some functional code in place for all of the code paths that you expect to write. You would expect to be able to do an internal demo showing all the functionality working in the "happy case" at that point. Usually QA has not really even begun in earnest at this point. It is by no means the point where development is "done", which is for FSD in fact, approximately, never.

Code complete isn't always distinguished from feature complete, but in my experience, code complete would contemplate negative testing, error handling, and alternative processing modes which might not have been implemented at the point of "feature complete". Code complete typically signifies that from that point forward only bug fixes will be added into the next release.

Some projects can reasonably be expected to remain in testing, validation, and certification for several years after the point of "code complete".


He said a million self driving vehicles would be on the streets a year from FSD day.

To claim that the hardware for FSD is now present in 1M vehicles (which I doubt) is a disingenuous reading of the insinuation Elon made, which was that FSD itself is right around the corner. They're not meaningfully closer to FSD today than they were three years ago. Somewhat closer? Sure. But it's a few feet gained on a journey that's a few miles long.


> They're not meaningfully closer to FSD today than they were three years ago.

Having actually driven a Tesla for 15,000 miles over the last 18 months, perhaps 5,000 of them on AP, I can state this is totally false.

There has been extremely significant incremental progress with AP which is totally evident in everyday usage.


Looking at how much has happened over the last two months though it seems that they are on the verge of something big. I'm on pins just waiting for stop light and stop sign detection to be integrated into lane keeping. They're already seeing them, they just need to put that data into action. In terms of my personal driving that'll be pretty big, road trips get a lot shorter when my attention isn't forced.

Turns through intersections is another area where Tesla's current implementation needs a lot of help (read: can barely do it at all) but should be within range soon given where they are at. And lane splits in city streets is definitely something the Tesla implementation is going to need to get better at before really making it door-to-door.

It's a very exciting, even historic time for self driving.

But yeah, taxi fleets are a ways out. There's a turn in the road just a 1/2 mile from my house where my Model 3 gives up every time.


Given that in many countries GPS barely works enough to order an Uber with a human driver, I'm highly skeptical of self-driving taxis.


I'd like to understand this better. Intentional interference with satellites? Too mountainous or too heavy tree cover?

I can imagine navigation services not working, but I've not been in places outside of the high Arctic (~82°N) where GPS itself wasn't very reliable.


Deep in large cities (New York, San Francisco, etc) with all the RF reflections can actually be quite challenging for GPS. Challenging == actually terrible and everyone knows it. Off by blocks, and definitely no help at all for vertical location.

And inaccuracy at start-up is also surprisingly challenging; think a person requesting a car within 5 seconds of opening the app, before the location service of the device has really resolved the location, thus ending up with a pick-up pin that is a hundred feet wrong or more. And maybe on the wrong side of a street, fence, etc.


There are also issues like Australia, where, because of plate tectonics, the maps corresponding to GPS coordinates recently had to be shifted by close to a meter.

We think of GPS as just the positioning part, but it's just as important to remember that there is a large amount of work to translate that position into a meaningful data point within each given country. Just knowing someone's exact GPS coordinates isn't helpful.


Have you ever used Google Maps in a country like Costa Rica? Because the country isn't mapped to nearly the same degree (also, they have a habit of not even naming their roads), it becomes barely useful. I'm not talking about the technology of global positioning, I'm referring to everything we take for granted along with it.


I've gotten the impression that the "firing people in weird and capricious ways" bit has been greatly exaggerated.

I do think anyone taking this has to expect grueling and challenging work. The job description practically demands that.


Anecdotal evidence here, but I have a friend who worked in their Palo Alto office for several years. One day as Elon was walking through the Fremont factory he saw a large empty floor space and asked someone why there were no workers occupying that space. It was then decided that her job would be moved from Palo Alto to Fremont. She was not fired in a weird and capricious way, but she was managed out of the company in a weird and capricious way, and that was less than a year ago.


I do not doubt your story at all and have heard a few similar stories. My comment was specifically about the Business Insider story that someone else linked, as I feel like that one was exaggerated.


That happens in pretty much all big companies.



I don't doubt for a second that people like Musk (and before him for example Steve Jobs and Jack Tramiel) can be very demanding and sometimes difficult to work for. I also don't doubt for a second that a lot of the stories told have a sour grapes quality to them: If you're good at the kind of job they're offering, you can easily find employment somewhere else and you're also probably smart enough to move on of your own volition, before being capriciously fired.

I worked for an erratic, sociopathic boss once. I quit after about six months. Unless you're a masochist or truly have no other options, you'll soon realize it's not worth whatever extra money or prestige the company name might possibly come with.


I worked for a hedge fund that fired the bottom 10% every year. I lasted 9 years, and my first boss was fired just after I was ready to quit 6 months in (figured I was about 2 weeks away from quitting). Not all firms that employ such tactics are terrible. Sometimes you need to get rid of the people that just aren't pulling their weight. 10% isn't a hard rule, just a guideline.


> I worked for an erratic, sociopathic boss once. I quit after about six months. Unless you're a masochist or truly have no other options, you'll soon realize it's not worth whatever extra money or prestige the company name might possibly come with.

Many people move their families, buy houses, and make large commitments taking new jobs (plus are normally granted large stock grants that vest on a schedule). There are a lot of reasons people might put up with abuse for a while.


Some of that falls under "having no other options", which is of course a very unpleasant situation to be in.

I'm not, however, desperate enough for wealth to sell my sanity for stock options. :-)


> I thought fully autonomous driving was meant to be done by now?

It was, but software often goes over-schedule, especially in a brand new field.


The prudent thing to do there would be to exercise prudence: not to announce to investors and the public ever more grandiose targets after each miss.

As it is we're on about Autonomous Driving Promise Mk IV from Musk, "fully autonomous, coast to coast, by the end of 2020". (That's after 2018 was missed. Which was after 2016 was missed.)


Especially not when they are literally accepting money for the feature, which they have consistently estimated wrong timelines on.


> Not announce to investors and the public ever more grandiose targets after each miss.

Those poor investors are really taking it on the chin today because of those overly optimistic software projections!


A massive short-squeeze/bitcoin-like retail bubble is certainly not indicative of the strength of the company. Look at the last earnings report, it wasn't impressive for a growth company.


More capital volume happened on TSLA today than SPY. Do you think that's a retail bubble? Only the top 1% of global asset controllers have the ability to do that.


Why do you think it's appreciating? Do you believe that volume comes from large asset controllers going long at that price level?


Couldn't find the source but supposedly today was long buying as opposed to short covering. One theory is funds buying ahead of S&P 500 inclusion.

E: https://twitter.com/ihors3/status/1224362065906872322


That sounds plausible, thanks!


Tesla and SpaceX, especially SpaceX, have been rated very highly in some rankings I have seen. But they have a reputation for being a lot of work, especially at crunch time.


>> Tesla & SpaceX have a reputation for being a terrible place to work, with Musk expecting everyone to be working as hard (or harder) than him, and firing people in weird and capricious ways.

> specially SpaceX has been rated very highly in some rating I have seen.

IIRC, SpaceX has a COO who handles much of the day-to-day management, so the employees there are shielded from much of Musk's management style.


I may have a biased sample because I know more former SpaceX employees than current ones (and I'm sure it varies from team to team), but being overworked is the #1 complaint I've heard. Like, to the point where someone working 60 hour weeks is considered a slacker because they take Sundays off.




Honestly, I don't know how often that question ever warrants good responses. From my experience, most hiring comes from interviewing people whose resumes show they're pretty obviously qualified, rather than from questions like this at the beginning.


It's a fascinating video. Does anyone know of a similar video by Waymo, Cruise, Uber, etc?

Love them or hate them, it's pretty cool Tesla put this video out at all. Certainly gives us all a lot to talk about.


This is from 6 months ago so pretty up to date. Drive with Nvidia https://www.youtube.com/watch?v=1W9q5SjaJTc


Nvidia is in an interesting spot - trying to create chips and toolkits for self driving without anyone actually having created the full self driving solution yet. How do you know that if you buy the new AGX chip from them you'll actually end up with a product someone will buy? Some of the other tier one suppliers (I'm looking at you, Continental) are kind of in the same boat.


If you're not doing your own chip design and fab, there's really no one else in town to buy from. AMD, Intel have fallen well short in GPUs. TPUs from Google/Amazon are a more likely threat long-term.


Zoox regularly posts their CV perception layer into their videos. Eg https://twitter.com/zoox/status/1222226139252281346


That one is great! Moving into an oncoming traffic lane to get around stopped trucks. One of those things I would consider difficult for machines - when is it OK to break the normal rules of the road.


The cruise team has put out a few videos in the past. They even showcased their visualization software.

https://youtu.be/_3_Bb-dlq0Q?t=1116


There used to be a lot 5+ years ago.


I have often wondered about this, so I do find this interesting. Of all of the info presented, one question I have is how does the AI decide when to go at a 4-way stop? In real life, I am constantly amazed at how confusing a 4-way stop is to humans.

Not that I ever had any doubt into how complicated real-time video analysis could be, this just makes my appreciation of the complexity of the problem that much more qualified.


The proper rules[1] of a 4 way stop are probably pretty straightforward to encode, and probably easier for a computer to apply than a human, but the real issue is that you can't trust the other drivers (or AIs) to understand/follow them consistently. So the interesting part is how these kinds of systems can almost instantly react when another driver starts to go, and let them go.

In NYC, where I live now, the de facto rule seems to be if you hesitate at all then the other person just goes, rules be damned. In the rural south, where I'm from, there's a lot of "no no, you go first" waving/gesturing/light flashing, which I'm curious if/how a self driving car would handle. (Do we need to give the self driving cars hands to gesture with?!)

1. https://en.wikipedia.org/wiki/All-way_stop#Operation
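
For what it's worth, the statutory part really would only be a few lines. A rough Python sketch of the pairwise rule (the function name, the dict fields, and the 0.5-second "simultaneous" window are all my own invention); as noted above, the hard part is that real drivers don't reliably follow it:

    # Sketch of the statutory all-way-stop rule for two vehicles (hypothetical names).
    # 'approach' is the compass leg a car arrives from; arrivals within EPSILON
    # seconds of each other are treated as simultaneous.
    EPSILON = 0.5

    def who_goes_first(me, other):
        if abs(me["arrived"] - other["arrived"]) > EPSILON:
            # First to come to a full stop goes first.
            return me if me["arrived"] < other["arrived"] else other
        # Simultaneous arrival: yield to the vehicle approaching from your right.
        right_of = {"S": "E", "E": "N", "N": "W", "W": "S"}
        return other if other["approach"] == right_of[me["approach"]] else me

    print(who_goes_first({"arrived": 3.1, "approach": "S"},
                         {"arrived": 3.2, "approach": "E"}))   # the car on the right goes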


The car stopping if another car goes before it brings up another issue. If self driving cars start showing up on the road and are recognizable, you'd be able to "bully" the self driving cars since you know they'll always yield the right of way. Imagine being stuck on an on-ramp behind a car that is programmed with infinite patience and extreme risk aversion; you could be sitting there quite a while.


That's not the real issue, that's just the initial domino. The real issue is how self-driving companies will combat that. They'll surely want to record film at all times and have a pipeline to law enforcement to curb suspected bullies. It's often the over-correcting solution that becomes the bigger problem. There are analogues in other human-robot interactions.


Waymo resolved this early on by encoding the human resolution into the action. Stopping as required by law, then moving forward slowly until the next action is more clear.
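
Something like that is easy to sketch as a tiny state machine. This is purely my guess at the shape of the logic, not Waymo's actual implementation; all names are made up and the 1-second pause is arbitrary:

    # Hypothetical stop-then-creep policy, evaluated once per control tick.
    STOPPED, CREEPING, GOING = "stopped", "creeping", "going"

    def next_state(state, seconds_stopped, other_vehicle_moving, intersection_clear):
        if state == STOPPED:
            # Make the legally required stop, then start inching forward to signal intent.
            return CREEPING if seconds_stopped >= 1.0 else STOPPED
        if state == CREEPING:
            if other_vehicle_moving:
                return STOPPED     # another driver asserted themselves: back off
            if intersection_clear:
                return GOING       # nobody contested the creep: commit
            return CREEPING
        return GOING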


Waymo 'resolved' this by not driving in areas where people intentionally fuck with self-driving cars.


Imagine if cars were given the same authority to generate citations as redlight cameras and schoolbus cameras. They'd be cheap to free and financed by kickbacks from law enforcement.


The human driver can always take over if the car isn't being aggressive enough.

Also the feeling of "how long" is too long to wait changes a lot when you are in more of a passenger mindset.


In a Tesla, yes. But the idea is for these algorithms to work without anyone in the driver seat.


I imagine if enough self-driving cars were on the road, that would be less of a problem, because more regular speeds & distances between cars would allow for more opportunities for merging. Perhaps even between-car negotiations of merging or 4 way stop type of behaviour.

I think we're quite a ways away from that though - hopefully not as far away as the flying car thing :)


> In NYC, where I live now, the de facto rule seems to be if you hesitate at all then the other person just goes, rules be damned.

This has been my experience pretty much everywhere I've driven, which includes a half dozen or so European countries and about as many U.S. states. Notably though I've never driven in New York, state or city.

My experience is that in larger cities, and cities with a lot of tourists, this "aggressiveness" seems to be the norm. I put that in scare quotes because I'm not sure I'd call it aggressive, I just feel it's the pragmatic way to break the tie.

Maybe that's why this is my experience everywhere – I'm that guy! (My apologies to all my fellow drivers out there!)


Wouldn't the data decide what the rule is? I'm guessing if in NYC, a more aggressive driving style is the norm - the models and the pipeline should be capable of adjusting to that.


Adjusted for New York driving, the AI even flings the doors open randomly to smack down cyclists!


> Wouldn't the data decide what the rule is?

Lawyers will define what the rules are for a manufacturer's self driving cars. After all, any accident at a 4-way stop will immediately result in blaming and then suing the manufacturer. And what are they gonna have for a defense? "Sorry, your honor, we use machine learning to decide how our car drives--as a result there is no real way to truly understand how our car will react to every situation it encounters".


There is an interesting and important theorem from control theory that is relevant for this situation. In a paper from 1984, Leslie Lamport has called it Buridan's Principle and phrased it as follows:

A discrete decision based upon an input having a continuous range of values cannot be made within a bounded length of time.

The paper is available from:

http://lamport.azurewebsites.net/pubs/buridan.pdf

It shows that under very general assumptions, a decision cannot be made in bounded time, leading to starvation (metaphorically and also literally). An example is a 4-way stop with other cars arriving at various times.

The paper's history is also very interesting:

https://lamport.azurewebsites.net/pubs/pubs.html#buridan
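
If you want an intuition for the result, here is a toy simulation I put together (my own illustration, not from the paper): a bistable "decider" whose state drifts toward +1 ("go") or -1 ("wait") at a rate set by a continuous input. As the input approaches the indifference point, the time needed to commit to either outcome grows without bound, which is the unbounded-decision-time behaviour the principle describes.

    # Toy model of Buridan's principle (illustration only, not from Lamport's paper).
    # Near x == 0 the drift vanishes and no decision is reached within the budget.
    def decision_time(x, dt=0.01, threshold=1.0, max_steps=1_000_000):
        state, steps = 0.0, 0
        while abs(state) < threshold:
            state += x * dt          # drift toward +1 or -1, proportional to the input
            steps += 1
            if steps >= max_steps:
                return float("inf")  # never decided within the time budget
        return steps * dt

    for x in (1.0, 0.1, 0.01, 0.001, 0.0001):
        print(f"input {x:>8}: decided after ~{decision_time(x):.0f} seconds")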


Note that Lamport admitted that his result was never accepted by the scientific community.

> continuous

(The paper implicitly assumes non-discrete continuity.) Differential-equation-based physical models are continuous. Computational systems are discrete, as long as the clock is slower than the transition duration. Lamport surely knows this. He waves away the empirical disproof of his claim by arguing that reality is merely a finite approximation where low probabilities run down to zero.

He even mentions this stuff in the paper, making the paper quite weird.

My only guess is that he fell into the trap of forgetting that infinitesimal objects are idealized mathematical models, not physical realities.


thanks for sharing this - this is fascinating. Admittedly I have never been exposed to Control Theory before, so I see a rabbit hole in my future.


Essentials of Control Techniques and Theory by John Billingsley is a book with many superb examples.


Google had to program their car to stop treating 4 way stops exactly as the law required, and instead pretend to be a little aggressive: https://www.nytimes.com/2015/09/02/technology/personaltech/g...


Unless you're driving in a place known for courtesy like Canada or Hawaii ಠ_ಠ... People often stop late and won't move until you go first...


My father taught me this trick -- if you're going to arrive simultaneously, just make sure you stop clearly second (especially if the other car is on your right).

I find it is usually pretty efficient and have yet to notice someone doing the same to me at the same time.


I do exactly the same - not with 4-way stops as we don't have those in Germany, but with areas where a two-lane road is intentionally narrowed to one lane for a very short piece of road, which we sometimes have in order to slow down traffic around pedestrian crossings. If I see someone approaching from the other side and we are about to arrive roughly at the same time, I slow down just enough so the other person hopefully realizes she can go first. If I see the other person doing the same I add light flashing, which is typically understood as "You go first". But it is rarely necessary, because if I start going slow early enough, the other person realizes he/she is a tiny bit closer to the narrow part, so they slightly speed up in order to ensure they are first and thus get the right to come through before me, which is exactly the reaction that I want, because my interest usually isn't to get through first, but to be able to roll through the narrow part without coming to a full stop.

When I was road tripping in the US, I also attempted to prevent unclear situations at 4-way stops by intentionally slowing down just a bit so it is clear that I'm not first to drive. Since I couldn't prevent the stop anyway I was usually much more comfortable with letting other people drive first, or at least with having a clear order and knowing about my position in that order.


Hey I really like that! I often feel awkward at these where it is ambiguous who among 3 people actually stopped first. Not to mention people who roll right through without stopping (when other cars are present.)


The algorithm I apply is actually pretty simple: First at the intersection goes first. So you, individually, should let everyone who was there when you arrived go before you, but you should go before everyone who arrived after you.

It gets messy when people try to be polite or don't understand the rules correctly, but otherwise it's pretty easy.

It might depend on your jurisdiction, though.


My algorithm is so much more complicated than yours. In my area you get rear-ended if you were going straight and you didn't go when someone coming from the opposite direction but also going straight went (your algorithm doesn't allow that).

Assuming cars are already at the intersection, I wait until each of them goes and then take my turn, unless it's someone's turn who wouldn't cause a conflict with me and nobody else could go at the same time and would cause a conflict with me.

When I arrive at the intersection at nearly the exact same time as someone else then some more complicated heuristics come into play (I'm not sure how you handle it when the timing is close enough to be in question). The person on a significantly larger road has priority over smaller road. People going straight have priority over right turners, which have priority over left turners, and if all else is equal, the right-most person should go first.

I guess I could sum it up as "if you can go safely, and nobody who was there before you can go safely, then go. Otherwise wait your turn", but with a lot more nuance around tie-breaking.
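
In code, that summary translates fairly directly. A rough sketch of just the conflict check (all names are invented, and it leaves out the road-size and turn-direction tie-breaking described above):

    # "If you can go safely, and nobody who was there before you can go safely,
    # then go. Otherwise wait your turn." Each vehicle has an arrival time and an
    # intended path through the intersection, modelled here as a set of grid cells.
    def paths_conflict(path_a, path_b):
        return bool(path_a & path_b)

    def should_go(me, others):
        earlier = [o for o in others if o["arrived"] < me["arrived"]]
        # Anyone who got there first and whose path crosses mine has priority.
        return not any(paths_conflict(me["path"], o["path"]) for o in earlier)

    me = {"arrived": 2.0, "path": {(1, 1), (1, 2)}}
    others = [{"arrived": 1.0, "path": {(0, 1), (1, 1)}}]  # arrived first, paths cross
    print(should_go(me, others))                           # False: wait for them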


Yeah, I forgot about the going straight part, but I actually do pretty much the same thing as you.

I guess one additional trick I use is to intentionally avoid conflicts: if it looks like I'm going to arrive at the same time as another car, I slow down a bit before arriving at the stop, then clearly stop AFTER they did, so it's clear that it's their turn to go. It works pretty well.


People in your area get rear ended while stopped at a 4-way stop?


If the person is supposed to go but doesn't, then sometimes.

/shrug

People gunna peep.


Yes, but there's close-to-simultaneous arrival, in which case you yield to the person on your right.

But what's "simultaneous"? If I show up 0.1 seconds before the person on my right, he could think it is simultaneous and go...

There's also the direction that is 'claimed'. If there's a dedicated left turn lane or it is obvious that the person across from you is going straight, and you want to go straight, you should go too.

In the end, multiagent arbitration of a shared resource is hard.


I've been driving for 40 years. I still hate 4-way stops.

In the late 70's in Connecticut, I was taught the first person to arrive goes first, followed by the person to their right and so on.

But there are so many holes with that heuristic itself! And putting aside guilt and anxieties of humans that prevent them from being assertive or committing--it's just a cluster fuck.*

However, I have noticed that people are becoming more comfortable with zipper merging in major cities on both coasts, so if we could arrive at some consistent 4-way stop rule that's 1) succinct, and 2) easy to remember, I have hope. :)

[* I'm referring to San Fran and the South Bay... but still better than my experiences driving through Italy and in Bengaluru!]


In California 4-way stops are often never marked "4-way" like in some other places. Whenever you come to a stop sign you have to inspect the crossing road to see whether or not it also has a stop sign. Often you stop and wait several seconds for a crossing car, only to realize that they also have a stop sign.


Sorry for the dumb question but what is a 4-way stop? In the UK we have roundabouts (but many drivers still have no clue which lane to get into, and don't migrate to the lane on the left after passing an exit, like they should).


Yes. In theory, 4 way stops follow some simple rules -- whoever gets there first and makes a full stop goes first, and if more than one party makes a full stop at the same time, the party on the right goes first.

In practice we live in a world of (very illegal) rolling stops, people beckoning at each other to go first, people inching forward, etc.

That's not even considering some very stupidly designed four-way stops here in Illinois with two lanes on each side -- how the heck is one supposed to even figure these things out? It ends up being a game of chicken.


Even the theory is not that clean cut.

What if two cars arrive simultaneously at opposite sides of the 4-way and are both turning left? This could easily deadlock two self-driving cars if not considered.


Your situation is a bit ambiguous. Do you mean they turn to their respective left (in that case their trajectories should not intersect), or to the same left? Not sure about the USA, but where I am from the right of way belongs to the driver who turns to the same side as the side of the road they are on, i.e. to his right (we drive on the right side).


Near where I live we've got two lanes + turn lanes on all 4 sides. It is a nightmare both as a driver and as a pedestrian. I'm not sure how that gets put in place or gets addressed as it inevitably gets busier. Traffic circles perhaps?


Traffic circles work better, but most people in North America have trouble with 2-lane traffic circles. (they're common in most places around the world)


Most people in North America have trouble with 1-lane traffic circles, because they're almost uniformly too small. Traffic engineers like to wedge them in as retrofits, when there's already a lot of stuff built up around an intersection, and it's not feasible to enlarge the intersection to make it work well.

Traffic signals with good sensors are more space efficient, and usually more time efficient when there's congestion, and more pedestrian friendly. Proper sensor design means no or little waiting during low traffic times too. Traffic circles can help with awkward intersections though, where there's more than 4 entrances or exits, and some three-way intersections work well too.


I've wondered why cars don't have a built-in communication mechanism.

I should be able to broadcast somehow (using a voice message?) to nearby cars that "I'm going now, don't go".


Vehicle-to-X (A generalisation of vehicle-to-vehicle communication) https://en.wikipedia.org/wiki/Vehicle-to-everything


The system exists but nobody is buying it. Probably because if your car has it first, there will be no other cars to talk to.

https://www.google.com/search?client=firefox-b-d&q=car+v2x


That works if every car has the same communication mechanism and is honest and nobody interferes and everybody's semantic understanding of the messages is the same.


As long as it is there just to improve safety. But it should not be relied on.

In other words, if you 'hear' "don't go", you don't go. But if you don't hear anything, it still doesn't mean it's safe to go, so you'll have to rely on something else. And if you are doing that anyway, is the added complexity worth it?


It certainly will be, once the ocean is boiled, i.e. as soon as self-driving cars begin to comprise a significant percentage of cars on the road.

This, in the long run, is a selling point that seems to be missing from most discussions in this comment section.

I can't communicate "telepathically" with fellow human drivers.


We tried it, but people kept on using it only to say the most innovative curse words at each other... Hah. :P


As a casual observer of self-driving, I feel like this is a really interesting part of the overall problem space. Until such time as a vehicle fleet is 100% autonomous, human piloted & autonomous vehicles need to co-exist. Humans bend & break driving rules by social interactions which autonomous vehicles don't currently participate in.


There's research dedicated to ambiguous decisions in swarms.


The system isn't AI based... It's currently based on static programming. This is why a development team would have to think of every possible scenario that could happen on roads to make self driving cars really work.

You'd have to ask all kinds of wild questions too like:

"What if kids played a prank and re-painted road lanes on a highway at 50MPH?"

"What if a sand storm happened?"

"What would the car do if a tire blows out at 55MPH?"

"Could the car detect a Tsunami is coming and abandon a trip assignment?"

And that's just a tiny tiny sample of scenarios necessary.

Some AI elements may exist within the network, but I'm pretty sure they are only running on the back end in terms of maps. The computing power to really permit the car to run autonomously, and to "adapt and learn" about driving only comes with experience and it also takes a lot more computing power and storage than your home computer can handle, much less the components in Tesla cars.

Launching self driving cars so close to their infancy is pretty reckless if you ask me. People are being paid off and buried online by paid staff to keep quiet about all the accidents and issues with them.

To make self driving cars reliable at this point in time, they pretty much need perfect circumstances, and they need to operate on fixed, predictable, and well maintained paths... Kind of like a train. They should really work on this technology in mass transit, shipping, big industry and on planes before pushing out thousands of smaller vehicles with it.

There is plenty of proof that development teams aren't, and may never be, flawless enough to make error-free updates; if you don't believe that, just ask Boeing.


> The system isn't AI based... It's currently based on static programming. This is why a development team would have to think of every possible scenario that could happen on roads to make self driving cars really work.

I'm not really sure what this dichotomy is that you're describing. It certainly looks like they're using computer vision to detect the bounding boxes of cars, and the field of computer vision is widely described as being either a subset of artificial intelligence or a cross-disciplinary approach combining artificial intelligence with a handful of other fields.

> You'd have to ask all kinds of wild questions too like:

> "What if kids played a prank and re-painted road lanes on a highway at 50MPH?"

This sort of thing actually happens with real humans, and the legal system has to (and does) handle it. In the 1990s three people removed a stop sign as a prank and were sentenced to 15 years in prison after a human driver drove into the intersection and hit another vehicle, killing three teenagers. https://www.nytimes.com/1997/06/21/us/3-are-sentenced-to-15-.... The fact that it's possible for humans to illegally modify road markings and end up killing people is a challenge and a tragedy, but it's not anything new or particularly game-changing for self-driving cars.

> "What if a sand storm happened?"

That sounds like one of the easiest possible problems. What do humans do when they suddenly have nearly zero visibility on a road? In most places I assume they are supposed to (and do) pull over to the side of the road and stop. I've been in rain that is so heavy that all the nearby cars on the highway had to pull over.

> "What would the car do if a tire blows out at 55MPH?"

Again, that's another thing that happens a lot, and probably causes a lot of injuries and deaths. Again, I'm not minimizing that loss, but it's not some huge "gotcha" for self-driving cars. In this particular example, I would expect self-driving cars to be vastly better than most human drivers in this situation at maintaining control while pulling over and stopping.


> What do humans do when they suddenly have nearly zero visibility on a road?

They cause gigantic pile-ups, because humans are unbelievably bad at driving.

https://abcnews.go.com/US/50-people-injured-69-vehicle-pileu...


> In this particular example, I would expect self-driving cars to be vastly better than most human drivers in this situation at maintaining control while pulling over and stopping.

Cars today are vastly better at maintaining control in this situation even without self-driving. If the car has traction control, as long as the driver doesn't panic they will have a much greater chance of keeping control of their vehicle, compared to cars without a computer assisting them.


This is an area which I am interested in.

While most interest/discussion in self-driving surrounds the interpretation of the environment viz other road users, road markings and signage, I am following the (too slowly developing) Roborace, Stanford's MARTY and similar efforts to automate expert car control including handling the loss of control.

I look forward to the day when passengers can be confident that they are safer than with even the best human driver. And not just statistically safer, but demonstrably so.


You replied to the 4 scenarios I posted, but ignored the thousands of others and missed my point. They were rhetorical, to illustrate that nothing can yet effectively replace a trained, experienced, and attentive driver.

There are no simple answers to the above questions, but if a driver causes the accident, they are accountable. Companies want to launch this tech but not own accountability for accidents when it could well be faulty logic and or code that contributes to deadly incidents. In a ship... less of a risk... In a car, surrounded by 4-10 other cars, the risk is much higher.

Profit making should not drive this blatant hubris... We forget how history reminds us of an over-reliance on technology and intelligent design.


> You replied to the 4 scenarios I posted, but forgot the thousands of others left while you missed my point.

    try
    {
      while (true)
      {
         if (IsScenarioOutsideOfUnderstanding())
         {
            throw new NotUnderstoodException();
         }
      }
    }
    catch (NotUnderstoodException)
    {
        SlowDownAndStopLikeAnyoneElseWould();
    }


    Patch for car stopping way too often:
        catch (NotUnderstoodException)
        {  
    -        SlowDownAndStopLikeAnyoneElseWould();
    +        // FIXME TODO let the safety driver handle it for now
    +        // SlowDownAndStopLikeAnyoneElseWould();
        }


Go watch Elon's presentation Autonomy Day. He pointedly addresses it.

Specifically, he shows an orders-of-magnitude chart, each point accompanied by a picture of a car presented at that magnitude of occurrence. At one extreme is "car" as they appear with most common frequency (in front of you, rear visible). At other extreme is "car" in an extremely unlikely occurrence (airborne, bottom view prominent). The system Tesla implements addresses such a broad range of information & occurrences.

Tesla has upwards of a million "full self driving" vehicles on the road now, all gathering video, all processing that data, all feeding what they learn & experience back to Tesla. The computers, even if not actually driving, are constantly observing situations, categorizing, and comparing to what humans do as a response. Extreme situations (crashes etc) are flagged and routed to Tesla for incident analysis; the AI is corrected & taught to handle the situations, and all vehicles updated accordingly in short order.

Upshot: Tesla DOES have a system for addressing, learning, encoding, and improving the "thousands of other" scenarios. By the time "full self driving" is unleashed, the Tesla AI system will already be trained to extremes by a million "trained, experienced, and attentive drivers"; insofar as some cases may be insufficiently covered, they will be statistically less common & dangerous than the baseline error rate of human drivers (distracted, drunk, debilitated, dumb).

As far as accountability, Tesla is working hard to build a decisive system & case that if bad happens, it is a statistical anomaly which is still better than the alternative of not using self-driving tech. There is a legal principle whereby one may violate the law when obeying it would cause greater harm; autonomous vehicles are close to crossing that line for the better.

And yes, great profit is due those who improve humanity via technology.


> Tesla has upwards of a million "full self driving" vehicles on the road now, all gathering video, all processing that data, all feeding what they learn & experience back to Tesla.

I'm curious how you square this with the fact that humans learn to drive with intermittent sensory input from only two cameras plus some auxiliary sensors. Clearly, humans can extrapolate from very limited input. Why can't Tesla demonstrate object permanence after years of development?


It takes 16 years to train a human to drive, and badly at that. We have a vastly larger & superior neural net, specialized training support, swivel-mounted cameras, and probably a lot more than we yet realize.


> than your home computer can handle, much less the components in Tesla cars.

Ehm a recent Tesla has very powerful specialized hardware, developed in-house, which is much more powerful than a commodity home PC with expensive GPUs. You might want to do some basic research before you make such claims...

> The system isn't AI based... It's currently based on static programming

A large neural network is used to make many of the decisions, as well as to run the perception of the sensors/cameras. Unclear what you mean by "AI", but neural networks are usually covered in the "AI" term by researchers and engineers in this field.

> may never be flawlessness enough to make error free updates

It seems that you are new in this argument, because this point has been rebutted so many times in the past, but here goes: it does not have to be flawless, it just has to statistically cause fewer accidents per kilometer/year than what human drivers cause in a similar situation (which is already non-zero, and in some countries actually quite high).


> it does not have to be flawless, it just has to statistically cause fewer accidents per kilometer/year than what human drivers cause in a similar situation (which is already non-zero, and in some countries actually quite high).

Even though it logically makes sense, there are some nuances to it, most prominent of which is: When human drivers cause accidents, they are held responsible by the legal system. Who's responsible when your car made the decision? Is it Tesla? Is it the driver? Society is probably not going to accept a small improvement over humans while all the legal burden goes up in the air. It may demand a much higher improvement to compensate.


> Who's responsible when your car made the decision? Is it Tesla? Is it the driver?

Any time a Tesla is involved in an accident, it seems like the company is more than happy to throw out reams of log data "proving" that it isn't their fault. But yet at the same time they maintain marketing material claiming their "auto pilot" feature is (or will be) so awesome that the human driver is only there to make the lawyers happy.


That's actually totally non-contradictory. The human is there to make Tesla's lawyers happy, and they are definitely very happy to have an easy scapegoat to blame if their tech fails to live up to the hype.


> It seems that you are new in this argument, because this point has been rebutted so many times in the past, but here goes: it does not have to be flawless, it just has to statistically cause fewer accidents per kilometer/year than what human drivers cause in a similar situation (which is already non-zero, and in some countries actually quite high).

Yes, but they need a process in place to ensure that that is true, and a process that ensures it stays true with each new update. They are remarkably cagey with statistics about Autopilot.

Elon claims that Autopilot is currently much safer than human driving, yet the only statistics released are biased comparisons between Autopilot miles, which are almost exclusively on highways, and general auto statistics (where crashes occur most frequently off highways). If Elon really wanted to show the software effectiveness of current Autopilot, he could release the crash statistics of customers who purchased Autopilot and customers who did not, as many people have requested of him. He has thus far refused to.
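
To make the bias concrete, here is a toy calculation with made-up numbers (explicitly not Tesla's real figures): a system that is worse than humans on highways can still look "safer" when its highway-only miles are compared against an all-roads human average.

    # Illustrative made-up crash rates, in crashes per million miles.
    human_highway = 1.0
    human_city = 4.0
    autopilot_highway = 1.5   # hypothetically WORSE than humans on highways

    # Human fleet average over a 50/50 mix of highway and city miles.
    human_all_roads = 0.5 * human_highway + 0.5 * human_city   # = 2.5

    print(f"Autopilot (highway miles only): {autopilot_highway} crashes per M mi")
    print(f"Humans (all roads):             {human_all_roads} crashes per M mi")
    # The headline comparison would claim Autopilot crashes 40% less often,
    # even though in this toy example it is worse on the only roads it drives.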


There is Network Dependent AI... Not independent processing power on board each vehicle to make autonomous decisions.

I was referring to AI that operates independently of a network connection. There are multiple scenarios in which a network connection can become unavailable.

I'm not here to disparage Tesla, I really don't care what company creates self driving cars. Most people are acting like Tesla are the only ones.


A Tesla autopilot works offline without any network connection. (I mean of course the driving itself, not the updates.) So what exactly are you talking about?


Tesla vehicles equipped with the Full Self Driving package do have an onboard computer that runs neural networks in real time to process sensor data and control the vehicle.

https://ww.electrek.co/2019/04/22/tesla-full-self-driving-co...


>it does not have to be flawless, it just has to statistically cause fewer accidents per kilometer/year than what human drivers cause in a similar situation (which is already non-zero, and in some countries actually quite high).

Says who?

The fact that you state this so confidently demonstrates a lack of understanding of both human nature and politics.


This is much less true than it used to be with the current rewrite of the autopilot software:

https://cleantechnica.com/2020/01/31/timestamped-guide-to-pa...


The system is absolutely AI based today. There is no hardcoding like this. Hardware 3 in Tesla cars is an ASIC meant to run neural nets (https://en.wikipedia.org/wiki/Tesla_Autopilot#Hardware_3). Why are you so confidently stating things you don't know about?


From your own cited source:

At this level, the car can act autonomously but requires the driver to be prepared to take control at a moment's notice.[103][104] HW1 is suitable only on limited-access highways, and sometimes will fail to detect lane markings and disengage itself. In urban driving the system will not read traffic signals or obey stop signs. This system also does not detect pedestrians or cyclists,[105] and while AP1 detects motorcycles,[106] there has been two instances of AP rear-ending motorcycles.[107]

https://en.wikipedia.org/wiki/Tesla_Autopilot#Hardware_3


What does hardware 1 and AP1 have to do with the latest hardware and autopilot?


Because each individual car is not truly autonomous when it comes to learning and improving driving based on its specific geo location. I am not talking specifically about Tesla; this goes for all automated cars...

We have to be able to admit that we aren't ready to launch thousands of these cars out on streets at this point. They have NOT been perfect.

There are tons of issues beyond just the quality of the AI. Software development and updates, vehicle maintenance, planned obsolescence of models, Legality, ethics, ownership... tons of other issues not hashed out. The combination of all of those issues makes autonomous vehicles near impossible any time soon, unless we ALL want to give up our right to own personal property and submit our safety to being test subjects. I'm not willing to do that at this point as a development manager myself.


I'm not really sure what the hardware link you provided has to do with anything but plenty of rule-based heuristics are used in state of the art autonomous driving systems. Machine learning systems have taken over the image processing and classification parts, semantic segmentation, object recognition and so on but traffic rules or emergency behaviour or hard speed limits are not learned.

The OP is wrong, though, in somehow classifying this as 'not AI'. Just because ML has become an important part of the equation doesn't mean we have thrown control theory and logical constraints out of the window.


The current AI is running rudimentary calculations compared to what is truly required for flawless operation... The type of processing that is necessary to operate like a GOOD human driver is nowhere near what currently exists... Sure there are bad drivers out there, and that's why autonomous vehicles would need to be EXCEPTIONALLY good... Not just good enough for a few demos on YouTube.

If a company's test cycles were good enough to warrant reliability and safety promises, the window on the CyberTruck would have never broken.... This is how companies work, they over promise and under-deliver, this time it affects everyone's safety, including safety of those who don't buy them.

I'm not saying the strides aren't impressive, I'm saying I wouldn't feel safe having this experimental technology forced upon me knowing the potential for historically complex human factors.


I wonder why they would let the public see this. To the critical eye it looks like it barely works. It loses its mind right at the beginning, around 0:06-0:07, veers to the right for no reason and then rolls through a stop sign without stopping. It spends the rest of the video hunting left and right like a drunk.


Most of what is happening in that video appears to be a human controlling the vehicle; the overlay is just what the computer sees. In current public releases Autopilot only warns you that it thinks you are going to run a red light / stop sign, and it is up to the human driver to actually stop. So, unless it's on some beta release, it was probably a human that rolled through it.


> warns you that it thinks you are going to run a red light / stop sign

I need this feature, and I would prefer this to full autonomous driving.

A computer that warns you when you're about to make a mistake is achievable today and will increase safety for everyone.


This. I expect this to be the near term outcome of a lot of autonomous driving projects, which is one of the reasons that despite my scepticism about the idea we're close to abandoning the steering wheel I still think the research itself is inherently worthwhile when conducted safely.


I was in a Honda Stepwagn last week and that one recognised stop signs and clearly displayed them.


These can make neat demos but what’s the false negative rate?


I don't think that data exists (outside of Honda's internal servers), does similar data exist for Teslas?


I doubt that even Honda knows.


> It loses its mind right at the beginning, around 0:06-0:07, veers to the right for no reason

I'm speculating but I think the reasons it veers to the right are some combination of the following: 1) There is water on the left hand side of the lane. I was going back and forth on the video and you can see "Raining" along the left edge turn green. So, it has detected the water and is avoiding it? At the same time it has turned green "Wet road". 2) There is a vehicle on the right side of the lane - it has been identified with an upside down green triangle / crosshairs and a square (is that block the license plate?) and the Tesla is "following" it.

Again speculating based on watching the display change.

I found this clip amazing for the following reasons:

a) Around 0:02, it starts to notice the right edge of the road. Along the right edge of the video, you can see the estimated road. Around 0:04, you can see the right turn lane and the dots along the right edge change to show the vehicle has detected it.

b) Detection of arrows on the road surface. They are labeled with "RA", "FA" and "LA". RA = Right Arrow, etc.

c) Around 0:04, a "container" type of black outline appears on the right edge - it is a vehicle!

d) Around 0:05, another container appears in the right turn area and it is another vehicle.

e) Around 0:09, another container appears on the left middle and it is another vehicle.

Jumping to 0:22:

f) It has seen the vehicle about to enter the scene from the left and "CutinExcited" probability starts to increase and when the vehicle has moved through the scene, it goes away.

g) If you see the frame at time 28668.2612540 (?), it has picked up and classified a person on the left of the red 44.

Super impressive real-time classification.


> I'm speculating but I think the reasons it veers to the right are some combination of the following: 1) There is water on the left hand side of the lane. I was going back and forth on the video and you can see "Raining" along the left edge turn green. So, it has detected the water and is avoiding it? At the same time it has turned green "Wet road". 2) There is a vehicle on the right side of the lane - it has been identified with an upside down green triangle / crosshairs and a square (is that block the license plate?) and the Tesla is "following" it.

"May potentially swerve into other lanes to avoid puddle"?

And yeah, it literally runs a stop sign. In their own marketing video.


The video is clearly sped up and it looks like it did wait at the stop sign in the first part, it just didn't wait long.


yea, there's a big "stop" icon at the top, and it goes down to 2mph (afaict not zero). a rolling stop in any case.


Some have speculated that it's a human driving and not autopilot (plus autopilot still has a warning that it's only meant for highways).


> plus autopilot still has a warning that it's only meant for highways

And Summon has a warning that you need to "pay full attention to the vehicle".

Unless you look at their marketing, in which case it also tells you that your car will come "as you are being distracted by a fussy child".


Karpathy (their Head of AI) showed how they are able to generate a 3D depth map of an environment given 2 forward-facing stereo cameras, it's possible that's what the car is attempting to do here since it recognizes that part of its vision will be blocked once it stops moving. Or it's trying to avoid the puddle lol.


Maybe they can, but the videos give the impression (possibly misleading) that the software doesn't have object permanence. After it gets real close to the truck its estimate of where the road might be starts to wander around.


Sorta looks like it's avoiding a big puddle in the road to the left of the vehicle. Notice the 'wet road' number goes up fast when it's going through that section.


What's the FPS rate of the algorithm? It says on the OSD: `13.3 FPS`, which seems very low:

50 mph is about 73 feet per second; at 13.3 frames per second that's roughly 5.5 feet of travel per frame. This is still way faster than a human, I'm guessing, but it seems very slow.
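
For anyone who wants to plug in other numbers, the conversion is just this (illustrative Python only):

    def feet_per_frame(speed_mph, fps):
        feet_per_second = speed_mph * 5280 / 3600   # 1 mile = 5280 ft, 1 hour = 3600 s
        return feet_per_second / fps

    print(feet_per_frame(50, 13.3))   # ~5.5 ft of travel per processed frame
    print(feet_per_frame(65, 18))     # ~5.3 ft, the figure quoted elsewhere in the thread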


Short of hardware, is there any way to speed up this classification by "pruning" the neural network without losing accuracy?

As an aside, how do they get training data from their existing sold vehicles? Over a cell connection it's probably too expensive, I assume, so maybe via wifi?


There are some odd things that happen with sharp turns and this latency in the current version. It will cross the centerline because it's chasing a frame that doesn't yet contain the part of the turn it would need to see to avoid crossing.


Couldn't all this jitter get some physics? If it recognized an object moving, couldn't it do some approximate dynamics calculation to predict its location in the next frame and use that to help the networks? Especially at the end where you see the stopped cars blinking.
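
For illustration, the kind of approximate dynamics being suggested could be as simple as a constant-velocity predict/correct step. This is a toy sketch only, not anything Tesla has confirmed using:

    import numpy as np

    class ConstantVelocityTrack:
        """Toy tracker for one object's (x, y) centre: state is [x, y, vx, vy],
        with a fixed gain standing in for a proper Kalman gain."""

        def __init__(self, x, y, gain=0.5):
            self.state = np.array([x, y, 0.0, 0.0], dtype=float)
            self.gain = gain

        def predict(self, dt):
            # Physics-only guess of where the object should be next frame.
            x, y, vx, vy = self.state
            return np.array([x + vx * dt, y + vy * dt])

        def update(self, measured_xy, dt):
            # Blend the network's detection with the physics prediction.
            pred = self.predict(dt)
            residual = np.asarray(measured_xy, dtype=float) - pred
            new_pos = pred + self.gain * residual
            new_vel = self.state[2:] + self.gain * residual / dt
            self.state = np.concatenate([new_pos, new_vel])
            return new_pos

    # When the detector misses an object for a frame, predict() alone gives a
    # plausible position instead of letting the box blink out of existence.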


If you look closely, you'll notice that the stopped cars aren't actually blinking - their outlines are changing color from yellow to black. Who knows what that means, but at least it isn't losing track of cars completely.


At 0:29 a parked vehicle blinks out of recognition briefly (no box of any color), but it's partially obscured by another vehicle.


Imagine removing the camera layer and driving yourself using only those lines and boxes, without perceiving the "actual" world at all. Which is what the AI is doing. It seems completely terrifying to me :)


I like these videos, but I'd really like to see one in snowy conditions, or other less-than-ideal road scenarios. I think there's a lot more to observe and learn from in those conditions than on bright, sunny days on well-maintained streets.


I'd love to see a Tesla on auto pilot on a Vietnamese 'freeway'... buffalo crossing street while 3 people are driving directly at you on your side of the road, in the wrong direction, while 8 people are crossing the street laterally from both sides (2 of them are drunk), while one person is merging without looking and also dodging a wheel eating pothole and a truck driven by a meth addict, is about to rear end you.


Actually, sounds like it wouldn’t be tough for a Tesla to drive safer than the human drivers on that road ;)


Trivially true since car driving must always include the option of refusing to drive out of safety concerns.


Not sure I'd trust an autopilot based on crude visual odometry and segmentation that can only achieve a paltry 17 fps.


Not sure I'd trust humans since their crude visual systems lead to reaction times of over one second:

https://www.researchgate.net/publication/233039156_Brake_Rea...

17x faster is an impressive improvement


Human beings' visual systems are far faster than once per second. Your reference is the complete reaction time to process visual information, decide on an appropriate action, and actuate a control surface given a state model of the world. Your brain most certainly processes visual information faster than 17 fps while doing novel segmentation and odometry.


Tesla's system is also doing much more than simple segmentation and visual odometry. Tesla's latency around actuating the control surface is quite insignificant (a few ms) so I'd argue it is a fair comparison.


Does it have the object permanence abilities of a six month old baby?


In the "FSD Preview" update that came out around Christmas time, the car now shows a bunch of symbols on the road, and signs (e.g. stop signs) that it recognizes. They don't flicker at all, even while you're driving and a large truck obscures the car's view of the sign / stoplight. So they have likely started to build some form of temporal memory into the system so that it remembers what it has observed in the environment, both frame-to-frame and overall.


That hasn't been my experience. The visualization only ever shows what I can see with my own eyes, and the cameras are in a very similar position. I've definitely watched traffic signals come and go, all while not moving an inch.


If Tesla's autopilot has no notion of object permanence, then it's worse than a five-month-old infant in some respects.


Really doesn't matter how fast it is if it doesn't work does it? Don't let all the impressive CNN results fool you, we are a long, long way from doing human level vision.

There's a reason everyone else uses lidar...


Human frame rate is more like 1000 fps physiologically, effectively 150 fps with perception.


"impressive improvement" over such a flawed and dangerous form of transportation that it yields one of the top causes of human death.

Self driving cars is a case of doing the wrong thing better. I agree with the parent and there are better solutions out there.


These videos are quite old. The current system runs at 60 fps on all cameras. The old system was limited to a single camera at 17 fps because of the hardware processing requirements.


If Teslas navigate with cameras, does that mean a Wile E. Coyote-style tunnel painting will actually work?


If done right it may even work against humans.


I think this is under-discussed. People bring up how self-driving cars can't deal with unwinnable situations, like crashing into a school bus vs. an old lady. Why is this scenario considered novel? Human drivers cannot avoid these either.


Because it hasn't happened? The Road Runner gag comes up every few years only to be debunked (by Snopes and others). Computer vision is fooled by many things that are still very obvious to a human observer.


It's sort of been tried https://artofgears.com/2015/12/16/did-a-fiat-crash-into-a-pa...

A problem, I think, is that the visual appearance of a tunnel changes as you move in relation to it in a way that a static image would not.


I think that's what the radar is for.


The radar can't see stationary obstacles. Autopilot has already killed people because of that.


What if a bridge is drawn over the tunnel?


With multiple cameras it is possible to have depth perception, although I imagine that can be difficult if the tunnel is just a black blob



Any other HN users getting an Access Denied on their website? The usual "You don't have permission to access "http://www.tesla.com/" on this server."

I'm not behind any proxies - I'm at home, connecting from Brazil here.


Same here. Tesla's website is blocked in Brazil by Tesla itself.


Same happens to officedepot, gamestop...


I wish they would show the video overlay WITHOUT the background video.

Is the car seeing enough on its own that we as humans would be comfortable driving with just that information?


What are the blue squares? Features to estimate position and motion?


Most of them are at the base of structures like signs and trees. In a few places it looks like the Z projection is overlaid onto the video with the wrong perspective, so it's hard to tell. My guess is that they are generic stationary object recognition tags that serve as hints about what isn't part of the road.


They seem to be points related to those shown as white road outline on right bottom quarter, also when a car drives across view at 0:24 it is marked by blue dots too, so it must be radar data for road boundaries and physical obstacles (like building wall at 0:15).


My guess would be radar data overlaid onto the cameras, but not 100% sure


My guess on tag meanings:

Orange labels:

O - Opposite traffic lane (used together with LA or S)

F - Same-direction traffic lane (used together with FA,LA,RA or S)

FA - Forward Allowed

LA - Left turn Allowed

RA - Right turn Allowed

S - Stop sign/line

C - pedestrian Crossing

T - Trash can =) (see 0:32)

Black labels:

P - Pedestrian (see 0:24)

M - Motorcycle (see 0:28)

C - regular Car ?

V - Vehicle or Van ?

K - truK ?

S - ?

L - ?

At the beginning, the utility truck is labeled first as S, then as L, then finally as K.

Almost all cars are labeled as C, some are labeled as V - in my observation mostly Vans and SUVs.


I understand that it is common to use Hough transforms to detect lines, which is critical in driving to understand lane markers and so on.

In my experiments with the implementation in OpenCV I haven't gotten good results, especially with noisy detections of lines that aren't there. But here they seem to get good line detection without many false positives, despite difficult properties of the image such as worn down paint and low contrast between the paint and the pavement. Anyone know what they are doing that works so well?
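
For reference, the classic OpenCV pipeline being described (Canny edges into a probabilistic Hough transform with a region-of-interest mask) looks roughly like this; the parameter values are just the usual knobs, purely illustrative, and there's no suggestion Tesla works this way:

    import cv2
    import numpy as np

    def detect_lane_lines(bgr_frame):
        """Grey -> blur -> Canny -> region-of-interest mask -> probabilistic Hough."""
        grey = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(cv2.GaussianBlur(grey, (5, 5), 0), 50, 150)

        # Mask everything outside a trapezoid in front of the car; most of the
        # spurious lines come from edges that are not on the road surface.
        h, w = edges.shape
        mask = np.zeros_like(edges)
        roi = np.array([[(0, h), (w, h), (int(0.6 * w), int(0.6 * h)),
                         (int(0.4 * w), int(0.6 * h))]], dtype=np.int32)
        cv2.fillPoly(mask, roi, 255)
        masked = cv2.bitwise_and(edges, mask)

        # A higher vote threshold and a longer minLineLength suppress short
        # spurious segments from worn paint and low-contrast pavement.
        return cv2.HoughLinesP(masked, rho=2, theta=np.pi / 180, threshold=50,
                               minLineLength=40, maxLineGap=20)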


Hough transforms are "old-school" computer vision. They rely on a known parameter space ("this is a circle, that is a straight line"), and that isn't super robust to noise or to unfamiliar curves.

I don't know for sure, but I imagine these days the techniques are closer to what you're more likely to call Machine Learning. Probably neural net classifiers trained on manually tagged data, and maybe augmented with a lot of map data. Maybe with some memory too -- I want to say you'd use a particle filter, but there's probably some newfangled ML technique that does a better job than those.


I love how ugly and function-driven the overlay is. I think every other self-driving overlay I've seen is much more polished and more clearly made for public consumption.


I know the exact location of the latter part of the video. It's in Mountain View off Castro Street. I was like, "Woah. The Terminator lives in my burb!"


I would love to get access to a 360 equirectangular video of this process. I suspect that the narrow FoV makes this seem less reliable than it actually is.


The jitter artifacts could be removed by applying some kind of temporal consistency constraint on the network predictions. But such an approach will also definitely introduce a lot of edge cases where a sudden change is actually ignored due to enforcement of the constraint. An adaptive consistency parameter could probably work better, but it's non-trivial to have a meta-algorithm to figure out the parameter.
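
A crude sketch of that adaptive idea (purely illustrative, not anyone's production code): an exponential smoother whose weight relaxes when the new prediction jumps far from the filtered one, so genuine sudden changes still get through.

    import numpy as np

    def adaptive_smooth(prev_box, new_box, base_alpha=0.3, jump_threshold=30.0):
        """Exponentially smooth a box [x1, y1, x2, y2] between frames.
        Small jitter is damped with base_alpha; a large jump (a genuine change)
        pushes alpha toward 1 so it isn't suppressed by the consistency term."""
        prev_box = np.asarray(prev_box, dtype=float)
        new_box = np.asarray(new_box, dtype=float)
        jump = np.linalg.norm(new_box - prev_box)
        alpha = base_alpha + (1.0 - base_alpha) * min(jump / jump_threshold, 1.0)
        return alpha * new_box + (1.0 - alpha) * prev_box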


I love this! Anybody reminded of Person of interest?

Exciting to see that slowly but surely we are starting to approach the future where a true AI will be born.


What was with the "STOP" at the top of the screen blinking randomly even when the car wasn't at a stop nor at a place where it should stop?

This doesn't really inspire too much confidence, IMO. It makes me think their FSD efforts are on par with their autopilot implementation.


It looks like that just shows up any time a stop sign is anywhere in view of that camera, not that it's actively stopping.


Is it just me, or other people out there terrified of trusting their life to some closed-source neural net that sees in black and white at 18fps which apparently believes that cars pop into and out of existence in a matter of milliseconds? This is not the future I hoped for.


This is a dumb question, but how does Tesla's self-driving algo decide which car was first at a stop sign? Does it store the time difference? How would it decide if another human driver decides to go before someone else?


Do all these self driving systems work with regular IR cameras?

Somehow I always expected they also used some sort of depth camera to detect distances instead of figuring it out based on pixels.


Tesla uses only cameras (some color and some black and white I believe) along with a radar system and perhaps sonar? But Tesla is controversial in their approach - essentially all other self driving systems use depth sensors. Tesla makes the argument that you can actually calculate depth from moving cameras, which is true. The technique is called structure from motion and it is well understood. However a lidar is a more direct and reliable way of measuring depth accurately, so others use them.

The difference being that all Teslas manufactured in the last two years or so have this camera sensor system installed on them, so they can all be upgraded to self driving with a software or computer upgrade. No one else is shipping cars that can be self driving. So Tesla has the advantage of collecting a huge data set of real world data now, and if they get the system working they can somewhat instantly “activate” a huge self driving fleet. Other companies have more expensive sensor systems such that the price is too high to sell the vehicles (only rent as taxis), and they will have to start manufacturing vehicles after they get everything finalized.

If Tesla and Waymo perfect their systems at the same time, Tesla will be way ahead.
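
For the curious, the textbook two-view structure-from-motion step mentioned above looks roughly like this in OpenCV. Matched feature points and camera intrinsics are assumed to be given, and this says nothing about how Tesla actually computes depth:

    import cv2
    import numpy as np

    def two_view_depth(pts_prev, pts_curr, K):
        """Textbook two-view structure from motion: pts_prev/pts_curr are Nx2
        arrays of the same features matched across two frames taken while the
        car moves, K is the 3x3 camera intrinsics matrix. Returns Nx3 points,
        up to an unknown scale (real systems fix scale from speed, camera
        height, radar, etc.)."""
        E, _ = cv2.findEssentialMat(pts_prev, pts_curr, K,
                                    method=cv2.RANSAC, prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_curr, K)

        P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera pose
        P1 = K @ np.hstack([R, t])                           # second camera pose
        pts4d = cv2.triangulatePoints(P0, P1, pts_prev.T, pts_curr.T)
        return (pts4d[:3] / pts4d[3]).T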


> So Tesla has the advantage of collecting a huge data set of real world data now

Do Tesla owners know about this? Is this an opt-in feature?


Can confirm: There is an overlay that appears which asks whether you consent to collection of video (only from the 7 outside cameras -- there is one facing the interior but it doesn't include that), and telemetry / GPS data for autopilot training and sentry mode training. If you don't agree, I'm assuming your car will not be used for the fleet learning since they can't gather any telemetry. I do believe that critical logging without user data (drive system failures, MCU crashes, etc.) is still enabled independently of, and regardless of, the data-collection-for-AI toggle.


Tesla is very open about how this works. I don’t own a Tesla but I assume they mention this in some agreement when activating these features.

By the way, they don’t collect all driving data. It’s too much data to transmit back home. They load test algorithms on to the car in “shadow mode” where they don’t control anything, then check for failures and send the car data from the failures back to Tesla. So very little data per car is actually shared.


Lol


Shouldn’t this jitter be seen purely as additional accuracy to the human eye? It isn’t like there is a need for motion stabilization in execution of the algorithm.


Interesting, but is it just me or does it seem overly jumpy?


I’m kind of impressed by how smooth it is, actually. If you watch videos of state-of-the-art object localization NNs, they tend to be EXTREMELY jumpy. These neural nets usually operate on only a single frame at a time, at least in the lower layers, so their predictions tend to jump around a lot from frame to frame (especially when the camera is moving!)


It worries me in general though, and I think that a higher level of consistency in the results for the parts of the image that don't significantly change between frames seems like a goal worth pursuing.

I also think that a deeper understanding of the mechanisms and techniques required to reduce jitter might offer some insights into ways of handling adversarial images.


And I do mean insights and not a potential solution. I think it's an issue related to the handling of the spatial discontinuities introduced by the conversion of an effectively continuous reality into a representation consisting of a large number of discrete elements.

I think a more in depth understanding of how to navigate organically introduced discontinuities could provide a baseline against which we can look to combat the maliciously introduced ones.

It most likely won't be enough - since the problem source is an adaptive and intelligent adversary.


The "vision fps" runs around 12 to 20, so yeah, that's gonna seem kinda jumpy. The display fps is slightly higher, which contributes to a juddering feeling.


I think the more interesting/dangerous jumping is when the perception engine fails to consistently classify/find objects in the scene, between adjacent frames, not the specific fps of the process. This video seems to display some of such inter-frame jumpiness.


Inter-frame jumpiness can be smoothed out slightly by the control algorithms, on the assumption of object permanence, as long as the underlying vision NN doesn't miss things too often.
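
A bare-bones illustration of that object-permanence assumption is just keeping a track alive for a few frames after the detector loses it (toy sketch only):

    class CoastingTrack:
        """Keep an object alive for a few frames after the detector loses it."""

        def __init__(self, box, max_misses=5):
            self.box = box            # last known bounding box
            self.misses = 0
            self.max_misses = max_misses

        def step(self, detection):
            if detection is not None:
                self.box, self.misses = detection, 0
            else:
                self.misses += 1      # coast on the last known box
            return None if self.misses > self.max_misses else self.box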


One of the real challenges of these systems is that even a very good system only needs to lose confidence in some object for a very short period of time for the system to lose confidence in its navigation and return control to the user. The easy way to get around that issue is to categorize objects and then just ignore them out of the context you expect them to appear in -- which is how Tesla managed to kill a pedestrian walking their bicycle across the road.


That was Uber, and if the human backup driver had been paying attention, as they were supposed to be because it was a test run, it wouldn't have happened. Yes, the software was at fault in the way you describe, but the incident was a test run and the driver should have caught it and reported it to the engineers, not played on their phone and ignored the road.


Arguing that it happened because the human back up wasn't paying attention isn't a great defense of an autonomous vehicle. The issue isn't that the bug happened, the issue is that that particular bug only comes from doing something fundamentally bad - hardcoding the situation.


> which is how Tesla managed to kill a pedestrian walking their bicycle across the road.

That was Uber, just so you know


Wasn't that uber?


Does anyone know where to find more details about the sensor and hardware stacks the different self-driving companies are using?


How does it work in the snow/heavy rain?


Their head of AI talked about this during their AI Investor Day, and he said that even when the road is entirely covered, there are still enough subtle cues for the neural net to pick up on that it will be possible to have a self-driving car in the snow. Just as humans are able to observe the edge of the road (enough) to slowly drive to their destination, the car will eventually pick up on and learn from those cues too.


Is anyone else worried that the vision is not a 180° view in front?

At 22 seconds, at the stop sign, after the first car drives past, what if there's a car that runs the stop sign and ends up ramming into the Tesla? (As a human you sometimes anticipate this by seeing that the car is not decelerating as it approaches the stop sign.)


I assume that's just one of the forward cameras, the car has quite a few more to build a 360° panorama. https://www.tesla.com/autopilot


Probably minor but seeing the made up parking spot lines in the video is really cool


This is strikingly similar to to the imagery in Squarepusher's latest music video: https://www.youtube.com/watch?v=GlhV-OKHecI


You mean just basic bounding boxes around an object?


Well, yeah, bounding boxes overlaid on real footage.


Is RESTRICTED the fourth prime directive?


What happens when there is no stop sign?



Too bad they don't show you the 360° view like the car hopefully sees.


would be really sick to see the code of this or something similar


Will it find and terminate Sarah Connor?


>Vision fps: ~18

Surely that can't be right


Probably is. The computer power necessary to go higher would use all the battery and take up a ton of space.

If you've seen other self driving cars, they usually have no trunk because the entire thing is server racks.


Maybe this is ridiculous, but is it feasible that Tesla plans to use the SpaceX Starlink satellites to offload data for processing remotely?


Unlikely, too risky. Even the slightest glitch in the network could be disastrous. The car has to be at least smart enough to pull over and stop safely.


When talking about computation being a power drain, one could imagine a scenario where certain computations that can deal with the latency of starlink internet could be offloaded by default to the cloud. When a network failure is detected or results take too long to come in, the on-board computer could still take over the same computations, resulting in more power drain for a short moment.


There are 4,000+ Starlink satellites planned to be deployed in total. This, in addition to a backup over cell towers in case Starlink goes down, really makes this scenario risk-free.


Have you ever filled out a Captcha identifying roadsigns?


I'm not sure I follow? I'm just asking whether it's possible that the underlying processing could take place in a remote server, rather than an onboard computer in the car.


I was making a subtle joke that filling out a captcha identifying roadsigns could be the remote server that drives someone's car.


The cameras are running at higher fps, but the processed outputs don't need to go that fast. At 65MPH, 18fps is one frame for every 5 feet of travel.


Not fast enough to react to someone swerving into your lane.


Cars don't tend to jump sideways all of a sudden (and if they do, then usually by outside force, which tends to be hard to avoid). There's plenty of frames to detect that (note that the view shown is not the only camera, otherwise making a turn safely would be fairly impossible).


They don't have to "jump" sideways. All it takes is someone assuming you aren't there and moving into the space you occupy.


Much faster than a human would react. Acutely focused humans generally take at least 1 second to hit the brakes in an emergency. Longer if distracted or tired[1].

[1]: https://www.researchgate.net/publication/233039156_Brake_Rea...


Braking is slowed by the limitations of nerve conduction down to the legs. You can swerve away from danger with faster reaction time.


The same nerve conduction latency exists for your arms. The computer will beat you every time.


Perhaps a humorous example of this: a robot that always wins at rock paper scissors: https://www.youtube.com/watch?v=3nxjjztQKtY

There's a fun parlor trick which takes advantage of human reaction time too: place a hundred dollar bill on the table, let the other party place their hand a few inches away, start your hand a foot above theirs; the first person to place their hand on the money wins, and the other party can start moving as soon as you start moving. Turns out you can easily go around their hand and grab the bill before they're able to react.


It's a shorter distance.


The distance is beside the point. The amount of time it takes for your brain to parse visual stimuli, then propagate a signal to your arms, and finally for the muscles in your arm to contract will be much slower than a computer parsing a sequence of images and then actuating the steering column. The computer is faster both at understanding it needs to respond and at signaling driving systems to respond.


It still puzzles me that so many people are against a future with self-driving cars. Sure, people throughout history have been scared of change & new tech. But if you look at this video and compare the amount of stuff the camera catches AND processes with what you yourself are seeing & processing, it's really hard for me to see myself being better at it than a computer.

The reality of driving (as a human) is that most of it happens on autopilot anyway. It's rare to deal with an anomaly. So realistically, it makes total sense to keep refining these algorithms and make driving safer and more pleasant for everybody.

I just find it very hard to understand the perspective of those that are opposed to self driving cars.


There's no evidence that computers are any better at driving than humans, and there is no evidence that this generation of machine learning will be able to tackle this problem sufficiently. All signs point to driving requiring an understanding of what's going on around you, which this generation of ML does not provide.

On the other hand, there is a lot of hype and promises being made without the results to back them up.


> There's no evidence that computers are any better at driving than humans

Yes, there is.

You can debate the exact figures/circumstances, but the numbers are good, and improving all the time. Humans are just not very good at guiding tons of metal around at high speeds, which isn't very surprising when you think about it.

One accident for every 4.34 million miles is already pretty good (compared to 500k miles for humans) - if it were rolled out today across all cars it would probably save thousands of lives in the US. Yes these quarterly numbers are in the best conditions and supervised, no that doesn't invalidate them, particularly as they show dramatic improvement at the same time as they keep expanding the conditions in which autopilot functions.

There will always be a long tail, but computers are getting very close to good enough for highway driving, known truck routes, and known taxi routes.

https://electrek.co/2019/10/23/tesla-autopilot-safety-9x-saf... https://www.tesla.com/VehicleSafetyReport?redirect=no


> Yes these quarterly numbers are in the best conditions and supervised, no that doesn't invalidate them...

That is exactly what it does: it invalidates them. Comparing apples to oranges is an invalid comparison.

Let's compare the numbers again when there's sufficient data for self-driving tech driving in all environmental conditions without supervision and without human assistance jumping in whenever it gets rough. At that point, a comparison is valid. Until then you're comparing someone who aces tests with someone who is just doing OK, except the first guy is only solving first-graders' multiple-choice tests and cheats by asking a friend with a university degree whenever a question pops up that isn't "choose one of these three", while the second guy is on his own in the middle of his college education.


It's not apples to oranges. If a human interferes then a human is now driving. That human has a worse record of non accidents than the computer. Ergo, that human is making the driving record worse not better.


You know that it's not the job of the human driver in experimental self-driving vehicles to just take control every once in a while, when he feels he'd like to drive a little?

A human interferes in precisely those situations in which the machine either decided to surrender or - worse - started doing something stupid and the human caught it quickly enough to prevent an accident. It is safe to assume that the human has a much better safety record in such situations.

And who does the human surrender to when he fails at driving? Should we also not count human accidents that could have been prevented if there had been a world-class race driver ready to take over when the actual driver fails?


Are computers better than the best human drivers in all circumstances? No, maybe they never will be, but that doesn't matter.

Are they better than average humans in average conditions already? Yes.

They don't need to be all powerful AGI which can replace humans in all circumstances to be useful and save lives.


What doesn't matter is who wins in average conditions. Because averages are just that: averages. But actual driving is full of non-average situations, whether they are created by weather, other human participants in traffic, animals, whatever...there's millions of surprising non-standard situations happening each day in traffic, and current technology isn't even able to reliably detect when it's facing such a situation, much less capable of handling them entirely on its own.


And yet autopilot is doing millions of miles a year with fewer incidents than humans, and expanding in capability (for example taking lane changes and turns), all while constantly lowering the level of accidents.

I think average conditions do matter, and supervised computers will continue to take over driving until humans are simply not required. Of course they will not replace humans this year or next, but at some point humans will be passengers, not drivers.


Right. And that's why it's not rolled out everywhere yet. But it's clearly getting closer and closer. It's probably less than 20 years away.


As someone who works in insurance, I can tell you that the vast majority of accidents occur in adverse conditions.

Creating an AI that only functions on dry sunny days is basically useless from a safety perspective.

...though it does save people time since they can be doing something else while in transit!


The stats aren’t good enough to be sure yet. They’re encouraging, and if you’d asked me in 2010 I absolutely would not have guessed that a few billion miles of experience wasn’t enough to know for sure, but turns out we need more data.


It doesn't invalidate the numbers if they were in ideal conditions, but it certainly means you cannot compare them to regular human drivers. That 500k number is the average from all kinds of different driving conditions. You would have to find a set of data where the humans also only drove in ideal conditions and see if the rate is any worse. I agree with the original poster and that there's not enough evidence to say that machines will be better than humans with the current technology being used.


I think you meant "rate"...


Ugh, speech to text. Thanks for finding that in time.


No problem!


You are right and wrong at the same time.

By quoting an average, assuming those numbers hold when scaled up, you are compressing the actual distribution into a single number. Overall that high-level number may be correct.

But at the same time, the kinds of people that get into accidents will be a different distribution.

How you drive factors a lot into how likely you are into getting in an accident, even allowing for the other drivers.

If the distribution becomes essentially random then you might actually have a higher chance of getting into an accident than before, if you were a very careful driver.

(Obviously those that benefit the most are the bad drivers since they'll no longer be bad)

Just remember that whenever you summarize a distribution into a single number like an average, you lose a lot of information in the process.


Sure, there are various inadequacies in the data, Tesla is clearly not an unbiased source (though there has been some verification from gov agencies), but I think the rapidly decreasing accident rate is significant (even when compared to the same car without those features), and we'll see numbers continue to improve over time.

Most people are simply not careful drivers all the time. This is one major advantage computers have - they don't get tired or distracted.

Time will tell as we get more data about supervised driving, but I think it's inaccurate to say we don't have data yet, it is imperfect of course, but it does point to computers being safer than humans in some very common conditions over millions of miles.


Would you accept a higher risk of accident for you and your children, if it meant that an order of magnitude more other bad drivers would be saved?

Also keep in mind the outrage factor. The reason airplane crashes are so feared is because of the loss of control. Meanwhile alcohol and cars are much more dangerous but much lower outrage because you still feel like you have some control.


Everyone thinks they are better than average.


> There's no evidence that computers are any better at driving than humans

That's a vacuous statement. Evidence would be a deployed self-driving fleet. At which point it would be mission accomplished and we wouldn't be discussing this.

Now if you want theoretical considerations instead of evidence, then consider the lower reaction times of machines.


> then consider the lower reaction times of machines

Meanwhile, Teslas run into the backs of car carriers because they only recognise the car it's carrying and not the truck bed extending behind it. It might react faster, but it needs to identify the danger first and up until now, it's much worse at that in many situations.

Yes, you're right, once it's safer, it's safer. That makes your statements just as vacuous as theirs.


> Meanwhile, Teslas run into the backs of car carriers because they only recognise the car it's carrying and not the truck bed extending behind it

And someone rear ended me the other day because they didn't notice cars were stopping. Yes, self-driving cars make mistakes, but humans make all sorts of mistakes, too.

The question is, do self driving cars have more accidents per mile (or more damage per mile/injuries per mile/ deaths per mile) than human driven cars?


And the answer is yes, they are less safe than human driven cars(https://arstechnica.com/cars/2019/02/in-2017-the-feds-said-t...)

"activating auto steer increased crash rate by 50%"

Tesla safety stats are OK when compared to the average car on the road. But that's because they are expensive, high-quality cars driven by middle-aged people; it's the safest car demographic on the road. When comparing Autopilot-driven Teslas to equivalent luxury cars they look terrible. Though I think if you ignore the autodrive they're perfectly good cars.


Tesla is only one of several players in the game. Waymo is generally considered to be better and safer at the moment.


Waymo is only better in a very well-known and predictable setting. You can't pick it up and deploy it instantly. It's geofenced to a small part of a US city.


Wrong interpretation.

They are absolutely not "less safe", because they are all human driven cars.

Why not take your argument further and say that cruise control made cars "less safe than human driven cars" because people fell asleep at the wheel and drifted off the road?


I feel the question is more about the responsibility and the social contract of sharing the road with fellow humans.

Self-driving cars don't take risks when they make mistakes while humans are exposed to physical or legal consequences that are well understood by everyone on the road.


We already take risks of this type; cars have mechanical failures, and people die with things totally outside their control. Brakes fail, tires blow, and people have accepted these risks.

I know self driving cars feel different to us now, but they aren't really different in kind, just degree. I am sure people felt these concerns when cars were introduced in general.


> The question is, do self driving cars have more accidents per mile (or more damage per mile/injuries per mile/ deaths per mile) than human driven cars?

It should matter if they have less, but in the real world it doesn't matter, because it's "scary" to be rear-ended by an unthinking machine, and business as usual to be rear-ended by a clumsy human, so people will always hold machines to a much higher standard.


Only new machines. Once we get used to them, they become 'natural' and we don't think twice about them being scary. When cars first came out, they were scary and people feared dying from them. Now, they are just normal, and we don't think twice about the thousands of people that die because of cars.


Remember that when elevators were developed, people were so scared of them they needed drivers for decades before people accepted that they could operate on their own.


as we should.

Human makes a mistake, they could have been tired, distracted, got something in their eye, or anything else.

Machine makes a mistake, it becomes a question of why and will it make that same mistake every time?


> only recognise the car it's carrying and not the truck bed extending behind it

This is very much a solvable problem.


There is a difference between perfect and better.

If your standard is that self driving must be perfect, then I'd argue you're being unreasonable.

If your standard is that self driving cars must be better, then that is something that can easily be tested.


Don't fear monger.

"Teslas" do not "run into the backs of car carriers".

I don't know where you are repeating that from, because you've clearly never driven one, but the core concept of autopilot is that the human should step in whenever a situation is confusing and/or dangerous. If the car does not recognize a situation it will always alert the driver to take over.


You are factually wrong. The autopilot has even swerved into gore points, causing fatal wrecks.

The parent poster was referencing this, which I found immediately using Google:

https://www.express.co.uk/life-style/cars/1231834/tesla-mode...


> That's a vacuous statement.

That's just your opinion, while my comment is a statement of reality as it stands.

> Evidence would be a deployed self-driving fleet.

This is also just your opinion. Evidence could also be a self-driving vehicle that doesn't drive under the beds of trucks, run people over or is statistically safer than human drivers.

> Now if you want theoretical considerations instead of evidence, then consider the lower reaction times of machines.

Consider the high error rate of machine learning when it comes to novel situations that driving exposes drivers to. Human beings can understand situations they are in and adapt based on that understanding, whereas this generation of ML does not.

Computers have the theoretical potential for a lot of things. The internet had the theoretical potential for mass informational enlightenment, democracy and resource allocation in the face of dwindling scarcity of resources.

Instead, the computers and the internet have become tools of mass manipulation, weapons for totalitarian regimes, and despite an abundance of food, people still go hungry.

I understand that a lot of people are sold on hype, and believe promises that were made, however the results are just not there.


> Evidence could also be a self-driving vehicle that doesn't drive under the beds of trucks, run people over or is statistically safer than human drivers.

The relevant part is the last: "statistically safer than human drivers."

Every driving system that has existed or will exist will make errors due to extremely novel situations. Given the energies involved, such errors will lead to crashes, some of which will be horrible.

The standard for self driving systems is, at a high level, simple: statistically safer, by a fair margin, than human drivers.


> statistically safer, by a fair margin, than human drivers.

Considering other utility unlocked by autonomous cars (e.g. less stress, life-hours freed up, helping the disabled) they could even be marginally less safe than human drivers and still provide a net benefit.


> statistically safer, by a fair margin, than human drivers

And not just that, but individually they have to be seen as safer. As in, I am not going to be happy with a computer driving me around that is merely safer than an average driver. Given that I'm middle aged, don't drink and drive, or drive while exhausted, etc, and I haven't ever caused an accident, I'm going to need a little more assurance.


I understand, that's what I was, roughly, trying to capture with 'by a fair margin'.

I believe Elon Musk has talked about targeting '10x safer' than an average driver. My guess is that's a sufficient margin to cover the vast majority of better than average drivers.


> That's just your opinion, while my comment is a statement of reality as it stands.

No, perhaps I am to blame for incomplete quoting, but you're asking for evidence about the future, whether machine learning will be able to tackle this problem. So it's not just about current computers but also future computers. We can make predictions, but hard evidence would require a time machine or some first-principles proof that humans are the optimal arrangement of atoms for car control.

> however the results are just not there.

Well of course, but what information does that add to the discussion? That level 5 autonomy is an achievement that lies in the future is a premise of this thread and I think most people take this as fact. Bemoaning the absence of evidence for the results of a technology that's expected to only arrive in the future... doesn't make sense.


> Now if you want theoretical considerations instead of evidence, then consider the lower reaction times of machines.

Reacting quickly except in emergencies is almost always worse and more likely to cause an accident in an environment consisting mostly of human drivers.


Isn't there already a mounting body of statistics on accidents in self-driving cars vs. manually driven ones, showing a night-and-day difference in favor of the self-driving ones being statistically safer?


Not at all. Self-driving cars cannot yet safely navigate complex or unusual scenarios. The human backups in test cars still have to take over with good regularity. That's not something you'd see if it were anywhere close to ready. Even with all their data, there aren't enough training examples of every situation. Deep learning models also have a tendency to be pathologically wrong in rare but completely unpredictable circumstances.

Tesla's self driving has resulted in cars swerving into gore points, for example.


> Not at all. Self-driving cars cannot yet safely navigate complex or unusual scenarios.

That's assuming accidents are caused by complex and unusual scenarios instead of humans being inattentive during routine tasks. If machines fail at complex tasks but fail safely then that's not an argument against self-driving cars being safe, it's only one about their limited usefulness.


> If machines fail at complex tasks but fail safely then that's not an argument against self-driving cars being safe, it's only one about their limited usefulness.

They don't, not at the moment. The software detects the situation is outside its capability space and says "jesus, take the wheel!".

For all practical intents and purposes the only reasonable way to interpret the current self-driving car statistics is to treat every single human intervention as a would-be accident. That rate is somewhere in the 1 per 10000 km ballpark. That's - for now - orders of magnitude worse than human drivers.


> For all practical intents and purposes the only reasonable way to interpret the current self-driving car statistics is to treat every single human intervention as a would-be accident.

Not necessarily. If the AI response is to stop the car when it can't figure something out, then the cost of each of those situations is the car not continuing on as quickly as it should (and maybe stuck cars if we don't have a way to override the stop/take control manually).

Every hand over to a driver is not an accident.


I wonder how much of that fail safe difficulty is to blame on humans too. The car can't just slow down on a highway due to the reaction time of other drivers after all.


No, it gives back control if it can't continue in the situation. That means that it can't handle the scenario at all. Other drivers are completely irrelevant. Even if you took that out you'd still need to deal with pedestrians, cyclists, cars blowing tires and veering out of control, etc.

Not to mention that many of the times control has to be taken have nothing to do with other drivers or even other cars.

The reality is that a car can only be driven by a general artificial intelligence. We're nowhere in that ballpark.


Point taken. However, if you look at statistics regarding the leading causes of accidents in the US, for example by going to https://www.after-car-accidents.com/car-accident-causes.html

It seems that most accidents are caused by human error and humans being distracted or under the influence of a substance.

Weather and complex situations would only rank at the bottom of the list and therefore could be considered a YAGNI problem.

If you remove the human element from the driving, already the top 4 causes of accidents on the road would be reduced drastically.

So at the moment a self driving car may not be able to handle some of the most extreme situations that a human driver could handle however, I would challenge that assumption and simply ask: does it have to?


> Weather and complex situations would only rank at the bottom of the list and therefore could be considered a YAGNI problem.

This is a logical fallacy. You are inferring from a comparatively low incident rate that the condition must also be rare, which is not a reasonable conclusion to make.


Breaking those causes you linked individually:

#1 was speeding. Self driving cars would be trivially programmed not to do that, of course. But this one is tricky to interpret, because Americans habitually speed. That means two things: Speeding is trivially something that can be listed as a factor; it wouldn't be a lot less meaningful to say that bucket seats are a factor. And not speeding might actually be more dangerous. That leaves me thinking this one is equivocal for the purposes of the debate.

#s 2, 3, 4 and 6 are drunk driving, distractions, cell phones, and driver fatigue. Those are easy points for self driving cars, of course.

#5 is weather. This feels like an easy one in favor of humans, considering that all the major self driving car companies deal with weather by avoiding it. That's not much of a vote of confidence.

#6 is red light accidents. I would want to see more about override statistics before interpreting this one. It could be that self-driving cars literally never miss a red light. But, given that some of the intersections in my city have quite confusing traffic light arrangements - reflectors that make the light invisible outside of certain angles, lighted intersections spaced 40 feet apart that are separate lighted intersections nonetheless, printed signs informing drivers of non-standard semantics for that light, wildly varying yellow light durations, etc. - I have my doubts about self-driving cars being able to properly deal with the traffic lights. If there are any tests going on in a similar city, I wouldn't be at all surprised to find out that operator overrides are routine at lighted intersections.

Zooming out to the big picture, though - the ones that are a clear win for self-driving cars are all instances where the human's decisionmaking capacity has been impaired. Which implies that humans are really quite good at operating vehicles when they aren't being stupid. But it also implies something that I think doesn't get enough credit among many advocates of self-driving cars: the fact that, behind the wheel of every safe self-driving car, there's a highly trained human operator who is not drunk and not tired and not diddling around with their cell phone. And we've got evidence to suggest that, when the human operator is a bit more human, the self driving car's accident rate suddenly spikes considerably.

And that's a big deal. It's common to place those errors on the shoulders of the car's driver, but that's trying to have your cake and eat it too. You can't wave away every fatality involving a car on autopilot by saying, "Oh, that wasn't the car's fault, the driver was being an idiot," and also claim that we've got good evidence to suggest that we know how to make cars operate safely without relying on a human driver to act as a backstop for the fallibility of the human programmers. It may eventually come - I think we all hope it does - but given how, in AI, the last year or two's worth of R&D always ends up taking another decade or two before everyone just gives up on the whole enterprise and moves on to a greener field, a little skepticism is far from unwarranted.


The problem is the datasets are incompatible. Tesla's numbers are very impressive until you realize that it will currently only engage in scenarios which are less prone to accidents across the board.

As the number of situations where self-driving kicks in increases to include more complex environments, it's likely the numbers will shift.


Even though the numbers will shift, Tesla can't afford to have worse statistics than with human drivers.

That's why it takes years to enable city level (L2 and maybe L3 in rush hours) driving.

At the same time it really looks like Tesla will enable it this year, as Elon's predicted timelines are getting shorter.


No. There is not yet enough data to say that Tesla's or anyone else's current technology is better than human drivers. You likely need tens or hundreds of billions of miles to prove that. However since accidents are a very low probability event, it would take substantially less data to prove that this technology is drastically less safe than human drivers and that hasn't shown itself in the data either. Either side that points to specific numbers as proof either doesn't understand the statistics or is intentionally trying to mislead you.


Self driving cars killed one pedestrian in 10 million miles driven, about an order of magnitude worse than humans


But that’s a sample size of one, so we can’t really draw any meaningful conclusions from that data. (Also the safety driver was watching Netflix on her phone; it could be argued that the vehicle was not production-ready and not expected or intended to be able to handle every situation.)


Let's say everyone replaced their current cars with self-driving vehicles and they never killed anyone ever again. If normal vehicle usage continued and fifty years passed without a single traffic death, your claim suggests we'd still be unable to say if self-driving cars were safer, as it would still be a sample size of one.

I don't think your use of statistics is correct.


I'm not sure that sample size is a relevant idea here, but it's certainly misleading as you state it. One accident in many billions of miles would be a wonderful result, but you would dismiss it as a sample size of one.


Is that data on self driving cars or assisted driving? I think disengagement statistics also need to be factored in.


That's the Uber "Level 3" accident.


How would you compare the two? Computers can process millions of objects and things all at the same time, with an error margin that any human being would love to have.


It doesn't matter how many objects a system can process if it can't recognize what they are or which ones are actually important, like say, the truck crossing in front of you.


They don't need to recognize what objects are as much as they need to compute the expected and predicted paths of those objects.


Humans can have "visions" or can mistake shapes and interpret them as other things; our brains are easily hacked.


Yet a human tends to notice that their perception is unreliable and can react accordingly.


accordingly in the middle of the road?


> There's no evidence that computers are any better at driving than humans

People said that about computers when it came to basic computations.

They said it about computers when it came to landing on the moon.

They said it about computers when it came to electronic tax returns, digital currencies, music, movies, video games and everything else in the history of the world that computers once couldn't do, but now do with ease, and much much more accurately and faster than the old way of doing whatever it is.

With sub-millisecond reaction times and eight cameras and radar instead of only two eyes, it's blatantly obvious computers will be better drivers than humans.

It's just a question of when.


> computers are any better at driving than humans

Does it matter though? If it drives like my grandma, I can still do other things while in a car and hate commuting/traffic jams a bit less.


The problem is that all these so-called "self-driving" systems expect you to be alert and ready because at any moment they might decide that the human needs to be in control again.

So you can't do other things while in the car.


That's the situation right now, and that's why companies have to provide level 5 autonomy or there won't be any customers. Still, level 5 for good-weather days in urban environments might come within a decade, and even then it's better to have to drive only half of the time instead of all of the time. The long tail of problems that supposedly plagues self-driving systems exists with humans as well - how would you react if a bridge/building in front of you started collapsing or somebody started overtaking right in front of you?


I'd hit the brakes.

But then again, I am actually able to discern what is a bridge and what isn't, and thus don't have the need to suddenly cover my eyes and scream, "I'm confused and I don't want to drive anymore!"


There is some evidence. For instance, Tesla drivers report the car stopping before they can perceive an accident about to take place. An example is the first ten seconds of this video https://www.youtube.com/watch?v=APnN2mClkmk

These vehicles have a sensor system superior to ours. Our processing is better, but sometimes their superior sensors win out.


Sure, and as soon as I can sit facing the other way and not be responsible for the car's fuck-ups, that's fine. Until then semi-autonomy can fuck right off. I'd rather trust myself 100% of the time than a machine 99% of the time.


Sure, that's your decision. I'd rather have the car and me work together to prevent accidents, or let the car take over at low speeds.


> or let the car take over at low speeds

Except arguably that's when the car is least effective! That is when pedestrians walk out or vans reverse with the wrong kind of shiny paintwork. At low speeds, reaction time matters less than effective and accurate processing.

Actively driving results in my attention being engaged, all the time. Passively letting the car drive will 100% reduce my engagement, and when the car fails I will not notice. And sure, for now it's my decision, until it becomes mandatory on new cars alongside automatic braking etc...


Here are the superior sensors steering the car straight into a paper box:

https://twitter.com/greentheonly/status/1202778123433119745

It's even fricking speeding up!


Right, and so the conclusion is that there is some evidence saying that the sensors are superior in some cases and there is some evidence that the sensors are inferior in other cases.


That's not particularly impressive. The system reacted pretty much the same way a human driver (watching the brake lights and general attitude of the cars ahead) would react in this circumstance.


Searching "dash cams" on Youtube says otherwise.


What about computers being safer drivers than humans who are texting and driving?


"All signs point to driving requiring an understanding of what's going on around you, which this generation of ML does not provide."

That may be very true, but are most of us fully present in the act of driving? Where the ML is deficient, we can make up for it by making the infrastructure more robust.


> making the infrastructure more robust

Can't make human behaviour inside that infrastructure more robust though. So we'd need to completely split human and autonomous traffic. That'll be about as big a challenge as creating the autonomous cars in the first place.

My guess is at least 15 years before we'll see any large scale adoption of autonomous vehicles. And that's for countries like the Netherlands which are already pretty well organised. Countries with chaotic driving styles and roads like Italy will take longer and the third world might take decades longer because of the lack of rules in many of those countries.


Waymo ordered 60,000 minivans and 20k SUVs. They are running full self-driving in Arizona without safety drivers. Self-driving is already here. Waymo just started petitioning California for the ability to start charging. I bet we see SF go live with Waymo taxis at the start of 2021.


Speed of computation cannot overcome lack of cognition.


When the alleged dangers are scary/obvious, and the benefits hard to comprehend, it's easy to oppose.

I remember when credit cards, caller ID, online shopping, tax software, etc were all new. Each was loudly denounced as evil for various reasons; eventually the benefits became so obvious that we all now use those techs with little concern - even though many/most of the concerns were warranted. Just became too convenient to use them.

Self-driving will be the same. Scary concept to hurtle down the road completely & existentially subject to a machine built of (per A.C.Clarke) pure magic. Once in one, and seeing how smoothly & smartly it transports you, people will quickly embrace self-driving cars ... especially when they discover how too darned convenient it is to use them.

"Any sufficiently advanced technology is indistinguishable from magic." And self-driving neural-net cars, to most, are flat-out magic.


Most people have 0 experience with it, and 0 experiences seeing it (at least that they are aware of).

If you gave a random sample Tesla-level autopilot with their current car and walked them through how to use it, I'd guess that at least 75% would want to use it regularly.

Familiarity and basic understanding work wonders on shifting the human mind.


> I remember when credit cards

I have one because I need it for online booking; my first one was revoked because I apparently cost them more by not using it enough. I practically don't see a significant benefit; most places accept other forms of payment.

> caller ID

faked and missing all the time.

> subject to a machine built of (per A.C.Clarke) pure magic.

As a software dev, I really would feel much safer with pure magic. I will buy your self-driving car the moment Linux goes a year without new significant bugs or exploits. Not even bug free, just no significant bugs. I do not trust the industry in its current state to produce a complex piece of software that can reliably control a car in an even more complex and constantly changing environment.


You're the anomaly.

Most people find "swipe and pay later" far more convenient, "pizza shop already knows where to deliver" & "oh it's Joe, I'd better answer" far more convenient, etc. Yes they're imperfect, but the flaws aren't "all the time" - they're lost amid the overwhelming convenience.

Opposite your "complex piece of software that can reliably control a car" is "humans who damage/injure/maim/kill other drivers with alarming frequency" (but the aforementioned overwhelming convenience mitigates our concern). Sure we want "no significant bugs", but we're rapidly reaching the point where even the existing & concerning bugs are better than facing distracted/drunk/debilitated/dumb human drivers.


> Most people find "swipe and pay later" far more convenient,

As I said, most places accept other forms of payment, and the pay-later part seems completely irrelevant unless you have money-management issues.

> "oh it's Joe, I'd better answer"

Usually it fails for people I know, which makes it annoying: I can usually call my mother back later, since she tends to call every few days, but I have to answer when it's my landlord, who only calls when I've missed something important.

> but we're rapidly reaching the point where even the existing & concerning bugs are better than facing distracted/drunk/debilitated/dumb human drivers.

Number of drunk drivers I face daily on average: 0.0001. Number of self-driving cars I'd see daily if we adopted them completely: ~500.

I would still take my chances with that one drunk driver over several thousand buggy cars daily.


> it's really hard for me to see me being better at it than a computer

There's a big difference between seeing and interpreting. The car sees literally everything and can maybe identify most things, but a pedestrian window shopping across the street isn't really that interesting and neither is a bollard. As humans, we filter out the unimportant stuff to focus on the important stuff. We can also pick up on subtle cues in posture, through eye contact and even through the driving style of another person. The car will see all this, even in people across the street, but it won't be able to interpret it.

Humans are still way ahead of anything close to autonomy in most situations and it will take years for anyone to get close to covering 95% of situations. Removing the steering wheel needs way more than 95% covered.


> We can also pick up on subtle cues in posture, through eye contact and even through the driving style of another person.

I think this is what a lot of people take for granted. Driving anywhere where there is another vehicle immediately becomes a social activity. There is a tacit agreement to work within the framework of the enforced driving laws, but beyond that there is a social agreement as to who does what. Even which driving laws we follow when and where is fluid, but relatively cohesive among drivers.

It's why pedestrians try to make eye contact with drivers. It's why drivers make eye contact when there is even an iota of ambiguity on who has the right of way. It's why almost no one drives at the posted speed on a highway and we usually go as fast as the fastest lead driver. It's why we gesture, flash our lights, honk, and sometimes curse. We've built vehicles to communicate as much as possible, but that is still not even an ounce of what we can communicate with our eyes.

And sure, when vehicles communicate with each other, that will be even better. In the meantime you've got different actors using incompatible protocols.


> It's why pedestrians try to make eye contact with drivers.

I do this so I know they've seen me, otherwise I've almost been run over. If cars would stop because generic object was in the way, that special social cue wouldn't be needed.


you think the average driver is aware of other people's posture and eye contact while driving?


Yes. Humans are currently much better at recognizing subtle danger signs and planning accordingly.

As one example, let’s say you have a right turn a few hundred yards ahead of you. There are no cars behind or beside you, but there is a long row of parallel parked cars next to the right lane.

An average human driver, anticipating the danger of a car door opening, will stay in the left lane until the last minute, or at least slow down considerably. A current self-driving car will likely stay in the right lane and proceed at the speed limit.


> The reality of driving (as a human) is that most of it happens on autopilot anyway.

Right. So the marginal benefit of a self-driving car to people who feel that they already drive on autopilot most of the time is low. Especially when they weigh that benefit against the risk of putting their life into the hands of a 4000lb machine controlled by what? A computational system programmed by the same group of people (in the general public's mind) that brought them clippy, the Hawaii nuclear missile scare, algorithmic flash crashes, bloated ad-filled websites, telephone bots, 8 remote controls per living room, and Flappy Bird.


I think it is much less that people are opposed to self-driving cars as a concept, and more that they are skeptical of their current efficacy, and even more skeptical of what many of the current competitors in the field self-report their efficacy to be. I for one am skeptical of many current claims that self-driving cars are imminent, and would like a stronger regulatory setup to confirm that self-driving is in fact at least as safe as human driving, and in what instances that is not true. To prove that to statistical significance, you are going to need many, many miles, which we should force companies to drive in the interest of being cautious.


It's that people think it will work as well as their phone, which, well, does goofy stuff every once in a while. Meatspace is harder.


I would much rather we invest billions into existing public transit solutions as a community. A lightweight electric train doesn't need advanced LIDAR + AI to navigate, and it can transport 100x as many people as a single car.


Well, that depends on what problem you're solving. Self-driving vehicles aren't solving mass transportation. They're solving ubiquitous personal transportation. They may also solve mass transport by being cheap, ubiquitous, and by performing an end-run around the piles of local regulation that restrict mass transit.


Self-driving is largely a reaction to the single-person-single-car scale problem. When you're trying to transport millions of people on a weekday, cars will never be able to meet demand. Look at China trying to vertically scale their highways resulting in 40-lane wide highways and multi-day traffic jams.


"Self-driving is largely a reaction to the single-person-single-car scale problem."

What in the world? No it is not.


Username ... does not check out.


Density + mass transport is the solution to the problem of ubiquitous personal transportation.


This is the answer, but it opens up another line of questioning: do we force people to live in dense urban areas? Sometimes I wonder if pro mass-transit commenters (I'm pro mass transit, FWIW) have ever lived outside of a major metro. There are so many places where people live where mass transit is never going to be viable in way that replaces the car. Who gets to decide when/if we invalidate their way of living? Maybe I'm not seeing all the angles here.


> the piles of local regulation that restrict mass transit

Yes, that's what's stopping local governments building more mass transit, it's all those regulations imposed by the pesky local government!


Yes. Take SF, for instance.

EDIT: Rate-limited, so read response to request to elaborate below

San Francisco has multiple entities involved in the process of policy relating to transit. So, while the SF MTA and Jeffrey Tumlin may want more rail (perhaps underground), more buses, and fewer car thoroughfares there are also supervisors in charge of local districts who may oppose construction on grounds that they will hurt constituents.

As an example, supervisors David Campos and Hillary Ronen at different times opposed the bus-lanes along Mission St. on various grounds including (paraphrasing) "speeding up transit does not improve things for the local community", "the transit lanes lead to gentrification", "local merchants will see decreased foot traffic".

That's just an example and the details are too numerous to list here but consider also that the SFFD opposes Vision Zero.


For someone who knows nothing about SF laws, please elaborate.


At least traditionally, the driver had direct physical control over the vehicle through hydraulics. Obviously this has become less true over the past couple of decades with the introduction of electronic systems, but the traditional imprint remains in the popular mind.

(Truly) self-driving cars are, inversely, completely outside the control of the "driver". Anyone who has considered it even briefly has realized that governments will inevitably mandate lockout systems preventing people from using their own car if they are a wanted felon/drunk/a dissident. Such restrictions are completely impractical on traditional vehicles.


They're already in use in the case of repeat drunk drivers and it's not a stretch to imagine similar technology in other ways with traditional vehicles. Self driving is completely orthogonal to this concern.


I'd suggest reading the NTSB report on the pedestrian killed by Uber's "self-driving" vehicle [0].

The vehicle misclassified Elaine Herzberg so many times in the seconds leading up to taking her out at 40 mph - it's clear to me that none of this is ready for public use.

We are so normalised to vehicular violence. This would never be allowed in any other sector, say healthcare for instance.

[0] https://www.ntsb.gov/news/events/Pages/2019-HWY18MH010-BMG.a...


Medical errors and omissions kill way more people than vehicles every year [1].

There's no evidence a human driver wouldn't have killed Elaine. An intoxicated one probably would have and we have direct evidence that a distracted one would have. On average two bicyclists get killed every day in the US. [2]

[1] https://www.hopkinsmedicine.org/news/media/releases/study_su...

[2] https://www.iihs.org/topics/fatality-statistics/detail/bicyc...


Which document describes where it "misclassified Elaine Herzberg so many times"?


Here is a revised link [1] to the full report. Table 1, on Page 10 shows a summary.

[1] https://dms.ntsb.gov/pubdms/search/document.cfm?docID=477717...


Wow, that was interesting.

At every change of identification of an object it becomes "static" again. That seems very stupid if you've already estimated the path of an object.
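
That reset is essentially a tracking-bookkeeping bug. A toy sketch of the failure mode (hypothetical code, not Uber's actual stack): if history is discarded whenever the classifier changes its mind, the object looks newly "static" after every relabel and its path can never be predicted.

    class TrackedObject:
        def __init__(self):
            self.label = None
            self.positions = []      # history of (t, x, y) observations

        def update(self, t, x, y, label):
            if label != self.label:
                self.label = label
                # BAD: self.positions = []   <- wiping history on each relabel
                # is what makes the object perpetually "static".
            self.positions.append((t, x, y))

        def velocity(self):
            if len(self.positions) < 2:
                return None          # not enough history to extrapolate a path
            (t0, x0, y0), (t1, x1, y1) = self.positions[-2], self.positions[-1]
            dt = t1 - t0
            return ((x1 - x0) / dt, (y1 - y0) / dt)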

The human driver didn't do too badly, I think. The auditory alert only came 0.2 s before impact, and they braked 0.7 s after impact.

From relaxing and playing with your phone to braking in only 0.9 seconds.

Of course they were supposed to be paying attention, but I fully understand how it would be difficult to pay attention for hours without having to do anything, then suddenly be expected to react immediately. That's the most dangerous thing about these half-driving cars.


To be fair, jaywalking at night away from any streetlights on a road with a 45 mph speed limit is a risky endeavor whether or not there are autonomous cars on the road. It seems likely the outcome would have been the same if the car was being operated by a typical human driver.


The car spotted the pedestrian 5.6 seconds before the crash, but didn't have the brains to know what to do.

The main argument for self driving cars is that they have better sensors and reaction time than humans, but clearly those things aren't enough.


I've only seen autopilot examples in places with uniform and fairly good infrastructure. Self-driving even in the near future will be a toy for the rich and geographically privileged.

That should trickle down over time, but the fact that a self-driving car is still a car won't. We need better and more efficient transport, and self-driving cars won't change that very much.


Traditional public transit only works for the geographically privileged. If you don't live along one of the few transit routes in a city, it doesn't work, and even when it does it can be quite inefficient.

In contrast, self-driving cars scale quite well especially to rural areas. They are point-to-point and on-demand. Low cost to operate compared to traditional public transit means that even rural cities could have a self-driving car based public transit option.


Because we are happy to accept bad consequences due to human error and human shortcomings. Even more so if the human with the errors and shortcomings also shares the risk (e.g. a distracted driver takes a risk themselves).

I think few are “opposed” to self driving cars, but I’m not optimistic about when we’ll see the tech being widespread (level 5 self driving), and I think the ethical can of worms might delay it even further.

I’m optimistic about cars that drive very safely for 95% of the way on 95% of all trips, with equipment that isn’t very expensive.


Lol. This is the kind of comment that I don't expect to see in hackernews, but then again it's about Tesla, so all reasonableness goes out the window.

> if you look at this video and compare the amount of stuff the camera catches AND processes, with the things you are seeing & processing, it's really hard for me to see me being better at it than a computer.

Such a handwavy argument is worth exactly squat. What does this mean? Brains are hugely complicated machines which we barely understand in any reasonable detail. You see a handful of lines and text across the screen and it's sufficient to make you go ":o"? Enough to conclude it must be better than a brain?

So far I only see a world of promises and predictions of "self-driving cars are 5 years away", but no real tangible hard evidence that self-driving cars are better than humans. Point me to that, not to a fancy video, before we can have a discussion.


> You see a handful of lines and text across the screen and it's sufficient to make you go ":o"? Enough to conclude it must be better than a brain?

I think he's referring to selective attention. Can't comment on the validity of this "study", but I recall it catching me off guard the first time I saw it. https://www.youtube.com/watch?v=vJG698U2Mvo


Self driving cars are already here, just not widely distributed


Show me a self-driving car functioning in the middle of a blizzard and we can start talking.

Southern California and the weather it experiences is a poor representation of what is required for self-driving to replace humans.


Isn't that asking for too much? Most humans can't drive safely in blizzards. And unlike self-driving cars, many humans are overconfident in their abilities. If a self-driving car recognizes dangerous conditions (poor visibility, malfunctioning sensors, etc), it can simply stop.

It seems as long as these machines are safer on average than humans, they're worth deploying.


> Most humans can't drive safely in blizzards.

Sure they can. During snow squalls, accident rates obviously go up (especially if the roads are icy or if it's the first one of the season), but most drivers are good at assessing risk and take appropriate action.

Importantly, they have the ability to decide that it's worth the risk to go out on the roads to pick up Sally from daycare (for example). In any case, if the car computer decides it won't drive, do you really think people aren't going to drive the car manually so that Sally can come home for the night? Of course they will. And they'll be fine.

Have you ever been on the freeway during white-out conditions before the ploughs have gotten a chance to come by? People slow down, turn on their hazards if necessary, and if the road markings aren't visible, naturally form two lanes instead of three (straddling the lanes). In snowy climates, a simple snow squall is something to talk about, but not stop you.


Your account of human drivers is hilariously utopian. I've driven in horrible conditions. I've watched people drive too fast and later have their car wrapped around the divider barrier, and I've watched people get stuck on a residential street, end up in the ditch, and then beg for someone else to go pick up their Sally.

People are awful in these conditions and exercise incredibly poor risk decisions.


You are implying that the inability of self-driving cars to drive in adverse conditions is somehow a positive when actually it is a serious limitation.

Sure, a bit of rain doesn't help the safety stats, but you can't just put the whole city on hold when there's a downpour or some fog.


> Have you ever been on the freeway during white-out conditions before the ploughs have gotten a chance to come by? People slow down, turn on their hazards if necessary, and if the road markings aren't visible, naturally form two lanes instead of three (straddling the lanes). In snowy climates, a simple snow squall is something to talk about, but not stop you.

I lived in snowy climates for a decade. Your summary completely disagrees with what I experienced. In whiteout conditions there were always cars that had gone off the road. Even after years of driving in the snow, people get into accidents far more often than in dry conditions.

And don’t forget: most of the US population has little to no experience driving in winter conditions. If the Bay Area got an inch of snow, there would be more collisions than a demolition derby.


Really? By your logic Minnesota, Iowa, North Dakota, South Dakota, Colorado, Montana, Idaho, Wisconsin, etc. should shut down for the winter?

Businesses don't shut down just because the forecast says it might snow. Plows can't get out fast enough to clear the road to a point the lines would be readable/visible for a self-driving car. Humans may not be perfect but we find a way to get home in the snow without the massive casualties you're implying should be happening.


Most drivers don’t live in Minnesota, Iowa, North Dakota, South Dakota, Colorado, Montana, Idaho, Wisconsin, etc. If you want to see how the average driver performs on icy roads, look at videos from Portland and Seattle. Both of those cities practically shut down when there is snow on the roads.


Ok - Illinois, Michigan, Pennsylvania, New York, etc.

Seattle and Portland shut down because they rarely get snow. You might as well have listed Atlanta.


Isn’t it enough if it recognises the limits of its own capability and safely asks you to take over?


No, actually it is not. It basically throws the operator into the worst possible conditions. Remember that the whole point of self driving cars is to eliminate the human driver. A driver who doesn’t drive is not capable of taking over from a machine which just gave up on recognizing what’s happening due to the severe weather.

An example from my own backyard. My car is equipped only with ACC and lane assist. I already had a couple of situations where in severe rain the sensors are no longer able to „see”. The car throws a fit and I have to take over immediately. These things are not fit for purpose. They are not able to judge „a big surface covered with water, risk of aquaplaning” or „surface covered with snow, temp dropping, possibility of black ice” or even simple mud / sand.

Maybe if one lives in Nevada, Arizona, or California, where it’s always sunny and rains once in ten years, one can get swayed into believing that these things work. However, add rain, snow, black ice or a mix of those and it very quickly becomes obvious that they do not.

These cars, today, are just dumb machines without judgement. Never am I going to sit in an autonomous car outside of a controlled environment like an airport / strict city centre. Outside of the city, no way.


Computers have already been judging the conditions you cite better than humans for a long time. Your car is equipped with anti-slip systems that do a far better job both controlling the car in bad conditions and recognizing those conditions than you can.

Frankly, I'd trust a computer over the average human any day, given the horrible judgement I've seen most people exercise on the roads.


> Computers have already been judging the conditions you cite better than humans for a long time. Your car is equipped with anti-slip systems that do a far better job both controlling the car in bad conditions and recognizing those conditions than you can.

Only when they find themselves right on that surface. They’re not able to predict this before the situation occurs.


I can't drive, so no, not really.


Doesn't have to be sunny Southern California. There are tons of Teslas in PNW states where it rains all the time, and it still works great. I regularly commute in the rain and have encountered zero issues so far. I know it is anecdata, but at least it is supported by other numbers, such as the high number of Teslas on the roads here.


I was thinking the same thing. I would think they thought to write in some traction detection, but I've never seen a demonstration of the like. They need to test cars in the heart of MN during the winter, when you can't see the lines on the road, and sometimes not even the car in front of you.


That is a situation where humans routinely crash.

The machines are better drivers, in that they admit defeat and stop.


Or simply slow down. I dread having to sit in a car that decided to just stop, risking freezing. You don’t know if you'll sit there for an hour, a day, or a week.


I don’t think object recognition is the important bit. Humans are unbelievably adept at filtering out noise subconsciously and focusing on important things. We’ve evolved for this over many millennia. We will never be able to compete with machines for broad, shallow intelligence because that scales relatively linearly with CPU.

The real issue is decision pathways which you can’t depict in a video, and are the real issue with self driving. The sheer solution space is so massive that even simulating with current DL techniques isn’t feasible with current hardware.


Driving appeals, very much, to the ego.

Having freedom and control over a physical system that is bigger and more powerful than you, is a liberating experience for those whose spirit is used to a smaller, lighter frame.

Car manufacturers know this - they cater their interior design to appeal to this factor - and it is indeed a cultural phenomenon well and truly imprinted on Western society.

"Freedom means, freedom to drive yourself anywhere you want."

This 'liberation' has become a chain. Real human beings spend hours in these containers, driving themselves somewhere, feeling powerful.. driving to their work or home, or whatever.

2 to 4 hours a day, on the mobile throne.

When computers start to take that freedom, it's going to trigger quite a few freak-outs. And really, why shouldn't it?

Ultimately, automotive industries are hedging their bets that cars will no longer be personal possessions, but rather something you summon on an as-needed basis.

This would be a highly desirable condition for the rent-makers/lease-share holders, who are really pushing this forward - along with their buddies in the insurance mega-industries, who stand to gain a great deal more control over their customers lives when it comes to computerised automation.

Myself, I'd personally prefer cars were user-serviceable and user-operable, under no conditions do I need a computer, just make me a better car. Preferably electric, simple as possible, and ships with a manual.


> But if you look at this video and compare the amount of stuff the camera catches AND processes, with the things you are seeing & processing, it's really hard for me to see me being better at it than a computer.

I wonder if the fear has more to do with it simply being different. The set of things that a CV system misses is likely quite different from the set of things that a human misses. A CV system is likely to make mistakes that a human would never make, and even if those mistakes are far fewer in number, they make it look bad to humans.

We've all seen electronic devices do this. When an Amazon Echo interprets the sound of a tea kettle as its wake word, we're left with no explanation, and no way to rationalize it other than to assume the device is, in some way, bad at its job. Surely a human would never make this mistake, we think, and so therefore the device must be dumber than a human at word detection. And yet, there surely exists a class of noises that a human would mistake for a word that an Echo would not - but that doesn't factor in.

Anyway, this means that a CV-controlled car would behave strangely compared to a human. When one's intuition about the car's behavior, tuned from years of experience both driving and observing cars, differs from the car's actual behavior, it becomes harder to make predictions about how the car will behave, and from there comes fear and a sense of uneasiness. And when it makes a rare mistake that a human would never make, because its perceptual system is entirely different, we assume it's dumber than a human altogether.


> And yet, there surely exists a class of noises that a human would mistake for a word that an Echo would not

This fits the bill: https://en.wikipedia.org/wiki/Electronic_voice_phenomenon


Driving is essentially a spiritual activity.

By normal standards, it's insane to expect non-experts to maintain and operate heavy machinery, in public streets, half asleep or half paying attention.

It's also a pretty absurd way of providing mass transport in most places.

So if you're going to throw out normal risk assessment as a starting point, where do you go from there? People know how bad quality software can be, how ubiquitous bugs are, and how often programs are deep morasses of strange and perverse hacks, and how it only doesn't (normally) kill people because the stakes are small. Then Uber's car actually did kill someone, and predictably, the familiar problem of bad, buggy software looms larger, as a risk, than the dull and more or less commonly accepted fact of cars hitting people all the time.

Very common threats, like pneumonia, poverty, or cancer, tend to get discounted in comparison to symbolic, unusual threats - and I think it's pretty unsurprising that on HN, people feel threatened by the possibility of bugs not only ruining their day, but actually running them over.


If self-driving cars become the majority of cars, how long until it is deemed that "manual operated cars" are unsafe and should be banned?

For example, if it were the case that self-driving cars were on average 10% safer in most situations.

Wouldn't there be pressure to reduce deaths and injuries by banning manually operated cars?

What do you think about self-driving cars being a source of data for private and government organizations on where you are at any time, or your travel patterns?

Do you think there would be a time when people who displeased the government could be essentially stranded because they can't get access to the self-driving network?

I never expected the level of surveillance and scanning happening in the UK/China, etc.

Are self-driving cars (possibly with biometric face and fingerprint scanners) another possible entry point into surveillance?

I've been wondering about the privacy / control aspects. What do you think?


It surprised me how long it took it to realize the road was wet.


I was pulling up to an intersection on a wet road and realized I was seeing reflections of oncoming cars underneath the parked cars. Additionally I was seeing oncoming cars through the windows of some of the parked cars.

I realized that no AI has any understanding of these things. Sure it knows not to pull out from other methods. But I knew it wasn't even safe to look.

Note - our roads are a bit tighter than most. You have to pull car lengths beyond the stop signs to know if it is safe to proceed.


How do they assess stopping distances without accounting for surface wetness? Is grip established through traction control systems somehow?


The wheels have to slip to figure out that the roads are slick. Even then it can be difficult to tell whether or not you just drove over some bumps.
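
For what it's worth, the usual signal is a slip ratio - driven-wheel speed compared against vehicle speed (e.g. from the undriven wheels or GPS). A minimal sketch, with an assumed threshold rather than any manufacturer's tuning:

    def slip_ratio(wheel_speed_mps: float, vehicle_speed_mps: float) -> float:
        if vehicle_speed_mps < 0.5:          # avoid dividing by ~zero at rest
            return 0.0
        return (wheel_speed_mps - vehicle_speed_mps) / vehicle_speed_mps

    def surface_seems_slick(wheel_speed_mps, vehicle_speed_mps, threshold=0.15):
        return slip_ratio(wheel_speed_mps, vehicle_speed_mps) > threshold

    print(surface_seems_slick(23.5, 20.0))   # True: wheels spinning ~18% faster

And as the comment says, it only tells you anything once the wheels are already slipping; it can't warn you about the slick patch you haven't reached yet.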


I don't think anyone is against self-driving cars if you assume they will work. It's whether they will work, and what happens when they screw up, that is the pertinent issue.


Not until they run over someone important, and not just some grandmother in Arizona, will the industry realize what they're doing here.


> I just find it very hard to understand the perspective of those that are opposed to self driving cars.

Once you observe how these things behave in severe weather, you get a little bit more perspective on why it simply can’t work. Not with today’s infrastructure, not with today’s technology, definitely not in mixed human / autonomous driver traffic.


> I just find it very hard to understand the perspective of those that are opposed to self driving cars

Consider it in terms of human agency: People expect the use of "consumer electronics" and "appliances" to be mostly unpleasant, frustrating, buggy, full of arbitrary restrictions, and overmonetized.


I'm not against it, I just think that it requires AGI and humanity is nowhere close to developing AGI. I think that AGI is somewhere between "50 years away" and "will never be developed by humans".


People growing up 50 years from now will think the idea that we used to pilot our own cars is as ridiculous as the idea that people used to light their homes with whale oil.


I am in full agreement with you.

Also, it seems like the number of drivers on their phones, texting and driving or what have you, is at an all-time high today. I work in an urban area, and the number of times I've missed a light because someone stopped at a green is on their phone has begun to give me legitimate road rage that I had never experienced in my life until now, in my late 30s. I would trust the car more than the average person at this point.


I think people fixate on “better” without really defining what that means. Imagine it as bell curves. The mean of the computer might be worse than the mean of the human, but the variance is likely to be a lot tighter, which means less of the computer bell curve will be in the “danger zone” where accidents happen.

It’s OK if the human can technically do better, so long as the machine is more likely to avoid severe failure.
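
A toy illustration of that point with made-up numbers - a one-dimensional "driving quality" score where anything below zero risks a crash. A slightly worse mean with a much tighter spread can still leave far less probability mass in the danger zone:

    from math import erf, sqrt

    def p_below(threshold, mean, std):
        # P(score < threshold) for a normal distribution.
        return 0.5 * (1.0 + erf((threshold - mean) / (std * sqrt(2.0))))

    danger = 0.0
    print(p_below(danger, mean=3.0, std=2.0))   # "human": ~6.7% in the danger zone
    print(p_below(danger, mean=2.5, std=0.5))   # "machine": ~0.00003% despite the lower mean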


Big difference between being "against" self-driving cars and simply not buying into the hype.

I don't buy into the hype, and I certainly don't buy what Tesla is promising.


Brings back so many memories of 80s shows.


Awesome! Now do it again and when it passes one set of parked cars on the side, have someone roll a soccer ball in front of the Tesla. Will the car assume a child will follow from in between the cars or not?


> Will the car assume a child will follow from in between the cars or not?

Will most human drivers?


Most will assume this; the correct question would be: would most humans react fast enough?


> Most will assume this,

I've been driving a long time. I rarely see people hit their brakes when a ball comes out, or even slow down. My anecdotal evidence from 300,000 miles of driving says this isn't the case.


Should we not expect both to?


I would expect both to, but for me the baseline is "as good as the average human". So in that vein, not predicting a child after a ball is still better than the baseline.


A neural network needs to be taught this and then all cars with that training will respond correctly forever.

Every individual human would need to learn this separately from experience, which would require far more soccer balls (and children).


As you have implied in your example, humans cannot make observations about the present and reason about the near future. As a result, they must learn the consequences of any action by experiencing it.

So the situation is really much worse than you describe. Even if we invested in demonstrating the "soccer ball and child" scenario for all driving students, they wouldn't be able to apply the experience to tennis balls and dogs, or a child entering the road without any sort of ball. Teaching people to drive would require an exhaustive course in every conceivable scenario that might arise while driving. You can see why it's an intractable problem.


I've yet to meet a human that needs to observe enormous numbers of balls and children against a number of different backdrops before they grasp the concept that they're different from the background environments, never mind potentially linked to unsafe road use.

The conclusion that a neural network classifies stuff 'correctly forever' is also not one supported by the current state of computer vision.


By "taught this" do you mean kill a kid?


Run this scenario in a thousand variants in sims.


No. He means see a human avoid killing a kid.


It's not that hard for people to learn; see a ball, expect danger and take action.

The problem with people is distraction. The problem with code (AI-ish) is people.


Say what you want, but you would never be able to avoid a crash like this: https://www.youtube.com/watch?v=oqHtavx-eec

Or this https://www.youtube.com/watch?v=2uqA9bqICEk


I think the first might just look difficult to avoid because of the lack of peripheral view in the video - a driver would have seen the other car much earlier than the video suggests.

As for the second one, looks like the other car just passed in front of it.


Not if a big car is next to you. And even then many people wouldn't be able to react. We can argue this case, but it's pretty clear that collision avoidance is, overall, a very helpful feature.


Which sensor has primacy? Radar, lidar, or camera?

I recall it being mentioned somewhere that, after a well-publicized tragedy in which a car hit the side of a white truck on a bright day, Tesla was moving to radar?

And yet a lot of the cues this awesome video shows seem to be camera-based line detection etc.?


If you look at the overlay, the algorithm seems extra paranoid about the following aspects: HIGH_BEAM, BLINDED, RAINING, TIRE_SPRAY, which all sound like camera weaknesses, not lidar/radar weaknesses. My guess is that this is camera-based still.

Also interesting is the vision fps of ~13. That's 75ms latency.
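
(Quick arithmetic check - the frame period alone puts a floor on end-to-end latency:

    fps = 13
    print(1000.0 / fps)   # ~77 ms between processed frames

so ~13 fps is in the ballpark of the 75 ms figure before any processing delay is even added.)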


75ms sounds like a lot, but it isn't so bad - average human reaction time is about 200ms. Source:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4456887/


Tesla doesn't use lidar. They have always used radar, but radar can't always tell the difference between a stationary metal sign near/over the road and a stalled car in the middle of the lane, so for stationary objects they need the cameras. This is why every adaptive cruise control system (not just Tesla's) has a warning that it may not detect stationary objects.

https://arstechnica.com/cars/2018/06/why-emergency-braking-s...


Tesla has a whitelist of problematic stationary objects. Pray you don't ever get into an accident with a Tesla vehicle at those places.


Though only(?) Tesla pretends that their adaptive cruise control is full blown self-driving.


I'm not sure about what they've said in the past, but their site is pretty clear about the current state of Autopilot / Full Self Driving: https://www.tesla.com/model3/design#autopilot and for a more in-depth description: https://www.tesla.com/support/autopilot

Adaptive ("Traffic-aware") cruise control is actually an Autopilot feature, not FSD. Of course, Autopilot is a debatable name in itself, but Tesla does not market cruise control as FSD, at least not anymore. (I don't know if they ever did or not — honestly I didn't follow Tesla news until like 3 months ago.)

The closest thing to self-driving you get with the FSD package is "Navigate on Autopilot", which is something akin to "self-driving on highways only in good weather conditions". It can also drive around a parking lot on its own if you're supervising it, and it can supposedly park itself, though I've heard Autopark does not work well.


Source?

If you simply refer to it being called "autopilot", c'mon, that's not too serious. The word "autopilot" does not automatically mean that it is full self-driving. None of the Tesla drivers or potential Tesla buyers realistically think that it has level 5 self-driving yet.

UPD: The buy page literally says:

"The currently enabled features require active driver supervision and do not make the vehicle autonomous. The activation and use of these features are dependent on achieving reliability far in excess of human drivers as demonstrated by billions of miles of experience, as well as regulatory approval, which may take longer in some jurisdictions. As these self-driving features evolve, your car will be continuously upgraded through over-the-air software updates."


>Source? If you simply refer to it being called "autopilot", cmon that's not too serious

The Model 3 Builder literally says: "Full Self-Driving Capability".


Autopilot is a separate feature from FSD. It also specifically lists out exactly what it can do. And includes a whole paragraph on limitations which starts with "The currently enabled features require active driver supervision and do not make the vehicle autonomous."


Did you just read the one sentence and not any other text on that page? "Coming later this year: Automatic driving on city streets."

Oh and how about "The currently enabled features require active driver supervision and do not make the vehicle autonomous. The activation and use of these features are dependent on achieving reliability far in excess of human drivers as demonstrated by billions of miles of experience, as well as regulatory approval, which may take longer in some jurisdictions. As these self-driving features evolve, your car will be continuously upgraded through over-the-air software updates.".

- Which I just copy-pasted from the exact page you are talking about. Who is being disingenuous here?

I honestly don't think that there is any significant number of potential buyers who go onto a website to buy a car, and pull one sentence from that page out of context, ignore the rest of the text and then make their decision based on that.


You mean the smaller-typeface "disclaimer text" buried below the fold, well under the price and the "Prices are likely to increase in the future" ("BUY NOW!"), and below things like:

> Summon: your parked car will come find you anywhere in a parking lot. Really.

(It may plow into a street to do so, but hey. And you should really have "full attention" on the car. Unless you're reading our marketing copy which says to feel free to "attend to a fussy child")

and so on. Ninety per cent of the length of that page is selling promises that aren't there yet, and the fine print at the bottom is taking most of it away, for some indeterminate period of time.


The problem with that collision is that neither the radar nor the camera saw the obstacle. To the camera it looked too much like the sky, as you say. To the radar it appeared to be a stationary object, and so probably an overhead sign or something rather than another vehicle. Once the car gets close enough that the radar's cone only includes dangerous obstacles, it starts applying the brakes, but in that case it was too late.

In general, though, the art of combining information from multiple sensors is called sensor fusion, and it's a fairly well-understood problem. Well, sometimes even good engineers make mistakes - see Schiaparelli and the saturated Kalman filter inputs - but we have a good idea of what those mistakes can be for different schemes.
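
For readers unfamiliar with the idea, the core of a fusion step is just inverse-variance weighting - the single measurement-update step at the heart of a Kalman filter. A minimal sketch with made-up noise figures (not any production system's numbers):

    def fuse(radar_m, radar_var, camera_m, camera_var):
        # The less noisy sensor gets more weight.
        w_radar = camera_var / (radar_var + camera_var)
        fused = w_radar * radar_m + (1.0 - w_radar) * camera_m
        fused_var = (radar_var * camera_var) / (radar_var + camera_var)
        return fused, fused_var

    # Radar says 42 m (tight), camera says 48 m (looser): the fused estimate
    # sits near the radar value but is pulled slightly toward the camera.
    print(fuse(42.0, 0.5, 48.0, 4.0))   # ~(42.67, 0.44)

The hard part is everything around this step: associating detections over time, handling timing, and deciding which sensor to believe when they flatly disagree.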

EDIT: Mistyped radar as lidar, fixed.


Tesla doesn't use LiDAR.

And I assure you that sensor fusion is not a well understood problem. One of the biggest challenges in autonomous driving is how to unify all of the different NN outputs.


They use both radar and cameras, but primarily cameras. Lines, road markings, stop lights, and street signs can only be detected with vision, but cars+pedestrians can (usually) be verified by both radar and cameras. In fact, they’ve mentioned that they use radar data to help train the camera neural nets to predict distances more accurately. However, radar alone isn’t sufficient for detecting obstacles — it sometimes misses vehicles with a lot of ground clearance, and sometimes registers false positives when driving underneath bridges and overpasses, resulting in phantom braking.


IIRC tesla still doesn't use LIDAR. Just wanted to point that out.


Correct. I believe they use vision along with front facing radar only



