It's pretty obvious from the context of the thread that "vision quality" refers to LIDAR vs cameras, and that human vision is good enough, not that being legally blind somehow doesn't affect driving quality.
Maybe if your standard is "scheduled commercial passenger flights" level of safety.
>Otherwise why not just drive the car yourself?
There's plenty of reasons why humans can be more dangerous outside of vision quality. For instance, being distracted, poor reaction times, or not being able to monitor all angles simultaneously.
> For instance, being distracted, poor reaction times, or not being able to monitor all angles simultaneously.
As a pitch for self driving, it's going to be a long time before I trust a computer to do the above better than I do. At the very least, adding sensors I don't have access to would give me assurance the car won't drive into a wall with a road painted on it. I don't know how on earth you'd market self-driving as competent without being absurdly conservative about what functionality you claim to be able to deliver. Aggregate statistics about safety aren't going to make me feel emotionally stable when I am familiar with how jerky and skittish the driving is under visually confusing conditions.
Perhaps vision is sufficient, but it seems hopelessly optimistic to expect to be able to pitch it without some core improvement over human driving (aside from my being able to take my hands off the wheel).
>At the very least adding sensors I don't have access to will give me assurance the car won't drive into a wall with a road painted on it.
This is as relevant as self-driving cars not being able to detect anti-tank mines. If you want to intentionally cause harm, there are far easier ways than erecting a wall in the middle of a roadway and painting a mural on it. If you're worried about it happening accidentally, the fact that there are no incidents suggests it's at least unlikely enough not to worry about.
>Aggregate statistics about safety aren't going to make me feel emotionally stable when I am familiar with how jerky and skittish the driving is under visually confusing driving conditions.
Sounds like this is less about the tech used (i.e. cameras vs lidar) than about how "smooth" the car appears to behave.
But Tesla Vision currently falls below the minimum legal human vision requirements, and has historically been sold despite being nearly legally blind.
Driving requirements in many states demand 20/40 vision in at least one eye [1]. 20/20 visual acuity corresponds to an arc-resolution of approximately 1 arc-minute [2], thus 20/40 vision corresponds to an arc-resolution of approximately 2 arc-minutes, or 30 pixels per degree of field of view. Legally blind is usually cited as approximately 20/200, which corresponds to approximately 10 arc-minutes, or 6 pixels per degree of field of view.
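The conversion above is easy to sanity-check. A back-of-the-envelope Python sketch, assuming (as stated above) that 20/20 acuity resolves 1 arc-minute, i.e. 60 pixels per degree:

```python
def snellen_to_ppd(denominator: int) -> float:
    """Convert a Snellen acuity of 20/<denominator> to pixels per degree.

    Assumption: 20/20 resolves 1 arc-minute of detail (60 px/deg), and
    the resolvable detail scales linearly with the Snellen denominator.
    """
    arc_minutes = denominator / 20  # smallest resolvable detail, in arc-minutes
    return 60 / arc_minutes         # 60 arc-minutes per degree

print(snellen_to_ppd(40))   # 20/40 driving minimum -> 30.0 px/deg
print(snellen_to_ppd(200))  # 20/200 legal blindness -> 6.0 px/deg
```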
Tesla Vision HW3 contains 3 adjacent forward cameras at different focal lengths, and Tesla Vision HW4 contains 2 adjacent forward cameras at different focal lengths; because the focal lengths differ, those cameras cannot be used in conjunction to establish binocular vision [3]. As such, we should view each camera as a zero-redundancy single sensor, i.e. a "single-eye" case.
We observe that Tesla Vision HW3 has a 35 degree camera rated for 250m, a 50 degree camera for 150m, and a 120 degree camera for 60m [4]. Tesla Vision HW4 has a 50 degree camera for 150m and a 120 degree camera for 60m [4]. A speed of 100 km/h corresponds to ~28 m/s, so those ranges give lead times of ~9s, ~5s, and ~2s respectively. Standard safe driving practice dictates a 2-3 second following distance, so most maneuvers would be dictated by the 60m camera and predictive maneuvers by the 150m camera.
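The lead times work out as follows (a trivial sketch; the camera ranges are the figures cited above from [4]):

```python
# Lead time each forward camera's rated range buys at highway speed.
SPEED_MS = 100 / 3.6  # 100 km/h in m/s, ~27.8

for range_m in (250, 150, 60):
    lead_s = range_m / SPEED_MS
    print(f"{range_m:3d} m range -> {lead_s:.1f} s of lead time")
# 250 m -> 9.0 s, 150 m -> 5.4 s, 60 m -> 2.2 s
```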
We observe that the HW3 forward cameras have a horizontal resolution of 1280 pixels, resulting in an arc-resolution of ~25.6 pixels per degree for the 150m camera and ~11 pixels per degree for the 60m camera, the camera used for the majority of actions. Both values are below the minimum vision requirements for driving in most states, with the wide-angle view within a factor of two of being considered legally blind.
We observe that the HW4 forward cameras have a horizontal resolution of 2896 pixels, resulting in an arc-resolution of ~58 pixels per degree for the 150m camera and ~24 pixels per degree for the 60m camera. The 60m camera, which should be the primary camera for most maneuvers, still fails to meet minimum vision requirements in most states.
It is important to note that there are literally hundreds of thousands, if not millions, of HW3 vehicles on the road using sensors that fail to meet minimum vision requirements. Tesla determined that a product failing to meet minimum vision requirements was fit for use and sold it for its own enrichment. This is the same company that convinced customers to purchase these systems by promising, in 2016 while delivering HW2: "We are excited to announce that, as of today, all Tesla vehicles produced in our factory – including Model 3 – will have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver." [5] The systems actually delivered clearly do not reach even minimum vision requirements and are, in fact, nearly legally blind.
>We observe that the HW3 forward cameras have a horizontal resolution of 1280 pixels resulting in a arc-resolution of ~25.6 pixels per degree for the 150m camera and ~11 pixels per degree for the 60m camera, the camera used for the majority of actions. Both values are below minimum vision requirements for driving with most states with the wide angle view within a factor of two of being considered legally blind.
This isn't as much of a slam dunk as you think it is. The fallacy is assuming that visual acuity requirements are chosen because they're required for safe maneuvering, when in reality they're likely chosen for other tasks, like reading signs. A Tesla doesn't have to do those things, so it can potentially get away with lower visual acuity. Moreover, if you look at camera feeds from HW3/HW4 hardware, you'll see they're totally serviceable for discerning cars. It definitely doesn't feel like I'm driving "legally blind" or whatever.
It's significant enough to directly show that degraded vision leads to worse driving. That matters a great deal when deciding whether a driving system should rely solely on vision, which can degrade.
Never driven before?