Safe trailing distance is determined by safe driving expectations. If everybody in the world started doing hard and sudden brakes, then the safe trailing distance would have to be extended.
While ideally everybody would always leave enough space for hard, sudden braking (e.g. in case a deer runs across the road), the reality is that driving has a lot of loose rules, and you're expected to behave rationally and similarly to the other drivers around you. For example, it's usually fine to speed a little in the left lane if most people deem it safe to do so. And there's no strict definition of "reckless driving" despite it being illegal. Driving is an intricate social dance with millions of actors; if you are the one acting way out of line, you are liable, and braking hard and suddenly for no reason is definitely out of line.
(That's not to say this is the best system, though. An autonomous system with well-defined behaviors, less ambiguity, and less social guesswork would probably be safer. But such a system would require full participation, not just one or two Tesla drivers choosing to let Elon take the wheel.)
> Safe trailing distance is determined by safe driving expectations.
Respectfully, it's not. A safe trailing distance is defined as enough room for you to stop even if the car in front of you stops abruptly (for example, if a kid or an animal runs into the street), factoring in your reaction time and your vehicle's stopping distance. The buffer isn't for normal variations in the speed of traffic; it's for anomalies.
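The "reaction time plus stopping distance" reasoning can be sketched numerically. The friction coefficient and reaction time below are illustrative assumptions, not values from any traffic code:

```python
# Stopping distance = distance covered during the driver's reaction time
# plus the braking distance v^2 / (2 * mu * g).
# mu (tire-road friction) and reaction_s are illustrative assumptions.

G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance(speed_ms: float, reaction_s: float = 1.5, mu: float = 0.7) -> float:
    reaction_distance = speed_ms * reaction_s
    braking_distance = speed_ms ** 2 / (2 * mu * G)
    return reaction_distance + braking_distance

# At highway speed (~30 m/s, roughly 67 mph) the total comes out to
# on the order of 100+ meters, which is why the buffer is sized for
# anomalies rather than normal speed variation.
print(round(stopping_distance(30.0), 1))
```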
> the reality is that driving has a lot of loose rules, and you're expected to behave rationally and similarly to the other drivers on the road
This isn't a valid excuse for unsafe following.
> Driving is an intricate social dance with millions of actors, and if you are the one acting way out of line, you are liable
I'm not sure in what sense you mean "liable" here, but it's certainly not true in the legal sense--you're liable if you violate the law, for example by following too closely (the law doesn't seem to care whether others follow too closely or not).
> An autonomous system with well defined behaviors, less ambiguity, and less social guesswork, would probably be safer.
It doesn't require full participation, it merely requires fewer fatalities per million miles than human drivers. That's the point at which we should legalize AI, and if it becomes significantly safer, that's the point at which we should mandate AI in the general case.

And then something really interesting happens--we can start to do away with those ambiguities that make AI hard, because human drivers become the exception rather than the rule. Rather than informal social systems governing driving, driving becomes a formal digital protocol. Maybe rather than relying on computer vision (or computer vision alone), cars are loaded with a national database containing the right-of-way rules, speed limits, scheduled maintenance, etc, and/or they use other sensors. These things in turn make AI safer (discrete signals = fewer failures due to computer vision errors). Safe and ubiquitous AI could lead to much narrower roads, freeing up more space for pedestrian traffic.
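The "safer per mile" threshold is just a rate comparison. The figures below are made up for illustration; real human-driver rates are published per 100M vehicle miles by agencies like NHTSA:

```python
# Toy sketch of the legalization threshold: AI clears the bar when its
# fatality rate per mile is lower than the human baseline.
# Both counts and mileages here are hypothetical.

def fatalities_per_million_miles(fatalities: int, miles: float) -> float:
    return fatalities / (miles / 1_000_000)

human_rate = fatalities_per_million_miles(12, 1_000_000_000)  # hypothetical
ai_rate = fatalities_per_million_miles(7, 1_000_000_000)      # hypothetical

print(ai_rate < human_rate)  # True: AI clears the bar in this toy example
```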
Moreover, since cars are safer, they don't need to be so heavily armored--rather, they become smaller and lighter, which further increases safety by improving survivability in accidents involving pedestrians or cyclists.
Moreover, those strict protocols could make transportation more efficient (a given road gains throughput and sheds latency without adding lanes or compromising safety) because (1) cars are smaller and lighter as previously discussed, (2) AI can travel faster with less buffer due to its faster reaction times, and (3) AI can coordinate, so you don't have inefficient traffic patterns like "4 semis driving in quadruple file" or "one reckless ass hat changing lanes frequently, abruptly, and aggressively, causing everyone to slow down".
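Point (2) can be sketched with a back-of-envelope throughput model: each car occupies its own length plus the gap it keeps, so shrinking the reaction-time buffer raises how many vehicles per hour a lane can carry. The car length and headways below are illustrative assumptions:

```python
# Vehicles per hour through one lane: speed divided by the road space
# each car consumes (its own length plus speed * headway).
# car_length_m and the headway values are illustrative assumptions.

def lane_throughput(speed_ms: float, headway_s: float, car_length_m: float = 4.5) -> float:
    spacing_m = speed_ms * headway_s + car_length_m
    return 3600 * speed_ms / spacing_m

human = lane_throughput(30.0, headway_s=1.5)  # ~1.5 s human reaction buffer
ai = lane_throughput(30.0, headway_s=0.5)     # hypothetical shorter AI buffer

# Halving-plus the headway more than doubles lane capacity in this sketch.
print(round(human), round(ai))
```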
Without human drivers, maybe taxis become cost-competitive with public transit (or better) and fewer people need to own a car (and the cars they own needn't be so large since one can easily and cheaply rent a larger car for transporting large items). And for those who do own their own car, those cars can drop them off at their destination and park farther away (no need for so much parking downtown).
This is all just a sketch of some of the possibilities, but the point is that there's a horizon (a critical mass of AI cars) that could manifest in a cascade of other significant changes.