That's basically the AI Rubicon everywhere, from flying planes to programming: soon there'll be no real fallback. When AI fails, you can't just put the controls in front of a person and expect them to have the expertise to respond.
Really, what seems on the horizon is a cliff of techno risks that are less "AI will take over the world" and more "AI will be so integral to functional humanity that the actual risks become so diffuse that no one can stop them."
So it's more a conceptual question: Will AI actually make driving cars safer, or will AI fatalities just be so randomly stochastic that they become more acceptable?
>So it's more a conceptual question: Will AI actually make driving cars safer, or will AI fatalities just be so randomly stochastic that they become more acceptable?
I would argue that we already accept relatively random car fatalities at a huge scale and simply engage in post-hoc rationalization of the why and how of the individual accidents that affect us personally. If we can drastically reduce the rate of accidents, the remaining ones will be rationalized the same way we've always rationalized accidents.
This is about a functional society in which people fundamentally have legal recourse to blame one another for things.
Having fallbacks, e.g. pilots in the cockpit, is not a long-term strategy for AI-flown planes, because those pilots will functionally never be sufficiently trained for the actual failure scenarios.