The only ethical decision a self-driving car has to make is not to actively kill people. What you are proposing sounds to me like a way to stealthily kill people you don't like with self-driving vehicles, and then use the "ethical decision engine" to gain plausible deniability. To make decisions about "social status" you need a database of people and their "social status". So where are you going to find a database of people with low "social status"? Well, every prison and police station has one. And for good measure, we could also add some skin-color detection, because that kind of analysis can be done in real time.
Or we could just build better self-driving cars and accept that some people will be killed, but in exchange they retain the agency to improve their own chances of survival, instead of being randomly killed on the sidewalk because some idiot wanted to shave off 3 seconds by jaywalking. Just do stupid shit; the car will kill someone else anyway.
The idea that we should encourage a moral hazard like this strikes me as incredibly disgusting.