I feel the author failed to address the crux of why people argue that "non-statistical" bias is bad – that we should be judged by our actions, and not by factors out of our control such as race, class, the family we were born into, or where we were born.
If we included every aspect of a person in some statistical model, we might discover "uncomfortable truths" that hold for the general population. But these truths, while statistically correct, may fail our test for what we consider philosophically fair, and ultimately undermine an individual's agency to act independently.
So perhaps in your experiment the problem is that our feature selection doesn't reflect the values we'd like to uphold, and that something like "had lead poisoning as a child" is not a sound feature to include in our model because it measures an aspect of a person outside their control. Instead, maybe our feature set should only include facets that are under the individual's control: community service, whether they still associate with other criminals, whether they have or are pursuing an education, whether they have children to care for, etc. (or some other feature set that's more thought out and sound, but you get the gist).
This still may not have as good accuracy as a model that included other features about the person, but it's arguable that this system would be fairer, especially compared to a model that uses more features but is artificially fudged to satisfy some prior about what we consider fair/unbiased.
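To make that concrete, here's a rough sketch of what I mean (the dataset, column names, and the choice of a scikit-learn logistic regression are all hypothetical, just for illustration): train only on columns the individual can influence, deliberately leave the demographic and circumstantial ones out, and accept whatever accuracy that costs.

    # Sketch: fit a parole/recidivism model using only features an individual
    # can reasonably be held responsible for. All column names are hypothetical.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    df = pd.read_csv("parole_records.csv")  # hypothetical dataset

    # Features under the individual's control (the proposed feature set).
    controllable = ["community_service_hours", "associates_with_offenders",
                    "pursuing_education", "has_dependents"]

    # Demographic/circumstantial columns deliberately excluded from X.
    excluded = ["race", "gender", "childhood_neighborhood_income", "lead_exposure"]

    X = df[controllable]
    y = df["reoffended"]

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)

    print("Accuracy on controllable features only:",
          accuracy_score(y_test, model.predict(X_test)))

Fitting the same model on the full feature set would tell you exactly how much accuracy this restriction gives up, which is the trade-off I'm arguing is worth making.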
>I feel the author failed to address the crux of why people argue that "non-statistical" bias is bad – that we should be judged by our actions, and not by factors out of our control such as race, class, the family we were born into, or where we were born.
This is exactly what the author is talking about. You are comparing the predictions against your fantasy of a world where these aspects do not matter because they're not "fair". When they don't match up, you call the predictions biased. But these factors outside our control do matter, accounting for them does not introduce bias, and averting our eyes will not change that fact.
I'm not claiming that we should ignore these aspects and lie to ourselves about reality; in fact, I'm acknowledging that these relationships exist. My larger point is that the author is using the technical meaning of statistical bias to dismiss what journalists and laypersons consider bias, without addressing why they are concerned with bias in the first place.
My suggestion is that we should use a better feature set that looks only at aspects we can reasonably hold an individual responsible for, rather than demographic information that is outside an individual's control. If we have two convicted criminals with similar crimes, behaviors, and histories, but one is white and grew up in a wealthy neighborhood while the other is black and grew up in a poorer town, why should the former be granted a higher probability of parole than the latter? Why should either of them be held responsible for the actions of others? Even if, in expectation, people from the latter demographic were more likely to reoffend than the former, that is not justice – it undermines liberty.
If "these truths, while statistically correct, may fail our test for what we consider to be philosophically fair" then perhaps it is your philosophy of 'fairness' that is biased and incorrect.
Would you like to point out how it's biased and incorrect? My latter claim is that we should base features on aspects that an individual can reasonably be held responsible for, rather than aspects out of their control such as their race, gender, or where they grew up.
So my philosophy of "fairness" here is that we should hold people responsible only for their own actions, which, while off-the-cuff, seems like it would agree with most justice systems. If we included demographic information in our models, we would effectively be holding individuals responsible for the actions of others, which doesn't seem sound.