One of the first papers I read in this area was very interesting in this regard (https://crim.sas.upenn.edu/sites/default/files/2017-1.0-Berk...). I think the challenge is that a business (e.g. COMPAS) can certainly take a position on which definition of algorithmic fairness it wants to enforce, but the paper discusses six different definitions, which are impossible to satisfy simultaneously unless base rates are the same across all groups (the "data problem"). Even the measurement of those base rates can itself be biased, e.g. through over- or under-reporting of certain crimes. And even if you implement one definition, there's no guarantee it's the one that government/society/case law ends up treating as the formal mathematical instantiation of the written law. Moreover, that interpretation can change over time, since laws, and for that matter moral thinking, change over time too.
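To make the incompatibility concrete, here's a toy sketch with made-up numbers (the group sizes and rates are hypothetical, chosen only for round arithmetic): two groups scored by the same classifier get identical precision (PPV) and identical recall (TPR), yet because their base rates differ, their false positive rates are forced apart — so "predictive parity" and "error-rate balance" cannot both hold.

```python
# Toy illustration with hypothetical counts: equal PPV and equal TPR
# across two groups with different base rates forces unequal FPR.

def rates(positives, negatives, tp, fp):
    """Compute base rate, PPV, TPR, FPR from confusion counts."""
    total = positives + negatives
    return {
        "base_rate": positives / total,
        "ppv": tp / (tp + fp),   # precision among predicted positives
        "tpr": tp / positives,   # recall
        "fpr": fp / negatives,   # false positive rate
    }

# Group A: base rate 0.5 (500 of 1000 reoffend in the ground truth).
# Group B: base rate 0.2 (200 of 1000).
# Both groups get PPV = 0.8 and TPR = 0.6 from the same classifier.
group_a = rates(positives=500, negatives=500, tp=300, fp=75)
group_b = rates(positives=200, negatives=800, tp=120, fp=30)

assert group_a["ppv"] == group_b["ppv"] == 0.8
assert group_a["tpr"] == group_b["tpr"] == 0.6
print(group_a["fpr"], group_b["fpr"])  # 0.15 vs 0.0375
```

Group B's lower base rate mechanically yields a lower FPR (0.0375 vs 0.15) even though the classifier treats both groups identically by the other two measures — so whichever single definition a vendor optimizes, an auditor using a different one will find a disparity.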
The upshot to me is that businesses, whether they operate in criminal judicial risk assessment or advertising or whatever, rarely make explicit which definition of fairness (if any) they are enforcing, and so it becomes difficult to judge whether they are doing a good job at it.