The problem is that we (as humans) label the outputs of these algorithms to give them meaning.
For example, if you build a resume-grading AI based on historic data (as some companies have done), you quickly find that what the algorithm actually outputs is typically not the same as what people label it. In this case the output is "how likely would this candidate have been hired under the previous process", but it's being used to answer "how good a candidate is this".
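To make that concrete, here's a minimal sketch (not any specific company's system; the features and labels are made up) of how the proxy-target mismatch shows up: the model is fit on historical hiring decisions, so its score estimates "would the old process have hired this resume", no matter what name the UI puts on it.

    # Minimal sketch of the proxy-target problem, assuming synthetic data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Hypothetical resume features (e.g. years of experience, keyword counts).
    X = rng.normal(size=(1000, 5))

    # Historical labels: whether the *previous* process hired the person.
    # Any bias in that process is baked into these labels.
    was_hired = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)

    model = LogisticRegression().fit(X, was_hired)

    new_resume = rng.normal(size=(1, 5))
    score = model.predict_proba(new_resume)[0, 1]

    # What the model actually estimates:
    print(f"P(would have been hired by the old process) ~= {score:.2f}")
    # What it's often presented as: "how good a candidate is this" --
    # a different question the training data never answered.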
IMO it's these discrepancies between what the algorithms are actually doing (which in more complex cases we can't know) and what we label their output that cause these issues. If we acknowledged that historic data can't be used to accurately predict future outcomes (especially a subset of historic data), we wouldn't even try to do things like profile candidates.