It makes sense that it can sometimes be easier to be misunderstood by a human than by a machine. And what about cases where any decision will be bad for some people? I would argue that using an algorithm, at least in part, can add some fairness to the process: without one, systems tend to favor the people who know them best.