> In my opinion black-box algorithms have no place in government, and for this reason it's critical for the government to build the in-house expertise to develop and understand models itself, so it isn't reliant on the trade secrets of contractors.
We learned from 2000 years of human history - war, disease, etc. - to create a transparent system of checks and balances. These days we are trying to pretty much remove all of it (digitally) for "convenience" and "efficiency".
I cannot understand how any semi-decent societal system can accept a situation of opacity and no accountability.
> I cannot understand how any semi-decent societal system can accept a situation of opacity and no accountability.
Part of it comes down to an effect that has long been known in human-computer interaction research, sometimes called automation bias.
Whenever you put a machine in the decision loop of a job, many workers WILL DEFER to the machine whenever there's a question of whether something is right or wrong.
By deferring to the machine, workers absolve themselves of accountability and shift all responsibility to "the system". I think this is often deliberately baked into customer service of all kinds by management. You can see this in a small way if you've ever had to call an internet service provider in the USA to deal with a billing mistake.
> These days we are trying to pretty much remove all of it (digitally) for "convenience" and "efficiency".
Well, it's the same 'we' that pushes voting machines and other crap like locking government records up in proprietary software.
I push for algorithms specifically because they're (as a written document) transparent. If a human has to judge a historical context, you're never going to get repeatable results. But I want the system to be able to diagram, with weights, all the inputs that went into a decision before I'll trust an automated version.
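To illustrate what "diagramming the inputs with weights" could look like, here is a minimal sketch of an auditable linear scoring rule. Every name, weight, and threshold below is invented for illustration; real fraud-detection systems are far more complex, but the point is that each input's contribution to the decision can be printed and inspected.

```python
# Hypothetical transparent decision rule: a linear fraud-risk score
# whose inputs, weights, and threshold are all visible and auditable.
# All attribute names and numbers here are invented examples.

WEIGHTS = {
    "missed_payments": 2.0,     # each missed payment adds 2.0 to the score
    "address_changes": 0.5,     # each recent address change adds 0.5
    "income_reported": -0.001,  # higher reported income lowers the score
}
THRESHOLD = 3.0  # invented cutoff above which a case is flagged

def score(applicant: dict) -> tuple[float, dict]:
    """Return the total score and each input's contribution,
    so the decision can be diagrammed and appealed."""
    contributions = {k: WEIGHTS[k] * applicant.get(k, 0) for k in WEIGHTS}
    return sum(contributions.values()), contributions

total, parts = score({"missed_payments": 2,
                      "address_changes": 1,
                      "income_reported": 1200})
for name, value in parts.items():
    print(f"{name}: {value:+.2f}")
print("flagged:", total > THRESHOLD)
```

Unlike a black-box model, a rule like this lets a caseworker (or the affected citizen) see exactly which input tipped the score over the threshold, which is what makes a meaningful appeal possible.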
I don't think "black-box" algorithms are the problem per se. In this case the system was doomed from the get-go by using biased and flawed input attributes.
What I find very problematic in the use of automated systems for fraud detection is treating the system's decision as final and not offering a clear, easy way to appeal it.