
The Ethical and Legal Implications of Black Box Artificial Intelligence

August 26, 2020

What is black box AI?

Put simply, black box artificial intelligence works according to rules that no one can explain: its decision-making is effectively impenetrable, whether because the model is too complex to interpret or because its workings are kept proprietary. If it is trained on biased data, it will produce biased results. Many organizations focused on artificial intelligence have bluntly called black box AI unethical.
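To make the "biased data in, biased results out" point concrete, here is a minimal sketch (not from the article) using synthetic, hypothetical data and scikit-learn: a classifier trained on historical decisions that penalized one group reproduces that penalty even for equally qualified candidates.

```python
# Minimal sketch with synthetic data (all numbers and encodings are hypothetical):
# a classifier trained on biased historical decisions reproduces the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # protected attribute (0 or 1), hypothetical encoding
skill = rng.normal(0.0, 1.0, n)      # true qualification, identically distributed in both groups

# "Historical" labels: past reviewers systematically penalized group 1.
hired = (skill - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

# Equally skilled candidates get different predicted outcomes based on group alone,
# because the model has learned the historical penalty from the training data.
for g in (0, 1):
    p = model.predict_proba([[g, 0.0]])[0, 1]
    print(f"group {g}: predicted hire probability at average skill = {p:.2f}")
```

Nothing in the training pipeline flags this as a problem; the bias only becomes visible if someone inspects the model's behavior, which is exactly what opacity prevents.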

In November 2019, Apple’s credit card business was accused of using sexist lending models, and an investigation by regulators is ongoing. Amazon retired an AI hiring tool after discovering, three years into the project, that it discriminated against women.

COMPAS

The term “black box” is a relatively recent addition to common parlance; it first came into wide use in 2016. In that year, ProPublica demonstrated that the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) software, used by some courts to predict the likelihood of recidivism in criminal defendants, was biased against African Americans.

No one knew how it worked: the software was proprietary, and the company would not be transparent about its programming. This is when many media stories began to talk about “black box” AI, systems whose makers cannot or will not explain how the output is generated from the input.

COMPAS is made by the private company Northpointe, which says that its algorithms are trade secrets. But should algorithms be used to arbitrate fairness? It’s complicated.

Machine-learning algorithms are trained on “data produced through histories of exclusion and discrimination,” writes Ruha Benjamin, an associate professor at Princeton University, in her book Race After Technology. Risk assessment tools like COMPAS are no different. Some people believe they reduce inequities; others believe they make them worse. Without transparency, it is hard to judge either way.

In a controversial decision (State v. Loomis, 2016), the Wisconsin Supreme Court held that because the COMPAS risk assessment was not the sole basis for the lower court’s decision to deny Loomis probation, the sentencing did not violate his due process rights. In upholding the use of the risk assessment algorithm’s recommendation, the Court was widely seen as underestimating the strength of “automation bias”: once a high-tech tool makes a recommendation, it is difficult for a human decision-maker to reject it.
