Ride the Lightning

Cybersecurity and Future of Law Practice Blog
by Sharon D. Nelson Esq., President of Sensei Enterprises, Inc.

First Time in the US: New York City Law Requires Hiring AI to Be Audited for Bias

December 7, 2021

On December 5, Ars Technica reported that, in November, New York’s City Council adopted a law requiring audits of algorithms used by employers in hiring or promotion. The law, the first of its kind in the nation, requires employers to bring in outsiders to assess whether an algorithm exhibits bias based on sex, race, or ethnicity. Employers also must tell job applicants who live in New York when artificial intelligence plays a role in deciding who gets hired or promoted.
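The law doesn't spell out the audit methodology, but one conventional way US employment analysts quantify hiring bias is the adverse impact ratio: compare each group's selection rate to the most-favored group's rate, and flag ratios below 0.8 (the "four-fifths rule" from federal employee-selection guidelines). A minimal sketch, with invented candidate numbers purely for illustration:

```python
# Hypothetical sketch of a disparity check a bias audit might run.
# Compares selection rates across groups using the "four-fifths"
# (adverse impact ratio) convention. All data below is invented.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected, total)."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def impact_ratios(outcomes):
    """Ratio of each group's selection rate to the highest rate.
    A ratio below 0.8 is the conventional red flag for adverse impact."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Invented example: the algorithm selects 40% of group A, 25% of group B.
outcomes = {"group_a": (40, 100), "group_b": (25, 100)}
ratios = impact_ratios(outcomes)
flagged = [g for g, r in ratios.items() if r < 0.8]
```

Here group B's ratio is 0.25 / 0.40 = 0.625, below the 0.8 threshold, so an auditor would flag the tool for possible adverse impact against group B. This is only one statistical lens; it says nothing about the quirks Stoyanovich describes below, such as scoring résumés by font or file format.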

And NYC isn’t alone in tackling problems with artificial intelligence.

In Washington, DC, members of Congress are drafting a bill that would require businesses to evaluate automated decision-making systems used in areas such as health care, housing, employment, or education, and report the findings to the Federal Trade Commission. An AI Bill of Rights proposed last month by the White House calls for disclosing when AI makes decisions that impact a person’s civil rights, and it says AI systems should be “carefully audited” for accuracy and bias, among other things.

European Union lawmakers are considering legislation requiring inspection of AI deemed high-risk and creating a public registry of high-risk systems. Countries including China, Canada, Germany, and the UK have also taken steps to regulate AI in recent years.

Julia Stoyanovich, an associate professor at New York University who served on the New York City Automated Decision Systems Task Force, says she and students recently examined a hiring tool and found it assigned people different personality scores based on the software program with which they created their résumé. Other studies have found that hiring algorithms favor applicants based on where they went to school, their accent, whether they wear glasses, or whether there’s a bookshelf in the background.

Stoyanovich supports the disclosure requirement in the New York City law, but she says the auditing requirement is flawed because it only applies to discrimination based on gender or race. She says the algorithm that rated people based on the font in their résumé would be fine under the law because it didn’t discriminate on those grounds.

Some proponents of greater scrutiny favor mandatory audits of algorithms similar to the audits of companies’ financials. Others prefer “impact assessments” akin to environmental impact reports. Both groups agree that the field desperately needs standards for how such reviews should be conducted and what they should include. Without standards, businesses could engage in “ethics washing” (a term I am hearing more and more) by arranging for favorable audits. Proponents say the reviews won’t solve all problems associated with algorithms, but they would help hold the makers and users of AI legally accountable.

A forthcoming report by the Algorithmic Justice League (AJL), a private nonprofit, recommends requiring disclosure when an AI model is used and creating a public repository of incidents where AI caused harm. The repository could help auditors spot potential problems with algorithms and help regulators investigate or fine repeat offenders. AJL cofounder Joy Buolamwini coauthored an influential 2018 audit that found facial-recognition algorithms work best on white men and worst on women with dark skin.

The report says it’s crucial that auditors be independent and results be publicly reviewable. Without those safeguards, “there’s no accountability mechanism at all,” says AJL head of research Sasha Costanza-Chock. “If they want to, they can just bury it; if a problem is found, there’s no guarantee that it’s addressed. It’s toothless, it’s secretive, and the auditors have no leverage.”

Deb Raji, a fellow at the AJL who evaluates audits and participated in the 2018 audit of facial-recognition algorithms, cautions that Big Tech companies appear to be taking a more adversarial approach to outside auditors, sometimes threatening lawsuits on privacy or anti-hacking grounds. In August, Facebook prevented NYU academics from monitoring political ad spending and thwarted efforts by a German researcher to investigate the Instagram algorithm.

Raji calls for creating an audit oversight board within a federal agency to do things like enforce standards or mediate disputes between auditors and companies. Such a board could be fashioned after the Financial Accounting Standards Board or the Food and Drug Administration’s standards for evaluating medical devices.

Interesting story below:

Cathy O’Neil started a company, O’Neil Risk Consulting & Algorithmic Auditing (Orcaa), in part to assess AI that’s invisible or inaccessible to the public. For example, Orcaa works with the attorneys general of four US states to evaluate financial or consumer product algorithms. But O’Neil says she loses potential customers because companies want to maintain plausible deniability and don’t want to know if or how their AI harms people.

Plausible deniability – where have we heard that before?

Earlier this year Orcaa performed an audit of an algorithm used by HireVue to analyze people’s faces during job interviews. A press release by the company claimed the audit found no accuracy or bias issues, but the audit made no attempt to assess the system’s code, training data, or performance for different groups of people. Critics said HireVue’s characterization of the audit was misleading and disingenuous. Shortly before the release of the audit, HireVue said it would stop using the AI in video job interviews.

A revamped version of the Algorithmic Accountability Act, first introduced in 2019, is now being discussed in Congress. According to a draft version of the legislation reviewed by WIRED, the bill would require businesses that use automated decision-making systems in areas such as health care, housing, employment, or education to carry out impact assessments and regularly report results to the FTC. A spokesperson for Senator Ron Wyden (D-Ore.), a cosponsor of the bill, says it calls on the FTC to create a public repository of automated decision-making systems and aims to establish an assessment process to enable future regulation by Congress or agencies like the FTC. The draft asks the FTC to decide what should be included in impact assessments and summary reports.

In August, the Center for Long-Term Cybersecurity at UC Berkeley suggested that a risk assessment tool for evaluating AI being developed by the federal government include factors such as a system’s carbon footprint and the potential to exacerbate inequality; the center suggested the government take a stronger approach on AI than it did for cybersecurity. The AJL also sees lessons in cybersecurity practices. A forthcoming report coauthored by Raji calls for businesses to create processes to handle instances of AI harm akin to the way IT security workers treat bugs and security patch updates. Some of AJL’s recommendations—that companies should offer bias bounties, publicly report major incidents, and develop internal systems for the escalation of harm incidents—are drawn from cybersecurity.

I found the nexus between cybersecurity and AI to be interesting. Good article to read – there are clearly a LOT of new approaches to making AI accountable.

Sharon D. Nelson, Esq., President, Sensei Enterprises, Inc.
3975 University Drive, Suite 225, Fairfax, VA 22030
Phone: 703-359-0700
Digital Forensics/Cybersecurity/Information Technology
https://senseient.com
https://twitter.com/sharonnelsonesq
https://www.linkedin.com/in/sharondnelson
https://amazon.com/author/sharonnelson