Ride the Lightning
Cybersecurity and Future of Law Practice Blog
by Sharon D. Nelson Esq., President of Sensei Enterprises, Inc.
Tech Firms are Laying Off Their AI Ethicists (Sigh)
April 4, 2023
The Washington Post reported (gift article) on March 30 that, last year, Amazon-owned streaming platform Twitch had a problem.
For a long time, women and people of color argued the platform was biased. Sexist and racist harassment were common and critics said the company’s all-important recommendation algorithms, which use artificial intelligence to decide which streamers to promote to viewers, were amplifying the problem.
The company set up a responsible AI team to look specifically at the algorithms. At its semiannual conference, TwitchCon, the team’s principal product manager told Twitch streamers, “We are committed to being a leader in this area of responsible and fair recommendations.” He urged them to fill out demographic surveys to track potential discrimination.
Last week, the handful of people who made up the responsible AI team were laid off, part of a broader round of cuts affecting about 400 of the company’s 2,500 employees. Others who worked on the issue as part of their current jobs were moved to other topics, according to a former member of the responsible AI team, who spoke on the condition of anonymity to discuss internal company matters.
“We wanted to make Twitch more equitable and also more safe for creators from all backgrounds,” the former employee said. “This is very much a step back.”
Responsible AI teams are being cut. Twitter did so as Elon Musk took over the company and cut three-quarters of the workforce. Microsoft cut its Ethics and Society team, which was one of the groups that led research on responsible AI at the company, as part of its massive round of layoffs in January.
All these moves show a pattern of companies scaling back on ethical AI research, often as part of broader cost-cutting, even as new applications of the technology are booming. Ethical AI experts say the breakup of these teams could result in products that are harmful being released before their consequences are fully analyzed.
“To me, it feels like they’re in a race, and they just want to win the race, and anybody who’s doing anything else is useless,” said Timnit Gebru, a computer scientist who once helped lead Google’s ethical AI team, before she was controversially ousted in December 2020.
All this is happening as generative AI takes the tech world by storm. OpenAI, a smaller company that was founded as a nonprofit, began pushing its products out to the public last year, giving people the chance to interact with tools that had been confined to the testing labs of companies like Google and Microsoft.
The amazing success of those start-ups’ tools caused concern at the most powerful companies that they would fall behind the cutting edge, according to conversations with current and former employees of Facebook and Google, who spoke on the condition of anonymity to discuss internal company conversations. Companies that had moved more carefully, with feedback from internal teams that asked probing questions about the social ramifications of new products, are moving faster to keep up with competitors and take advantage of the hype surrounding the technology. And, of course, to make a lot of profit.
Last week, a large group of academics and business leaders including Musk, veteran AI researcher Yoshua Bengio and Apple co-founder Steve Wozniak signed a letter asking AI companies to pause the training of new, more powerful chatbots.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.
Gebru, who went on to start a nonprofit dedicated to researching AI’s potential harms and seeking solutions, said she now views tech companies’ internal AI ethics efforts as “window dressing” that they’re quick to abandon when it’s inconvenient or when they’re cutting costs. Since firing Gebru, Google has also dismissed two other leading AI researchers over the publication of critical papers.
Google has been quickly moving to generative AI products in the past few months, working to keep up with archrival Microsoft and keep its reputation as the top AI company in the world, according to current and former employees. All of its blog posts and product launches have mentioned the importance of developing the tech responsibly, and the company has been careful to call new, unproven products “experiments” or “previews” even as it makes them available to more and more people.
Rumman Chowdhury led Twitter’s acclaimed META team — an acronym for Machine Learning, Ethics, Transparency and Accountability — until Musk laid her off in November, along with every member of her 18-person team except one.
The team had been credited with innovative programs such as a “bias bounty,” in which the company offered prizes to outsiders who could demonstrate bias in its systems.
AI ethics is “seen as a cost center, not a revenue center,” Chowdhury said. “I think smart companies know this will cost them in the long run. But a lot of them are thinking short-term right now.”
Some of the ethics cuts are coming as waves of layoffs strike the tech industry.
There’s a lot of attention on the big questions of whether sentient AI may be developed soon and what risks could come with that, as shown by the letter signed by Musk and other leaders asking for a pause in AI development. But focusing on those future questions may distract from problems that are real right now, Chowdhury said.
The new publicly available bots have had problems of their own. When Microsoft launched its Bing chatbot in February, some users quickly discovered that the bot would adopt an alternate persona with an aggressive tone, contradicting the human asking it questions and calling itself “Sydney.” Microsoft said the problem happened because of people making leading prompts and pushing the bot into a certain conversational direction. The company limited the number of questions users could ask Bing in a row.
The bots also repeatedly make up information and present it as fact, mixing it with legitimate information. Microsoft and Google have begun proactively pointing out this flaw in new-product announcements.
OpenAI, which helped kick off the current wave of AI excitement by launching its DALL-E image generator and ChatGPT conversation bot to the public before Big Tech companies had done the same with their own tools, is increasing its investments in responsible AI along with its investments in the technology, spokesperson Hannah Wong said. “While the entire company works closely together to develop and release safe and advanced AI systems, we are continuing to grow our teams dedicated to policy research, alignment, and trust and safety, which are critical to this work.”
Sharon D. Nelson, Esq., President, Sensei Enterprises, Inc.
3975 University Drive, Suite 225, Fairfax, VA 22030
Phone: 703-359-0700
Digital Forensics/Cybersecurity/Information Technology
https://senseient.com
https://twitter.com/sharonnelsonesq
https://www.linkedin.com/in/sharondnelson
https://amazon.com/author/sharonnelson