Ride the Lightning

Cybersecurity and Future of Law Practice Blog
by Sharon D. Nelson Esq., President of Sensei Enterprises, Inc.

Microsoft Lays Off its AI Ethics and Society Team

March 21, 2023

The rush to monetize and capture the AI market is having some unsettling effects.

The Verge reported on March 13 that Microsoft laid off its entire ethics and society team within its artificial intelligence organization as part of recent layoffs that affected 10,000 employees across the company.

This leaves Microsoft without a dedicated team to ensure its AI principles remain closely tied to product design at the very moment the company is leading the charge to make AI tools mainstream, according to current and former employees.

Microsoft still has an active Office of Responsible AI, which is tasked with creating rules and principles to govern the company’s AI initiatives. The company claims its overall investment in responsibility work is increasing despite the recent layoffs.

“Microsoft is committed to developing AI products and experiences safely and responsibly, and does so by investing in people, processes, and partnerships that prioritize this,” the company said in a statement. “Over the past six years we have increased the number of people across our product teams and within the Office of Responsible AI who, along with all of us at Microsoft, are accountable for ensuring we put our AI principles into practice. […] We appreciate the trailblazing work the Ethics & Society did to help us on our ongoing responsible AI journey.”

I am inclined to agree with employees who said the ethics and society team played a critical role in ensuring that the company’s responsible AI principles are reflected in the design of the products that ship.

“People would look at the principles coming out of the office of responsible AI and say, ‘I don’t know how this applies,’” one former employee says. “Our job was to show them and to create rules in areas where there were none.”

In recent years, the team designed a role-playing game called Judgment Call that helped designers envision potential harms that could result from AI and discuss them during product development. It was part of a larger “responsible innovation toolkit” that the team posted publicly.

More recently, the team had been working to identify risks posed by Microsoft's adoption of OpenAI's technology throughout its suite of products. Both of those functions seem like good ideas to me.

The ethics and society team was at its largest in 2020, when it had roughly 30 employees including engineers, designers, and philosophers. In October, the team was cut to roughly seven people as part of a reorganization.

In a meeting with the team following the reorganization, John Montgomery, corporate vice president of AI, told employees that company leaders had instructed them to move swiftly. “The pressure from [CTO] Kevin [Scott] and [CEO] Satya [Nadella] is very, very high to take these most recent OpenAI models and the ones that come after them and move them into customers hands at a very high speed,” he said, according to audio of the meeting.

Some members of the team made a case for their position. “I’m going to be bold enough to ask you to please reconsider this decision,” one employee said on the call. “While I understand there are business issues at play … what this team has always been deeply concerned about is how we impact society and the negative impacts that we’ve had. And they are significant.”

Montgomery declined. “Can I reconsider? I don’t think I will,” he said. “Cause unfortunately the pressures remain the same. You don’t have the view that I have, and probably you can be thankful for that. There’s a lot of stuff being ground up into the sausage.”

In response to questions, though, Montgomery said the team would not be eliminated.

That, as we know, proved to be false.

One employee says the move leaves a foundational gap on the user experience and holistic design of AI products. “The worst thing is we’ve exposed the business to risk and human beings to risk in doing this,” they explained.

Big technology companies typically build divisions dedicated to making their products more socially responsible. When those divisions work well, they help product teams anticipate potential misuses of the technology and fix problems before products ship.

But they also have the job of saying “no” or “slow down” inside organizations that often don’t want to hear it — or spelling out risks that could lead to legal headaches for the company if surfaced in legal discovery.

Microsoft has become focused on shipping AI tools more quickly than its rivals, perhaps at the expense of long-term thinking.

Consider what Microsoft has to gain. The company may have a once-in-a-generation chance to gain significant traction against Google in search, productivity software, cloud computing, and other areas where the two titans compete. When it relaunched Bing with AI, Microsoft told investors that every 1 percent of market share it could take from Google in search would translate into $2 billion in annual revenue.

That potentially explains why Microsoft has so far invested $11 billion into OpenAI and is currently moving quickly to integrate the startup’s technology into every corner of Microsoft. It’s working. Bing now has 100 million daily active users, with one third of them new since the search engine relaunched with OpenAI’s technology.

However, many of those involved in the development of AI agree that the technology poses potent and possibly existential risks, both known and unknown. Technology giants have signaled that they are taking those risks seriously — Microsoft alone has three different groups working on the issue, even after the elimination of the ethics and society team. But given how high the stakes are, any cuts to teams focused on responsible work are troubling.

The elimination of the ethics and society team came just as the group’s remaining employees had trained their focus on arguably their biggest challenge yet: anticipating what would happen when Microsoft released tools powered by OpenAI to a global audience.

Last year, the team wrote a memo detailing brand risks associated with the Bing Image Creator, which uses OpenAI’s DALL-E system to create images based on text prompts. The image tool launched in a handful of countries in October, making it one of Microsoft’s first public collaborations with OpenAI.

Text-to-image technology has proved hugely popular, but Microsoft researchers correctly predicted that it could also threaten artists' livelihoods by letting anyone easily copy their style.

“In testing Bing Image Creator, it was discovered that with a simple prompt including just the artist’s name and a medium (painting, print, photography, or sculpture), generated images were almost impossible to differentiate from the original works,” researchers wrote in the memo.

They added: “The risk of brand damage, both to the artist and their financial stakeholders, and the negative PR to Microsoft resulting from artists’ complaints and negative public reaction is real and significant enough to require redress before it damages Microsoft’s brand.”

Another problem? Last year OpenAI updated its terms of service to give users “full ownership rights to the images you create with DALL-E.” The move left Microsoft’s ethics and society team worried.

“If an AI-image generator mathematically replicates images of works, it is ethically suspect to suggest that the person who submitted the prompt has full ownership rights of the resulting image,” the team wrote in the memo.

Microsoft researchers created a list of mitigation strategies, including blocking Bing Image Creator users from using the names of living artists as prompts and creating a marketplace to sell an artist’s work that would be surfaced if someone searched for their name.

Employees assert that neither of these strategies was implemented, and Bing Image Creator launched into test countries anyway.

Microsoft says the tool was modified before launch to address the concerns raised in the document, and that the memo prompted additional work from its responsible AI team.

Legal questions about the technology remain unresolved. In February 2023, Getty Images filed a lawsuit against Stability AI, makers of the AI art generator Stable Diffusion. Getty accused the AI startup of improperly using more than 12 million images to train its system.

The accusations echoed concerns raised by Microsoft’s own AI ethicists. “It is likely that few artists have consented to allow their works to be used as training data, and likely that many are still unaware how generative tech allows variations of online images of their work to be produced in seconds,” employees wrote last year.

My own conclusion is that Microsoft is moving too quickly in its pursuit of profits and market dominance. Others will likely follow suit, and ethics will become something big tech talks about more than invests in. As AI becomes increasingly sophisticated, we may find ourselves unable to backpedal to more innocent times.

When you’re dealing with AI, what could go wrong? It’s a long list.

Sharon D. Nelson, Esq., President, Sensei Enterprises, Inc.
3975 University Drive, Suite 225, Fairfax, VA 22030
Email:   Phone: 703-359-0700
Digital Forensics/Cybersecurity/Information Technology
https://senseient.com
https://twitter.com/sharonnelsonesq
https://www.linkedin.com/in/sharondnelson
https://amazon.com/author/sharonnelson