Ride the Lightning
Cybersecurity and Future of Law Practice Blog
by Sharon D. Nelson Esq., President of Sensei Enterprises, Inc.
The Titans of Tech Commit to AI Safeguards
July 25, 2023
The Guardian reported on July 21 that Amazon, Google, Meta, Microsoft and OpenAI have committed to implementing new guardrails for AI.
Among the guidelines brokered by the Biden administration are watermarks for AI-generated content, to make it easier to identify, and third-party testing of the technology that will attempt to identify dangerous flaws.
Biden said the companies’ commitments were “real and concrete” and will help “develop safe, secure and trustworthy” technologies that benefit society and uphold values.
“Americans are seeing how advanced artificial intelligence and the pace of innovation have the power to disrupt jobs in industries,” he said. “These commitments are a promising step, but we have a lot more work to do together.”
The White House said seven US companies had agreed to the voluntary commitments, which are meant to ensure their AI products are safe before they release them.
Remember that these are voluntary commitments. We still need laws, but this is regarded as a stopgap measure until laws can be enacted.
Here are the eight measures the companies have agreed to:
- Using watermarking on audio and visual content to help identify content generated by AI.
- Allowing independent experts to try to push models into bad behavior – a process known as “red-teaming”.
- Sharing trust and safety information with the government and other companies.
- Investing in cybersecurity measures.
- Encouraging third parties to uncover security vulnerabilities.
- Reporting societal risks such as inappropriate uses and bias.
- Prioritizing research on AI’s societal risks.
- Using the most cutting-edge AI systems, known as frontier models, to solve society’s greatest problems.
I like – and agree with – the comment below.
“History would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations,” said a statement from James Steyer, founder and CEO of the non-profit Common Sense Media.
I also agree with these comments:
“We need a much more wide-ranging public deliberation and that’s going to bring up issues that companies almost certainly won’t voluntarily commit to because it would lead to substantively different results, ones that may more directly impact their business models,” said Amba Kak, the executive director of research group the AI Now Institute.
“A closed-door deliberation with corporate actors resulting in voluntary safeguards isn’t enough,” Kak said. “What this list covers is a set of problems that are comfortable to business as usual, but we also need to be looking at what’s not on the list – things like competition concerns, discriminatory impacts of these systems. The companies have said they’ll ‘research’ privacy and bias, but we already have robust bodies of research on both – what we need is accountability.”
Ultimately, I don’t believe self-regulation will work – but it is better than nothing at all.
Senate majority leader Chuck Schumer has said he will introduce legislation to regulate AI. He has held briefings with government officials to educate senators about AI, a subject that has attracted bipartisan interest.
Other governments have also been considering how to regulate AI; European Union lawmakers have been negotiating expansive AI rules for the 27-country bloc.
While no final agreement has been reached, the EU AI Act contains strict regulations that would create significant consumer protections against the overreach, privacy violations and biases of certain types of high-risk AI models.
Sharon D. Nelson, Esq., President, Sensei Enterprises, Inc.
3975 University Drive, Suite 225, Fairfax, VA 22030
Email: Phone: 703-359-0700
Digital Forensics/Cybersecurity/Information Technology