Ride the Lightning

Cybersecurity and Future of Law Practice Blog
by Sharon D. Nelson Esq., President of Sensei Enterprises, Inc.

ChatGPT CEO Tells Congress That AI Must Be Regulated

May 18, 2023

The Washington Post reported (gift article) that OpenAI CEO Sam Altman appeared at a Senate hearing on May 16 and told Congress that government intervention will be critical to mitigating the risks of increasingly powerful AI systems.

“As this technology advances, we understand that people are anxious about how it could change the way we live. We are too,” OpenAI CEO Sam Altman said.

Altman proposed the formation of a U.S. or global agency that would license the most powerful AI systems and have the authority to “take that license away and ensure compliance with safety standards.”

Recently, there has been a sharp increase in concerns about the ability of the latest crop of “generative AI” tools to mislead people, spread falsehoods, violate copyright protections and take away some jobs.

While there’s no immediate sign Congress will craft broad new AI rules, as European lawmakers are doing, the societal concerns brought Altman and other tech CEOs to the White House and have led U.S. agencies to promise to crack down on harmful AI products that break existing civil rights and consumer protection laws.

Sen. Richard Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology and the law, opened the hearing with a recorded speech that sounded like the senator, but was actually a voice clone trained on Blumenthal’s floor speeches and reciting ChatGPT-written opening remarks.

The result was impressive, said Blumenthal, but he added, “What if I had asked it, and what if it had provided, an endorsement of Ukraine surrendering or (Russian President) Vladimir Putin’s leadership?”

It was an interesting session, with senators questioning Altman politely, seeking his expertise on averting problems that aren’t here yet – but might arrive much sooner than we imagined.

Blumenthal said AI companies ought to be required to test their systems and disclose known risks before releasing them, and expressed particular concern about how future AI systems could destabilize the job market. Altman was largely in agreement, though he had a more optimistic take on the future of work.

Pressed on his own worst fear about AI, Altman mostly avoided specifics, except to say that the industry could cause “significant harm to the world” and that “if this technology goes wrong, it can go quite wrong.”

Had he been specific, my guess is that he would have offered some very sobering possibilities for the future of AI, especially if it remains unregulated.

He proposed that a new regulatory agency should impose safeguards that would block AI models that could “self-replicate and self-exfiltrate into the wild” — hinting at futuristic concerns about advanced AI systems that could manipulate humans into ceding control.

Now that’s a scary thought shared by many.

That focus on a far-off “science fiction trope” of super-powerful AI could make it harder to take action against already existing harms that require regulators to dig deep on data transparency, discriminatory behavior and potential for trickery and disinformation, said a former Biden administration official who co-authored its plan for an AI bill of rights.

OpenAI has expressed existential concerns since its inception. Co-founded by Altman in 2015 with backing from tech billionaire Elon Musk, the startup has evolved from a nonprofit research lab with a safety-focused mission into a business. Its other popular AI products include the image-maker DALL-E. Microsoft has invested billions of dollars into the startup and has integrated its technology into its own products, including its search engine Bing.

Also testifying were IBM’s chief privacy and trust officer, Christina Montgomery, and Gary Marcus, a professor emeritus at New York University who was among a group of AI experts who called on OpenAI and other tech firms to pause their development of more powerful AI models for six months to give society more time to consider the risks. The letter was a response to the March release of OpenAI’s latest model, GPT-4, described as more powerful than ChatGPT.

Altman’s presence before the Senate and his remarks should give everyone pause. While Altman avoided describing true disaster scenarios, he certainly hinted at the harm that might come from unregulated AI. If ever there was a reason to be bipartisan, this is it.

Sharon D. Nelson, Esq., President, Sensei Enterprises, Inc.
3975 University Drive, Suite 225, Fairfax, VA 22030
Email:  Phone: 703-359-0700
Digital Forensics/Cybersecurity/Information Technology