Ride the Lightning

Cybersecurity and Future of Law Practice Blog
by Sharon D. Nelson Esq., President of Sensei Enterprises, Inc.

EU Proposes Strict Regulations for Artificial Intelligence

April 27, 2021

The New York Times reported on April 21 that the European Union has issued proposed strict regulations to govern the use of artificial intelligence.

The draft rules would set limits on the use of artificial intelligence in a range of activities, from self-driving cars to hiring decisions, bank lending, school enrollment selections and exam scoring. They would also cover the use of artificial intelligence by law enforcement and court systems — areas considered “high risk” because they could threaten people’s safety or fundamental rights.

Some uses would be prohibited, including live facial recognition in public spaces, though there would be several exemptions for national security and other purposes.

The 108-page policy is an attempt to regulate AI before it becomes mainstream. The rules have far-reaching implications for major technology companies that have devoted significant resources to developing artificial intelligence, including Amazon, Google, Facebook and Microsoft, but also for scores of other companies that use the software to develop medicine, underwrite insurance policies and judge creditworthiness. Governments have used AI in criminal justice and in the allocation of public services like income support.

Companies that violate the new regulations, which might take several years to move through the European Union policy-making process, could face fines of up to 6 percent of global sales.

That is not an inconsiderable sum of money!

“On artificial intelligence, trust is a must, not a nice-to-have,” Margrethe Vestager, the European Commission executive vice president who oversees digital policy for the 27-nation bloc, said in a statement. “With these landmark rules, the E.U. is spearheading the development of new global norms to make sure A.I. can be trusted.”

The EU regulations would require companies providing artificial intelligence in high-risk areas to supply regulators with proof of the technology's safety, including risk assessments and documentation explaining how it makes decisions. The companies must also guarantee human oversight in how the systems are created and used.

Some AI applications, like chatbots that provide humanlike conversation in customer service situations, and software that creates hard-to-detect manipulated images like “deepfakes,” would have to make clear to users that what they were seeing was computer-generated.

Artificial intelligence — in which machines are trained to perform jobs and make decisions on their own by studying huge volumes of data — is seen by technologists, business leaders and government officials as one of the world’s most transformative technologies, promising major gains in productivity.

As AI becomes more sophisticated, it can be more difficult to understand why the software is making a decision, a problem that will likely worsen as computers become more powerful. Researchers have raised ethical questions about the use of AI, suggesting that it could perpetuate existing biases in society, invade privacy or result in more jobs being automated.

The EU has been consistently at the forefront of AI regulation, way ahead of the U.S. The new proposals will probably be the subject of a lot of tinkering, but the U.S. should keep a close eye on what the EU is doing as we slowly start to come to grips with the necessity of regulating artificial intelligence before things spin out of control.

Sharon D. Nelson, Esq., President, Sensei Enterprises, Inc.
3975 University Drive, Suite 225 | Fairfax, VA 22030
Email:  Phone: 703-359-0700
Digital Forensics/Cybersecurity/Information Technology
https://senseient.com
https://twitter.com/sharonnelsonesq
https://www.linkedin.com/in/sharondnelson
https://amazon.com/author/sharonnelson