Ride the Lightning

Cybersecurity and Future of Law Practice Blog
by Sharon D. Nelson Esq., President of Sensei Enterprises, Inc.

Would “Killer Robots” Destabilize the World?

August 15, 2018

I think they would – and a broad coalition of tech leaders seems to agree. As The Washington Post (sub. req.) reported last month, more than 160 organizations and 2,460 individuals from 90 countries promised not to participate in or support the development and use of lethal autonomous weapons. The pledge says artificial intelligence is expected to play an increasing role in military systems and calls upon governments and politicians to introduce laws regulating such weapons "to create a future with strong international norms."

Among those making that promise are billionaire inventor and OpenAI founder Elon Musk, Skype co-founder Jaan Tallinn, artificial intelligence researcher Stuart Russell, and the three co-founders of Google DeepMind, the company's premier machine learning research group.

"Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems," the pledge says.

"Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage," the pledge adds.

Lethal autonomous weapons systems can identify, target and kill without human input, according to the Future of Life Institute, a Boston-based charity that organized the pledge and seeks to reduce risks posed by AI. The organization says autonomous weapons systems do not include drones, which rely on human pilots and decision-makers to operate.

According to Human Rights Watch, autonomous weapons systems are being developed in many nations around the world — "particularly the United States, China, Israel, South Korea, Russia and the United Kingdom." FLI claims autonomous weapons systems will be at risk for hacking and likely to end up on the black market. The organization argues the systems should be subject to the same sort of international bans as biological and chemical weapons.

FLI has even coined a name for these weapons systems — "slaughterbots."

"Lethal autonomous weapons threaten to become the third revolution in warfare," Elon Musk and 115 other experts, including Alphabet's artificial intelligence expert, Mustafa Suleyman, warned in an open letter. "Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at time scales faster than humans can comprehend."

I don't know about you, but whether they are called "killer robots" or "slaughterbots," I don't think their existence should be allowed. We are surrounded by enough terrors of human making.

Phone: 703-359-0700
Digital Forensics/Information Security/Information Technology
https://www.senseient.com
https://twitter.com/sharonnelsonesq
https://www.linkedin.com/in/sharondnelson
https://amazon.com/author/sharonnelson