Ride the Lightning
Cybersecurity and Future of Law Practice Blog
by Sharon D. Nelson Esq., President of Sensei Enterprises, Inc.
Killer Robots Are Real: Majority of a United Nations Conference Favors Banning Them
December 21, 2021
OK, not a cheerful subject for a holiday blog post. But you have to sit up and take notice when The New York Times (sub.req.) publishes an article entitled, “Killer Robots Aren’t Science Fiction. A Push to Ban Them Is Growing.”
The December 17 article covered a United Nations meeting in Geneva, which was followed intently by experts in artificial intelligence, military strategy, disarmament and humanitarian law.
The interest was driven by drones, guns and bombs that decide on their own, with artificial intelligence, whether to attack and kill — and by the question of what, if anything, should be done to regulate or ban them.
Autonomous weapons systems have been invented and tested at blazing speed with little oversight. Some prototypes have been used in actual conflicts. This represents an extraordinary development in warfare, much like the invention of gunpowder and nuclear bombs.
This year, for the first time, a majority of the 125 nations that belong to an agreement called the Convention on Certain Conventional Weapons (C.C.W.) said they wanted curbs on killer robots. But they were opposed by members that are developing these weapons, most notably the United States and Russia.
Interesting indeed that Russia and the U.S. do not want restrictions. Enemies with a common cause, it seems.
As a result, the group’s conference concluded with only a vague statement about considering possible measures acceptable to all. The Campaign to Stop Killer Robots, a disarmament group, said the outcome fell “drastically short.”
The C.C.W., sometimes known as the Inhumane Weapons Convention, is a framework of rules that ban or restrict weapons considered to cause unnecessary, unjustifiable and indiscriminate suffering, such as incendiary explosives, blinding lasers and booby traps that don’t distinguish between fighters and civilians. The convention has no provisions for killer robots.
The definition of “killer robots” is a bit murky, but they are generally considered to be weapons that make decisions with little or no human involvement. Rapid improvements in robotics, artificial intelligence and image recognition are making such armaments possible.
Here is a clarifying example: The drones the United States has used extensively in Afghanistan, Iraq and elsewhere are not considered robots because they are operated remotely by people, who choose targets and decide whether to shoot.
Why would anyone be in favor of killer robots? The weapons offer the promise of keeping soldiers out of harm’s way and of making quicker decisions than a human would, by giving more battlefield responsibilities to autonomous systems like pilotless drones and driverless tanks that independently decide when to strike.
But critics argue it is morally indefensible to give lethal decision-making to machines, regardless of technological sophistication. As the article asks, “How does a machine distinguish an adult from a child, a fighter with a bazooka from a civilian with a broom, a hostile combatant from a wounded or surrendering soldier?”
“Fundamentally, autonomous weapon systems raise ethical concerns for society about substituting human decisions about life and death with sensor, software and machine processes,” Peter Maurer, the president of the International Committee of the Red Cross and an outspoken opponent of killer robots, told the Geneva conference.
Prior to the conference, Human Rights Watch and Harvard Law School’s International Human Rights Clinic called for steps toward a legally binding agreement that always requires human control.
“Robots lack the compassion, empathy, mercy, and judgment necessary to treat humans humanely, and they cannot understand the inherent worth of human life,” the groups argued in a briefing paper to support their recommendations.
“Mass produced killer robots could lower the threshold for war by taking humans out of the kill chain and unleashing machines that could engage a human target without any human at the controls,” said Phil Twyford, New Zealand’s disarmament minister.
Why was so much attention focused on the Geneva conference? It was widely considered by disarmament experts to be the best opportunity so far to devise ways to regulate, if not prohibit, the use of killer robots under the C.C.W.
It represented the culmination of years of discussions by a group of experts who had been asked to identify the challenges and potential approaches to reducing the threats from killer robots. But the experts could not even reach agreement on basic questions.
Opponents of restrictions, like Russia, insist that any decisions on limits must be unanimous, effectively giving themselves a veto.
The U.S. argues that existing international laws are sufficient and that banning autonomous weapons technology would be premature. The chief U.S. delegate to the conference, Joshua Dorosin, proposed a nonbinding “code of conduct” for use of killer robots — an idea that disarmament advocates rejected as a delaying tactic.
The American military has invested heavily in artificial intelligence, working with the biggest defense contractors, including Lockheed Martin, Boeing, Raytheon and Northrop Grumman. The work has included projects to develop long-range missiles that detect moving targets based on radio frequency, swarm drones that can identify and attack a target, and automated missile-defense systems, according to research by opponents of the weapons systems.
The complexity and many uses of artificial intelligence make it more difficult to regulate than nuclear weapons or land mines, said Maaike Verbruggen, an expert on emerging military security technology at the Centre for Security, Diplomacy and Strategy in Brussels. She said lack of transparency about what different countries are building has created “fear and concern” among military leaders that they must keep up with the competition.
The U.S. defense establishment has been reluctant to use autonomous weapons in combat for fear of mistakes.
Daan Kayser, an autonomous weapons expert at PAX, a Netherlands-based peace advocacy group, said the conference’s failure to agree to even negotiate on killer robots was “a really plain signal that the C.C.W. isn’t up to the job.”
Noel Sharkey, an artificial intelligence expert and chairman of the International Committee for Robot Arms Control, said the meeting had demonstrated that a new treaty was preferable to further C.C.W. deliberations.
“There was a sense of urgency in the room,” he said, “that if there’s no movement, we’re not prepared to stay on this treadmill.”
Sorry for the grim post, but it sure is worth reading.
And now the holidays are upon us and I am determined to enjoy them with family and leave weightier issues to 2022. Ride the Lightning will be back in January! To all RTL readers, I wish the merriest of holidays!
Sharon D. Nelson, Esq., President, Sensei Enterprises, Inc.
3975 University Drive, Suite 225, Fairfax, VA 22030
Phone: 703-359-0700
Digital Forensics/Cybersecurity/Information Technology
https://senseient.com
https://twitter.com/sharonnelsonesq
https://www.linkedin.com/in/sharondnelson
https://amazon.com/author/sharonnelson