
The Questionable Ethics of Using Robotic Weapons

By Wes O’Donnell
Managing Editor of In Military, InCyberDefense and In Space News. Veteran U.S. Army & U.S. Air Force.

The idea of a completely autonomous robot, one that can think and make decisions without human input, has fascinated humans since the Industrial Revolution.

In pop culture, we’ve been entertained by the likes of Johnny 5 from “Short Circuit,” Sonny from “I, Robot” and TARS from “Interstellar,” among many others. But robotics is rife with villains as well: Skynet from the “Terminator” series, HAL 9000 from “2001: A Space Odyssey” and the machines in “The Matrix” come to mind.


Our fascination with artificial intelligence (AI) is only gaining speed. Research into genuine AI has been valued at $1.2 trillion. To its credit, AI’s proponents claim the technology will solve a host of human-caused problems such as climate change and human-centric problems such as finding a cure for cancer. But just because humans can build something doesn’t mean we should.

AI has its detractors. SpaceX founder Elon Musk has called for regulation of the burgeoning AI industry and warned that AI poses an existential threat to our species. It’s not necessarily that AI scientists are afraid of an omnipotent, omnipresent evil computer brain. Researchers’ fears are more along the lines of a competent AI whose goals are misaligned with ours.

Robotic Weapons and Autonomous Weapons Systems (AWS)

In the argument for developing lethal military robotic weapons, the question of whether we should build them has often been sacrificed on the altar of a new technological arms race between nations. In other words, we must build them because our adversaries are building them.

At any given moment, the United States has dozens of Unmanned Aerial Vehicles (UAVs) in the air, performing missions that range from surveillance to missile strikes. But such drones are always controlled by a human operator, and executing a strike requires at least some degree of human command and control.

But autonomous technology is under development in several countries. What happens when we give the machines the decision-making authority to kill or spare people?

With or without AI, an autonomous weapon system could be programmed to fire on any legal enemy combatant. But here’s where the ethical issues get murky.

For example, when I served as an infantryman, I was part of a group tasked with guarding Patriot missile batteries on the Iraq border. Occasionally, a Bedouin, his son and his flock of sheep would venture too close to our perimeter, prompting the deployment of our Quick Reaction Force (QRF) to turn them around.

The rules of war allowed us to shoot once our perimeter was “threatened.” But for us, shooting a shepherd and his flock was not an option.

If an AI-controlled robot had been there instead, the outcome might have been very different. Had the robot been programmed simply to comply with the rules of war, the Bedouin and his son would have been fired upon.

The ultimate question is: How would you design a robot to know the difference between what is legal and what is morally right?

The Path Forward for Robotic Weapons

From a legal perspective, humans are the agents who must be held responsible for wartime conduct; a machine has no such legal status. This fact alone is why robotic weapons may change the very face of war.

A strict, global regulatory structure needs to be in place before such machines are deployed, not after. Fortunately, experts on military artificial intelligence from more than 80 world governments converged on the U.N. offices in Geneva last year for talks on autonomous weapons systems.

It’s a good start. However, the U.S., Russia, the U.K., Israel, South Korea, and China can’t even agree on a common definition of autonomous weapons.

According to Time, the world’s most powerful nations are already at the starting blocks of a secretive and potentially deadly arms race, while regulators lag behind.

This is a complicated issue, and it’s clear that an interdisciplinary approach is needed. In addition to thought leaders within the U.N. and the Department of Defense, we need scientists, lawyers, ethicists, and sociologists to have a say in whether robotic weapons should be given the authority to kill on their own.

The potential future development of a self-aware AI will only complicate these issues further. While machines without true AI can be trained to operate in error-sensitive domains such as medical diagnosis, autonomous vehicles and (perhaps) autonomous weapons, a truly self-aware AI may simply decide that humans aren’t worth the trouble. As Elon Musk fears, it might exterminate us with the cold efficiency of its artificial brain because it sees that as the simplest solution.

Either way, we are entering a potentially dangerous era marked by the third revolution in warfare: first came the invention of gunpowder, then nuclear weapons, and now fully autonomous weapons that could decide whom to target and kill without human input.

With history as our guide, perhaps now is the right time to slow down and develop a legal framework for the use of robotic weapons. It would be better to have this framework in place before we are pressured to deploy these weapons prematurely due to anxiety that our adversaries will get there first.

Wes O'Donnell

Wes O’Donnell is an Army and Air Force veteran and writer covering military and tech topics. As a sought-after professional speaker, Wes has presented at the U.S. Air Force Academy, Fortune 500 companies, and TEDx, covering trending topics from data visualization to leadership and veterans’ advocacy. As a filmmaker, he directed the award-winning short film “Memorial Day.”
