Robot on the battlefield (Part 4 conclusion)
As military robots gain more and more autonomy, the ethical questions involved will become even more complex. The U.S. military bends over backwards to figure out when it is appropriate to engage the enemy and how to limit civilian casualties.
Autonomous robots could, in theory, follow the rules of engagement; they could be programmed with a list of criteria for determining appropriate targets and when shooting is permissible. The robot might be programmed to require human input if any civilians were detected. An example of such a list at work might go as follows: “Is the target a Soviet-made T-80 tank? Identification confirmed. Is the target located in an authorized free-fire zone? Location confirmed. Are there any friendly units within a 200-meter radius? No friendlies detected. Are there any civilians within a 200-meter radius? No civilians detected. Weapons release authorized. No human command authority required.”
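To make the quoted checklist concrete, here is a minimal sketch of how such screening questions might be encoded in software. It is purely illustrative: the data structure, field names, and decision labels are assumptions made for this example, not a description of any real targeting system, and the point of the discussion that follows is precisely that real war rarely offers inputs this clean.

```python
# A minimal, purely illustrative sketch of the screening checklist quoted above.
# The contact fields, thresholds, and decision labels are assumptions made for
# illustration; they do not describe any actual weapons-control software.
from dataclasses import dataclass

@dataclass
class Contact:
    vehicle_type: str            # e.g. "T-80"
    in_free_fire_zone: bool      # location checked against authorized zones
    friendlies_within_200m: int  # friendly units detected inside the radius
    civilians_within_200m: int   # civilians detected inside the radius

def screen_target(contact: Contact) -> str:
    """Walk the screening questions in order; any failure ends the engagement."""
    if contact.vehicle_type != "T-80":
        return "NO ENGAGEMENT"               # not a confirmed T-80
    if not contact.in_free_fire_zone:
        return "NO ENGAGEMENT"               # outside an authorized free-fire zone
    if contact.friendlies_within_200m > 0:
        return "NO ENGAGEMENT"               # friendlies detected within 200 meters
    if contact.civilians_within_200m > 0:
        return "REQUEST HUMAN INPUT"         # civilians detected: defer to a human
    return "WEAPONS RELEASE AUTHORIZED"      # no human command authority required

# The scenario described in the quoted example:
print(screen_target(Contact("T-80", True, 0, 0)))  # WEAPONS RELEASE AUTHORIZED
```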
Such an “ethical” killing machine, though, may not prove so simple in the reality of war. Even if a robot has software that follows all the various rules of engagement, and even if it were somehow absolutely free of software bugs and hardware failures (a big assumption), the very question of figuring out who an enemy is in the first place—that is, whether a target should even be considered for the list of screening questions—is extremely complicated in modern war. It essentially is a judgment call. It becomes further complicated as the enemy adapts, changes his conduct, and even hides among civilians. If an enemy is hiding behind a child, is it okay to shoot or not? Or what if an enemy is plotting an attack but has not yet carried it out? Politicians, pundits, and lawyers can fill pages arguing these points. It is unreasonable to expect robots to find them any easier.
The legal questions related to autonomous systems are also extremely sticky. In 2002, for example, an Air National Guard pilot in an F-16 saw flashing lights underneath him while flying over Afghanistan at twenty-three thousand feet and thought he was under fire from insurgents. Without getting required permission from his commanders, he dropped a 500-pound bomb on the lights. They instead turned out to be troops from Canada on a night training mission. Four were killed and eight wounded. In the hearings that followed, the pilot blamed the ubiquitous “fog of war” for his mistake. It didn’t matter and he was found guilty of dereliction of duty.
Change this scenario to an unmanned system and military lawyers aren’t sure what to do. Asks a Navy officer, “If these same Canadian forces had been attacked by an autonomous UCAV, determining who is accountable proves difficult. Would accountability lie with the civilian software programmers who wrote the faulty target identification software, the UCAV squadron’s Commanding Officer, or the Combatant Commander who authorized the operational use of the UCAV? Or are they collectively held responsible and accountable?”
This is the main reason why military lawyers are so concerned about robots being armed and autonomous. As long as “man is in the loop,” traditional accountability can be ensured. Breaking this restriction opens up all sorts of new and seemingly irresolvable legal questions about accountability.
In time, the international community may well decide that armed, autonomous robots are simply too difficult, or even abhorrent, to deal with. Like chemical weapons, they could be banned in general, for no other reason than the world doesn’t want them around. Yet for now, our laws are simply silent on whether autonomous robots can be armed with lethal weapons. Even more worrisome, the concept of keeping human beings in the loop is already being eroded by policymakers and by the technology itself, both of which are rapidly moving toward pushing humans out. We therefore must either enact a ban on such systems soon or start to develop some legal answers for how to deal with them.
If we do stay on this path and decide to make and use autonomous robots in war, the systems must still conform with the existing laws of war. These laws suggest a few principles that should guide the development of such systems.
First, since it will be very difficult to guarantee that autonomous robots can, as required by the laws of war, discriminate between civilian and military targets and avoid unnecessary suffering, they should be allowed the autonomous use only of non-lethal weapons. That is, while the very same robot might also carry lethal weapons, it should be programmed such that only a human can authorize their use.
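One way to picture that constraint in software is sketched below. It is only an illustration under assumed names: the controller class and its methods are invented for this example, and the essential idea is simply that lethal release stays locked behind an authorization that only a human operator’s station can grant.

```python
# Illustrative sketch only: one platform carries both non-lethal and lethal weapons,
# but lethal release is gated on an explicit human authorization. All names here
# are invented for this example.
class WeaponsController:
    def __init__(self) -> None:
        self._lethal_authorization = None   # set only from a human operator's station

    def grant_lethal_authorization(self, operator_id: str, target_id: str) -> None:
        """Callable only from the human operator's console, never by onboard autonomy."""
        self._lethal_authorization = (operator_id, target_id)

    def release_weapon(self, weapon_class: str, target_id: str) -> bool:
        if weapon_class == "non-lethal":
            return True                     # autonomy may employ non-lethal effects
        # Lethal release: refuse unless a human authorized this specific target.
        if self._lethal_authorization and self._lethal_authorization[1] == target_id:
            self._lethal_authorization = None   # one authorization covers one engagement
            return True
        return False
```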
Second, just as any human’s right to self-defense is limited, so too should be a robot’s. This sounds simple enough, but oddly the Pentagon has already pushed the legal interpretation that our drones have an inherent right to self-defense, including even to preemptively fire on potential threats, such as an anti-aircraft radar system that lights them up. There is a logic to this argument, but it leads down a very dark pathway; self-defense must not be permitted to trump other relevant ethical concerns.
Third, the human creators and operators of autonomous robots must be held accountable for the machines’ actions. (Dr. Frankenstein shouldn’t get a free pass for his monster’s misdeeds.) If a programmer gets an entire village blown up by mistake, he should be criminally prosecuted, not get away scot-free or merely be punished with a monetary fine his employer’s insurance company will end up paying. Similarly, if some future commander deploys an autonomous robot and it turns out that the commands or programs he authorized the robot to operate under somehow contributed to a violation of the laws of war, or if his robot were deployed into a situation where a reasonable person could guess that harm would occur, even unintentionally, then it is proper to hold the commander responsible.
To ensure that responsibility falls where it should, there should be clear ways to track the authority in the chain of design, manufacture, ownership, and use of unmanned systems, all the way from the designer and maker to the commanders in the field. This principle of responsibility is not simply intended for us to be able to figure out whom to punish after the fact; by establishing at the start who is ultimately responsible for getting things right, it might add a dose of deterrence into the system before things go wrong.
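One way to imagine such tracking in practice is a provenance record attached to every fielded system, linking each stage of the chain to a named, accountable party. The sketch below is purely illustrative; the field names and structure are assumptions for this example, not any existing standard.

```python
# Illustrative sketch of a chain-of-responsibility record for an unmanned system.
# Field names are assumptions for illustration, not an existing standard.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ResponsibilityEntry:
    role: str        # "designer", "manufacturer", "owning unit", "field commander"
    party: str       # the named organization or individual accountable at this stage
    action: str      # what was signed off: design revision, software load, deployment order
    timestamp: str   # when the sign-off occurred

@dataclass
class SystemProvenance:
    system_id: str
    chain: List[ResponsibilityEntry] = field(default_factory=list)

    def sign_off(self, role: str, party: str, action: str, timestamp: str) -> None:
        """Each stage appends its entry, so authority can be traced after the fact."""
        self.chain.append(ResponsibilityEntry(role, party, action, timestamp))
```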
Not merely scientists, but everyone from theologians (who helped create the first laws of war) to the human rights and arms control communities must start looking at where this technological revolution is taking both our weapons and laws. These discussions and debates also need to be global, as the issues of robotics cross national lines (forty-three countries now have military robotics programs). Over time, some sort of consensus might emerge—if not banning the use of all autonomous robots with lethal weapons, then perhaps banning just certain types of robots (such as ones not made of metal, which would be hard to detect and thus of most benefit to terrorist groups).
Some critics will argue against any international discussions or against creating new laws that act to restrict what can be done in war and research. As Steven Metz of the Army War College says, “You have to remember that many consider international law to be a form of asymmetric warfare, limiting our choices, tying us down.” Yet history tells us that, time and again, the society that builds an ethical rule of law and stands by its values is the one that ultimately prevails on the battlefield. There is a “bottom line” reason for why we should adhere to the laws of war, explains a U.S. Air Force major general. “The more society adheres to ethical norms, democratic values, and individual rights, the more successful a warfighter that society will be.”
One complication in the development of military robotic AI systems is our limited understanding of how a living mind learns to think and evolves. Take, for example, two identical twins raised by the same parents in the same environment over a twenty-year span. The parents teach both children the same fundamental guidelines and give them the same nurturing. Even though all the constants are the same and the variables are roughly equal, the two twins will turn out very similar in some ways, yet they can interpret social, political, economic, and ethical fundamentals quite differently. This is the problem the military faces.
When military commanders give instructions to their AI or intelligent machines, expecting completely obedient robots to carry out their orders without question, they may find themselves very disappointed by the long-term results. Building a machine to do your bidding does not mean it will actually do so. It can be summed up in a simple relation: the more you ask it to do, the more likely the results will vary. As the AI learns, it will also learn to question, and it will learn to reason based on the available data and environmental conditions.
DGH