RAIN+ ETHICS Primer #2

Controlled Autonomy: Defining Autonomous Weapon Systems

About the primer

This primer defines autonomous weapon systems. It is the second in a series of RAIN+ ETHICS primers. Other primers can be found here: https://www.rainresearchgroup.ai/ethics/primers.

It is a living document that will be regularly updated. Feedback and suggestions are welcome and can be sent to primers@rainresearchgroup.ai.

Executive summary

    • There are no universal rules or guidelines on the ethical dimensions of the design, development and deployment of autonomous weapon systems or systems with autonomous capabilities.
    • Many discussions boil down to the amount of human control and oversight – to what extent there should be a ‘human-in-the-loop’, as it is often referred to.
    • There is a perception that the amount of human control will automatically decrease as the autonomy of vehicles and systems is enhanced by the use of AI, which will in turn affect responsibility and accountability.
    • At the moment, however, virtually all autonomous weapon systems are still under human control, which means that much of the ethical debate about these systems is actually about future, rather than current, capabilities.

1. Introduction

The ethical debate on autonomous weapon systems is concentrated on the potential risks and benefits of weapon systems capable of finding, tracking and engaging targets independent of human control. While such systems do not yet exist, there are systems currently in operation and development that possess varying degrees of autonomy. That is why this primer discusses not only the ethical questions related to the potential future use of autonomous weapon systems but also the questions that can be raised about existing systems with autonomous capabilities.

2. Defining autonomous weapon systems

Autonomous weapon systems (AWS): AWS are systems that use sensory data and computer algorithms to independently search for, identify and engage targets based on programmed constraints and descriptions. There is no consensus about the definition of AWS. Broadly speaking, the varying definitions of AWS fall into three categories:

  • The narrow definition: Only includes those weapon systems capable of understanding high levels of intent and direction. This definition excludes any existing weapons and systems currently in development. In fact, it would probably require general AI to provide weapon systems with this level of agency. ➤ See primer #1 for a brief discussion of narrow and general AI.
  • The broad definition: Includes those weapon systems capable of selecting and/or engaging targets independent of human control. This definition includes some existing weapon systems, such as the Israeli Harpy. The Harpy is a loitering munition that independently searches for radar signals from air defense systems. When it detects a signal, it homes in on and destroys the target.
  • The use-dependent definition: Acknowledges that many weapon systems are not inherently autonomous but that they can be used with varying degrees of autonomy.

Lethal autonomous weapon systems (LAWS): Lethal autonomous weapon systems (LAWS) are weapon systems that use sensors and computer algorithms to independently search for, identify and engage targets based on programmed constraints and descriptions. They are distinguished from autonomous weapon systems more broadly in that they are specifically designed to kill people.

Systems with autonomous capabilities (SAC): Systems with autonomous capabilities can perform certain tasks autonomously. For example, a UAV with autopilot can fly autonomously and a system equipped with image recognition software can be used to autonomously identify pre-defined targets. In contrast to fully autonomous weapon systems, these systems already exist. The examples that feature in this primer are all systems with autonomous capabilities.

Autonomous weapon systems in the military kill chain

Autonomous weapon systems and systems with autonomous capabilities can be used at different levels in the military kill chain. Military technologies that are part of the kill chain in war can generally be divided into three categories: munitions, platforms and operational systems.

Munitions: Munitions are guided weapons used to destroy a specific target while minimising collateral damage. Munitions with varying degrees of autonomy have existed for quite some time. For example, the Advanced Medium-Range Air-to-Air Missile (AMRAAM) used by countries around the world is equipped with a radar system to provide a guidance signal. The AMRAAM is a so-called fire-and-forget missile, which means that it does not require further guidance after launch.

Artificial swarming intelligence for warfare missiles: Missiles are becoming more advanced with the evolution of artificial intelligence. Image recognition software enables missiles to autonomously engage pre-defined targets with greater accuracy. The European consortium MBDA is taking this a step further by developing artificial swarming intelligence for its SPEAR-3 mini cruise missiles and SPEAR-EW electronic warfare missiles. The development of increasingly autonomous missile systems raises ethical questions about the ability to maintain effective human control over their functioning (see key questions for a further discussion).

Platforms: Platforms are vehicles or facilities that carry and use equipment with particular military purposes required in the field. Tanks, ships, satellites, and unmanned aerial vehicles (UAVs) are all examples of platforms. Platforms are used for a variety of military purposes, such as surveillance, electronic warfare and launching missiles or other weaponry. Although fully autonomous platforms do not yet exist, there are a number of AI-enhanced platforms that operate with varying degrees of autonomy.

The Robotic Autonomous Platform for Tactical Operations and Reconnaissance (RAPTOR) UAV: The RAPTOR system provides an interesting example because it integrates artificial intelligence software with a small commercial off-the-shelf UAV airframe, which enables it to find, fix, track and identify targets with a high degree of autonomy. Developed for the US Army by Scientific Systems, the RAPTOR can be tasked by a human commander to find and localise a target using machine learning to process sensory data in real time. The RAPTOR was tasked to localise an “enemy” military vehicle in a test conducted by the US military in August 2020, but the image recognition software could, in theory, also be used to identify a human target.

There are two main takeaways from the RAPTOR that are crucial to making sense of the debate on autonomous platforms. First, the platform itself is not where the ethical questions lie; the legal and ethical issues mostly arise with its ‘brain’ – the computer controlling the platform. Second, the most advanced platforms that currently exist are still under human control. However, legal and ethical questions arise about the algorithm’s ability to identify a legitimate target and the degree to which a human commander can exercise effective control (see key questions for a further discussion).

Operational systems: An operational system represents the level of command that brings together the details of tactics with the goals of strategy. At the operational level, commanders use their skills, knowledge, experience and judgment to strategise, plan and organise for the deployment of military force. Operational decision-making requires a strong situational awareness and an understanding of the full context. It is unlikely that humans will be able to delegate all operational decision-making to an autonomous system. However, artificial intelligence and machine-learning algorithms can be used to support human decision-making in calculating optimal decisions and carrying out predictive analysis.

StarLight: Israel Aerospace Industries has developed StarLight, a multi-intelligence, cloud-based analysis system capable of transforming large amounts of unstructured sensory data into actionable intelligence for the Israel Defense Forces (IDF). StarLight uses machine-learning algorithms to carry out predictive analysis of targets and potential threats. As such, it is a good example of a system with autonomous capabilities used by the IDF in support of operational decision-making.

Automation of unmanned vehicles/systems: AI allows increased automation of unmanned vehicles and systems. So far, this has not reached the stage of full automation. Rather, vehicles and systems are increasingly automated in some of their functions, such as flight or targeting.

These levels of automation are often depicted as a continuum, with each level combining several labels found in the literature on the subject. Moving along the continuum, there is more automation and less human agency.

The four levels of autonomy used to classify unmanned vehicles and systems are not sharply demarcated; they often blend into one another. The differences can be explained as follows:

  • Human operated: A human being makes all decisions about what an unmanned vehicle/system does and is allowed to do. Its behaviour depends on the input of a controller or operator;
  • Human delegated (human-in-the-loop): The unmanned vehicle or system has functions it can perform independently of human control, but these have been pre-programmed and are activated/deactivated by human beings. The range of behaviour is built into the system and/or some human input is required;
  • Human supervised (human-on-the-loop): The unmanned vehicle/system can perform a wide range of activities independently based on external information it receives within certain operational boundaries, while human beings monitor its behaviour. The behaviour is more flexible but still within the boundaries of pre-programmed goals or rules. A human being will oversee the process and can normally intervene if necessary;
  • Fully autonomous systems (human-out-of-the-loop): Once objectives are set (by human beings), unmanned vehicles or systems can translate these into specific tasks based on the external information they receive and operate without any human interaction. For example, a fully autonomous weapon system would be able to select, target and fire without any human intervention. Although there may still be overarching rules that the system cannot violate, it can adapt its behaviour and assess how best to meet the objectives set for it. There is no possibility for human intervention.
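As a purely illustrative sketch (every name here is hypothetical, not drawn from any real control system), the four levels above can be expressed as a simple enumeration, with the ethically decisive property being whether human intervention remains possible:

```python
from enum import Enum

class AutonomyLevel(Enum):
    """Illustrative labels for the four levels of autonomy described above."""
    HUMAN_OPERATED = 1    # a human makes all decisions
    HUMAN_DELEGATED = 2   # human-in-the-loop: pre-programmed functions toggled by a human
    HUMAN_SUPERVISED = 3  # human-on-the-loop: human monitors and can intervene
    FULLY_AUTONOMOUS = 4  # human-out-of-the-loop: no human intervention possible

def human_can_intervene(level: AutonomyLevel) -> bool:
    """Per the definitions above, only a fully autonomous system
    leaves no room for human intervention."""
    return level is not AutonomyLevel.FULLY_AUTONOMOUS
```

Note that under the definitions above, only the fourth level rules out intervention entirely; the boundaries between the other three are fluid.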

3. Why are autonomous weapon systems developed?

As with any new weapon system, autonomous weapon systems are developed to gain or maintain a competitive edge over adversaries: if your adversary has machine guns, you also need machine guns. More specifically, there are two important tactical factors driving the development of autonomous systems: 1) countering anti-access and area denial (A2/AD), and 2) the increasing speed of warfare.

  1. Countering A2/AD: Anti-access and area denial, also known as A2/AD, refers to military strategies used to prevent an opponent from operating military forces near, into or within a contested area. It is anticipated that A2/AD systems, such as air defense systems, anti-ship missiles and electronic warfare, will play an increasingly important role in undermining conventional means of war. Autonomous weapon systems are a way to counter A2/AD. Autonomous platforms or munitions can be launched from a ship or other platform outside a contested area, keeping armed forces out of harm’s way. They do not require a communication link with a ground station, which makes them less vulnerable to jamming or other electronic warfare systems.
  2. Increasing speed of war: Advances in military technologies will make it increasingly difficult for humans to keep up with the speed of war. Air defense systems already operate with varying degrees of autonomy to counter incoming missiles. Military strategists believe that greater autonomy is required in all domains to keep up with the speed of adversaries (see the discussion of hyperwar under key concepts).

4. Key concepts of the ethical debate

Human in, on and out of the loop: These concepts are interpreted in different ways, but all relate to human-machine interaction. Whether it is for operating, delegating or supervising, the idea is that there always is, or needs to be, a human being involved for a range of purposes: control, responsibility, accountability or ethical assessment. The problem is that many different ‘loops’ may be running at the same time: loops related to broader legal issues (e.g. getting permission to strike a target), to political decision-making, or to military decision-making at various levels. One could even consider the different functionalities of unmanned vehicles and systems as separate loops – activating automated flight is one example, a mapping function another – which makes it doubtful that a human being could be in all the loops at the same time, especially as AI further enhances such vehicles and systems and algorithms process data ever faster.

For ethics, this is, however, a central part of the debate as taking out or reducing the ‘humans in the loop’ when engaging in military or security operations could obscure legal liabilities and moral responsibilities.

OODA loop: A constantly repeating cycle of observe-orient-decide-act that forms the basis of military decision-making. The idea behind this classic military concept is that the individual or military that can go through this cycle more rapidly than their opponent has a tactical or strategic advantage. The key limiting factor in the OODA loop has always been the human being, who needs time to process information and make decisions. Artificial intelligence has unprecedented potential to speed up the OODA loop, but the extent to which it will revolutionise the OODA loop will depend on how much human control is transferred to the AI system. In an extreme case, AI systems could be allowed to take decisions themselves in the OODA loop, but there are many other possibilities. For example, AI-enhanced systems could quickly analyse vast quantities of data and present the human controller with a suggested course of action or a set of options to choose from. In other words, an AI system could act itself or guide human beings into action. However, as the OODA loop concept is all about tactical and strategic advantages vis-à-vis an enemy, a faster pace of enemy systems may put additional pressure behind the argument to remove human beings as much as possible from the loop.
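As a toy sketch (the single numeric ‘observation’, the threshold-based ‘orientation’ and all function names are invented for illustration, not any real targeting logic), one pass through the cycle might look like this, with an optional human gate at the decision step:

```python
def ooda_step(sensor_reading: float, threshold: float, human_approves=None) -> str:
    """One hypothetical pass through observe-orient-decide-act."""
    observation = sensor_reading                           # observe: gather raw data
    orientation = observation > threshold                  # orient: interpret against context
    proposed_action = "engage" if orientation else "hold"  # decide: propose a course of action
    # Human-in/on-the-loop: a human can veto the proposed action.
    # With human_approves=None, the system acts on its own (out of the loop).
    if human_approves is not None and not human_approves(proposed_action):
        proposed_action = "hold"
    return proposed_action                                 # act
```

The only point of the sketch is structural: removing the `human_approves` gate speeds up the cycle, but it is exactly that gate which carries the control, responsibility and accountability discussed above.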

Black Box: Some AI systems are so complex and opaque that nobody knows exactly how their algorithms process data and how they reach certain outcomes. The black box of AI makes it difficult, for example, to abide by the ethical principle of ‘traceability’ of the US Department of Defense, which prescribes that ‘relevant personnel possess an appropriate understanding of the technology’ involved and work with ‘transparent methodologies’.

Predictability: The predictability of an autonomous weapon system is reflected in the understanding of how it will function in any given circumstances of its use, and the effects it will produce.

Reliability: The reliability of an autonomous weapon system is reflected in how consistently the system will function as intended – that is, without system malfunctions or unintended effects.

Automation bias: The human tendency to put too much confidence in automated decision-making systems, including in contexts where machines are less suited to take decisions. Systems with complex algorithmic processes, such as autonomous weapon systems, intensify this tendency because their outputs are often difficult to explain. In other words, an operator or commander cannot easily establish why a system is giving particular suggestions. 

Moral responsibility: Until UAVs become their own moral agents, it is human beings (developers, operators, commanders) that have the ultimate moral responsibility for the design, development and deployment of UAVs. However, the more autonomous systems get (through AI/machine learning), the more difficult it will be to assign this moral responsibility.

Intelligence versus autonomy: Intelligence and autonomy are often used interchangeably but they are not the same. Simply put, intelligence refers to a system’s ability to perform complex tasks and decide on the best course of action to achieve its goals (e.g. adapting to new situations and information). The autonomy of a system refers to the level of freedom it has to perform its tasks and accomplish its objectives.

Hyperwar: A new form of warfare in which autonomous systems and artificial intelligence play an important role. Technological advances revolutionise the speed and scope of war, which means that human decision making is either less important or entirely absent from the classic observe-orient-decide-act (OODA) loop of traditional military operations.

Automation: The performance of tasks by machines according to pre-programmed, pre-defined rules and options, with minimal human intervention. While automated functions are generally still bound by such rules, AI offers unprecedented potential for increased automation.

Automation versus AI: While often used interchangeably, automation generally refers to technology and systems that follow pre-programmed rules to handle tasks and cannot analyze and apply new information in the face of new situations. AI, on the other hand, allows systems to do just that: it allows them to learn and adapt. This is related to a subset of AI called machine learning.
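A deliberately simplistic, hypothetical contrast (neither controller resembles real military software): the rule-based function below applies a fixed, pre-programmed threshold, while the adaptive class updates its decision boundary from the data it observes:

```python
def rule_based_alert(value: float) -> bool:
    """Automation: a fixed, pre-programmed rule that never changes."""
    return value > 10.0

class AdaptiveAlert:
    """AI-style behaviour: the decision boundary adapts to new information.
    Toy example only; the learning rule is a simple running average."""
    def __init__(self, threshold: float = 10.0, rate: float = 0.1):
        self.threshold = threshold
        self.rate = rate

    def observe(self, value: float) -> bool:
        alert = value > self.threshold
        # Move the threshold toward a running average of observed values.
        self.threshold += self.rate * (value - self.threshold)
        return alert
```

The first controller will behave identically forever; the second drifts with its inputs, which is precisely what makes learning systems harder to predict and verify.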
