Lt Col Joseph O. Chapa, USAF

In a 2018 press gaggle, a reporter asked then-Secretary of Defense Jim Mattis whether artificial intelligence would change the nature of war. Mattis—an officer who knows Clausewitz as well as anyone—referred to the well-known distinction between the nature and character of war. Ultimately, he told the reporter, “I can’t answer your question, but I’m certainly questioning my original premise that the fundamental nature of war will not change ... You’ve [got] to question that now. I just don’t have the answers yet.”

The tension Mattis grapples with in this conversation is about the relationship between human ends and machine means. “If we ever get to the point where [a weapons system is] completely on automatic pilot and we’re all spectators,” Mattis said, “then it’s [no] longer serving a political purpose.” In war as in every other area of life, machines are designed to serve human ends.

Senior leaders within the Pentagon are tracking the recent surge in artificial intelligence (AI) and autonomy capabilities. They also recognize the importance of subordinating machine capabilities to a human decision-maker. However, as senior leaders reach for a conceptual framework to understand the relationship between emerging technology and human oversight, many have grabbed hold of the wrong one.

I have lost count of the number of panels I have participated in or conferences I have attended in which a U.S. Department of Defense (DoD) official—either in or out of uniform—has reassured audiences concerned about the prospect of AI-enabled autonomous weapons with some variation of “don’t worry, it’s DoD policy that we’ll always have a human in the loop.”

There is a glaring problem with such statements, however: that is not DoD policy. The DoD has never had a policy that requires autonomous weapons to have a human in the loop.

What DoD does have is Directive 3000.09, “Autonomy in Weapons Systems,” which both defines what an autonomous weapons system is and says that any such weapon developed by the United States “will be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

Contrary to popular belief, however, this still does not require a human to be in the loop. And by repeating a piece of conventional wisdom that does not appear in policy, we obscure the importance of human judgment and its connection to the laws of war and just war principles.

Strategic and Ethical Implications

This is not a mere quibble about terminology. There are very real strategic and ethical implications of the in-the-loop, on-the-loop, and out-of-the-loop language. At the most foundational level, the human in-, on-, or out-of-the-loop framing misrepresents the nature of AI warfare. At a more practical level, a commitment to having a human in the loop fails to deliver the safeguards that proponents of the term often believe it does.

The idiom “human-in-the-loop” presumes a machine decision loop and then asks where the human sits relative to that pre-existing loop. Rather than give primacy to the machine’s work, why not make the human’s decision cycle central?

If those who paraphrase Clausewitz are right that war is a human endeavor, we would do well to define the human task first and then ask where the machine is best positioned to help in relation to that task. Indeed, several co-authors and I have made this argument at greater length elsewhere. More generally, though, we move too quickly and skip over these important questions when we start from a conceptual frame in which the machine decision loop sits at the center.

Muddled Concepts

There is a second argument for rejecting the in-the-loop framing: it muddles more than it clarifies.

Whether there is a human in the loop depends upon where we draw the loop. I once participated in a Scientific Advisory Board study on responsible AI in which, during a panel discussion, a senior PhD engineer who had spent considerable time working in the autonomous vehicle industry made the point that we have had human out-of-the-loop systems in civilian applications for many years. Anti-lock brake systems, he explained, are human out-of-the-loop systems that have been standard on personal vehicles for decades: the anti-lock system engages the braking mechanism faster than the driver could on their own.

The anti-lock brake example illustrates the importance of clearer thinking about human-machine interaction. After the human chooses to engage the vehicle’s brakes, the machine is responsible for choosing how to do so most effectively. So which is the relevant decision loop? If we are talking about engaging the brake mechanism after the human slams on the brake pedal, then anti-lock brakes are human out-of-the-loop systems. But if we are talking about the entire braking process—including both the human’s decision to hit the brake pedal and the actuation of the brake mechanism—then the brake system very much remains a human in-the-loop system.

In other words, the same mechanism can be framed as human in the loop or out of the loop, depending on how we describe the whole system. And this has implications not just for how we think about the policy of autonomous weapons systems, but also for how we think about their ethics. If we cannot accurately describe where machine autonomy begins and ends, how can we possibly describe human responsibility and the role of human judgment?

Counterfactual Application: Iraq 1991

The same principles apply to combat. On February 27, 1991, General Norman Schwarzkopf took the U.S. Central Command podium in Riyadh, Saudi Arabia, to give what would later be called the “mother of all briefings.” That briefing—conducted three decades ago with nothing but poster board charts and Schwarzkopf’s extensible metal pointer—has something to teach us about the lethal autonomous weapons systems of the future.

In that iconic briefing, Schwarzkopf described the Iraqi order of battle prior to U.S. military action. Iraq’s most capable armor units formed a “front line barrier” arrayed along Kuwait’s southern border with Saudi Arabia. Schwarzkopf’s direction to his air component commander, Lt Gen Chuck Horner, was to degrade the Iraqi armor units by 50% or more. Reducing Iraqi armor capacity to 50% or below was the “go sign” for the land component to engage Iraqi forces inside Kuwait.

Now, instead of an air component made up of traditionally piloted F-16s, A-10s, and F/A-18s, suppose that Schwarzkopf had a swarm of AI-enabled lethal autonomous weapons systems at his disposal. Suppose Schwarzkopf had given the order to degrade Iraqi armor by 50% not to Horner and his traditional pilots, but rather to the lethal autonomous swarm.

Is this notional employment of the swarm a human-in-the-loop, on-the-loop, or out-of-the-loop system? On the one hand, Schwarzkopf is a human, and his order, when paired with the autonomous swarm’s ability to carry it out, forms a decision loop. So this looks like a human-in-the-loop system. On the other hand, I suspect that most critics of autonomous weapons and most proponents of the in-the-loop taxonomy have a different loop in mind.

What is typically at issue in the ethical employment of autonomous systems is the set of tactical decisions about which objects to target, when, and under what circumstances. In our counterfactual, Schwarzkopf has little visibility into these discrete questions from his vantage point at U.S. Central Command headquarters. So, from this view, the notional Schwarzkopf example is a human-out-of-the-loop system.

Appealing to the loop taxonomy without a careful explication of exactly what we are asking the autonomous system to do leads to ambiguity and confusion about what autonomy is and how the Defense Department intends to employ it in combat. Even a policy that requires a human in the loop can yield a far more permissive regulatory environment than many who use the loop framing would prefer.

Understanding Current Guidance

So, if DoD policy does not commit us to having a human in the loop for autonomous weapons, what does it commit us to? What counts as “appropriate levels of human judgment”?

DoD Directive 3000.09 requires that autonomous weapons undergo a review twice in the system lifecycle, once before formal development and again before fielding. It also names the trio of senior leaders responsible for overseeing that review: the Under Secretary of Defense for Policy, the Under Secretary of Defense for Research and Engineering, and the Vice Chairman of the Joint Chiefs of Staff. But as the document also makes clear, those who authorize or employ autonomous weapons systems are still responsible for ensuring compliance with the “law of war, applicable treaties, weapon system safety rules, and applicable rules of engagement.”

Though 3000.09 is unambiguous on these points, it does not commit the United States to always having a human in the loop. The document’s primary function is to establish a clear process for oversight. The directive has now been on the books for twelve years; what the DoD needs is better public messaging that moves away from the misguided human-in-the-loop framing and instead focuses on how DoD policy will enable the U.S. military to maintain its commitments to the laws of war and just war principles. The United States is better served by a Department of Defense that signals concrete commitments to centering human judgment and the laws of war, and by Department officials who reaffirm those commitments, than by talk of machine decision loops.

Joseph Chapa is a lieutenant colonel in the U.S. Air Force and holds a doctorate in philosophy from the University of Oxford. His areas of expertise include just war theory, military ethics, and especially the ethics of remote weapons and the ethics of artificial intelligence. He is a senior pilot with more than 1,400 pilot and instructor pilot hours. He currently serves as a military faculty member at the Marine Corps Command and Staff College in Quantico, Virginia, and previously served as the Department of the Air Force’s first Chief Responsible AI Ethics Officer.