
Naval Escalation in an Unmanned Context

By Jonathan Panter

On March 14, 2023, two Russian fighter jets intercepted a U.S. Air Force MQ-9 Reaper in international airspace, damaging the drone’s propeller and forcing it to crash into the Black Sea. The Russians probably understood that U.S. military retaliation – or, more importantly, escalation – was unlikely; wrecking a drone is not like killing people. Indeed, the incident contrasts sharply with the recent revelation of another aerial face-off. In late 2022, Russian aircraft nearly shot down a manned British RC-135 Rivet Joint surveillance aircraft. With respect to escalation, senior defense officials later indicated, the latter incident could have been severe.1

There is an emerging view among scholars and policymakers that unmanned aerial vehicles can reduce the risk of escalation by providing an off-ramp during crisis incidents that, were human beings involved, might otherwise spark public calls for retaliation. Other recent events, such as the Iranian shoot-down of a U.S. RQ-4 Global Hawk over the Persian Gulf in 2019 – which likewise did not spur kinetic U.S. military retaliation – lend credence to this view. But in another theater, the Indo-Pacific, the outlook for unmanned escalation dynamics is uncertain, and potentially much worse. There, unmanned (and soon, autonomous) military competition will occur not just between aircraft, but between vessels on and below the ocean.

Over the past two decades, China has substantially enlarged its navy and irregular maritime forces. It has deployed these forces to patrol its excessive maritime claims and to threaten Taiwan, expanded its nuclear arsenal, and built a conventional anti-access, area-denial capacity whose overlap with its nuclear deterrence architecture remains unclear. Unmanned and autonomous maritime systems add a great unknown variable to this mix. Unmanned ships and submarines may strengthen capabilities in ways not currently anticipated; introduce unexpected vulnerabilities across entire warfare areas; lower the threshold for escalatory acts; or complicate each side’s ability to make credible threats and assurances.

Forecasting Escalation Dynamics

Escalation is a transition from peace to war, or an increase in the severity of an ongoing conflict. Many naval officers assume that unmanned ships are inherently de-escalatory assets due to their lack of personnel onboard. Recent high-profile incidents – such as the MQ-9 Reaper and RQ-4 Global Hawk incidents mentioned previously – seem, at first glance, to confirm this assumption. The logic is simple: if one side destroys the other’s unmanned asset, the victim will feel less compelled to respond, since no lives were lost.

While enticing, this assumption is also illusory. First, the example is of limited applicability: most unmanned ships and submarines under development will not be deployed independently. They will work in tandem with each other and with manned assets, such that the compromise of one vessel – potentially by cyber means – often affects others, changing a force’s overall readiness. The most serious escalation risk thus lies at a systemic, or fleetwide, level – not at the level of individual shoot-downs.

Second, lessons about escalation from two decades of operational employment of unmanned aircraft cannot be imported, wholesale, to the surface and subsurface domains – where there is little to no operational record of unmanned vessel employment. The technology, operating environments, expected threats, tactics, and other factors differ substantially.

Our understanding of one variant of escalation, that in the nuclear realm, is famously theoretical – the result of deductive logic, modeling, or gaming – rather than empirical, since nuclear weapons have only been used once in conflict, and never between two nuclear powers. Right now, the story is similar for unmanned surface and subsurface vessels. Neither side has deployed unmanned vessels in sufficient numbers or duration, and across a great enough variety of contexts, for researchers to draw evidence-based conclusions. Everything remains a projection.

Fortunately, three existing areas of academic scholarship – crisis bargaining, inadvertent nuclear escalation, and escalation in cyberspace – provide some clues about what naval escalation in an unmanned context might look like.

Crisis Bargaining

During international crises, a state may try to convince its opponent that it is willing to fight over an issue – and that, if war were to break out, it would prevail. The goal is to get what you want without actually fighting. To intimidate an opponent, a state might inflate its capabilities or hide its weaknesses. To convince others of its willingness to fight, a state might take actions that create a risk of war, such as mobilizing troops (so-called “costly signals”). Ascertaining capability and intent in international crises is therefore quite difficult, and misjudging either may lead to war.2

Between nuclear-armed states, these phenomena are more severe. Neither side wants nuclear war, nor believes that the other is willing to risk it. To make threats credible, therefore, states may initiate an unstable situation (“rock the boat”) but then “tie their own hands” so that catastrophe can be averted only if the opponent backs down. States do this by, for example, automating decision-making, or stationing troops in harm’s way.3

The proliferation of unmanned and autonomous vessels promises to impact all of these crisis bargaining strategies. First – as noted previously – unmanned vessels may be perceived as “less escalatory,” since deploying them does not risk sailors’ lives. But this perception could have the opposite effect, if states – believing the escalation risk to be lower – deploy their unmanned vessels closer to an adversary’s territory or defensive systems. The adversary might, in turn, believe that his opponent is preparing the battlespace, or even that an attack is imminent. Economists call this paradox “moral hazard.” The classic example is an insured person’s willingness to take on more risk.

Second, a truly autonomous platform – one lacking a means of being recalled or otherwise controlled after its deployment – would be ideal for “tying hands” strategies. A state could send such vessels to run a blockade, for instance, daring the other side to fire first. Conversely, an unmanned (but not autonomous) vessel might have remote human operators, giving a state some leeway to back down after “rocking the boat.” In a crisis, it may be difficult for an adversary to distinguish between the two types of vessels.

A further complication arises if a state misrepresents a recallable vessel as non-recallable, perhaps to gain the negotiating leverage of “tying hands,” while maintaining a secret exit option. And even if an autonomous vessel is positively identified as such, attributing “intent” to it is a gray area. The more autonomously a vessel operates, the easier it is to attribute its behaviors to its programming, but the harder it is to determine whether its actions in a specific scenario are intended by humans (versus being default actions or error).

Unmanned Aerial Vehicles?

Scholars have begun to address such questions by studying unmanned aerial systems.4 To give two recent examples, one finding suggests that unmanned aircraft may be de-escalatory assets, since the absence of a pilot means domestic publics would be less likely to demand blood-for-blood if a drone gets shot down.5 Another scholar finds that because drones combine persistent surveillance with precision strike, they can “increase the certainty of punishment” – making threats more credible.6

Caution should be taken in applying such lessons to the maritime realm. First, unmanned ships and submarines are decades behind unmanned aerial vehicles in sophistication. Accordingly, current plans point to a (potentially decades-long) roll-out period during which unmanned vessels will be partially or optionally manned.7 Such vessels could appear unmanned to an adversary, when in fact crews are simply not visible. This complicates rules of engagement, and warps expectations for retaliation if a state targets an apparently unmanned vessel that in fact has a skeleton crew.

Second, ships and submarines have much longer endurance times than aircraft. Hence, mechanical and software problems will receive less frequent local troubleshooting and digital forensic examination. An aerial drone that suffers an attempted hack will return to base within a few hours; not so with unmanned ships and submarines because their transit and on-station times are much longer, especially those dispersed across a wide geographic area for distributed maritime operations. This complicates efforts to attribute failures to “benign” causes or adversarial compromise. The question may not be whether an attempted attack merits a response due to loss of life, but rather whether it represents the opening salvo in a conflict.

Finally, with regard to the combination of persistent surveillance and precision strike, most unmanned maritime systems in advanced stages of development for the U.S. Navy do not combine sensing and shooting. Small- and medium-sized surface craft, for instance, are much closer to deployment than the U.S. Navy’s “Large Unmanned Surface Vessel,” which is envisioned as an adjunct missile magazine. The small- and medium-sized craft are expected to be scouts, minesweepers, and distributed sensors. Accordingly, they do little for communicating credible threats, but do present attractive targets for a first mover in a conflict, whose opening goal would be to blind the adversary.

Inadvertent Nuclear Escalation

During conventional war, even if adversaries carefully avoid targeting the other side’s nuclear weapons, other parts of a military’s nuclear deterrent may be dual-use systems. An attack on an enemy’s command-and-control, early warning systems, attack submarines, or the like – even one conducted purely for conventional objectives – could make the target state fear that its nuclear deterrent is in danger of being rendered vulnerable.8 This fear could encourage a state to launch its nuclear weapons before it is too late. Incremental improvements to targeting and sensing in the past two decades – especially in the underwater realm – have exacerbated the problem by making retaliatory assets easier to find and destroy.9

In the naval context, the risk is that one side may perceive a “use it or lose it” scenario if it feels that its ballistic missile submarines have all been (or are close to being) located. In particular, the ever-wider deployment of assets that render the underwater battlespace more transparent – such as small, long-duration underwater vehicles equipped with sonar – could undermine an adversary’s second-strike capability. Today, the U.S. Navy’s primary anti-submarine platforms aggregate organic sensing and offensive capabilities (surface combatants, attack submarines, and maritime patrol aircraft). The shift to distributed maritime operations using unmanned platforms, however, portends a future of disaggregated capabilities. Small platforms without onboard weapons systems will still provide remote sensing capability to the joint force. If these sensing platforms are considered non-escalatory because they lack offensive capabilities and sailors onboard, the U.S. Navy might deploy them more widely.10

Escalation in Cyberspace

The U.S. government’s shift to persistent engagement in cyberspace, a strategy called “Defend Forward,”11 has underscored two debates on cyber escalation. The first concerns whether operations in the cyber domain expose previously secure adversarial capabilities to disruption, shifting incentives for preemption on either side.12 The second concerns whether effects generated by cyberattacks (i.e., cyber effects or physical effects) can trigger a “cross-domain” response.13

These debates remain unresolved. Narrowing the focus to cyberattacks on unmanned or autonomous vessels presents an additional challenge for analysis, because these technologies are nascent and efforts to ensure their cyber resilience remain classified. Platforms without crews may present an attractive cyber target, perhaps because interfering with the operation of an unmanned vessel is perceived as less escalatory since human life is not directly at risk.

But a distinction must be made between the compromise of a single vessel and its follow-on effects at a system, or fleetwide level. Based on current plans, unmanned vessels are most likely to be employed as part of an extended, networked hybrid fleet. If penetrating one unmanned vessel’s cyber defenses can allow an adversary to move laterally across a network, this “effect” may be severe, potentially affecting a whole mission or warfare area. The subsequent decline in offensive or defensive capacity at the operational level of war could shift incentives for preemption. Since unmanned vessels operating as part of a team (with other unmanned vessels or with manned ones) are dependent on beyond-line-of-sight communications, interruption of one of these pathways (e.g., disabling a geostationary satellite over the area of operations) could have a similar systemic effect.

The Role of Human Judgment

Modern naval operations already depend on automated combat systems, lists of “if-then” statements, and data links. For decades, people have increasingly assigned mundane and repetitive (or computationally laborious) shipboard tasks to computers, leaving officers and sailors in a supervisory role. This state of affairs is accelerating with the introduction of unmanned and autonomous vessels, especially when combined with artificial intelligence. These technologies are likely to make human judgment more, not less, important.14 Many future naval officers will be designers, regulators, or managers of automated systems. So too will civilian policymakers directing the use of unmanned and autonomous maritime systems to signal capability and intent in crisis. For both policymakers and officers, questions requiring substantial judgment will include:

The “moral hazard” problem. If unmanned vessels are perceived as less escalatory – because they lack crews, or because they carry only sensors and no offensive capabilities – are they more likely to be employed in ways that incur other risks (such as threatening adversary defensive or nuclear deterrent capabilities in peacetime)?

The autonomy/intent paradox. When will an autonomous vessel’s action be considered a signal of an adversary’s intent (since the adversary designed and coded the vessel to act a certain way), versus an action that the vessel “decided” to take on its own? If an adversary claims ignorance – that he did not intend an autonomous vessel to act a certain way – when will he be taken at his word?15

The attribution problem. Since unmanned vessels have no crews, local troubleshooting of equipment – along with digital forensics – will occur less frequently than it does on manned vessels. Remotely attributing a problem to routine component or software failure, versus to adversarial cyberattack, will often be harder than it would be with physical access. Will there have to be a higher “certainty threshold” for positive attribution of an attack on an unmanned vessel?

The “roll-out” uncertainty. How will the first few decades of hybrid fleet operations (utilizing partial and optional-manning constructs) complicate the decision to target or compromise unmanned vessels? If a vessel appears unmanned, but has an unseen skeleton crew – and then suffers an attack – how should the target state assess the attacker’s claim of ignorance about the presence of personnel onboard?

The cyber problem. Does unmanned systems’ attractiveness as cyber targets (due to their lack of onboard personnel and their often highly-networked employment) present a system-wide vulnerability in warfare areas that lean more heavily on unmanned systems than others? Which warfare areas would have to be affected to change incentives for preemption?

Since unmanned vessels have not yet been broadly integrated into fleet operations, these questions have no definitive, evidence-based answers. But they can help frame the problem. The maritime domain in East Asia is already particularly susceptible to escalation. Interactions between potential foes should, ideally, never escalate without the consent and direction of policymakers. But in practice, interactions-at-sea can escalate due to hyper-local misperceptions, influenced by factors like command, control, and communications, situational awareness, or relative capabilities. All of these factors are changing with the advent of unmanned and autonomous platforms. Escalation in this context cannot be an afterthought.

Jonathan Panter is a Ph.D. candidate in Political Science at Columbia University. His research examines Congressional oversight over U.S. naval operations. Prior to attending Columbia, Mr. Panter served as a Surface Warfare Officer in the United States Navy. He holds an M.Phil. and M.A. in Political Science from Columbia, and a B.A. in Government from Cornell University.

The author thanks Johnathan Falcone, Anand Jantzen, Jenny Jun, Shuxian Luo, and Ian Sundstrom for comments on earlier drafts of this article.


1. Thomas Gibbons-Neff and Eric Schmitt, “Miscommunication Nearly Led to Russian Jet Shooting Down British Spy Plane, U.S. Officials Say,” New York Times, April 12, 2023, https://www.nytimes.com/2023/04/12/world/europe/russian-jet-british-spy-plane.html.

2. James D. Fearon, “Rationalist Explanations for War,” International Organization 49, no. 3 (Summer 1995): 379-414.

3. Thomas C. Schelling, Arms and Influence (New Haven: Yale University Press, [1966] 2008), 43-48, 99-107.

4. See, e.g., Michael C. Horowitz, Sarah E. Kreps, and Matthew Fuhrmann, “Separating Fact from Fiction in the Debate over Drone Proliferation,” International Security 41, no. 2 (Fall 2016): 7-42.

5. Erik Lin-Greenberg, “Wargame of Drones: Remotely Piloted Aircraft and Crisis Escalation,” Journal of Conflict Resolution (2022). See also Erik Lin-Greenberg, “Game of Drones: What Experimental Wargames Reveal About Drones and Escalation,” War on the Rocks, January 10, 2019, https://warontherocks.com/2019/01/game-of-drones-what-experimental-wargames-reveal-about-drones-and-escalation/.

6. Amy Zegart, “Cheap flights, credible threats: The future of armed drones and coercion,” Journal of Strategic Studies 43, no. 1 (2020): 6-46.

7. Sam Lagrone, “Navy: Large USV Will Require Small Crews for the Next Several Years,” USNI News, August 3, 2021, https://news.usni.org/2021/08/03/navy-large-usv-will-require-small-crews-for-the-next-several-years.

8. Barry D. Posen, Inadvertent Escalation (Ithaca: Cornell University Press, 1991); James Acton, “Escalation through Entanglement: How the Vulnerability of Command-and-Control Systems Raises the Risks of an Inadvertent Nuclear War,” International Security 43, no. 1 (Summer 2018): 56-99. For applications to contemporary Sino-US security competition, see: Caitlin Talmadge, “Would China Go Nuclear? Assessing the Risk of Chinese Nuclear Escalation in a Conventional War with the United States,” International Security 41, no. 4 (Spring 2017): 50-92; Fiona S. Cunningham and M. Taylor Fravel, “Dangerous Confidence? Chinese Views on Nuclear Escalation,” International Security 44, no. 2 (Fall 2019): 61-109; and Wu Riqiang, “Assessing China-U.S. Inadvertent Nuclear Escalation,” International Security 46, no. 3 (Winter 2021/2022): 128-162.

9. Keir A. Lieber and Daryl G. Press, “The New Era of Counterforce,” International Security 41, no. 4 (Spring 2017): 9-49; Rose Gottemoeller, “The Standstill Conundrum: The Advent of Second-Strike Vulnerability and Options to Address It,” Texas National Security Review 4, no. 4 (Fall 2021): 115-124.

10. Jonathan D. Caverley and Peter Dombrowski suggest that one component of crisis stability – the distinguishability of offensive and defensive weapons – is more difficult at sea because naval platforms are designed to perform multiple missions. From this perspective, disaggregating capabilities might improve offense-defense distinguishability and prove stabilizing, rather than escalatory. See: “Cruising for a Bruising: Maritime Competition in an Anti-Access Age.” Security Studies 29, no. 4 (2020): 680-681.

11. For an introduction to this strategy, see: Michael P. Fischerkeller and Robert J. Harknett, “Persistent Engagement, Agreed Competition, and Cyberspace Interaction Dynamics and Escalation,” Cyber Defense Review (2019), https://cyberdefensereview.army.mil/Portals/6/CDR-SE_S5-P3-Fischerkeller.pdf.

12. Erik Gartzke and John R. Lindsay, “Thermonuclear Cyberwar,” Journal of Cybersecurity 3, no. 1 (March 2017): 37-48; Erica D. Borghard and Shawn W. Lonergan, “Cyber Operations as Imperfect Tools of Escalation,” Strategic Studies Quarterly 13, no. 3 (Fall 2019): 122-145.

13. See, e.g., Sarah Kreps and Jacquelyn Schneider, “Escalation firebreaks in the cyber, conventional, and nuclear domains: moving beyond effects-based logics,” Journal of Cybersecurity 5, no. 1 (Fall 2019): 1-11; Jason Healey and Robert Jervis, “The Escalation Inversion and Other Oddities of Situational Cyber Stability,” Texas National Security Review 3, no. 4 (Fall 2020): 30-53.

14. Avi Goldfarb and John R. Lindsay, “Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War,” International Security 46, no. 3 (Winter 2021/2022): 7-50.

15. The author thanks Tove Falk for this insight.

Featured Image: A medium displacement unmanned surface vessel and an MH-60R Sea Hawk helicopter from Helicopter Maritime Strike Squadron (HSM) 73 participate in U.S. Pacific Fleet’s Unmanned Systems Integrated Battle Problem (UxS IBP) April 21, 2021. (U.S. Navy photo by Chief Petty Officer Shannon Renf)

Unmanned Mission Command, Pt. 2

By Tim McGeehan

The following two-part series discusses the command and control of future autonomous systems. Part 1 describes how we have arrived at the current tendency towards detailed control. Part 2 proposes how to refocus on mission command.

Adjusting Course

Today’s commanders are accustomed to operating in permissive environments and have grown addicted to the connectivity that makes detailed control possible. This is emerging as a major vulnerability. For example, while the surface Navy’s concept of “distributed lethality” will increase the complexity of the detection and targeting problems presented to adversaries, it will also increase the complexity of its own command and control. Even in a relatively uncontested environment, tightly coordinating widely dispersed forces will not be a trivial undertaking. This will tend toward lengthening decision cycles, at a time when the emphasis is on shortening them.1 How will the Navy execute operations in a future Anti-Access/Area-Denial (A2/AD) scenario, where every domain is contested (including the EM spectrum and cyberspace) and every fraction of a second counts? 

The Navy must “rediscover” and fully embrace mission command now, to both address current vulnerabilities as well as unleash the future potential of autonomous systems. These systems offer increased precision, faster reaction times, longer endurance, and greater range, but these advantages may not be realized if the approach to command and control remains unchanged.

For starters, to prepare for future environments where data links cannot be taken for granted, commanders must be prepared to give all subordinates, human and machine, wide latitude to operate, which is only afforded by mission command. Many systems will progress from a man “in” the loop (with the person integral to the functioning), to a man “on” the loop (where the person oversees the system and executes command by negation), and then to complete autonomy. In the future, fully autonomous systems may collaborate with one another across a given echelon and solve problems based on the parameters communicated to them as commander’s intent (swarms would fall into this category).

However, it may go even further. Mission command calls for adaptable leaders at every level; what if at some level the leaders are no longer people but machines? It is not hard to imagine a forward deployed autonomous system tasking its own subordinates (fellow machines), particularly in scenarios where there is no available bandwidth to allow backhaul communications or enable detailed control from afar. In these cases, mission command will not just be the preferred option, it will be the only option. This reliance on mission command may be seen as a cultural shift, but in reality, it is a return to the Navy’s cultural roots.

Back to Basics

Culturally, the Navy should be well-suited to embrace the mission command model to employ autonomous systems. Traditionally, once a ship passed over the horizon there was little if any communication for extended periods of time due to technological limitations. This led to a culture of mission command: captains were given basic orders and an overall intent; the rest was up to them. Indeed, captains might act as ambassadors and conduct diplomacy and other business on behalf of the government in remote areas with little direct guidance.2 John Paul Jones himself stated that “it often happens that sudden emergencies in foreign waters make him [the Naval Officer] the diplomatic as well as the military representative of his country, and in such cases he may have to act without opportunity of consulting his civic or ministerial superiors at home, and such action may easily involve the portentous issue of peace or war between great powers.”3 This is not to advocate that autonomous systems will participate in diplomatic functions, but it does illustrate the longstanding Navy precedent for autonomy of subordinate units.

Another factor in support of the Navy favoring mission command is that the physics of the operating environment may demand it. For example, the physical properties of the undersea domain prohibit direct, routine, high-bandwidth communication with submerged platforms. This is the case with submarines and is being applied to UUVs by extension. This has led to extensive development of autonomous underwater vehicles (AUVs) vice remotely operated ones; AUVs clearly favor mission command.

Finally, the Navy’s culture of decentralized command is the backbone of the Composite Warfare Commander (CWC) construct. CWC is essentially an expression of mission command. Just as technology (the telegraph cable, wireless, and global satellite communication) has afforded the means of detailed control and micromanagement, it has also increased the speed of warfighting, necessitating decentralized execution. Command by negation is the foundation of CWC, and has been ingrained in the Navy’s officer corps for decades. Extending this mindset to autonomous systems will be key to realizing their full capabilities.
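The supervisory relationship described above can be made concrete with a minimal sketch. Everything here is hypothetical and illustrative only (the class name, the veto window, the action labels are invented for this example, not drawn from any fielded system): a subordinate announces an intended action and proceeds unless the commander explicitly negates it within a fixed window — the essence of command by negation.

```python
import time

class CommandByNegationSupervisor:
    """Toy model of command by negation (illustrative only).

    A subordinate announces an intended action, waits out a veto
    window, and then proceeds unless the commander has explicitly
    negated that action. Silence is consent.
    """

    def __init__(self, veto_window_s=2.0):
        self.veto_window_s = veto_window_s
        self._vetoed = set()

    def veto(self, action_id):
        # Commander explicitly negates a proposed action.
        self._vetoed.add(action_id)

    def execute(self, action_id, action_fn, wait_fn=time.sleep):
        # Announce, wait out the veto window, then act unless vetoed.
        wait_fn(self.veto_window_s)
        if action_id in self._vetoed:
            return "withheld"
        return action_fn()

sup = CommandByNegationSupervisor(veto_window_s=0.0)
sup.veto("close-contact")
print(sup.execute("close-contact", lambda: "executed"))    # withheld
print(sup.execute("continue-patrol", lambda: "executed"))  # executed
```

The design choice worth noting is that the default outcome is action, not inaction: the commander must intervene to stop the subordinate, which is precisely what distinguishes command by negation from detailed control.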

Training Commanders

This begs the question: how does one train senior commanders who rose through the ranks during the age of continuous connectivity to thrive in a world of autonomous systems where detailed control is not an option? For a start, they could adopt the mindset of General Norman Schwarzkopf, who described how hard it was to resist interfering with his subordinates:

“I desperately wanted to do something, anything, other than wait, yet the best thing I could do was stay out of the way. If I pestered my generals I’d distract them:  I knew as well as anyone that commanders on the battlefield have more important things to worry about than keeping higher headquarters informed…”4

That said, even while restraining himself, at the height of OPERATION DESERT STORM, his U.S. Central Command handled more than 700,000 telephone calls and 152,000 radio messages per day to coordinate the actions of its subordinate forces. In contrast, during the Battle of Trafalgar in 1805, Nelson used only three general tactical flag-hoist signals to maneuver the entire British fleet.5

Commanders must learn to be satisfied with the ambiguity inherent in mission command. They must become comfortable clearly communicating their intent and mission requirements, whether tasking people or autonomous systems. Again, there isn’t a choice; the Navy’s adversaries are investing in A2/AD capabilities that explicitly target the means that make detailed control possible. Furthermore, the ambiguity and complexity of today’s operating environments prohibit “a priori” composition of complete and perfect instructions.

Placing commanders into increasingly complex and ambiguous situations during training will push them toward mission command, where they will have to trust subordinates closer to the edge, who will be able to execute based on commander’s intent and their own initiative. General Dempsey, former Chairman of the Joint Chiefs of Staff, stressed training that presents commanders with fleeting opportunities and rewards those who seize them, in order to encourage commanders to act in the face of uncertainty.

Familiarization training with autonomous systems could take place in large part via simulation, where commanders interact with the actual algorithms and rehearse at a fraction of the cost of executing a real-world exercise. In this setting, commanders could practice giving mission type orders and translating them for machine understanding. They could employ their systems to failure, analyze where they went wrong, and learn to adjust their level of supervision via multiple iterations. This training wouldn’t be just a one-way evolution; the algorithms would also learn about their commander’s preferences and thought process by finding patterns in their actions and thresholds for their decisions. Through this process, the autonomous system would understand even more about commander’s intent should it need to act alone in the future. If the autonomous system will be in a position to task its own robotic subordinates, that algorithm would be demonstrated so the commander understands how the system may act (which will have incorporated what it has learned about how its commander commands).

With this in mind, while it may seem trivial, consideration must be made for the fact that future autonomous systems may have a detailed algorithmic model of their commander’s thought process, “understand” his intent, and “know” at least a piece of “the big picture.” As such, in the future these systems cannot simply be considered disposable assets performing the dumb, dirty, dangerous work that exempt a human from having to go in harm’s way. They will require significant anti-tamper capabilities to prevent an adversary from extracting or downloading this valuable information if they are somehow taken or recovered by the enemy. Perhaps they could even be armed with algorithms to “resist” exploitation or give misleading information. 

The Way Ahead

Above all, commanders will need to establish the same trust and confidence in autonomous systems that they have in manned systems and human operators.6 Commanders trust manned systems, even though they are far from infallible. This came to international attention with the 2015 airstrike on the Medecins Sans Frontieres hospital operating in Kunduz, Afghanistan. As this event illustrated, commanders must acknowledge the potential for human error, put mitigation measures in place where they can, and then accept a certain amount of risk. In the future, advances in machine learning and artificial intelligence will yield algorithms that far exceed human processing capabilities. Autonomous systems will be able to sense, process, coordinate, and act faster than their human counterparts. However, trust in these systems will only come from time and experience, and the way to secure that is to mainstream autonomous systems into exercises.

Initially these opportunities should be carefully planned and executed, not just added in as an afterthought. For example, including autonomous systems in a particular Fleet Battle Experiment solely to check a box that they were used raises the potential for negative training, where observers see the technology fail due to ill-conceived employment. As there may be limited opportunities to “win over” the officer corps, this must be avoided. Successfully demonstrating the capabilities (and the legitimate limitations) of autonomous systems is critical. Increased use over time will ensure maximum exposure to future commanders, and will be key to widespread adoption and full utilization.

The Navy must return to its roots and rediscover mission command in order to fully leverage the potential of autonomous systems. While it may make some commanders uncomfortable, mission command has deep roots in historic practice and is a logical extension of existing doctrine. General Martin Dempsey, former Chairman of the Joint Chiefs of Staff, wrote that mission command “must pervade the force and drive leader development, organizational design and inform material acquisitions.”7 Taking this to heart and applying it across the board will have profound and lasting impacts as the Navy sails into the era of autonomous systems.

Tim McGeehan is a U.S. Navy Officer currently serving in Washington. 

The ideas presented are those of the author alone and do not reflect the views of the Department of the Navy or Department of Defense.


[1] Dmitry Filipoff, Distributed Lethality and Concepts of Future War, CIMSEC, January 4, 2016, https://cimsec.org/distributed-lethality-and-concepts-of-future-war/20831

[2] Naval Doctrine Publication 6: Naval Command and Control, 1995, http://www.dtic.mil/dtic/tr/fulltext/u2/a304321.pdf, p. 9      

[3] Connell, Royal W. and William P. Mack, Naval Customs, Ceremonies, and Traditions, 1980, p. 355.

[4] Schwarzkopf, H. Norman, It Doesn’t Take a Hero: The Autobiography of General H. Norman Schwarzkopf, 1992, p. 523

[5] Naval Doctrine Publication 6: Naval Command and Control, 1995, http://www.dtic.mil/dtic/tr/fulltext/u2/a304321.pdf, p. 4

[6] Greg Smith, Trusting Autonomous Systems: It’s More Than Technology, CIMSEC, September 18, 2015, https://cimsec.org/trusting-autonomous-systems-its-more-than-technology/18908     

[7] Martin Dempsey, Mission Command White Paper, April 3, 2012, http://www.dtic.mil/doctrine/concepts/white_papers/cjcs_wp_missioncommand.pdf

Featured Image: SOUTH CHINA SEA (April 30, 2017) Sailors assigned to Helicopter Sea Combat Squadron 23 run tests on the MQ-8B Fire Scout, an unmanned aerial vehicle, aboard littoral combat ship USS Coronado (LCS 4). (U.S. Navy photo by Mass Communication Specialist 3rd Class Deven Leigh Ellis/Released)

Unmanned Mission Command, Pt. 1

By Tim McGeehan

The following two-part series discusses the command and control of future autonomous systems. Part 1 describes how we have arrived at the current tendency towards detailed control. Part 2 proposes how to refocus on mission command.


In recent years, the U.S. Navy’s unmanned vehicles have achieved a number of game-changing “firsts.” The X-47B Unmanned Combat Air System (UCAS) executed the first carrier launch and recovery in 2013, the first combined manned/unmanned carrier operations in 2014, and the first aerial refueling in 2015.1 In 2014, the Office of Naval Research demonstrated the first swarm capability for Unmanned Surface Vehicles (USV).2 In 2015, USS NORTH DAKOTA performed the first launch and recovery of an Unmanned Underwater Vehicle (UUV) from a submarine during an operational mission.3 While these successes may represent the vanguard of a revolution in military technology, the larger revolution in military affairs will only be possible with the optimization of the command and control concepts associated with these systems. Regardless of the specific domain (air, surface, or undersea), Navy leaders must fully embrace mission command to realize the power of these capabilities.

Unmanned History

“Unmanned” systems are not necessarily new. The U.S. Navy’s long history includes the employment of a variety of such platforms. For example, in 1919, Coast Battleship #4 (formerly USS IOWA (BB-1)) became the first radio-controlled target ship to be used in a fleet exercise.4 During World War II, participation in an early unmanned aircraft program called PROJECT ANVIL ultimately killed Navy Lieutenant Joe Kennedy (John F. Kennedy’s older brother), who was to parachute from his bomb-laden aircraft before it would be guided into a German target by radio-control.5 In 1946, F6F Hellcat fighters were modified for remote operation and employed to collect data during the OPERATION CROSSROADS atomic bomb tests at Bikini.6 These Hellcat “drones” could be controlled by another aircraft acting as the “queen” (flying up to 30 miles away). These drones were even launched from the deck of an aircraft carrier (almost 70 years before the X-47B performed that feat).

A Hellcat drone takes flight. Original caption: PILOTLESS HELLCAT (above), catapulted from USS Shangri-La, is clear of the carrier’s bow and climbs rapidly. Drones like this one will fly through the atomic cloud. (All Hands Magazine June 1946 issue)

However, the Navy’s achievements over the last few years were groundbreaking because the platforms were autonomous (i.e., controlled by machine, not remotely operated by a person). The current discussion of autonomy frequently revolves around the issues of ethics and accountability. Is it ethical to imbue these machines with the authority to use lethal force? If the machine is not under direct human control but is instead evaluating situations for itself, who is responsible for its decisions and actions when it faces dilemmas? Much has been written about these topics, but there is a related and less discussed question: what sort of mindset shift will be required for Navy leaders to employ these systems to their full potential?

Command, Control, and Unmanned Systems

According to Naval Doctrine Publication 6 – Command and Control (NDP 6), “a commander commands by deciding what must be done and exercising leadership to inspire subordinates toward a common goal; he controls by monitoring and influencing the action required to accomplish what must be done.”7 These enduring concepts have new implications in the realm of unmanned systems. For example, while a commander can assign tasks to any subordinate (human or machine), “inspiring subordinates” has varying levels of applicability based on whether his units consist of “remotely piloted” aircraft (where his subordinates are actual human pilots) or autonomous systems (where the “pilot” is an algorithm controlling a machine). “Command” also includes establishing intent, distributing guidance on allocation of roles, responsibilities, and resources, and defining constraints on actions.8 On one hand, this could be straightforward with autonomous systems as this guidance could be translated into a series of rules and parameters that define the mission and rules of engagement. One would simply upload the mission and deploy the vehicle, which would go out and execute, possibly reporting in for updates but mostly operating on its own, solving problems along the way. On the other hand, in the absence of instructions that cover every possibility, an autonomous system is only as good as the internal algorithms that control it. Even as machine learning drastically improves and advanced algorithms are developed from extensive “training data,” an autonomous system may not respond to novel and ambiguous situations with the same judgment as a human. Indeed, one can imagine a catastrophic military counterpart to the 2010 stock market “flash crash,” where high-frequency trading algorithms designed to act in accordance with certain, pre-arranged criteria did not understand context and misread the situation, briefly erasing $1 trillion in market value.9
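The notion of translating a commander’s guidance into “a series of rules and parameters that define the mission” can be made concrete with a minimal sketch. Everything here is hypothetical and invented for illustration; no actual Navy mission format or tasking system is implied.

```python
from dataclasses import dataclass

# Illustrative sketch: commander's guidance encoded as explicit
# parameters uploaded to an autonomous vehicle before deployment.
# All field names and values are hypothetical.

@dataclass
class MissionOrder:
    objective: str
    patrol_area: tuple          # (lat_min, lat_max, lon_min, lon_max)
    max_duration_hours: float   # constraint: recover before this elapses
    weapons_release: bool       # restraint: lethal force authorized or not

def permitted(order, lat, lon, elapsed_hours):
    """Check a proposed action against the uploaded constraints."""
    lat_min, lat_max, lon_min, lon_max = order.patrol_area
    in_area = lat_min <= lat <= lat_max and lon_min <= lon <= lon_max
    in_time = elapsed_hours <= order.max_duration_hours
    return in_area and in_time

order = MissionOrder("surveil strait", (10.0, 11.0, 120.0, 121.0), 24.0, False)
print(permitted(order, 10.5, 120.5, 6.0))   # inside area and time box
print(permitted(order, 12.0, 120.5, 6.0))   # outside assigned area
```

The limitation discussed above is visible even in this toy: the vehicle can only evaluate situations its rules anticipate, and a novel situation outside those parameters falls back entirely on the quality of its algorithms.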

“Control” includes the conduits and feedback from subordinates to their commander that allow them to determine if events are on track or to adjust instructions as necessary. This is reasonably straightforward for a remotely piloted aircraft with a constant data link between platform and operator, such as the ScanEagle or MQ-8 Fire Scout unmanned aerial systems. However, a fully autonomous system may not be in positive communication. Even if it is ostensibly intended to remain in communication, feedback to the commander could be limited or non-existent due to emissions control (EMCON) posture or a contested electromagnetic (EM) spectrum. 
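The comms problem described above implies a simple decision structure: follow detailed tasking while the control link is up, and revert to pre-loaded intent when it is not. A deliberately minimal sketch (the function and mission names are invented for illustration):

```python
# Hypothetical fallback logic for a vehicle that may lose its control
# link under EMCON or in a contested EM spectrum.

def next_action(link_up, tasking, fallback_intent):
    """Take tasking from the commander when the link is up;
    otherwise continue operating under pre-loaded intent."""
    if link_up:
        return tasking          # detailed control is available
    return fallback_intent      # mission command: act on intent

print(next_action(True, "reposition-north", "continue-patrol"))
print(next_action(False, "reposition-north", "continue-patrol"))
```

The design choice embedded in `fallback_intent` is exactly the mission-command question the rest of this article explores: what should the system do when no one can tell it what to do?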

Mission Command and Unmanned Systems

In recent years, there has been a renewed focus across the Joint Force on the concept of “mission command.” Mission command is defined as “the conduct of military operations through decentralized execution based upon mission-type orders,” and it lends itself well to the employment of autonomous systems.10 Joint doctrine states:

“Mission command is built on subordinate leaders at all echelons who exercise disciplined initiative and act aggressively and independently to accomplish the mission. Mission-type orders focus on the purpose of the operation rather than details of how to perform assigned tasks. Commanders delegate decisions to subordinates wherever possible, which minimizes detailed control and empowers subordinates’ initiative to make decisions based on the commander’s guidance rather than constant communications.”11

Mission command for an autonomous system would require commanders to clearly convey their intent, objectives, constraints, and restraints in succinct instructions, and then rely on the “initiative” of said system. While this decentralized arrangement is more flexible and better suited to dealing with ambiguity, it opens the door to unexpected or emergent behavior in the autonomous system. (Then again, emergent behavior is not confined to algorithms, as humans may perform in unexpected ways too.) 

In addition to passing feedback and information up the chain of command to build a shared understanding of the situation, mission command also emphasizes horizontal flow across the echelon between the subordinates. Since it relies on subordinates knowing the intent and mission requirements, mission command is much less vulnerable to disruption than detailed means of command and control.

However, some commanders today do not fully embrace mission command with human subordinates, much less feel comfortable extending trust to autonomous systems. They issue explicit instructions to subordinates in a highly centralized arrangement, where volumes of information flow up and detailed orders flow down the chain of command. This may be acceptable in deliberate situations where time is not a major concern, where procedural compliance is emphasized, or where there can be no ambiguity or margin for error. Examples of unmanned systems suited to this arrangement include a bomb disposal robot or a remotely piloted aircraft that requires constant intervention and re-tasking, possibly for rapid repositioning of the platform for a better look at an emerging situation or better discrimination between friend and foe. However, this detailed control does not “function well when the vertical flow of information is disrupted.”12 Furthermore, when it comes to autonomous systems, such detailed control will undermine much of the purpose of having an autonomous system in the first place.

A fundamental task of the commander is to recognize which situations call for detailed control or mission command and act appropriately. Unfortunately, the experience gained by many commanders over the last decade has introduced a bias towards detailed control, which will hamstring the potential capabilities of autonomous systems if this tendency is not overcome.

Current Practice

The American military has enjoyed major advantages in recent conflicts due to global connectivity and continuous communications. However, this has redefined expectations and higher echelons increasingly rely on detailed control (for manned forces, let alone unmanned ones). Senior commanders (or their staffs) may levy demands to feed a seemingly insatiable thirst for information. This has led to friction between the echelons of command, and in some cases this interaction occurs at the expense of the decision-making capability of the unit in the field. Subordinate staff watch officers may spend more time answering requests for information and “feeding the beast” of higher headquarters than they spend overseeing their own operations.

It is understandable why this situation exists today. The senior commander (with whom responsibility ultimately resides) expects to be kept well-informed. To be fair, in some cases a senior commander located at a fusion center far from the front may have access to multiple streams of information, giving them a better overall view of what is going on than the commander actually on the ground. In other cases, it is today’s 24-hour news cycle and zero tolerance for mistakes that have led senior commanders to succumb to the temptation to second-guess their subordinates and micromanage their units in the field. A compounding factor in today’s interconnected world is “Fear of Missing Out” (FoMO), described by psychologists as apprehension or anxiety stemming from the availability of volumes of information about what others are doing (think social media). It leads to a strong, almost compulsive desire to stay continually connected.13

Whatever the reason, this is not a new phenomenon. Understanding previous episodes when leadership has “tightened the reins” and the subsequent impacts is key to developing a path forward to fully leverage the potential of autonomous systems.

Veering Off Course

The recent shift in preference away from mission command toward detailed control appears to echo the impacts of previous advances in command and control technology. For example, speaking of his service with the U.S. Asiatic Squadron and the introduction of the telegraph before the turn of the 20th century, Rear Admiral Caspar Goodrich lamented, “Before the submarine cable was laid, one was really somebody out there, but afterwards one simply became a damned errand boy at the end of a telegraph wire.”14

Later, the impact of wireless telegraphy proved to be a mixed blessing for commanders at sea. Interestingly, the contrasting points of view clearly described how it would enable micromanagement; the difference in opinion was whether this was good or bad. This was illustrated by two 1908 newspaper articles regarding the introduction of wireless in the Royal Navy. One article extolled its virtues, describing how the First Sea Lord in London could direct all fleet activities “as if they were maneuvering beneath his office windows.”15 The other article described how those same naval officers feared “armchair control… by means of wireless.”16 In century-old text that could be drawn from today’s press, the article quoted a Royal Navy officer:

“The paramount necessity in the next naval war will be rapidity of thought and of execution…The innovation is causing more than a little misgiving among naval officers afloat. So far as it will facilitate the interchange of information and the sending of important news, the erection of the [wireless] station is welcomed, but there is a strong fear that advantage will be taken of it to interfere with the independent action of fleet commanders in the event of war.”

Military historian Martin van Creveld related a more recent lesson of technology-enabled micromanagement from the U.S. Army. This time the technology in question was the helicopter, whose widespread use by multiple echelons of command during the Vietnam War drove the shift away from mission command to detailed control:

“A hapless company commander engaged in a firefight on the ground was subjected to direct observation by the battalion commander circling above, who was in turn supervised by the brigade commander circling a thousand or so feet higher up, who in his turn was monitored by the division commander in the next highest chopper, who might even be so unlucky as to have his own performance watched by the Field Force (corps) commander. With each of these commanders asking the men on the ground to tune in his frequency and explain the situation, a heavy demand for information was generated that could and did interfere with the troops’ ability to operate effectively.”17

However, not all historic shifts toward detailed control are due to technology; some are cultural. For example, leadership had encroached so much on the authority of commanders in the days leading up to World War II that Admiral King had to issue a message to the fleet with the subject line “Exercise of Command – Excess of Detail in Orders and Instructions.” In it, he criticized the:

“almost standard practice – of flag officers and other group commanders to issue orders and instructions in which their subordinates are told how as well as what to do to such an extent and in such detail that the Custom of the service has virtually become the antithesis of that essential element of command – initiative of the subordinate.”18

Admiral King attributed this trend to several cultural causes, including the anxiety of seniors that any mistake of a subordinate would be attributed to the senior and thereby jeopardize promotion, the activities of staffs infringing on lower echelon functions, and the habit and expectation of detailed instructions among junior and senior alike. He went on to say that they were preparing for war, when there would be neither time nor opportunity for this method of control, and that it was conditioning subordinate commanders to rely on explicit guidance, depriving them of the opportunity to learn how to exercise initiative. Now, over 70 years later, as the Navy moves forward with autonomous systems, the technology-enabled and culture-driven drift toward detailed control is again becoming an Achilles’ heel.

Read Part 2 here.

Tim McGeehan is a U.S. Navy Officer currently serving in Washington. 

The ideas presented are those of the author alone and do not reflect the views of the Department of the Navy or Department of Defense.


[1] Northrop Grumman, X-47B Capabilities, 2015, http://www.northropgrumman.com/Capabilities/x47bucas/Pages/default.aspx

[2] David Smalley, The Future Is Now: Navy’s Autonomous Swarmboats Can Overwhelm Adversaries, ONR Press Release, October 5, 2014, http://www.onr.navy.mil/en/Media-Center/Press-Releases/2014/autonomous-swarm-boat-unmanned-caracas.aspx

[3] Associated Press, Submarine launches undersea drone in a 1st for Navy, Military Times, July 20, 2015, http://www.militarytimes.com/story/military/tech/2015/07/20/submarine-launches-undersea-drone-in-a-1st-for-navy/30442323/

[4] Naval History and Heritage Command, Iowa II (BB-1), July 22, 2015, http://www.history.navy.mil/research/histories/ship-histories/danfs/i/iowa-ii.html

[5] Trevor Jeremy, LT Joe Kennedy, Norfolk and Suffolk Aviation Museum, 2015, http://www.aviationmuseum.net/JoeKennedy.htm

[6] Puppet Planes, All Hands, June 1946, http://www.navy.mil/ah_online/archpdf/ah194606.pdf, p. 2-5

[7] Naval Doctrine Publication 6:  Naval Command and Control, 1995, http://www.dtic.mil/dtic/tr/fulltext/u2/a304321.pdf, p. 6

[8] David Alberts and Richard Hayes, Understanding Command and Control, 2006, http://www.dodccrp.org/files/Alberts_UC2.pdf, p. 58

[9] Ben Rooney, Trading program sparked May ‘flash crash’, October 1, 2010, CNN, http://money.cnn.com/2010/10/01/markets/SEC_CFTC_flash_crash/

[10] DoD Dictionary of Military and Associated Terms, March, 2017, http://www.dtic.mil/doctrine/new_pubs/jp1_02.pdf

[11] Joint Publication 3-0, Joint Operations, http://www.dtic.mil/doctrine/new_pubs/jp3_0.pdf

[12] Ibid

[13] Andrew Przybylski, Kou Murayama, Cody DeHaan, and Valerie Gladwell, Motivational, emotional, and behavioral correlates of fear of missing out, Computers in Human Behavior, Vol. 29 (4), July 2013, http://www.sciencedirect.com/science/article/pii/S0747563213000800

[14] Michael Palmer, Command at Sea:  Naval Command and Control since the Sixteenth Century, 2005, p. 215

[15] W. T. Stead, Wireless Wonders at the Admiralty, Dawson Daily News, September 13, 1908, https://news.google.com/newspapers?nid=41&dat=19080913&id=y8cjAAAAIBAJ&sjid=KCcDAAAAIBAJ&pg=3703,1570909&hl=en

[16] Fleet Commanders Fear Armchair Control During War by Means of Wireless, Boston Evening Transcript, May 2, 1908, https://news.google.com/newspapers?nid=2249&dat=19080502&id=N3Y-AAAAIBAJ&sjid=nVkMAAAAIBAJ&pg=470,293709&hl=en

[17] Martin van Creveld, Command in War, 1985, p. 256-257.

[18] CINCLANT Serial (053), Exercise of Command – Excess of Detail in Orders and Instructions, January 21, 1941

Featured Image: An X-47B drone prepares to take off. (U.S. Navy photo)

Will Artificial Intelligence Be Disruptive to Our Way of War?

By Marjorie Greene


At a recent Berkshire Hathaway shareholder meeting, Warren Buffett said that Artificial Intelligence – the collection of technologies that enable machines to learn on their own – could be “enormously disruptive” to our human society. More recently, the renowned physicist Stephen Hawking predicted that humanity may have only one hundred years left on Earth. He believes that because of the development of Artificial Intelligence, machines may no longer simply augment human activities but could replace and eliminate humans altogether in the command and control of cognitive tasks.

In my recent presentation to the annual Human Systems conference in Springfield, Virginia, I suggested that there is a risk that human decision-making may no longer be involved in the use of lethal force as we capitalize on the military applications of Artificial Intelligence to enhance war-fighting capabilities. Humans should never relinquish control of decisions regarding the employment of lethal force. How do we keep humans in the loop? This is an area of human systems research that will be important to undertake in the future.       
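One way to frame the “humans in the loop” requirement is as an explicit approval gate: the machine may nominate an engagement, but release of lethal force requires affirmative human consent, and the default is always to hold. A deliberately minimal sketch (all names hypothetical, purely illustrative of the concept):

```python
# Hypothetical "human-in-the-loop" gate: machine recommendation alone
# is never sufficient for lethal action; a human must concur.

def engagement_decision(machine_recommends, human_approved):
    """Permit lethal action only when the machine's recommendation
    is confirmed by a human decision-maker; default is hold fire."""
    return machine_recommends and human_approved

print(engagement_decision(True, True))    # human concurs: engage
print(engagement_decision(True, False))   # no human consent: hold
```

The research question raised above is what happens to this gate when machine decision cycles outpace human ones; keeping the human check meaningful, rather than a rubber stamp, is the hard part.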


Norbert Wiener, in his book Cybernetics, was perhaps the first person to discuss the notion of “machine learning.” Building on behavioral models of animal cultures such as ant colonies and the flocking of birds, he described a process called “self-organization” by which humans – and, by analogy, machines – learn by adapting to their environment. Self-organization refers to the emergence of higher-level properties of the whole that are not possessed by any of the individual parts making up the whole. The parts act locally on local information, and global order emerges without any need for external control. The expression “swarm intelligence” is often used to describe the collective behavior of self-organized systems that allows the emergence of “intelligent” global behavior unknown to the individual systems.
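Self-organization of this kind is easy to demonstrate. In the sketch below (illustrative only, loosely in the spirit of flocking models), each simulated agent repeatedly adjusts its heading toward the average of a few neighbors. No agent sees the whole group or takes direction from a central controller, yet the headings converge to a common value: global order from purely local rules.

```python
import random

# Toy self-organization demo: agents on a ring, each seeing only a
# small neighborhood, converge to a shared heading without any
# central coordination. All parameters are arbitrary.

def step(headings, neighborhood=3, rate=0.5):
    n = len(headings)
    new = []
    for i in range(n):
        # local information only: a window of nearby agents (incl. self)
        neighbors = [headings[(i + d) % n]
                     for d in range(-neighborhood, neighborhood + 1)]
        local_avg = sum(neighbors) / len(neighbors)
        # nudge this agent's heading toward its local average
        new.append(headings[i] + rate * (local_avg - headings[i]))
    return new

random.seed(0)
headings = [random.uniform(0, 360) for _ in range(20)]  # random start
for _ in range(100):
    headings = step(headings)

spread = max(headings) - min(headings)
print(round(spread, 3))  # spread shrinks toward 0: order has emerged
```

The same mechanism, scaled up and given richer local rules, underlies the swarm behaviors discussed in the next section.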

Swarm Warfare

Military researchers are especially concerned about recent breakthroughs in swarm intelligence that could enable “swarm warfare” for asymmetric assaults against major U.S. weapons platforms, such as aircraft carriers. The accelerating speed of computer processing, along with rapid improvements in autonomy-increasing algorithms, also suggests that the military may soon perform a wider range of functions more quickly, without needing humans to control every individual task.

Drones like the Predator and Reaper are still piloted vehicles, with humans controlling what the camera looks at, where the drone flies, and what targets to hit with the drone’s missiles. But CNA studies have shown that drone strikes in Afghanistan caused 10 times the number of civilian casualties compared to strikes by manned aircraft. And a recent book published jointly with the Marine Corps University Press builds on CNA studies in national security, legitimacy, and civilian casualties to conclude that it will be important to consider International Humanitarian Law (IHL) in rethinking the drone war as Artificial Intelligence continues to flourish.

The Chinese Approach

Meanwhile, many Chinese strategists recognize the trend towards unmanned and autonomous warfare and intend to capitalize upon it. The PLA has incorporated a range of unmanned aerial vehicles into the force structure of all of its services. The PLA Air Force and PLA Navy have also started to introduce more advanced multi-mission unmanned aerial vehicles. It is clear that China is intensifying the military applications of Artificial Intelligence and, as we heard at a recent hearing of the U.S.-China Economic and Security Review Commission (where CNA’s China Studies Division also testified), the Chinese defense industry has made significant progress in its research and development of a range of cutting-edge unmanned systems, including those with swarming capabilities. China also views outer space as a new domain that it must fight for and seize if it is to win future wars.

Armed with artificial intelligence capabilities, China has moved beyond just technology developments to laying the groundwork for operational and command and control concepts to govern their use. These developments have important consequences for the U.S. military and suggest that Artificial Intelligence plays a prominent role in China’s overall efforts to establish an effective military capable of winning wars through an asymmetric strategy directed at critical military platforms.

Human-Machine Teaming

Human-machine teaming is gaining importance in national security affairs, as evidenced by a recent defense unmanned systems summit conducted internally by DoD and DHS in which many of the speakers explicitly referred to efforts to develop greater unmanned capabilities that intermix with manned capabilities and future systems.

Examples include: Michael Novak, Acting Director of the Unmanned Systems Directorate, N99, who spoke of optimizing human-machine teaming to multiply capabilities and reinforce trust (incidentally, the decision was made to phase out N99 because unmanned capabilities are being “mainstreamed” across the force); Bindu Nair, the Deputy Director, Human Systems, Training & Biosystems Directorate, OASD, who emphasized efforts to develop greater unmanned capabilities that intermix with manned capabilities and future systems; and Kris Kearns, representing the Air Force Research Lab, who discussed current efforts to mature and update autonomous technologies and manned-unmanned teaming.


Finally, it should be noted that the Defense Advanced Research Projects Agency (DARPA) has recently issued a relevant Broad Agency Announcement (BAA) titled “OFFensive Swarm-Enabled Tactics,” part of the Defense Department’s OFFSET initiative. Notably, it includes a section asking for the development of tactics that address collaboration between human systems and the swarm, especially in urban environments. This should reassure the human systems community that future researchers will not forget them, even as swarm intelligence makes it possible to achieve global order without any need for external control.


As we capitalize on the military applications of Artificial Intelligence, there is a risk that human decision-making may no longer be involved in the use of lethal force. In general, Artificial Intelligence could indeed be disruptive to our human society by replacing the need for human control, but machines do not have to replace humans in the command and control of cognitive tasks, particularly in military contexts. We need to figure out how to keep humans in the loop. This area of research would be a fruitful one for the human systems community to undertake in the future.  

Marjorie Greene is a Research Analyst with the Center for Naval Analyses. She has more than 25 years’ management experience in both government and commercial organizations and has recently specialized in finding S&T solutions for the U.S. Marine Corps. She earned a B.S. in mathematics from Creighton University, an M.A. in mathematics from the University of Nebraska, and completed her Ph.D. coursework in Operations Research at The Johns Hopkins University. The views expressed here are her own.

Featured Image: Electronic Warfare Specialist 2nd Class Sarah Lanoo from South Bend, Ind., operates a Naval Tactical Data System (NTDS) console in the Combat Direction Center (CDC) aboard USS Abraham Lincoln. (U.S. Navy photo by Photographer’s Mate 3rd Class Patricia Totemeier)