Unmanned Mission Command, Pt. 2

By Tim McGeehan

The following two-part series discusses the command and control of future autonomous systems. Part 1 described how the Navy arrived at its current tendency toward detailed control. Part 2 proposes how to refocus on mission command.

Adjusting Course

Today’s commanders are accustomed to operating in permissive environments and have grown addicted to the connectivity that makes detailed control possible. This is emerging as a major vulnerability. For example, while the surface Navy’s concept of “distributed lethality” will increase the complexity of the detection and targeting problems presented to adversaries, it will also increase the complexity of the Navy’s own command and control. Even in a relatively uncontested environment, tightly coordinating widely dispersed forces will not be a trivial undertaking; it will tend to lengthen decision cycles at a time when the emphasis is on shortening them.1 How will the Navy execute operations in a future Anti-Access/Area-Denial (A2/AD) scenario, where every domain is contested (including the electromagnetic spectrum and cyberspace) and every fraction of a second counts?

The Navy must “rediscover” and fully embrace mission command now, both to address current vulnerabilities and to unleash the future potential of autonomous systems. These systems offer increased precision, faster reaction times, longer endurance, and greater range, but those advantages may not be realized if the approach to command and control remains unchanged. For starters, to prepare for future environments where data links cannot be taken for granted, commanders must be prepared to give all subordinates, human and machine, the wide latitude to operate that only mission command affords. Many systems will progress from a man “in” the loop (with the person integral to the functioning), to a man “on” the loop (where the person oversees the system and executes command by negation), and finally to complete autonomy. In the future, fully autonomous systems may collaborate with one another across a given echelon and solve problems based on the parameters communicated to them as commander’s intent (swarms fall into this category).

It may go even further. Mission command calls for adaptable leaders at every level; what if at some level the leaders are no longer people but machines? It is not hard to imagine a forward-deployed autonomous system tasking its own subordinates (fellow machines), particularly in scenarios where there is no bandwidth available for backhaul communications or detailed control from afar. In these cases, mission command will not just be the preferred option; it will be the only option. This reliance on mission command may be seen as a cultural shift, but in reality it is a return to the Navy’s cultural roots.
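To make that command-and-control progression concrete, consider the minimal sketch below (in Python; the names `Intent`, `link_available`, and `supervisor_veto` are hypothetical, not drawn from any fielded system). It shows command by negation while the data link holds, degrading gracefully to full autonomy, bounded by commander’s intent, when the link drops:

```python
import random
from dataclasses import dataclass

@dataclass
class Intent:
    """Commander's intent expressed as parameters, not step-by-step orders."""
    objective: str
    area: str
    max_risk: float  # latitude: act autonomously below this risk level

def link_available() -> bool:
    # Stand-in for a real link-status check; in a contested EM spectrum
    # this can return False at any moment.
    return random.random() > 0.5

def supervisor_veto(action: str) -> bool:
    # Man "on" the loop: the human may negate the proposed action.
    return False  # silence is consent

def step(intent: Intent, proposed_action: str, risk: float) -> str:
    if link_available():
        # Command by negation: act unless the supervisor objects.
        return "abort" if supervisor_veto(proposed_action) else proposed_action
    # Link lost: full autonomy, but only within the commander's stated latitude.
    return proposed_action if risk <= intent.max_risk else "loiter and reattempt comms"

intent = Intent(objective="screen the strike group", area="sector NW", max_risk=0.4)
print(step(intent, proposed_action="investigate new contact", risk=0.3))
```

The point is the contract, not the code: intent and latitude are communicated up front, and the absence of a veto, or of a link, does not stop the mission.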

Back to Basics

Culturally, the Navy should be well suited to the mission command model for employing autonomous systems. Traditionally, once a ship passed over the horizon, there was little if any communication for extended periods of time due to technological limitations. This led to a culture of mission command: captains were given basic orders and an overall intent; the rest was up to them. Indeed, captains might act as ambassadors and conduct diplomacy and other business on behalf of the government in remote areas with little direct guidance.2 John Paul Jones himself stated that “it often happens that sudden emergencies in foreign waters make him [the Naval Officer] the diplomatic as well as the military representative of his country, and in such cases he may have to act without opportunity of consulting his civic or ministerial superiors at home, and such action may easily involve the portentous issue of peace or war between great powers.”3 This is not to suggest that autonomous systems will perform diplomatic functions, but it illustrates the Navy’s longstanding precedent of granting autonomy to subordinate units.

Another factor favoring mission command is that the physics of the operating environment may demand it. For example, the physical properties of the undersea domain prohibit direct, routine, high-bandwidth communication with submerged platforms. This has long been the case for submarines, and it applies equally to unmanned undersea vehicles (UUVs). This constraint has driven the extensive development of autonomous underwater vehicles (AUVs) vice remotely operated ones, and AUVs clearly favor mission command.

Finally, the Navy’s culture of decentralized command is the backbone of the Composite Warfare Commander (CWC) construct, which is essentially an expression of mission command. The same technology (the telegraph cable, wireless, and global satellite communications) that afforded the means of detailed control and micromanagement also increased the speed of warfighting, necessitating decentralized execution. Command by negation is the foundation of CWC and has been ingrained in the Navy’s officer corps for decades. Extending this mindset to autonomous systems will be key to realizing their full capabilities.

Training Commanders

This raises the question: how does one train senior commanders, who rose through the ranks during the age of continuous connectivity, to thrive in a world of autonomous systems where detailed control is not an option? For a start, they could adopt the mindset of General Norman Schwarzkopf, who described how hard it was to resist interfering with his subordinates:

“I desperately wanted to do something, anything, other than wait, yet the best thing I could do was stay out of the way. If I pestered my generals I’d distract them: I knew as well as anyone that commanders on the battlefield have more important things to worry about than keeping higher headquarters informed…”4

That said, even while restraining himself at the height of OPERATION DESERT STORM, his U.S. Central Command handled more than 700,000 telephone calls and 152,000 radio messages per day to coordinate the actions of its subordinate forces. In contrast, at the Battle of Trafalgar in 1805, Nelson maneuvered the entire British fleet with only three general tactical flag-hoist signals.5

Commanders must learn to accept the ambiguity inherent in mission command. They must become comfortable clearly communicating their intent and mission requirements, whether tasking people or autonomous systems. Again, there is no choice: the Navy’s adversaries are investing in A2/AD capabilities that explicitly target the means that make detailed control possible, and the ambiguity and complexity of today’s operating environments prohibit the a priori composition of complete and perfect instructions.
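As an illustration, a mission-type order for a machine might look less like a script and more like a small data structure: task, purpose, hard constraints, and end state, with the “how” deliberately left to the subordinate. A notional sketch (all field names and values are invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class MissionTypeOrder:
    """Intent and limits only; the method is left to the subordinate's initiative."""
    task: str        # what to accomplish
    purpose: str     # why; lets the subordinate adapt when the plan breaks down
    end_state: str   # what "done" looks like
    constraints: list = field(default_factory=list)  # hard limits (ROE, geography)

order = MissionTypeOrder(
    task="deny adversary surveillance of the strait",
    purpose="protect the amphibious group's transit",
    end_state="amphibious group clear of the strait",
    constraints=["remain outside 12 nm of neutral coastline", "weapons tight"],
)
```

Note what is absent: waypoints, timelines, and sensor schedules. Those are precisely the complete and perfect instructions that a contested environment makes impossible to write in advance.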

Placing commanders in increasingly complex and ambiguous situations during training will push them toward mission command, forcing them to trust subordinates at the tactical edge to execute based on commander’s intent and their own initiative. General Dempsey, former Chairman of the Joint Chiefs of Staff, stressed training that presents commanders with fleeting opportunities and rewards those who seize them, in order to encourage commanders to act in the face of uncertainty.

Familiarization training with autonomous systems could take place largely via simulation, where commanders interact with the actual algorithms and rehearse at a fraction of the cost of a real-world exercise. In this setting, commanders could practice giving mission-type orders and translating them for machine understanding. They could employ their systems to failure, analyze where things went wrong, and learn to adjust their level of supervision over multiple iterations. This training would not be a one-way evolution: the algorithms would also learn their commander’s preferences and thought process by finding patterns in his or her actions and decision thresholds. Through this process, the autonomous system would understand even more about commander’s intent should it need to act alone in the future. If an autonomous system will be in a position to task its own robotic subordinates, that tasking behavior should be demonstrated as well, so the commander understands how the system (which will have incorporated what it has learned about how its commander commands) may act.
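As a toy illustration of that two-way learning, consider inferring a commander’s risk tolerance from simulator logs of approve/veto decisions. This is a deliberately crude sketch with invented data and a simple midpoint rule; a fielded system might fit a calibrated classifier instead:

```python
# Each record from the simulator: (estimated risk of a proposed action,
# whether the commander approved it when asked).
log = [
    (0.10, True), (0.25, True), (0.40, True),     # approved
    (0.60, False), (0.75, False), (0.90, False),  # vetoed
]

approved = [risk for risk, ok in log if ok]
vetoed = [risk for risk, ok in log if not ok]

# Crude rule: split the difference between the riskiest approved action
# and the safest vetoed one.
threshold = (max(approved) + min(vetoed)) / 2
print(f"inferred risk tolerance: {threshold:.2f}")  # 0.50

def would_commander_approve(estimated_risk: float) -> bool:
    """Proxy for commander's intent when the system must act alone."""
    return estimated_risk <= threshold
```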

With this in mind, while it may seem trivial today, consideration must be given to the fact that future autonomous systems may carry a detailed algorithmic model of their commander’s thought process, “understand” his or her intent, and “know” at least a piece of “the big picture.” As such, these systems cannot simply be considered disposable assets performing the dull, dirty, and dangerous work that spares a human from going into harm’s way. They will require significant anti-tamper capabilities to prevent an adversary from extracting or downloading this valuable information should they be captured or recovered by the enemy. Perhaps they could even be armed with algorithms to “resist” exploitation or provide misleading information.
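One plausible shape for such a mitigation, sketched below: store the commander model encrypted at rest and destroy the key on a tamper signal, so a captured vehicle yields only ciphertext. This assumes the third-party Python cryptography package; the tamper hook and key handling are simplified for illustration:

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice: held in tamper-resistant hardware
cipher = Fernet(key)

# The learned commander model never touches storage in the clear.
commander_model = b'{"risk_tolerance": 0.50, "priorities": ["screen", "report"]}'
stored_ciphertext = cipher.encrypt(commander_model)

def on_tamper_detected() -> None:
    """Hypothetical tamper hook: zeroize the key, rendering the model unrecoverable."""
    global key, cipher
    key, cipher = None, None

# Normal operation: decrypt in memory only, when needed.
print(cipher.decrypt(stored_ciphertext))
on_tamper_detected()  # after this, stored_ciphertext cannot be read
```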

The Way Ahead

Above all, commanders will need to establish the same trust and confidence in autonomous systems that they have in manned systems and human operators.6 Commanders already trust manned systems, even though those systems are far from infallible, as the 2015 airstrike on the Médecins Sans Frontières hospital in Kunduz, Afghanistan, brought to international attention. Commanders must acknowledge the potential for human error, put mitigation measures in place where they can, and then accept a certain amount of risk. In the future, advances in machine learning and artificial intelligence will yield algorithms that far exceed human processing capabilities; autonomous systems will be able to sense, process, coordinate, and act faster than their human counterparts. However, trust in these systems will only come with time and experience, and the way to build it is to mainstream autonomous systems into exercises. Initially, these opportunities must be carefully planned and executed, not added as an afterthought. For example, including autonomous systems in a Fleet Battle Experiment solely to check a box that they were used raises the potential for negative training, where observers watch the technology fail due to ill-conceived employment. As there may be limited opportunities to “win over” the officer corps, this must be avoided. Successfully demonstrating the capabilities (and the legitimate limitations) of autonomous systems is critical. Increased use over time will ensure maximum exposure for future commanders and will be key to widespread adoption and full utilization.

The Navy must return to its roots and rediscover mission command in order to fully leverage the potential of autonomous systems. While it may make commanders uncomfortable, mission command has deep precedent in historic practice and is a logical extension of existing doctrine. General Dempsey wrote that mission command “must pervade the force and drive leader development, organizational design and inform material acquisitions.”7 Taking this to heart and applying it across the board will have profound and lasting impacts as the Navy sails into the era of autonomous systems.

Tim McGeehan is a U.S. Navy Officer currently serving in Washington. 

The ideas presented are those of the author alone and do not reflect the views of the Department of the Navy or Department of Defense.

References

[1] Dmitry Filipoff, Distributed Lethality and Concepts of Future War, CIMSEC, January 4, 2016, https://cimsec.org/distributed-lethality-and-concepts-of-future-war/20831.

[2] Naval Doctrine Publication 6: Naval Command and Control, 1995, http://www.dtic.mil/dtic/tr/fulltext/u2/a304321.pdf, p. 9.

[3] Royal W. Connell and William P. Mack, Naval Customs, Ceremonies, and Traditions, 1980, p. 355.

[4] Norman Schwarzkopf, It Doesn’t Take a Hero: The Autobiography of General Norman Schwarzkopf, 1992, p. 523.

[5] Naval Doctrine Publication 6: Naval Command and Control, p. 4.

[6] Greg Smith, Trusting Autonomous Systems: It’s More Than Technology, CIMSEC, September 18, 2015, https://cimsec.org/trusting-autonomous-systems-its-more-than-technology/18908.

[7] Martin Dempsey, Mission Command White Paper, April 3, 2012, http://www.dtic.mil/doctrine/concepts/white_papers/cjcs_wp_missioncommand.pdf.

Featured Image: SOUTH CHINA SEA (April 30, 2017) Sailors assigned to Helicopter Sea Combat Squadron 23 run tests on the MQ-8B Fire Scout, an unmanned aerial vehicle, aboard littoral combat ship USS Coronado (LCS 4). (U.S. Navy photo by Mass Communication Specialist 3rd Class Deven Leigh Ellis/Released)

3 thoughts on “Unmanned Mission Command, Pt. 2”

  1. To this excellent article I would add that the surface Navy’s concept of “distributed lethality” will increase the complexity of the detection and targeting problems presented to adversaries, but only if the force remains hard to find. The communications connectivity that makes detailed control from afar possible also makes it relatively easy for the adversary to find and target the force. This suggests that SAGs will have to exercise mission command, operate in EMCON, and receive situational awareness, threat warning, and targeting information via broadcast, i.e., operate like the submarine force.

  2. Are commanders ready for the leadership aspects of managing autonomous machines? Military commanders learn over a two- or three-decade career that subordinates can be cajoled, coerced, bullied, flattered, guilted, embarrassed, encouraged, threatened, rewarded, terrified, and inspired to comply with orders; and motivated by everything from money to promotions to extra liberty to a beer on the pier and the lure of camaraderie.

    For now, machines can only be programmed.

    Perhaps this is simpler. Perhaps it is more complex.

    The fear of autonomous military systems is often rooted in scenarios of their “judgment” being inferior to humans’ – as in your Médecins sans Frontières example (though many humans were in and on that loop). On the flip side, are leaders prepared for a machine correctly challenging their human judgment? A machine correctly refusing to act in violation of rules of engagement or other programmed and learned ethical guidelines? What about a machine innovating in ways the commander finds uncomfortable, because they upstage the senior officer, paint them in a bad light, or make them feel inadequate? What happens when a machine says “no” – and the machine is right?

    Today’s young children, who converse with Siri much like with a human friend, will be ready to manage machines. But will our current adult generation of military leaders, raised with predominantly human interactions, be prepared psychologically for the switch?

  3. What I have seen in action is that commanders are not hesitant to utilize unmanned systems because they do not have guaranteed connectivity. They are hesitant to use unmanned systems because, when that comms tether is cut, the unmanned systems we are fielding do not have the programmed algorithms to be in complete compliance with COLREGS. The idea that a system they authorized to be deployed could, in a training scenario, be responsible for the death of civilians or destruction of personal property is the key concern that weighs heavily on their minds, not the inability to be in complete command and control. It is easy to say “accept a certain amount of risk” when you’re not the one accepting it and the risk is, in my opinion, unacceptable.
