The American maritime shipping industry is one of the critical infrastructures (CI) most vulnerable to ransomware and other forms of cybercrime. Maritime shipping accounts for 90-94 percent of world trade; any disruption to this sector will adversely affect the American economy and international trade more broadly. The July 2017 NotPetya ransomware attack that struck Maersk, a Danish maritime shipping company, underscores the need for timely action to protect American maritime infrastructure, as the industry is ill-prepared to prevent and respond to attacks of this sophistication and scale. The recommended course of action encourages the U.S. Government to subsidize cybersecurity and training horizontally and vertically across the maritime shipping industry through the U.S. Coast Guard (USCG).
Cyber Assaulting Maritime Commerce
Any disruptions to global shipping companies, sea lanes of communication, or maritime chokepoints will have potentially disastrous implications for the economies and the supply chains of the U.S. and the global community. The economic impacts of cyber disruptions and damage to ships, ports, refineries, terminals, and support systems are estimated to be in the hundreds of billions of dollars. Moreover, the second- and third-order effects of a cyber attack are not limited to the maritime sector of CI; if more than one port is disrupted at the same time, a greater impact is “likely to occur” for the Critical Manufacturing, Commercial Facilities, Food and Agriculture, Energy, Chemical, and Transportation Systems of the nation’s CI.
Ransomware attacks eclipsed most other cybercrime threats in 2017. The July 2017 NotPetya ransomware attack highlighted the vulnerabilities of the maritime shipping industry to cyber disruptions. One of the most high-profile victims of the attack was the Danish maritime shipping company Maersk. The company estimates upwards of $300 million in losses from the attack, the majority of which relates to lost revenue. Maersk continued operating for ten days without information technology (IT) until its networks were back online, despite ships with 10,000 to 20,000 containers entering a port every fifteen minutes. NotPetya shut down several ports worldwide, reduced Maersk’s volume by 20 percent, and forced the company to handle the remaining 80 percent of its operations manually. Maersk had to replace 45,000 PCs and 4,000 servers and reinstall 2,500 applications.
The maritime shipping industry is highly vulnerable to cybercrime – in particular, ransomware – because of its lack of encryption, increased use of computer services, a lack of standardized training in and awareness of cybersecurity among crews, the sheer cost of defending the maritime IT enterprise, and industry-wide complacency towards cybersecurity. Several navigation systems, such as the Global Positioning System (GPS) and the Automatic Identification System (AIS), are neither encrypted nor authenticated, making them soft targets for cyber criminals. Jamming or spoofing of these systems can run ships aground or cause collisions, which can close a port or shipping channel for days or weeks depending on the severity of the incident. Disruptions to Industrial Control Systems (ICS) can cause injury or death, release harmful pollutants, and inflict extensive economic damage across the maritime shipping industry.
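The AIS weakness is easy to demonstrate. Position reports are broadcast as six-bit-encoded NMEA payloads with no signature or authentication field, so a receiver has no way to verify that a message actually came from the vessel it claims. A minimal Python sketch of the standard payload decoding follows; the sample payload is synthetic, constructed purely for illustration:

```python
def ais_payload_to_bits(payload: str) -> str:
    """Expand an AIS six-bit ASCII payload into a bit string (per ITU-R M.1371)."""
    bits = []
    for ch in payload:
        v = ord(ch) - 48
        if v > 40:          # characters from the second block of the six-bit alphabet
            v -= 8
        bits.append(format(v, "06b"))
    return "".join(bits)

def parse_position_report_header(bits: str) -> dict:
    """Read the common header of a Class A position report (message types 1-3).

    Note what is absent: there is no signature, MAC, or any other
    authentication field anywhere in the message, so any transmitter
    can claim any identity (MMSI) and any position.
    """
    return {
        "msg_type": int(bits[0:6], 2),
        "repeat": int(bits[6:8], 2),
        "mmsi": int(bits[8:38], 2),   # the ship's *claimed* identity
    }

# Synthetic payload encoding message type 1 from MMSI 123456789
header = parse_position_report_header(ais_payload_to_bits("11mg=5@"))
print(header)  # {'msg_type': 1, 'repeat': 0, 'mmsi': 123456789}
```

Because nothing in the frame is authenticated, crafting a bogus report is as simple as running this encoding in reverse with a fabricated MMSI and position.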
Course of Action A: Federal Subsidies for Mandated Cybersecurity Awareness and Training
A Federal Government-enabled focus on prevention and response would proliferate horizontally and vertically across the maritime shipping community. This approach subsidizes industry buy-in, encouraging companies to treat cybersecurity as a cost-effective asset. Simultaneously, it educates the lower echelons of the workforce in digital hygiene so they understand how ransomware and other forms of cybercrime spread. A positive consequence is mitigating the industry’s lack of robust cybersecurity capabilities, a shortfall rooted in complacency and overhead costs. Buy-in is highly probable given NotPetya’s wake-up call to industry and the existing public-private cybersecurity partnerships.
As the lead agency responsible for maritime cybersecurity in the U.S., the USCG issued a cybersecurity strategy in 2015 to identify best practices and voluntary measures. However, others may argue it is not the place of the U.S. government to subsidize cybersecurity best practices, facilitate compliance, and serve as the arbiter of how industry should train and defend against ransomware and other forms of cybercrime, and would opt instead for industry-led approaches alone.
Course of Action B: Leverage Manual Operations and Dated Communications Technologies
This no- and low-tech approach encourages the use of manual navigation operations and older long-range navigation (LORAN) systems to circumvent disruptions to navigational and operational systems. A positive consequence of this approach is the standardization of backup operations for seamless continuity of operations on land, while also mitigating the overreliance on technology at sea. This is a probable course of action given the existing LORAN infrastructure and Maersk operating at 80 percent capacity during the NotPetya attack. A negative consequence is a proliferation of ransomware attacks deliberately targeting the industry, since the approach is passive in nature. This outcome is also probable given the interconnectedness of the maritime sector with other CIs. However, others may argue that manual training and a functional secondary means of communication mitigate the costs of future ransomware attacks.
Course of Action A provides the highest return on investment to address the ransomware threat to the American maritime shipping industry. This prevention-focused and proactive approach will induce a top-down, lateral, and public-private approach to address maritime cybersecurity. While Course of Action B identifies the existence and use of alternative approaches to circumvent – or, at worst, mitigate the consequences of – a ransomware attack, it fails to place a premium on industry-wide digital hygiene which is arguably the most cost-effective, scalable, and fastest approach to ransomware prevention.
Nicholas A. Glavin is a candidate for a Master of Arts in Law and Diplomacy (MALD) from The Fletcher School at Tufts University. He previously worked as a researcher at the U.S. Naval War College’s Center on Irregular Warfare and Armed Groups (CIWAG). The views expressed are the author’s own and do not represent those of the U.S. Government. Follow him on Twitter @nickglavin.
Featured Image: Albert Mærsk in the 70s (Wikimedia Commons)
The U.S. Navy got a lot of press in 2017, and a lot of it was negative. In the Pacific, there were two incidents where U.S. Navy ships collided with civilian vessels, and as a result 17 American Sailors lost their lives. In the wake of these incidents, report after report has come out detailing how the U.S. Navy’s surface fleet is overworked and overwhelmed.
After the collisions, several U.S. Navy commanders lost their jobs, and charges were filed against five Navy officers for offenses ranging up to negligent homicide. This is an almost unprecedented move, and the Navy is attempting to both satisfy the public outcry and remedy the training and readiness shortfalls that have plagued the surface warfare community for some time.
The point isn’t to shame Navy leadership, but rather to point out that the Navy’s surface fleet is terribly overworked. As a nation we are asking them to do too much. Reports show that while underway, Sailors typically work 18-hour days, and fatigue has been cited as a major factor in the collisions. While there may be a desire to generate more overall mine warfare capacity, it is unrealistic to expect the rest of the surface fleet to assume any additional burden for this mission area.
The surface fleet needs to refocus its training and resources on warfighting and lethality. Of all of its currently assigned missions, mine warfare in particular could be transferred to a seabed-specific command.
A Seabed Command would focus entirely on seabed warfare. It could unite many of the currently disparate functions found within the surface, EOD, aviation, and oceanographic communities. Its purview would include underwater surveying and bathymetric mapping, search and recovery, placing and finding mines, testing and operating unmanned submersibles, and developing future technologies that will place the U.S. on the forefront of future seabed battlegrounds.
Why It Is Important
The seabed is the final frontier of the battlespace. Even low earth and geosynchronous orbits have plenty of military satellites, whether they are for communication or surveillance, but the seabed, except for mines and a few small expeditionary vessels, remains largely unexplored.
There are several reasons for this. For one, it’s hard to access. While the U.S. Navy has a few vehicles and systems that allow for deployment to extreme depths, most of the seabed cannot be reached quickly, if at all. Since the collapse of the Soviet Union, this hasn’t been a huge problem. Except for in rare cases of submarine rescue, there has been little need for the Navy to deploy forces to extreme depths.
That is changing. Secretary of Defense Mattis has made it clear that in the coming years, threats from nations such as Russia and China will make conventional forces more relevant than they have been in the past 20 years. It is imperative that the U.S. Navy has a solution to rapidly deploy both offensive and defensive forces to the seabed, because right now it can’t.
While mine-hunting robots have been deployed aboard Arleigh Burke-class destroyers, it seems unlikely that in a full-scale war the Navy will be able to direct these assets to work full-time at seabed warfare. After all, they’re too valuable. The Arleigh Burke-class destroyer proved its mettle in Iraq; being able to place cruise missiles through the window of a building certainly has a deterrent effect. But this also means that any attempts to add mine warfare to the destroyers’ responsibilities will be put on the back burner, and that will allow enemies to gain an advantage over the U.S. Navy.
There is simply a finite amount of time, and the Sailors underway cannot possibly add yet more tasks to their already overflowing plate. It would take a great deal of time for Sailors onboard the destroyers to train and drill on seabed warfare, and that’s time they just don’t have. No matter how you look at it, the surface fleet is already working at capacity.
What is needed is a new naval command, equipped with its own fleet of both littoral and deep-water ships and submarines, which focuses entirely on seabed warfare.
In this new command, littoral ships, like the new Freedom Class LCS, will be responsible for near shore seabed activities. This includes clearing friendly harbors of mines, placing mines in enemy harbors, searching for enemy submarines near the coast, and denying the enemy the ability to reach friendly seabeds.
The deep-water component will be equipped with powerful new technology that can seek out, map, and cut or otherwise exploit the enemy’s undersea communications cables on the ocean floor, while at the same time monitor, defend, maintain, and repair our own. It will also deploy stand-off style torpedo pods near enemy shipping lanes; they will be tasked with dominating the seabeds past the 12 nautical mile limit.
We have to be prepared to think of the next war between the U.S. and its enemies as total war. Supplies and the transfer of supplies between enemy countries will be a prime target for the U.S. Navy. We have to assume that in a full nation vs. nation engagement, the submarines, surface ships, aircraft carriers, and land-based aircraft will be needed elsewhere. Even if they are assigned to engage enemy shipping, there are just not enough platforms to hold every area at risk and still service the required targets.
For example, the U.S. will need its fast-attack submarines to insert Special Forces troops, especially since the appetite to employ the Special Forces community has grown in the last 20 years. They will also be needed to do reconnaissance and surveillance. Likewise, the aircraft carriers will have their hands full executing strike missions, providing close air support to ground troops, working to achieve air superiority, and supporting Special Forces missions. Just like the surface fleet is today, the submarine fleet and the aircraft carriers will be taxed to their limit during an all-out war.
That’s why a seabed-specific command is needed to make the most of the opportunities in this domain while being ready to confront an adversary ready to exploit the seabed. Suppose that during a total war, the Seabed Command could place underwater torpedo turrets on the seabed floor, and control them remotely. A dedicated command could place, operate, and service these new weapons, freeing up both the surface and the submarine fleets to pursue other operations. Under control of Seabed Command, these cheap, unmanned torpedo launchers could wait at the bottom until an enemy sonar contact was identified and then engage. Just like pilots flying the MQ-9 Reaper control the aircraft from thousands of miles away, Sailors based in CONUS could operate these turrets remotely. Even the threat of these underwater torpedo pods would be enough to at least change the way an adversary ships crucial supplies across the ocean. If the pods were deployed in remote areas, it would force the enemy to attempt to shift shipping closer to the coast, where U.S. airpower could swiftly interdict.
The final component of Seabed Command would be a small fleet of submarines, equipped for missions like undersea rescue, repair, and reconnaissance. The submarines would also host saturation diving capabilities, enabling the delivery of personnel and equipment to the seafloor. Because these assets are only tasked with seabed operations, the Sailors would receive unique training that would make them specialists in operating in this unforgiving environment.
A brand new Seabed Command and fleet is in order. It will be made up of both littoral and deep water surface ships, unmanned torpedo turrets that can be deployed to the ocean floor and operated from a remote base, and a small fleet of submarines specially equipped for seabed operations.
The U.S. Navy cannot rely on the surface warfare community to complete this mission; they are simply too busy as it is. While the submarine force might also seem like a logical choice, in a full-on nation vs. nation war, their top priorities will not be seabed operations. Only a standalone command and fleet will ensure America’s dominance at crush depth.
Joseph LaFave is a journalist covering the defense contracting industry, defense trends, and the Global War on Terror. He is a graduate of Florida State University and was an engineer at Lockheed Martin.
Featured Image: ROV Deep Discoverer investigates the geomorphology of Block Canyon (NOAA)
The following two-part series discusses the command and control of future autonomous systems. Part 1 describes how we have arrived at the current tendency towards detailed control. Part 2 proposes how to refocus on mission command.
Today’s commanders are accustomed to operating in permissive environments and have grown addicted to the connectivity that makes detailed control possible. This is emerging as a major vulnerability. For example, while the surface Navy’s concept of “distributed lethality” will increase the complexity of the detection and targeting problems presented to adversaries, it will also increase the complexity of its own command and control. Even in a relatively uncontested environment, tightly coordinating widely dispersed forces will not be a trivial undertaking. This will tend toward lengthening decision cycles, at a time when the emphasis is on shortening them.1 How will the Navy execute operations in a future Anti-Access/Area-Denial (A2/AD) scenario, where every domain is contested (including the EM spectrum and cyberspace) and every fraction of a second counts?
The Navy must “rediscover” and fully embrace mission command now, both to address current vulnerabilities and to unleash the future potential of autonomous systems. These systems offer increased precision, faster reaction times, longer endurance, and greater range, but these advantages may not be realized if the approach to command and control remains unchanged. For starters, to prepare for future environments where data links cannot be taken for granted, commanders must be prepared to give all subordinates, human and machine, wide latitude to operate, which is only afforded by mission command. Many systems will progress from a man “in” the loop (with the person integral to the functioning), to a man “on” the loop (where the person oversees the system and executes command by negation), and then to complete autonomy. In the future, fully autonomous systems may collaborate with one another across a given echelon and solve problems based on the parameters communicated to them as commander’s intent (swarms would fall into this category). However, it may go even further. Mission command calls for adaptable leaders at every level; what if at some level the leaders are no longer people but machines? It is not hard to imagine a forward deployed autonomous system tasking its own subordinates (fellow machines), particularly in scenarios where there is no available bandwidth to allow backhaul communications or enable detailed control from afar. In these cases, mission command will not just be the preferred option, it will be the only option. This reliance on mission command may be seen as a cultural shift, but in reality, it is a return to the Navy’s cultural roots.
Back to Basics
Culturally, the Navy should be well-suited to embrace the mission command model to employ autonomous systems. Traditionally once a ship passed over the horizon there was little if any communication for extended periods of time due to technological limitations. This led to a culture of mission command: captains were given basic orders and an overall intent; the rest was up to them. Indeed, captains might act as ambassadors and conduct diplomacy and other business on behalf of the government in remote areas with little direct guidance.2 John Paul Jones himself stated that “it often happens that sudden emergencies in foreign waters make him [the Naval Officer] the diplomatic as well as the military representative of his country, and in such cases he may have to act without opportunity of consulting his civic or ministerial superiors at home, and such action may easily involve the portentous issue of peace or war between great powers.”3 This is not to advocate that autonomous systems will participate in diplomatic functions, but it does illustrate the longstanding Navy precedent for autonomy of subordinate units.
Another factor in support of the Navy favoring mission command is that the physics of the operating environment may demand it. For example, the physical properties of the undersea domain prohibit direct, routine, high-bandwidth communication with submerged platforms. This is the case with submarines and is being applied to UUVs by extension. This has led to extensive development of autonomous underwater vehicles (AUVs) vice remotely operated ones; AUVs clearly favor mission command.
Finally, the Navy’s culture of decentralized command is the backbone of the Composite Warfare Commander (CWC) construct. CWC is essentially an expression of mission command. Just as technology (the telegraph cable, wireless, and global satellite communication) has afforded the means of detailed control and micromanagement, it has also increased the speed of warfighting, necessitating decentralized execution. Command by negation is the foundation of CWC, and has been ingrained in the Navy’s officer corps for decades. Extending this mindset to autonomous systems will be key to realizing their full capabilities.
This raises the question: how does one train senior commanders who rose through the ranks during the age of continuous connectivity to thrive in a world of autonomous systems where detailed control is not an option? For a start, they could adopt the mindset of General Norman Schwarzkopf, who described how hard it was to resist interfering with his subordinates:
“I desperately wanted to do something, anything, other than wait, yet the best thing I could do was stay out of the way. If I pestered my generals I’d distract them: I knew as well as anyone that commanders on the battlefield have more important things to worry about than keeping higher headquarters informed…”4
That said, even while restraining himself, at the height of OPERATION DESERT STORM, his U.S. Central Command used more than 700,000 telephone calls and 152,000 radio messages per day to coordinate the actions of their subordinate forces. In contrast, during the Battle of Trafalgar in 1805, Nelson used only three general tactical flag-hoist signals to maneuver the entire British fleet.5
Commanders must learn to be satisfied with the ambiguity inherent in mission command. They must become comfortable clearly communicating their intent and mission requirements, whether tasking people or autonomous systems. Again, there isn’t a choice; the Navy’s adversaries are investing in A2/AD capabilities that explicitly target the means that make detailed control possible. Furthermore, the ambiguity and complexity of today’s operating environments prohibit “a priori” composition of complete and perfect instructions.
Placing commanders into increasingly complex and ambiguous situations during training will push them toward mission command, forcing them to trust subordinates closer to the edge who can execute based on commander’s intent and their own initiative. General Dempsey, former Chairman of the Joint Chiefs of Staff, stressed training that presents commanders with fleeting opportunities and rewards those who seize them, in order to encourage commanders to act in the face of uncertainty.
Familiarization training with autonomous systems could take place in large part via simulation, where commanders interact with the actual algorithms and rehearse at a fraction of the cost of executing a real-world exercise. In this setting, commanders could practice giving mission type orders and translating them for machine understanding. They could employ their systems to failure, analyze where they went wrong, and learn to adjust their level of supervision via multiple iterations. This training wouldn’t be just a one-way evolution; the algorithms would also learn about their commander’s preferences and thought process by finding patterns in their actions and thresholds for their decisions. Through this process, the autonomous system would understand even more about commander’s intent should it need to act alone in the future. If the autonomous system will be in a position to task its own robotic subordinates, that algorithm would be demonstrated so the commander understands how the system may act (which will have incorporated what it has learned about how its commander commands).
With this in mind, while it may seem trivial, consideration must be made for the fact that future autonomous systems may have a detailed algorithmic model of their commander’s thought process, “understand” his intent, and “know” at least a piece of “the big picture.” As such, in the future these systems cannot simply be considered disposable assets performing the dumb, dirty, dangerous work that exempt a human from having to go in harm’s way. They will require significant anti-tamper capabilities to prevent an adversary from extracting or downloading this valuable information if they are somehow taken or recovered by the enemy. Perhaps they could even be armed with algorithms to “resist” exploitation or give misleading information.
The Way Ahead
Above all, commanders will need to establish the same trust and confidence in autonomous systems that they have in manned systems and human operators.6 Commanders trust manned systems, even though they are far from infallible. This came to international attention with the airstrike on the Medecins Sans Frontieres hospital operating in Kunduz, Afghanistan. As this event illustrated, commanders must acknowledge the potential for human error, put mitigation measures in place where they can, and then accept a certain amount of risk. In the future, advances in machine learning and artificial intelligence will yield algorithms that far exceed human processing capabilities. Autonomous systems will be able to sense, process, coordinate, and act faster than their human counterparts. However, trust in these systems will only come from time and experience, and the way to secure that is to mainstream autonomous systems into exercises. Initially these opportunities should be carefully planned and executed, not just added in as an afterthought. For example, including autonomous systems in a particular Fleet Battle Experiment solely to check a box that they were used raises the potential for negative training, where the observers see the technology fail due to ill-conceived employment. As there may be limited opportunities to “win over” the officer corps, this must be avoided. Successfully demonstrating the capabilities (and the legitimate limitations) of autonomous systems is critical. Increased use over time will ensure maximum exposure to future commanders, and will be key to widespread adoption and full utilization.
The Navy must return to its roots and rediscover mission command in order to fully leverage the potential of autonomous systems. While it may make commanders uncomfortable, it has deep roots in historic practice and is a logical extension of existing doctrine. Former General Dempsey wrote that mission command “must pervade the force and drive leader development, organizational design and inform material acquisitions.”7 Taking this to heart and applying it across the board will have profound and lasting impacts as the Navy sails into the era of autonomous systems.
Tim McGeehan is a U.S. Navy Officer currently serving in Washington.
The ideas presented are those of the author alone and do not reflect the views of the Department of the Navy or Department of Defense.
 Dmitry Filipoff, Distributed Lethality and Concepts of Future War, CIMSEC, January 4, 2016, http://cimsec.org/distributed-lethality-and-concepts-of-future-war/20831
 Naval Doctrine Publication 6: Naval Command and Control, 1995, http://www.dtic.mil/dtic/tr/fulltext/u2/a304321.pdf, p. 9
 Connell, Royal W. and William P. Mack, Naval Customs, Ceremonies, and Traditions, 1980, p. 355.
 Schwarzkopf, Norman, It Doesn’t Take a Hero: The Autobiography of General Norman Schwarzkopf, 1992, p. 523
 Ibid 2, p. 4
 Greg Smith, Trusting Autonomous Systems: It’s More Than Technology, CIMSEC, September 18, 2015, http://cimsec.org/trusting-autonomous-systems-its-more-than-technology/18908
 Martin Dempsey, Mission Command White Paper, April 3, 2012, http://www.dtic.mil/doctrine/concepts/white_papers/cjcs_wp_missioncommand.pdf
Featured Image: SOUTH CHINA SEA (April 30, 2017) Sailors assigned to Helicopter Sea Combat Squadron 23 run tests on the the MQ-8B Firescout, an unmanned aerial vehicle, aboard littoral combat ship USS Coronado (LCS 4). (U.S. Navy photo by Mass Communication Specialist 3rd Class Deven Leigh Ellis/Released)
The following two-part series discusses the command and control of future autonomous systems. Part 1 describes how we have arrived at the current tendency towards detailed control. Part 2 proposes how to refocus on mission command.
In recent years, the U.S. Navy’s unmanned vehicles have achieved a number of game-changing “firsts.” The X-47B Unmanned Combat Air System (UCAS) executed the first carrier launch and recovery in 2013, first combined manned/unmanned carrier operations in 2014, and first aerial refueling in 2015.1 In 2014, the Office of Naval Research demonstrated the first swarm capability for Unmanned Surface Vehicles (USV).2 In 2015, the NORTH DAKOTA performed the first launch and recovery of an Unmanned Underwater Vehicle (UUV) from a submarine during an operational mission.3 While these successes may represent the vanguard of a revolution in military technology, the larger revolution in military affairs will only be possible with the optimization of the command and control concepts associated with these systems. Regardless of specific mode (air, surface, or undersea), Navy leaders must fully embrace mission command to fully realize the power of these capabilities.
“Unmanned” systems are not necessarily new. The U.S. Navy’s long history includes the employment of a variety of such platforms. For example, in 1919, Coast Battleship #4 (formerly USS IOWA (BB-1)) became the first radio-controlled target ship to be used in a fleet exercise.4 During World War II, participation in an early unmanned aircraft program called PROJECT ANVIL ultimately killed Navy Lieutenant Joe Kennedy (John F. Kennedy’s older brother), who was to parachute from his bomb-laden aircraft before it would be guided into a German target by radio-control.5 In 1946, F6F Hellcat fighters were modified for remote operation and employed to collect data during the OPERATION CROSSROADS atomic bomb tests at Bikini.6 These Hellcat “drones” could be controlled by another aircraft acting as the “queen” (flying up to 30 miles away). These drones were even launched from the deck of an aircraft carrier (almost 70 years before the X-47B performed that feat).
However, the Navy’s achievements over the last few years were groundbreaking because the platforms were autonomous (i.e. controlled by machine, not remotely operated by a person). The current discussion of autonomy frequently revolves around the issues of ethics and accountability. Is it ethical to imbue these machines with the authority to use lethal force? If the machine is not under direct human control but rather evaluating for itself, who is responsible for its decisions and actions when faced with dilemmas? Much has been written about these topics, but there is a related and less discussed question: what sort of mindset shift will be required for Navy leaders to employ these systems to their full potential?
Command, Control, and Unmanned Systems
According to Naval Doctrine Publication 6 – Command and Control (NDP 6), “a commander commands by deciding what must be done and exercising leadership to inspire subordinates toward a common goal; he controls by monitoring and influencing the action required to accomplish what must be done.”7 These enduring concepts have new implications in the realm of unmanned systems. For example, while a commander can assign tasks to any subordinate (human or machine), “inspiring subordinates” has varying levels of applicability based on whether his units consist of “remotely piloted” aircraft (where his subordinates are actual human pilots) or autonomous systems (where the “pilot” is an algorithm controlling a machine). “Command” also includes establishing intent, distributing guidance on allocation of roles, responsibilities, and resources, and defining constraints on actions.8 On one hand, this could be straightforward with autonomous systems as this guidance could be translated into a series of rules and parameters that define the mission and rules of engagement. One would simply upload the mission and deploy the vehicle, which would go out and execute, possibly reporting in for updates but mostly operating on its own, solving problems along the way. On the other hand, in the absence of instructions that cover every possibility, an autonomous system is only as good as the internal algorithms that control it. Even as machine learning drastically improves and advanced algorithms are developed from extensive “training data,” an autonomous system may not respond to novel and ambiguous situations with the same judgment as a human. Indeed, one can imagine a catastrophic military counterpart to the 2010 stock market “flash crash,” where high-frequency trading algorithms designed to act in accordance with certain, pre-arranged criteria did not understand context and misread the situation, briefly erasing $1 trillion in market value.9
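The “rules and parameters” view of commander’s intent can be made concrete. Below is a hypothetical sketch, with all field names, values, and thresholds invented for illustration, of a mission-type order expressed as data an autonomous vehicle could carry and evaluate offline, leaving everything inside the boundaries to the machine’s initiative:

```python
from dataclasses import dataclass

@dataclass
class MissionOrder:
    """A mission-type order: intent and boundaries, not step-by-step tasks.

    All fields here are hypothetical placeholders, not a real tasking format.
    """
    intent: str                      # commander's intent, as free text
    objectives: list                 # what must be accomplished
    max_depth_m: float = 300.0       # constraint: operating depth limit
    area: tuple = ((0.0, 0.0), (10.0, 10.0))   # constraint: (lat, lon) box
    weapons_release_authorized: bool = False   # rules of engagement

def permitted(order: MissionOrder, action: dict) -> bool:
    """Check a candidate action against the order's constraints and ROE.

    How to sequence permitted actions is left to the vehicle's initiative.
    """
    (lat0, lon0), (lat1, lon1) = order.area
    in_area = lat0 <= action["lat"] <= lat1 and lon0 <= action["lon"] <= lon1
    depth_ok = action["depth_m"] <= order.max_depth_m
    roe_ok = order.weapons_release_authorized or action["type"] != "engage"
    return in_area and depth_ok and roe_ok

order = MissionOrder(
    intent="Survey the harbor approaches and report suspected mines",
    objectives=["survey", "classify", "report"],
)
assert permitted(order, {"type": "survey", "lat": 5.0, "lon": 5.0, "depth_m": 120.0})
assert not permitted(order, {"type": "engage", "lat": 5.0, "lon": 5.0, "depth_m": 120.0})
```

The point of the sketch is the division of labor: the commander writes the boundaries once, before deployment, and the vehicle exercises “initiative” only within them, which is exactly where the novel-situation problem described above arises.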
“Control” includes the conduits and feedback from subordinates to their commander that allow them to determine if events are on track or to adjust instructions as necessary. This is reasonably straightforward for a remotely piloted aircraft with a constant data link between platform and operator, such as the ScanEagle or MQ-8 Fire Scout unmanned aerial systems. However, a fully autonomous system may not be in positive communication. Even if it is ostensibly intended to remain in communication, feedback to the commander could be limited or non-existent due to emissions control (EMCON) posture or a contested electromagnetic (EM) spectrum.
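A lost-link fallback of the kind implied above can be sketched as a simple pre-briefed policy: report while connected, tolerate a brief outage, then revert to a default plan. The states, threshold, and behavior names below are hypothetical illustrations chosen for this sketch, not an actual system design.

```python
import enum

class LinkState(enum.Enum):
    CONNECTED = 1
    LOST = 2

def next_behavior(link: LinkState, minutes_since_contact: float,
                  lost_link_threshold: float = 30.0) -> str:
    """Select the vehicle's behavior from its communications state.

    While the data link is up, the system executes tasking and reports back;
    during a short outage it presses on with its last tasking; past the
    pre-briefed threshold it falls back to a default lost-link plan.
    """
    if link is LinkState.CONNECTED:
        return "execute_tasking_and_report"
    if minutes_since_contact < lost_link_threshold:
        return "continue_last_tasking"
    return "execute_prebriefed_lost_link_plan"  # e.g., proceed to a rally point
```

The point of the sketch is that the commander’s “control” over such a platform is exercised in advance, by choosing the threshold and the fallback plan, rather than through a continuous feedback loop.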
Mission Command and Unmanned Systems
In recent years, there has been a renewed focus across the Joint Force on the concept of “mission command.” Mission command is defined as “the conduct of military operations through decentralized execution based upon mission-type orders,” and it lends itself well to the employment of autonomous systems.10 Joint doctrine states:
“Mission command is built on subordinate leaders at all echelons who exercise disciplined initiative and act aggressively and independently to accomplish the mission. Mission-type orders focus on the purpose of the operation rather than details of how to perform assigned tasks. Commanders delegate decisions to subordinates wherever possible, which minimizes detailed control and empowers subordinates’ initiative to make decisions based on the commander’s guidance rather than constant communications.”11
Mission command for an autonomous system would require commanders to clearly confer their intent, objectives, constraints, and restraints in succinct instructions, and then rely on the “initiative” of said system. While this decentralized arrangement is more flexible and better suited to deal with ambiguity, it opens the door to unexpected or emergent behavior in the autonomous system. (Then again, emergent behavior is not confined to algorithms, as humans may perform in unexpected ways too.)
In addition to passing feedback and information up the chain of command to build a shared understanding of the situation, mission command also emphasizes the horizontal flow of information among subordinates across an echelon. Because it relies on subordinates knowing the commander’s intent and the mission requirements, mission command is much less vulnerable to disruption than detailed methods of command and control.
However, some commanders today do not fully embrace mission command with human subordinates, much less feel comfortable extending that trust to autonomous systems. They issue explicit instructions to subordinates in a highly centralized arrangement, where volumes of information flow up and detailed orders flow down the chain of command. This may be acceptable in deliberate situations where time is not a major concern, where procedural compliance is emphasized, or where there can be no ambiguity or margin for error. Examples of unmanned systems suited to this arrangement include a bomb disposal robot or a remotely piloted aircraft that requires constant intervention and re-tasking, perhaps for rapid repositioning of the platform to get a better look at an emerging situation or to better discriminate between friend and foe. However, this detailed control does not “function well when the vertical flow of information is disrupted.”12 Furthermore, when it comes to autonomous systems, such detailed control undermines much of the purpose of having an autonomous system in the first place.
A fundamental task of the commander is to recognize which situations call for detailed control or mission command and act appropriately. Unfortunately, the experience gained by many commanders over the last decade has introduced a bias towards detailed control, which will hamstring the potential capabilities of autonomous systems if this tendency is not overcome.
The American military has enjoyed major advantages in recent conflicts due to global connectivity and continuous communications. However, this has redefined expectations and higher echelons increasingly rely on detailed control (for manned forces, let alone unmanned ones). Senior commanders (or their staffs) may levy demands to feed a seemingly insatiable thirst for information. This has led to friction between the echelons of command, and in some cases this interaction occurs at the expense of the decision-making capability of the unit in the field. Subordinate staff watch officers may spend more time answering requests for information and “feeding the beast” of higher headquarters than they spend overseeing their own operations.
It is understandable why this situation exists today. The senior commander (with whom responsibility ultimately resides) expects to be kept well-informed. To be fair, in some cases a senior commander located at a fusion center far from the front may have access to multiple streams of information, giving them a better overall view of what is going on than the commander actually on the ground. In other cases, it is today’s 24-hour news cycle and zero tolerance for mistakes that have led senior commanders to succumb to the temptation to second-guess their subordinates and micromanage their units in the field. A compounding factor that may be influencing commanders in today’s interconnected world is “Fear of Missing Out” (FoMO), which psychologists describe as apprehension or anxiety stemming from the availability of volumes of information about what others are doing (think social media). It leads to a strong, almost compulsive desire to stay continually connected.13
Whatever the reason, this is not a new phenomenon. Understanding previous episodes when leadership has “tightened the reins” and the subsequent impacts is key to developing a path forward to fully leverage the potential of autonomous systems.
Veering Off Course
The recent shift of preference away from mission command toward detailed control appears to echo the impacts of previous advances in the technology employed for command and control in general. For example, when speaking of his service with the U.S. Asiatic Squadron and the introduction of the telegraph before the turn of the 20th century, Rear Admiral Caspar Goodrich lamented “Before the submarine cable was laid, one was really somebody out there, but afterwards one simply became a damned errand boy at the end of a telegraph wire.”14
Later, the impact of wireless telegraphy proved to be a mixed blessing for commanders at sea. Interestingly, the contrasting points of view clearly described how it would enable micromanagement; the difference in opinion was whether this was good or bad. This was illustrated by two 1908 newspaper articles regarding the introduction of wireless in the Royal Navy. One article extolled its virtues, describing how the First Sea Lord in London could direct all fleet activities “as if they were maneuvering beneath his office windows.”15 The other article described how those same naval officers feared “armchair control… by means of wireless.”16 In century-old text that could be drawn from today’s press, the article quoted a Royal Navy officer:
“The paramount necessity in the next naval war will be rapidity of thought and of execution…The innovation is causing more than a little misgiving among naval officers afloat. So far as it will facilitate the interchange of information and the sending of important news, the erection of the [wireless] station is welcomed, but there is a strong fear that advantage will be taken of it to interfere with the independent action of fleet commanders in the event of war.”
Military historian Martin van Creveld related a more recent lesson of technology-enabled micromanagement from the U.S. Army. This time the technology in question was the helicopter, whose widespread use by multiple echelons of command during the Vietnam War drove the shift away from mission command to detailed control:
“A hapless company commander engaged in a firefight on the ground was subjected to direct observation by the battalion commander circling above, who was in turn supervised by the brigade commander circling a thousand or so feet higher up, who in his turn was monitored by the division commander in the next highest chopper, who might even be so unlucky as to have his own performance watched by the Field Force (corps) commander. With each of these commanders asking the men on the ground to tune in his frequency and explain the situation, a heavy demand for information was generated that could and did interfere with the troops’ ability to operate effectively.”17
However, not all historic shifts toward detailed control are due to technology; some are cultural. For example, leadership had encroached so much on the authority of commanders in the days leading up to World War II that Admiral King had to issue a message to the fleet with the subject line “Exercise of Command – Excess of Detail in Orders and Instructions,” in which he voiced his concern over the:
“almost standard practice – of flag officers and other group commanders to issue orders and instructions in which their subordinates are told how as well as what to do to such an extent and in such detail that the Custom of the service has virtually become the antithesis of that essential element of command – initiative of the subordinate.”18
Admiral King attributed this trend to several cultural causes, including the anxiety of seniors that any mistake of a subordinate would be attributed to the senior and thereby jeopardize promotion, the activities of staffs infringing on lower echelon functions, and the habit and expectation of detailed instructions among junior and senior alike. He went on to say that they were preparing for war, when there would be neither time nor opportunity for this method of control, and that it was conditioning subordinate commanders to rely on explicit guidance, depriving them of the chance to learn how to exercise initiative. Now, over 70 years later, as the Navy moves forward with autonomous systems, the technology-enabled and culture-driven drift toward detailed control is again becoming an Achilles heel.
 David Smalley, The Future Is Now: Navy’s Autonomous Swarmboats Can Overwhelm Adversaries, ONR Press Release, October 5, 2014, http://www.onr.navy.mil/en/Media-Center/Press-Releases/2014/autonomous-swarm-boat-unmanned-caracas.aspx
 Associated Press, Submarine launches undersea drone in a 1st for Navy, Military Times, July 20, 2015, http://www.militarytimes.com/story/military/tech/2015/07/20/submarine-launches-undersea-drone-in-a-1st-for-navy/30442323/
 Naval History and Heritage Command, Iowa II (BB-1), July 22, 2015, http://www.history.navy.mil/research/histories/ship-histories/danfs/i/iowa-ii.html
 Trevor Jeremy, LT Joe Kennedy, Norfolk and Suffolk Aviation Museum, 2015, http://www.aviationmuseum.net/JoeKennedy.htm
 Puppet Planes, All Hands, June 1946, http://www.navy.mil/ah_online/archpdf/ah194606.pdf, p. 2-5
 Naval Doctrine Publication 6: Naval Command and Control, 1995, http://www.dtic.mil/dtic/tr/fulltext/u2/a304321.pdf, p. 6
 David Alberts and Richard Hayes, Understanding Command and Control, 2006, http://www.dodccrp.org/files/Alberts_UC2.pdf, p. 58
 Ben Rooney, Trading program sparked May ‘flash crash’, October 1, 2010, CNN, http://money.cnn.com/2010/10/01/markets/SEC_CFTC_flash_crash/
 DoD Dictionary of Military and Associated Terms, March 2017, http://www.dtic.mil/doctrine/new_pubs/jp1_02.pdf
 Andrew Przybylski, Kou Murayama, Cody DeHaan, and Valerie Gladwell, Motivational, emotional, and behavioral correlates of fear of missing out, Computers in Human Behavior, Vol 29 (4), July 2013, http://www.sciencedirect.com/science/article/pii/S0747563213000800
 Michael Palmer, Command at Sea: Naval Command and Control since the Sixteenth Century, 2005, p. 215
 W. T. Stead, Wireless Wonders at the Admiralty, Dawson Daily News, September 13, 1908, https://news.google.com/newspapers?nid=41&dat=19080913&id=y8cjAAAAIBAJ&sjid=KCcDAAAAIBAJ&pg=3703,1570909&hl=en
 Fleet Commanders Fear Armchair Control During War by Means of Wireless, Boston Evening Transcript, May 2, 1908, https://news.google.com/newspapers?nid=2249&dat=19080502&id=N3Y-AAAAIBAJ&sjid=nVkMAAAAIBAJ&pg=470,293709&hl=en
 Martin van Creveld, Command in War, 1985, p. 256-257.
 CINCLANT Serial (053), Exercise of Command – Excess of Detail in Orders and Instructions, January 21, 1941
Featured Image: An X-47B drone prepares to take off. (U.S. Navy photo)