
Alexa, Write my OPORD: Promise and Pitfalls of Machine Learning for Commanders in Combat

By Jeff Wong


Jump into the time machine and fast forward a few years into the future. The United States is at war, things are not going well, and the brass wants to try something new. John McCarthy,1 a Marine lieutenant colonel whose knowledge of technology is limited to the Microsoft Office applications on his molasses-slow government laptop, mulls over his tasks, as specified by his two-star boss, the commander of Joint Task Force 58:

1. Convene a joint planning group to develop a plan for the upcoming counteroffensive. (Check.)

2. Leverage representatives from every staff section and subject-matter experts participating virtually from headquarters in Hawaii or CONUS. (Roger.)

3. Use an experimental machine-learning application to support the planning and execution of the operation. (We’re screwed.)

Nearly 7,000 miles from a home he might never see again, McCarthy considered two aphorisms. The first was from Marcus Aurelius, the second-century Roman emperor and Stoic thinker: “Never let the future disturb you. You will meet it, if you have to, with the same weapons of reason which today arm you against the present.”2 The second was from Mike Tyson, the fearsome boxer, convicted felon, and unlikely philosopher: “Everybody has a plan until they get punched in the mouth.”3

Artificial intelligence (AI), including large language models (LLMs) such as ChatGPT, is driving a societal revolution that will impact all aspects of life, including how nations wage war and secure their economic prosperity. “The ability to innovate faster and better – the foundation on which military, economic, and cultural power now rest – will determine the outcome of great-power competition between the United States and China,” notes Eric Schmidt, the former chief executive officer of Google and chair of the Special Competitive Studies Project.4 The branch of AI that uses neural networks to mimic human cognition — machine learning (ML) — offers military planners a powerful tool for planning and executing missions with greater accuracy, flexibility, and speed. Senior political and military leaders in China share this view and have made global technological supremacy a national imperative.5

Through analyzing vast amounts of data and applying complex algorithms, ML can enhance situational awareness, anticipate threats, optimize resources, and adapt to generate more successful outcomes. However, as ML becomes more widespread and drives technological competition against China, American military thinkers must develop frameworks to address the role of human judgment and accountability in decision-making and the potential risks and unintended consequences of relying on automated systems in war.

To illustrate the promise and pitfalls of using ML to support decision-making in combat, imagine the pages of a wall calendar flying to some point when America must fight a war against a peer adversary. Taking center stage in this fictional journey are two figures: The first is McCarthy, an officer of average intelligence who only graduated from the Eisenhower School for National Security and Resource Strategy thanks to a miraculous string of B+ papers at the end of the academic year. The second is “MaL,” a multimodal LLM that is arguably the most capable – yet most immature – member of McCarthy’s planning team. This four-act drama explores how McCarthy and his staff scope the problems they want MaL to help them solve, how they leverage ML to support operational planning, and how they use MaL to support decision-making during execution. The final act concludes by offering recommendations for doctrine, training and education, and leadership and policies to better harness this capability in the wars to come.

Act One: “What Problems Do We Want This Thing to Solve?”

The task force was previously known as “JTF-X,” an experimental formation that blended conventional legacy platforms with AI and autonomous capabilities. As the tides of war turned against the United States and its allies, the Secretary of Defense (SecDef) pressured senior commanders to expand the use of AI-enabled experimental capabilities. Rather than distribute its capabilities across the rest of the joint force, the SecDef ordered JTF-X into duty as a single unit to serve as the theater reserve for a geographic combatant commander. “Necessity breeds innovation… sometimes kicking and screaming,” she said.

Aboard the JTF command ship, in a cramped room full of maps, satellite imagery, and charts thick with unit designators, McCarthy stared at a blinking cursor on a big-screen projection of MaL. Other members of the planning group looked over his shoulder as he impulsively typed out, “Alexa, write my OpOrd.” Undulating dots suggested MaL was formulating a response before it replied, “I’m not sure what you’re asking for.”

The JPG chief, an Air Force senior master sergeant, voiced what the humans in the room were thinking: “Sir, what problems do we want this thing to solve?”

The incredible capacity of ML tools to absorb, organize, and generate insights from large volumes of data suggests that they hold great promise for supporting operational planning. Still, leaders, planners, and ML tool developers must determine how best to harness the capability to solve defined military problems. For instance, the Ukrainian military uses AI to collect and assess intelligence, surveillance, and reconnaissance (ISR) data from numerous sources in the Russia-Ukraine conflict.6 But as Ukrainian planners are probably discovering today, they must do more than throw current ML tools at problem sets. Current LLMs fall short of the desired capability to help human planners draw inferences and make sense of the operating environment. Military professionals must tailor problem sets to match the capabilities and limitations of ML solutions.

Although tools supporting decision advantage comprised a small fraction of the 685 AI-related projects and accounts within the DoD as of 2021, existing efforts align with critical functions such as the collection and fusion of data from multiple domains (akin to the DoD’s vision for Joint All-Domain Command and Control (JADC2)); multidomain decision support for a combatant command headquarters; automated analysis of signals across the electromagnetic spectrum; and location of bots to support defensive cyber operations.7 There are numerous tools with various tasks and functions, but the crux of the problem will be focusing the right tool or set of tools on the appropriate problem sets. Users need to frame planning tasks and precisely define the desired outputs for a more immature LLM capability, such as the fictional MaL.

McCarthy mapped out a task organization for the JPG to align deliverables with available expertise. The team chief scribbled dates and times for upcoming deliverables on a whiteboard, including the confirmation brief for the commander in 24 hours. An Army corporal sat before a desktop computer to serve as the group’s primary interface with MaL. To help the group develop useful queries and troubleshoot, MaL’s developers in Hawaii participated via a secure video teleconference.

MaL was already able to access volumes of existing data – operations and contingency plans, planning artifacts from previous operations, ISR data from sensors ranging from national assets to stand-in forces in theater, fragmentary orders, and mountains of open-source information.

Act Two: MaL Gets Busy to the ‘Left of the Bang’

Some observers believe that ML capabilities can flatten the “orient” and “decide” steps of the late military theorist John Boyd’s observe-orient-decide-act (OODA) decision loop, expanding a commander’s opportunity to understand context, gain an appreciation of the battlespace, frame courses of action, and explore branches and sequels to inform decisions.8 Nevertheless, the greater capacity that ML tools provide does not eliminate the need for leaders to remain intimately involved in the planning process and work with the platform to define decision points as they weigh options, opportunities, risks, and possible outcomes.

Planners should examine frameworks such as the OODA loop, intelligence preparation of the battlespace (IPB), and the Joint Operational Planning Process (JOPP) to guide where they could apply ML to maximum effect. To support IPB, ML can automate aspects of collection and processing, such as identifying objects, selecting potential targets for collection, and guiding sensors. ML capabilities for image and audio classification and natural language processing are already in commercial use. They support essential functions for autonomous vehicles, language translation, and transit scheduling. These commercial examples mirror military use cases, as nascent platforms fuse disparate data from multiple sources in all five warfighting domains.9

MaL’s digital library included the most relevant intelligence reports; adversary tactics, techniques, and procedures summaries; videos of possible target locations taken by uncrewed systems; raw signals intelligence; and assessments of the enemy orders of battle and operational tendencies from the early months of the conflict. The corpus of data also included online news stories and social media postings scraped by an all-source intelligence aggregator.

McCarthy said, “As a first step, let’s have MaL paint the picture for us based on the theater-level operational context, then create an IPB presentation with no more than 25 PowerPoint slides.” After the corporal entered the query, the graphic of a human hand drumming its fingers on a table appeared.

Two minutes later, MaL saved a PowerPoint file on the computer’s desktop and announced in a metallic voice, “Done, sir, done!” McCarthy and his J-2 reviewed the IPB brief, which precisely assessed the enemy, terrain, weather, and civil considerations. MaL detailed the enemy’s most likely and dangerous courses of action, discussed adversary capabilities and limitations across all domains, and provided a target-value analysis aligning with the most recent intelligence. The J-2 reviewed the product and said, “Not bad.” She added, “Should I worry about losing my job?”

“Maybe I should worry about losing mine,” McCarthy said. “Let’s go through the planning process with MaL and have it generate three COAs based on the commander’s planning guidance and intent statement.”

American military planning frameworks – JOPP and its nearly identical counterparts across the services – are systematic processes that detail the operational environment, the higher commander’s intent, specified and implied tasks, friendly and enemy COAs, and estimates of supportability across all warfighting functions. They benefit the joint force by establishing uniform expectations about the information needed to support a commander’s estimate of the situation. However, current planning frameworks may hinder decision-making because a commander and his staff may become too focused on the process instead of devoting their energies and mental bandwidth to quickly orienting themselves to a situation and making decisions. Milan Vego, a professor of operational art at the U.S. Naval War College, writes of “a great temptation to steadily expand scope and the amount of information in the (commander’s) estimate. All this could be avoided if commanders and staffs are focused on the mental process and making a quick and good decision.”10

An ML-enabled decision-support capability could help planners stay above process minutiae by suggesting options for matching available weapon systems to targets, generating COAs informed by real-time data, and assessing the likelihood of success for various options in a contemporary battlespace, which features space and cyberspace as contested warfighting domains.11

MaL developed three unacceptable COAs, which variously exceeded the unit’s authorities as outlined in the JTF’s initiating directive or extended kinetic operations into the adversary’s mainland, risking escalation.

McCarthy rubbed his face and said, “Time for a reboot. Let’s better define constraints and restraints, contain operations in our joint operational area, and have it develop assessments for risks to mission and force.” He added, “And this time, we’ll try not to provoke a nuclear apocalypse.”

The planning team spent several more hours refining their thoughts, submitting prompts, and reviewing the results. Eventually, MaL generated three COAs that were feasible, acceptable, complete, and supportable. MaL tailored battlespace architecture, fire support coordination measures, and a detailed sustainment plan for each COA and mapped critical decision points throughout the operation. MaL also helped the JTF air officer develop a distinct aviation support plan for each COA.

The planning team worked with MaL to develop staff estimates for each COA. The logistics and communications representatives were surprised at how quickly MaL produced coherent staff estimates following a few hours of queries. The fires, intelligence, and maneuver representatives similarly worked with MaL to develop initial fire support plans to synchronize with the group’s recommended COA.

McCarthy marveled at MaL’s ability to make sense of large amounts of data, but he was also surprised at the ML platform’s tendency to misinterpret words. For instance, it continually conflated the words “delay,” “disrupt,” and “destroy,” which are distinct tactical tasks with different effects on enemy formations. The planning team reviewed MaL’s COA overviews and edited the platform’s work. The staff estimates were detailed and insightful but still required corrections.

During the confirmation brief, the JTF commander debated several details about MaL’s outputs and risk assessments of the planning team’s recommended COA. McCarthy said, “Respectfully, Admiral, this is your call. MaL is one tool in your toolbox. We can tweak the COA to suit your desires. We can also calibrate the level of automation in the kill chain based on your intent.”

After a moment, the admiral said, “I’ll follow the planning team’s recommendation. Notice that I didn’t say MaL’s recommendation because MaL is just one part of your team.”

Act Three: MaL Lends a Hand to the Right of the ‘Bang’

Contemporary military thinkers believe that ML-enabled tools could improve decision-making during the execution of an operation, leveraging better situational awareness to suggest more effective solutions for problems that emerge in a dynamic battlespace. However, critics argue that developing even a nascent ML-enabled capability is impossible because of the inherent limits of ML-enabled platforms to generate human-like reasoning and infer wisdom from incomplete masses of unstructured data emanating from a 21st-century battlefield. Some are also concerned about the joint force’s ability to send and receive data from a secure cloud subject to possible malicious cyber activities or adversarial ML. Prussian military thinker Carl von Clausewitz reminds practitioners of the operational art that “War is the realm of uncertainty; three-quarters of the factors on which action in war is based are wrapped in a fog of greater or lesser uncertainty.”12 Technological solutions such as ML-enabled capabilities can temporarily lift parts of this fog for defined periods. Still, users must understand the best use of these capabilities and be wary of inferring certainty from their outputs.

Emerging capabilities suggest that ML can augment and assist human decision-making in several ways. First, ML systems can enhance situational awareness by establishing and maintaining a real-time common operational picture derived from multiple sensors in multiple domains. Greater battlespace awareness provides context to support better decision-making by commanders and more accurate assessments of events on the battlefield. Second, ML can improve the effectiveness and efficiency of kill-chain analytics by quickly matching available sensors and shooters with high-value or high-payoff targets.13 This efficiency is essential in the contemporary battlespace, where ubiquitous sensors can quickly detect a target based on a unit’s emissions in the electromagnetic spectrum or signatures from previously innocuous activities like an Instagram post or a unit’s financial transaction with a host-nation civilian contractor.
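
The sensor-to-shooter pairing described above can be reduced to an assignment problem. The following sketch is purely illustrative: it uses a greedy, score-based matching (priority times estimated probability of kill), and every unit name, range, and score is a made-up assumption rather than a description of any fielded kill-chain tool.

```python
# Illustrative sketch: greedily pair available shooters with targets,
# ranked by a simple priority-times-probability-of-kill score.
# All unit names, ranges, and numbers here are hypothetical.
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    priority: int      # higher = more valuable
    distance_km: float

@dataclass
class Shooter:
    name: str
    range_km: float
    p_kill: float      # estimated probability of kill

def pair_shooters(targets, shooters):
    """Assign each shooter at most one in-range target, best score first."""
    candidates = sorted(
        ((t.priority * s.p_kill, t, s)
         for t in targets for s in shooters
         if s.range_km >= t.distance_km),
        key=lambda c: c[0], reverse=True)
    pairs, used_t, used_s = [], set(), set()
    for score, t, s in candidates:
        if t.name not in used_t and s.name not in used_s:
            pairs.append((s.name, t.name, round(score, 2)))
            used_t.add(t.name)
            used_s.add(s.name)
    return pairs

targets = [Target("ASCM battery", 9, 120.0), Target("radar site", 6, 40.0)]
shooters = [Shooter("HIMARS", 300.0, 0.7), Shooter("loitering UAS", 60.0, 0.5)]
print(pair_shooters(targets, shooters))
```

A real decision aid would weigh far more factors (deconfliction, magazine depth, collateral damage estimates), but the core idea is the same: score every feasible pairing and surface the best options for a human to approve.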

Indeed, some AI observers in the U.S. defense analytic community argue that warfighters must quickly adopt ML to maintain a competitive edge against the People’s Liberation Army, which some observers believe will favor the possible gains in warfighting effectiveness and efficiency over concerns about ethical issues such as implementing human-off-the-loop AI strategies.14 ML-enabled feedback constructs will enhance the control aspects of command and control to employ a large, adaptable, and complex multidomain force.15

It was now D+21. JTF-58 achieved its primary objectives, but the campaign did not unfold as intended. During the shaping phase of the operation, several high-value targets, including mobile anti-ship cruise missile batteries, escaped kinetic targeting efforts, living to fight another day and putting U.S. naval shipping at risk for the latter phases of the campaign. MaL failed to update the developers’ advertised “dynamic operating picture” despite attempts by forward-deployed sensors and reconnaissance teams to report their movement. Incredibly, the DoD still did not have access to data from some weapon systems due to manufacturer stipulations.16

MaL’s developers insisted that the forward-deployed sensors should have had enough computing power to push edge data to the JTF’s secure cloud. The CommO speculated that environmental conditions or adversary jamming could have affected connectivity. McCarthy shook his head and said, “We need to do better.”

MaL performed predictably well in some areas to the right of the bang. The task force commander approved using MaL to run networked force protection systems, including a Patriot battery that successfully intercepted an inbound missile and electronic-warfare (EW) systems that neutralized small unmanned aerial systems targeting a fuel farm. MaL’s use in these scenarios did not stretch anyone’s comfort level since these employment methods were barely different from the automation of systems like the U.S. Navy’s Phalanx close-in weapon system (CIWS), which has detected, evaluated, tracked, engaged, and conducted kill assessments of airborne threats for more than four decades.17

MaL’s communications and logistics staff estimates were precise and valuable for the staff. The CommO adjusted the tactical communications architecture based on MaL’s predictions about enemy EW methods and the effects of weather and terrain on forward maneuver elements. Similarly, the LogO worked with the OpsO to establish additional forward-arming and refueling points (FARPs) based on MaL’s fuel and munitions consumption projections.

In the middle of the operation, the task force commander issued a fragmentary order to take advantage of an unfolding opportunity. MaL leveraged open-source data from host-nation news websites and social media postings by enemy soldiers to inform battle damage assessment of kinetic strikes. Some of that information was fake and skewed the assessment until the intelligence officer corrected it by comparing it with satellite imagery and human intelligence reporting.

As with any emerging capability, commanders and their staffs must consider the risks of integrating ML into the planning, execution, and assessment of operations. One of the risks is inherent in forecasting, as the ML platform artificially closes the feedback loop to a decision-maker sooner than one would expect during real-world operations. Retired U.S. Navy Captain Adam Aycock and Naval War College professor William Glenney IV assert that this lag might make ML outputs moot when a commander makes a decision. “The operational commander might not receive feedback, and the effects of one move might not be recognized until several subsequent moves have been made,” Aycock and Glenney write. “Furthermore, a competent enemy would likely attempt to mask or manipulate this feedback. Under such circumstances … it is difficult to ‘learn’ from a first move in order to craft the second.”18

Another risk is that the data used by ML platforms are, in some way, inaccurate, incomplete, or unstructured. Whether real or training, flawed data will lead to inaccurate outputs and likely foul an ML-enabled tool’s assessment of the environment and COA development. “Unintentional failure modes can result from training data that do not represent the full range of conditions or inputs that a system will face in deployment,” write Wyatt Hoffman and Heeu Millie Kim, researchers with the Center for Security and Emerging Technology at Georgetown University. “The environment can change in ways that cause the data used by the model during deployment to differ substantially from the data used to train the model.”19

The corollary to inaccurate data is adversarial ML, in which an enemy manipulates data to trick an ML system, degrade or disrupt optimal performance, and erode users’ trust in the capability. Adversarial ML can trick a model into misidentifying potential targets or mischaracterizing terrain and weather. In one notable demonstration, researchers at the Chinese technology giant Tencent placed stickers on a road to trick the lane-recognition system of a Tesla semi-autonomous car, causing it to swerve into the wrong lane.20 Just the possibility of a so-called “hidden failure mode” could exacerbate fears about the reliability of any ML-enabled system. “Operators and military commanders need to trust that ML systems will operate reliably under the realistic conditions of a conflict,” Hoffman and Kim write. “Ideally, this will militate against a rush to deploy untested systems. However, developers, testers, policymakers, and commanders within and between countries may have very different risk tolerances and understandings of trust in AI.”21
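
The mechanics of such an attack can be shown with a toy model. The sketch below, which assumes nothing beyond a hypothetical linear “target vs. clutter” classifier with made-up weights, applies a small perturbation in the style of the fast gradient sign method: each feature is nudged slightly against the decision boundary, and the label flips even though no single input changed dramatically.

```python
# Toy adversarial-ML demonstration against a hypothetical linear classifier.
# Weights, features, and the perturbation size are invented for illustration.

w = [0.9, -0.4, 0.7]   # classifier weights over three notional sensor features
b = -0.5               # bias term

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def classify(x):
    """Linear decision rule: positive score means 'target'."""
    return "target" if dot(w, x) + b > 0 else "clutter"

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

x = [0.8, 0.2, 0.3]            # honest input: classified as a target
eps = 0.3                      # small per-feature perturbation budget
# FGSM-style attack on a linear model: push each feature against the weights
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(classify(x))       # original input
print(classify(x_adv))   # perturbed input: the label flips
```

Defenses such as adversarial training and input sanitization exist, but the example shows why flawed or manipulated inputs are treated as a first-order risk for ML-enabled decision aids.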

Act Four: Hotwash

McCarthy took advantage of an operational pause to conduct a hotwash. Over lukewarm coffee and Monster energy drinks, the conversation gravitated toward how the task force could use MaL better. The group scribbled a few recommendations on integrating ML into doctrine, training and education, and leadership and policies until the ship’s 1-MC sounded general quarters.

Doctrine: To realize the utility of ML, military leaders should consider two changes to existing doctrine. First, doctrine developers and the operational community should consider the concept of “human command, machine control,” in which ML would use an auction-bid process akin to ride-hailing applications to advertise and fulfill operational requirements across the warfighting functions. Under this construct, a commander publishes or broadcasts tasks, including constraints, priorities, metrics, and objectives. “A distributed ML-enabled control system would award winning bids to a force package that could execute the tasking and direct the relevant forces to initiate the operation,” write naval theorists Harrison Schramm and Bryan Clark. Forces or platforms that accept the commander’s bid conduct (or attempt to conduct) the mission and report the results “when communications availability allows.”22 This concept supports mission-type orders/mission command and allows C2 architectures to flex to instances and areas subject to low-bandwidth constraints.23
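
The broadcast-and-bid construct above can be sketched in a few lines. This is a minimal, hypothetical model, not Schramm and Clark’s design: the task fields, the capability and time thresholds, and the scoring rule are all invented to show the shape of the dispatch logic.

```python
# Minimal sketch of "human command, machine control" via auction-bid tasking,
# loosely modeled on ride-hailing dispatch. Task fields, scoring weights, and
# unit names are hypothetical; a real system would weigh many more factors.

def broadcast_task(task, force_packages):
    """Collect bids from packages that meet the task's constraints,
    then award the task to the highest-scoring bidder."""
    bids = []
    for fp in force_packages:
        meets_capability = fp["capability"] >= task["required_capability"]
        meets_deadline = fp["time_on_station_hr"] <= task["deadline_hr"]
        if meets_capability and meets_deadline:
            # Simple score: prefer capability margin, penalize response time
            score = fp["capability"] - 0.1 * fp["time_on_station_hr"]
            bids.append((score, fp["unit"]))
    if not bids:
        return None  # no package can fulfill the task; escalate to the commander
    return max(bids)[1]

task = {"name": "suppress coastal radar",
        "required_capability": 5, "deadline_hr": 6}
packages = [
    {"unit": "EW det", "capability": 7, "time_on_station_hr": 4},
    {"unit": "UAS swarm", "capability": 5, "time_on_station_hr": 2},
    {"unit": "DDG", "capability": 9, "time_on_station_hr": 8},  # misses deadline
]
print(broadcast_task(task, packages))
```

The key design feature for low-bandwidth environments is that only the task broadcast and the winning bid need to traverse the network; the commander states intent and constraints, and the control layer handles the matching.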

Second, doctrine developers should adjust joint, service, and domain-centric planning processes to account for additional planning aids, such as LLMs, modeling and simulation, and digital twins, which can more deeply explore COAs, branches, and sequels and accelerate understanding of threats and the operating environment. Explicitly changing planning doctrine to account for these emerging capabilities will encourage their use and emphasize their criticality to effective planning.

Training and Education: Tactical units must train and continually develop ML technical experts capable of conducting on-the-spot troubleshooting and maintenance of the tool. Meanwhile, the services should develop curricula to train budding junior leaders — corporals, sergeants, lieutenants, ensigns, and captains — that introduce them to machine-learning tools applicable to their warfighting domains, provide best practices for generating productive outputs, and articulate risks – and risk mitigations – due to skewed data and poor problem framing.

Best practices should also be documented and shared across the DoD. Use of ML capabilities should become part of a JPG’s battle drill, featuring a designated individual whose primary duty is to serve as the human interface with a decision-support tool such as MaL. Rather than work from scratch at the start of every planning effort, JPGs should have a list of queries readily available for a range of scenarios that can inform a commander’s estimate of the situation, subsequent planning guidance, and formulation of intent based on an initial understanding of the operating environment. Prompts that solicit ML-enabled recommendations on task organization, force composition and disposition, risks to force or mission, targeting, and other essential decisions should be ready for immediate use to speed the JPG’s planning tempo and, ultimately, a unit’s speed of action. The information management officer (IMO), who in some headquarters staffs is relegated to updating SharePoint portals, should be the staff’s subject-matter expert for managing ML capabilities. IMOs would be the military equivalent of prompt engineers, coaxing and guiding AI/ML models to generate relevant, coherent, and consistent outputs to support the unit’s mission.24
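
A pre-built query library of the kind described above could be as simple as a set of named templates with required fields, so a planning group under time pressure fills in blanks instead of drafting prompts from scratch. The template names and placeholder fields below are illustrative assumptions, not drawn from any fielded tool.

```python
# Sketch of a reusable prompt library for a joint planning group.
# Template names and placeholder fields are hypothetical examples.

PROMPT_LIBRARY = {
    "ipb_summary": (
        "Summarize enemy most likely and most dangerous COAs in {aoi}, "
        "citing reporting no older than {max_age_hr} hours."
    ),
    "task_org": (
        "Recommend a task organization for {mission}, constrained to forces "
        "listed in {force_list_ref}, with risk to force no higher than {risk}."
    ),
}

def build_prompt(name, **fields):
    """Fill a named template; a missing field raises KeyError immediately,
    which is preferable to sending an incomplete query to the model."""
    return PROMPT_LIBRARY[name].format(**fields)

print(build_prompt("ipb_summary", aoi="JOA Bravo", max_age_hr=24))
```

Maintaining such a library as a shared, versioned artifact would also make it a natural vehicle for the cross-DoD best-practice sharing the paragraph recommends.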

Leadership and Policies: There are implications for senior leaders for both warfighting and policy. Within a warfighting context, senior defense leaders must identify, debate, and develop frameworks for how commanders might use ML to support decision-making in wartime scenarios. It seems intuitive to use a multimodal LLM tool such as the fictitious MaL to support IPB, targeting, and kill-chain actions, in the same way that campaign models inform combatant commander planning for crises and contingencies.

However, leaders and their staffs must also understand the limitations of such tools to support a commander’s decision-making. “Do not ask for an AI-enabled solution without first deciding what decision it will influence and what the ‘left and right limits’ of the decision will be,” Schramm and Clark warn.25 Likewise, AI might not be the appropriate tool to solve all tactical and operational problems. “Large data-centric web merchants such as Amazon are very good at drawing inferences on what people may be interested in purchasing on a given evening because they have a functionally infinite sample space of previous actions upon which to build the model,” they write. “This is radically different from military problems where the amount of data on previous interactions is extremely small and the adversary might have tactics and systems that have not been observed previously. Making inference where there is little to no data is the domain of natural intelligence.”26

Meanwhile, future acquisition arrangements with defense contractors must provide the DoD with data rights – particularly data generated by purchased weapon systems and sensors – to optimize the potential of ML architecture in a warfighting environment.27 Such a change would require the DoD to work with firms in the defense industrial base to adjudicate disagreements over the right to use, licensing, and ownership of data – each of which might bear different costs to a purchaser.


Technologists and policy wonks constantly remind the defense community that the Department must “fail fast” to mature emerging technologies and integrate them into the joint force as quickly as possible. The same principle should guide the development of AI/ML-enabled warfighting solutions. Commanders and their staffs must understand that ML is a capable tool that, if used wisely, can significantly enhance the joint force’s ability to absorb data from disparate sources, make sense of that information, and close kill chains based on an ML tool’s assessment.

If used unwisely, without a solid understanding of what decisions ML will support, the joint force may be playing a rigged game against a peer adversary. ML-enabled capabilities can absorb large amounts of data, process and organize it, and generate insights for humans who work at a relative snail’s pace. However, these nascent tools cannot reason and interpret words or events as a competent military professional can. As strategic competition between the United States and China intensifies over Taiwan, the South China Sea, the Russia-Ukraine war, and other geopolitical issues, American political and military leaders must develop a better understanding of when and how to use ML to support joint force planning, execution, and assessment in combat, lest U.S. service members pay an ungodly sum of the butcher’s bill.

Lieutenant Colonel Jeff Wong, a U.S. Marine Corps reserve infantry officer, studied artificial intelligence at the Eisenhower School for National Security and Resource Strategy, National Defense University in the 2022-2023 academic year. In his civilian work, he plans wargames and exercises for various clients across the Department of Defense.

The views expressed in this paper are those of the author and do not necessarily reflect the official policy or position of the National Defense University, the Department of Defense, or the U.S. Government.


1. The fictional hero of this story, John McCarthy, is named after the Massachusetts Institute of Technology researcher who first coined the term “artificial intelligence.” Gil Press, “A Very Short History of Artificial Intelligence,” Forbes, December 30, 2016, https://www.forbes.com/sites/gilpress/2016/12/30/a-very-short-history-of-artificial-intelligence-ai/?sh=51ea3d156fba.

2. Marcus Aurelius, Meditations, audiobook.

3. Mike Berardino, “Mike Tyson Explains One of His Most Famous Quotes,” South Florida Sun-Sentinel, November 9, 2012, https://www.sun-sentinel.com/sports/fl-xpm-2012-11-09-sfl-mike-tyson-explains-one-of-his-most-famous-quotes-20121109-story.html.

4. Eric Schmidt, “Innovation Power: Why Technology Will Define the Future of Geopolitics,” Foreign Affairs, March/April 2023.

5. “Military-Civil Fusion and the People’s Republic of China,” U.S. Department of State, May 2020.

6. Schmidt, “Innovation Power: Why Technology Will Define the Future of Geopolitics.”

7. Wyatt Hoffman and Heeu Millie Kim, “Reducing the Risks of Artificial Intelligence for Military Decision Advantage,” Center for Security and Emerging Technology Policy Paper (Washington, D.C.: Georgetown University, March 2023), 12.

8. James Johnson, “Automating the OODA Loop in the Age of AI,” Center for Strategic and International Studies, July 25, 2022, https://nuclearnetwork.csis.org/automating-the-ooda-loop-in-the-age-of-ai/.

9. Hoffman and Kim, “Reducing the Risks of Artificial Intelligence for Military Decision Advantage,” 7.

10. Milan Vego, “The Bureaucratization of the U.S. Military Decision-making Process,” Joint Force Quarterly 88, January 9, 2018, https://ndupress.ndu.edu/Publications/Article/1411771/the-bureaucratization-of-the-us-military-decisionmaking-process/.

11. Hoffman and Kim, 7.

12. Carl von Clausewitz, On War, ed. and trans. Michael Howard and Peter Paret (Princeton: Princeton University Press, 1976), 101.

13. Hoffman and Kim, 7.

14. Elsa Kania, “‘AI Weapons’ in China’s Military Innovation,” Brookings Institution, April 2020.

15. Harrison Schramm and Bryan Clark, “Artificial Intelligence and Future Force Design,” in AI at War (Annapolis, Md.: Naval Institute Press, 2021), 240-241.

16. Josh Lospinoso, Testimony on the State of Artificial Intelligence and Machine Learning Applications to Improve Department of Defense Operations before the Subcommittee on Cybersecurity, U.S. Senate Armed Services Committee, April 19, 2023, https://www.armed-services.senate.gov/hearings/to-receive-testimony-on-the-state-of-artificial-intelligence-and-machine-learning-applications-to-improve-department-of-defense-operations. “Today, the Department of Defense does not have anywhere near sufficient access to weapon system data. We do not – and in some cases, due to contractual obligations, the Department cannot — extract this data that feeds and enables the AI capabilities we will need to maintain our competitive edge.”

17. MK15 – Phalanx Close-In Weapon System (CIWS), U.S. Navy, September 20, 2021, https://www.navy.mil/resources/fact-files/display-factfiles/article/2167831/mk-15-phalanx-close-in-weapon-system-ciws/.

18. Adam Aycock and William Glenney IV, “Trying to Put Mahan in a Box,” in AI at War, 269-270.

19. Hoffman and Kim, 8.

20. Ibid., 8-9.

21. Ibid., 11.

22. Schramm and Clark, “Artificial Intelligence and Future Force Design,” 239-241.

23. Schramm and Clark, “Artificial Intelligence and Future Force Design,” 241.

24. Craig S. Smith, “Mom, Dad, I Want To Be A Prompt Engineer,” Forbes, April 5, 2023, https://www.forbes.com/sites/craigsmith/2023/04/05/mom-dad-i-want-to-be-a-prompt-engineer/amp/.

25. AI at War, 247-248.

26. AI at War, 248.

27. Heidi M. Peters, “Intellectual Property and Technical Data in DoD Acquisitions,” Congressional Research Service In-Focus, IF12083, April 22, 2022, https://crsreports.congress.gov/product/pdf/IF/IF12083.

Featured Image: PHILIPPINE SEA (Sept. 22, 2020) Cpl. Clayton A. Phillips, a network administrator with Marine Air Control Group 18 Detachment, 31st Marine Expeditionary Unit (MEU), and a native of Beech Bluff, Tennessee, tests the connectivity of the Networking On-the-Move Airborne (NOTM-A) communications system during flight operations from the amphibious assault ship, USS America (LHA 6). (U.S. Marine Corps photo by Lance Cpl. Brienna Tuck)

Upgrading the Mindset: Modernizing Sea Service Culture for Trust in Artificial Intelligence

By Scott A. Humr

Winning on the future battlefield will undoubtedly require an organizational culture that promotes human trust in artificially intelligent systems. Research within and outside of the U.S. military has already shown that organizational culture affects technology acceptance, let alone trust. However, Dawn Meyerriecks, Deputy Director for CIA technology development, remarked in a November 2020 Congressional Research Service report that senior leaders may be unwilling “to accept AI-generated analysis.” The Deputy Director goes on to state that “the defense establishment’s risk-averse culture may pose greater challenges to future competitiveness than the pace of adversary technology development.” More emphatically, Dr. Adam Grant, a Wharton professor and well-known author, called the Department of Defense’s culture “a threat to national security.” In light of those remarks, the Commandant of the Marine Corps, General David H. Berger, told a gathering of the National Defense Industrial Association: “The same way a squad leader trusts his or her Marine, they have to trust his or her machine.” These points of view raise an important question: do Service cultures influence how their military personnel trust AI systems?

While much has been written about the need for explainable AI (XAI) and the need to increase trust between operators and AI tools, the research literature is sparse on how military organizational culture influences the trust personnel place in AI-imbued technologies. If culture holds sway over how service personnel employ AI within a military context, then culture becomes an antecedent for developing trust in, and subsequently using, AI technologies. As the Marine Corps’s latest publication on competing observes, “culture will have an impact on many aspects of competition, including decision making and how information is perceived.” If true, military personnel will view information provided by AI agents through the lens of their Service cultures as well.

Our naval culture must appropriately adapt to the changing realities of the new Cognitive Age. The Sea Services must therefore evolve their Service cultures to promote the types of behaviors and attitudes that fully leverage the benefits of these advanced applications. To compete effectively with AI technologies over the next decade, the Sea Services must first understand their organizational cultures, implement necessary cultural changes, and promote double-loop learning to support beneficial cultural adaptations.

Technology and Culture Nexus

Understanding the latest AI applications and naval culture requires an environment where experienced personnel and technologies are brought together through experimentation to better understand trust in AI systems. Fortunately, the Sea Services’ preeminent education and research institution, the Naval Postgraduate School (NPS), provides the perfect link between experienced educators and students who come together to advance future naval concepts. The large population of experienced mid-grade naval officers at NPS makes it an ideal place to study Sea Service culture while exploring the benefits and limitations of AI systems.

Not surprisingly, NPS student research has investigated trust in AI, technology acceptance, and culture. Previous NPS research has explored trust through interactive Machine Learning (iML) in virtual environments for understanding Navy cultural and individual barriers to technology adoption. These and other studies have brought important insights on the intersection of people and technologies.

One important aspect of this intersection is culture and how it is measured. The Competing Values Framework (CVF), for instance, has helped researchers understand organizational culture. Paired with additional survey instruments such as E-Trust or the Technology Acceptance Model (TAM), researchers can better understand whether particular culture types trust technologies more than others. The CVF is measured across six organizational dimensions that are summarized along two axes: structure and focus. The structure axis ranges from control to flexibility, while the focus axis ranges from people to organization (see Figure 1).

Figure 1 – The Competing Values Framework – culture, leadership, value from Cameron, Kim S., Robert E. Quinn, Jeff DeGraff, and Anjan V. Thakor. Competing Values Leadership, Edward Elgar Publishing, 2014.

Most organizational cultures contain some measure of each of the four characteristics of the CVF. The adhocracy quadrant, for instance, is characterized by innovation, flexibility, and increased speed of solutions. To this point, an NPS student researcher found that Marine Corps organizational culture was characterized as mostly hierarchical. The same researcher found that this particular group of Marine officers preferred that the Marine Corps move from a hierarchical culture toward an adhocracy culture. While the population in the study was by no means representative of the entire Marine Corps, it generates useful insights for forming initial hypotheses and highlights the need for additional research into whether hierarchical cultures impede trust in AI technologies. While closing this gap is important for assessing how a culture may need to adapt, actually changing deeply rooted cultures requires significant introspection and the willingness to change.
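
The CVF scoring described above can be illustrated with a short sketch. This is not drawn from any NPS instrument; the six-dimension structure, the 100-point allocations, and the sample responses below are assumptions for illustration only:

```python
# Hypothetical sketch of CVF scoring: each survey respondent distributes
# 100 points among the four culture types for every CVF dimension, and
# the per-dimension allocations are averaged into one culture profile.
# All names and numbers here are illustrative, not real survey data.

CULTURE_TYPES = ("clan", "adhocracy", "market", "hierarchy")

def cvf_profile(responses):
    """Average per-dimension point allocations into one culture profile."""
    profile = {c: 0.0 for c in CULTURE_TYPES}
    for allocation in responses:  # one dict per CVF dimension answered
        assert abs(sum(allocation.values()) - 100) < 1e-6
        for culture, points in allocation.items():
            profile[culture] += points / len(responses)
    return profile

def dominant_culture(profile):
    """Return the culture type with the highest averaged score."""
    return max(profile, key=profile.get)

# Example: a respondent who rates the organization as mostly hierarchical.
answers = [
    {"clan": 20, "adhocracy": 10, "market": 20, "hierarchy": 50},
    {"clan": 15, "adhocracy": 15, "market": 20, "hierarchy": 50},
]
print(dominant_culture(cvf_profile(answers)))  # -> hierarchy
```

In a real administration of the instrument, every respondent answers all six dimensions and profiles are averaged across the surveyed population before comparing quadrants.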

The ABCs: Adaptations for a Beneficial Culture

“Culture eats strategy for breakfast,” quipped the revered management guru Peter Drucker—and for good reason. Strategies that seek to adopt new technologies that may replace or augment human capabilities must also address culture. Cultural adaptations that require significant changes to behaviors and other deeply entrenched processes will not come easily. Modifying culture requires significant leadership and participation at all levels. Fortunately, organizational theorists have provided ways to understand culture. One well-known theorist, Edgar Schein, offers a framework for assessing organizational culture at three levels: artifacts, espoused values, and underlying assumptions.

The Schein model provides another important level of analysis for investigating military organizational culture. In this model, artifacts within militaries include elements such as dress, formations, doctrine, and other visible attributes. Espoused values are the vision statements, slogans, and codified core values of an organization. Underlying assumptions are the unconscious, unspoken beliefs and thoughts that undergird the culture. Implementing cultural change without addressing underlying assumptions is equivalent to rearranging the deck chairs on the Titanic. What underlying cultural assumptions, then, could prevent the Sea Services from effectively trusting AI applications?

One of the oldest and most ubiquitous underlying assumptions of how militaries function is the hierarchy. While hierarchy serves beneficial functions for militaries, it may overly inhibit how personnel embrace new technologies and decisions recommended by AI systems. Information, intelligence, and orders within militaries largely flow along well-defined lines of communication and nodes in the hierarchy. In one meta-analytic review of culture and innovation, researchers found that hierarchical cultures, as defined by the CVF, tightly control information distribution. Organizational researchers Christopher McDermott and Gregory Stock stated, “An organization whose culture is characterized by flexibility and spontaneity will most likely be able to deal with uncertainty better than one characterized by control and stability.” While hierarchical structures can help reduce ambiguity and promote stability, they can also be detrimental to innovation. Not surprisingly, NPS student researchers found in 2018 that the hierarchical culture in one Navy command had a restraining effect on innovation and technology adoption.

Adhocracy cultures, as defined by the CVF, are by contrast characterized by innovation and higher tolerance for risk-taking. AI applications could upend even well-defined procedures such as the Military Decision Making Process (MDMP), a classic manifestation of codified processes that supports underlying cultural assumptions about how major decisions are planned and executed. The Sea Services should therefore reevaluate and update their underlying assumptions about decision-making processes to better incorporate insights from AI.

In fact, exploring and promoting other forms of organizational design could help empower personnel to innovate and leverage AI systems more effectively. The late systems-thinking researcher Donella Meadows aptly stated, “The original purpose of a hierarchy is always to help its originating subsystems do their jobs better.” Recognizing the benefits, and more importantly the limits, of hierarchy will help leaders shape Sea Service culture to develop trustworthy AI systems. Ensuring change goes beyond a temporary fix, however, requires continually updating the organization’s underlying assumptions. This takes double-loop learning.

Double-loop Learning

Double-loop learning is by no means a new concept. First conceptualized by Chris Argyris and Donald Schön in 1974, double-loop learning is the process of updating one’s underlying assumptions. Many organizations can survive through regular use of single-loop learning, but they will not thrive. Unquestioned organizational wisdom perpetuates cookie-cutter solutions that fail to address new problems. Rather than question the underlying assumptions that produced them, organizations double down on tried-and-true methods only to fail again, neglecting deeper introspection.

Such failures should instead give pause and allow uninhibited, candid feedback to surface from the deck-plate all the way up the chain of command. This feedback, however, is rare and typically muted, rendering it ineffectual to the people who most need to hear it. The problem is exacerbated by endemic personnel rotation policies and feedback delays that rarely hold the original decision makers accountable for their actions (or inactions).

Implementing and trusting AI systems will take double-loop learning to change the underlying cultural assumptions that inhibit progress. This can be accomplished in several ways that cut against the normative behaviors of entrenched cultures. Generals, admirals, and Senior Executive Service (SES) leaders should create their own focus groups of diverse junior officers, enlisted personnel, and civilians to solicit unfiltered feedback on programs, technologies, and, most importantly, the organizational culture inhibitors that hold back AI adoption and trust. Members and their units could be anonymized to shield junior personnel from reprisal while promoting the unfiltered candor senior leaders need to hear in order to change underlying cultural assumptions. Moreover, direct feedback from the operators using AI technologies would avoid the layers of bureaucracy that slow the flow of criticism back to leadership.

Why is this time different?

Arguably, the naval services have a record of adapting to shifts in technology and pursuing the innovations needed to win future wars. Innovators of their day, such as Admiral William Sims with his advanced naval gunnery techniques, and the Marine Corps’s development of amphibious landing capabilities in the long shadow of the Gallipoli campaign, reinforce current Service cultural histories. However, many technologies of the last century were evolutionary improvements to already-accepted technologies and tactics. AI is fundamentally different: much as electricity changed many aspects of society, it could fundamentally disrupt how we approach war.

In the early 20th century, the change from steam to electricity did not immediately change manufacturing processes or significantly improve productivity. Inefficient processes built around steam power and systems of belts were often left unchanged even after machines were individually equipped with electric motors. Thus, many benefits of electricity were not realized for some time. Similarly, Sea Service culture will need to make a step change to take full advantage of AI technologies. If it does not, the Services will likely experience a “productivity paradox” in which large investments in AI fail to deliver the efficiencies promised.

Today’s militaries are sociotechnical systems, and underlying assumptions are their cultural operating system. Attempting to plug an AI application into a culture that is neither adapted to use it nor trusts it is the equivalent of trying to install an Android application on a Windows operating system. It will not work, or at best, not work as intended. We must therefore investigate how naval service cultures may need to adapt if we want to fully embrace the many advantages these technologies may provide.


In a 2017 Chatham House report titled “Artificial Intelligence and the Future of Warfare,” Professor Missy Cummings stated, “There are many reasons for the lack of success in bringing these technologies to maturity, including cost and unforeseen technical issues, but equally problematic are organizational and cultural barriers.” Echoing this point, the former Director of the Joint Artificial Intelligence Center (JAIC), Marine Lieutenant General Michael Groen, stated that “culture,” not technology, is the obstacle to developing the AI-supported Joint All-Domain Command and Control (JADC2) system. Yet AI/ML technologies have the potential to provide a cognitive edge that increases the speed, quality, and effectiveness of decision-making. Trusting the outputs of AI will undoubtedly require significant changes to certain aspects of our collective naval cultures. The Sea Services must take stock of their organizational cultures and apply the necessary cultural adaptations, while fostering double-loop learning, in order to promote trust in AI systems.

Today, the Naval Services have a rare opportunity to reap the benefits of double-loop learning. Through the COVID-19 pandemic, the Sea Services have shown that they can adapt responsively and effectively to dynamic circumstances while fulfilling their assigned missions. The Services have developed more efficient ways to leverage technology, allowing greater flexibility across the force through remote work and education. If, however, the Services return to the status quo after the pandemic, they will have failed to update many of their outdated underlying assumptions or to change their Service cultures.

If we cannot change the culture in light of the last three years, it portends poor prospects for promoting trust in AI for the future. Therefore, we cannot squander these moments. Let it not be said of this generation of Sailors and Marines that we misused this valuable opportunity to make a step-change in our culture for a better approach to future warfighting.

Scott Humr is an active-duty Lieutenant Colonel in the United States Marine Corps. He is currently a PhD candidate at the Naval Postgraduate School as part of the Commandant’s PhD-Technical Program. His research interests include trust in AI, sociotechnical systems, and decision-making in human-machine teams. 

Featured Image: An F-35C Lightning aircraft, assigned to Strike Fighter Squadron (VFA) 125, prepares to launch from the flight deck of the aircraft carrier USS George H. W. Bush (CVN 77) during flight operations. (U.S. Navy photo by Mass Communication Specialist 3rd Class Brandon Roberson)

For Sea Control, First Control the Electromagnetic Spectrum

Sea Control Topic Week

By LCDR Damien Dodge

Rapidly maturing electromagnetic technology will revitalize U.S. Navy combat potential and enhance opportunities to establish sea control. As the new National Security Strategy aptly illustrates, the United States faces resurgent great-power competition. Simultaneously, the Joint Operating Environment of 2035 portends a future influenced by the proliferation of disruptive and asymmetric capabilities engendered through global advances in “science, technology, and engineering” expanding the innovation horizons of “robotics, Information Technology, nanotechnology and energy.”1 The Intelligence Community’s Worldwide Threat Assessment reinforces this view and highlights aggressive competition driven by adversary advances in high-impact dual-use technologies. The creation of Google’s Artificial Intelligence (AI) center in Beijing and China’s recent testing of its “quantum satellite,” followed by its rumored fielding of an at-sea railgun, offer practical demonstrations of this outlook.2 Furthermore, retired Marine General John Allen and Amir Husain envision “hyperwar,” in which the future battlespace will churn with cross-domain action and counteraction at speeds nearly eclipsing human capacity for comprehension and reaction.3

Within the context of this near-future operating environment, current maritime Information Warfare (IW) capabilities, such as those contributing to Signals Intelligence (SIGINT), Electromagnetic Maneuver Warfare (EMW), Electronic Warfare (EW), and communications, do not afford sufficient operational agility or adaptability to gain advantage over adversaries or exploit their weaknesses. Adversaries bent on projecting overlapping and reinforcing domains of combat power near their national shores could overwhelm and exploit seams in current Navy electromagnetic-dependent capabilities.

Given this challenging, hypercompetitive environment, the Chief of Naval Operations’ Design for Maintaining Maritime Superiority confronts this problem head-on. The CNO seeks to “strengthen naval power at and from the sea” and also to “advance and ingrain information warfare” capabilities across the Navy. This is to enable maritime commanders to achieve objectives through multi-domain maneuver and control “in a highly ‘informationalized’ and contested environment.”4 Additionally, the “Surface Force Strategy: Return to Sea Control” echoes the CNO’s direction by promoting “Distributed Lethality,” which advocates for “increasing the offensive and defensive capability of individual warships, employing them in dispersed formations across a wide expanse of geography, and generating distributed fires.” This is complemented by Defense Department officials advocating for human-machine teaming and an explosion in fielding unmanned systems. Finally, this accelerating competition compels the CNO to advocate not only for a larger fleet, but also one which “must improve faster,” where “future ships… [are] made for rapid improvement with modular weapons canisters and swappable electronic sensors and systems.”5

Fortunately, rapid advances in technology, beyond solely enabling adversaries, can also support the CNO’s vision for the Navy – especially one primed to rapidly integrate and learn. With the advent of new designs for antennas and Radio Frequency (RF) components, the evolution of Software Defined Radios (SDR), and more practical instantiations of Artificial Intelligence (AI), these technologies can now be innovatively combined to operationalize envisioned, but not yet fully realized, IW and EMW warfighting capabilities. The capability nexus formed by these swiftly maturing technologies affords the Navy an unparalleled opportunity to maintain cross-domain battlespace decision superiority while outpacing an adversary’s decision cycle and sowing uncertainty within it. To achieve this, the Navy must leverage longstanding research investments and aggressively transition these technologies from Defense Advanced Research Projects Agency (DARPA) programs, Federally Funded Research and Development Center (FFRDC) initiatives, Office of Naval Research (ONR) workbenches, and warfighting center laboratories into fully integrated naval systems. These transitions will provide warfighters the needed tools and decision aids to dynamically control their electromagnetic signatures, provide optimal and low probability of detection communications, deliver more effective Electronic Warfare (EW) capabilities, revitalize signals intelligence collection, and engender greater freedom of action across the electromagnetic spectrum. This enabling electromagnetic superiority will present expanded opportunities for maritime commanders to seize sea control at times and places of their choosing.

Emerging Options and Tools in the Electromagnetic Domain 

Antennas and RF components accomplish many functions on a Navy ship. These functions are traditionally performed by dedicated single-role RF apertures and components that operate radars, transmit or receive communications, establish tactical datalinks, collect adversary communication signals, and detect or electronically frustrate threat sensors. This stovepipe approach to accessing and influencing the electromagnetic spectrum has created warships bristling with single-purpose antennas awash in scarcely manageable electromagnetic interference (EMI) and subject to individualized, byzantine maintenance and logistic support tails. This situation is a contributing factor to the complexity of the Navy’s C5I architecture afloat, which VADM Kohler admitted requires a 50-person team and one million dollars to make a Carrier Strike Group fully effective prior to deployment.6 Also, when new capabilities are fielded, such as the F-35, existing systems are often not sufficiently adaptable to absorb their advanced capabilities. Marine Commandant General Robert Neller highlights this issue when lamenting the Marine Corps’ inability to benefit fully from the F-35’s sensors due to Navy amphibious ships being unable to optimally communicate with the aircraft.7 Additionally, shipboard antenna thickets create a significantly larger radar cross section (RCS), thus illuminating these ships to adversary active sensors. Finally, this collection of standalone systems complicates the ship’s ability to manage its electromagnetic emissions in order to hide from passive threat sensors; often the only option is a tactically dissatisfying binary choice: gain battlespace awareness and communicate, or hide from the adversary.

In contrast to this patchwork approach, more open architecture (OA) and dynamic phased array antennas combined with advanced element-level RF components are improving beamforming parameters, including very low sidelobes and extended frequency ranges for RF system apertures, as revealed by even superficial scans of Defense Technical Information Center (DTIC), Institute of Electrical and Electronics Engineers (IEEE), and International Telecommunication Union (ITU) websites.8 Georgia Tech Research Institute’s agile aperture antenna technology exemplifies these burgeoning capabilities.9 These capabilities could enable various low-RCS antenna arrays to perform and synchronize a multitude of electromagnetic functions – evidenced by the Zumwalt-class destroyer’s smooth exterior. Separate antenna array elements could form directional, purposeful transmitting or receiving beams pointing to traditional satellites, CubeSats, Aquila-like aircraft, UAVs, or other warships while other array elements establish links or sense the environment.10 These various arrays and elements would be kept from interfering with each other by orchestrating their assigned tasks across temporal (transmission timing), spectral (frequency allocation or waveform selection), and spatial (which element and/or beam) dimensions, or some combination thereof.

For example, an antenna array on the forward part of the ship could switch duties with those on the aft, thus eliminating cut-out zones and distracting ship maneuvers such as steering a “chat-corpen,” which is slang for a ship heading that will maintain satellite communications (SATCOM). Adjustable transmission power and frequency settings combined with narrower beamforming options may offer additional satellite pointing opportunities or improved low-on-the-horizon aircraft communications, while reducing probability of detection or interception by an adversary. Low power, narrow horizontal beams designed for intra-strike group communications could also multi-statically search for surface contacts – referred to in academic journals as “radar-communication convergence.”11 A majority of shipboard spectrum access and sensing could be performed through a more standardized and harmonious set of advanced interconnected antenna arrays, despite the remaining need for distinct electromagnetic array systems such as Aegis or Surface Electronic Warfare Improvement Program (SEWIP), which are beyond near-term integration into this concept due to their highly specialized functions. Nevertheless, more capable and dynamic antenna arrays and RF components are a source of increased efficiency, greater operational agility, and a potential aperture to confuse adversaries while maximizing friendly communications and sensing.
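
The electronic steering that makes such duty-swapping possible can be sketched with a textbook uniform-linear-array calculation: applying a phase shift to each element points the main beam without moving the ship or the antenna. The element count and half-wavelength spacing below are illustrative assumptions, not parameters of any fielded system:

```python
import numpy as np

# Minimal sketch of electronic beam steering with a uniform linear array
# of half-wavelength-spaced elements. Steering is done purely by phase;
# no mechanical pointing is involved. All parameters are illustrative.

def array_factor(n_elements, steer_deg, look_deg, spacing_wl=0.5):
    """Normalized array response at look_deg when steered to steer_deg."""
    k = 2 * np.pi  # wavenumber with distances in wavelengths
    n = np.arange(n_elements)
    # Per-element phase weights that steer the main beam to steer_deg.
    weights = np.exp(-1j * k * spacing_wl * n * np.sin(np.radians(steer_deg)))
    # Array response toward the look direction.
    steering = np.exp(1j * k * spacing_wl * n * np.sin(np.radians(look_deg)))
    return abs(weights @ steering) / n_elements

# Steer a 16-element array 30 degrees off broadside: full gain on the
# steered bearing, sharply reduced gain off it.
print(round(array_factor(16, 30, 30), 3))  # -> 1.0
print(array_factor(16, 30, 0) < 0.2)       # -> True
```

Swapping duties between forward and aft arrays then amounts to recomputing the phase weights, which a software-controlled aperture can do continuously.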

A necessary complement to advanced antennas and RF components is the flexibility of SDRs and their associated digital signal processing (DSP) capabilities. SDRs can accomplish a wide variety of functions previously relegated to system-specific hardware by using devices such as field-programmable gate arrays (FPGAs) and more generalized, or even virtualized, computing platforms.12 Together these systems can generate, process, store, and share digital data about signals, either for transmission or upon reception. SDRs can generate waveforms electronically molded for multiple purposes, allowing backend DSP to differentiate the signal transmissions and, if combined with radar, the reflected returns, maximizing the information recovered from each emitted electromagnetic field.
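
As a rough illustration of waveform generation in software, the sketch below maps a bit stream to BPSK baseband samples and recovers it with a simple integrate-and-dump detector. The modulation choice, sample rate, and noise level are assumptions for illustration and do not represent any naval waveform:

```python
import numpy as np

# Sketch of software-defined waveform generation: the same hardware can
# emit a different waveform after nothing more than a code change. Here
# bits are mapped to BPSK baseband samples; parameters are illustrative.

def bpsk_baseband(bits, samples_per_symbol=8):
    """Map bits {0,1} to +/-1 symbols, repeated for the sample rate."""
    symbols = 2 * np.asarray(bits) - 1  # 0 -> -1, 1 -> +1
    return np.repeat(symbols.astype(float), samples_per_symbol)

def bpsk_demod(samples, samples_per_symbol=8):
    """Integrate-and-dump: average each symbol period, then slice."""
    per_symbol = samples.reshape(-1, samples_per_symbol).mean(axis=1)
    return (per_symbol > 0).astype(int).tolist()

bits = [1, 0, 1, 1, 0]
waveform = bpsk_baseband(bits)
# Add mild receiver noise and recover the original bits.
noisy = waveform + 0.2 * np.random.default_rng(0).standard_normal(waveform.size)
print(bpsk_demod(noisy))  # -> [1, 0, 1, 1, 0]
```

Changing to a new waveform, in this model, means replacing the two functions, which is the essence of the software update the article describes.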

Evolving SDR performance is establishing the foundation for advanced capabilities such as cognitive radio or radar. “Cognitive” in this usage simply implies a capability designed to sense the electromagnetic environment and determine times and frequencies that are being underused, offering an opportunity for use by the system, which is also known as dynamic spectrum access.13 The concept was conceived as a way to achieve more efficient use of the commercial frequency spectrum, given its increasing congestion, but it also has obvious military applications. For example, if a frequency-hopping system was detected in an area, then a cognitive radio could hop to a different sequencing algorithm, or if a radar was sweeping the spectrum at a certain periodicity, a cognitive radar could sweep at a synchronized offset and use both returns for a more refined depiction of contacts in the area. There are even proposals where radar can work collaboratively with cellular signals to detect contacts with a low probability of interception.14 This could be a useful capability during stealthy naval littoral operations. Additionally, within the bounding parameters of the antenna arrays and RF hardware components, new waveform generation only requires a software update enabling an SDR to facilitate communications with new capabilities such as the F-35, a newly launched CubeSat, a friendly unmanned system, a newly arrived coalition partner, or a recently invented low probability of detection waveform designed to defeat the adversary’s latest sensing algorithm.
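
The sensing half of dynamic spectrum access can be sketched as simple energy detection: measure the power in each candidate channel and flag those below a threshold as opportunities for use. The threshold, channel count, and simulated samples below are assumptions for illustration, not a cognitive-radio implementation:

```python
import numpy as np

# Sketch of energy detection for dynamic spectrum access: channels whose
# measured power sits near the noise floor are treated as underused and
# available. Threshold and simulated channel data are illustrative.

def idle_channels(iq_by_channel, threshold):
    """Return indices of channels whose mean power falls below threshold."""
    idle = []
    for idx, samples in enumerate(iq_by_channel):
        power = np.mean(np.abs(samples) ** 2)
        if power < threshold:
            idle.append(idx)
    return idle

# Simulate four channels: unit-power complex noise everywhere, with a
# strong occupant added on channels 1 and 3.
rng = np.random.default_rng(1)
noise = lambda: (rng.standard_normal(256) + 1j * rng.standard_normal(256)) / np.sqrt(2)
channels = [noise(), noise() + 2.0, noise(), noise() + 2.0]
print(idle_channels(channels, threshold=2.0))  # -> [0, 2]
```

A cognitive system would rerun this scan continuously and retune its transmissions to whichever channels come back idle.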

The more ambitious and final ingredient necessary to achieve improved IW and EMW capabilities is a form of AI designed for electromagnetic applications and decision support. It is obvious from the contributing authors to the recent ITU Journal special issue, The impact of Artificial Intelligence on communication networks and services, that Chinese research and innovation is also trending in this direction.15 While SDRs are powerful tools, they could be improved by orders of magnitude through use of AI algorithms such as those derived from Game Theory and Bayesian mathematics.16 SDRs can perform DSP and waveform generation, but AI or machine learning algorithms can assist in orchestrating enhanced scanning and sensing, thus providing the right signals or portions of the spectrum at the right time to the SDRs for DSP and information extraction. In other words, AI could perform higher-level operations such as altering the application of DSP procedures and determining when and how best to sense and exploit underused, or purposefully below the noise floor, portions of the spectrum. AI could also link the myriad permutations of waveform possibilities to operational objectives such as prioritizing air defense electromagnetic sensor processing and EW protection during an engagement, minimizing adversary emission detection opportunities during a raid, or contributing to adversary uncertainty through deliberately misleading emissions during deceptive maneuvers. Together, these capabilities crowned with practical AI implementations could ease many tedious, human-speed, and error-prone activities used to achieve IW and EMW capabilities. These activities include hurriedly and disjointedly setting emissions control, establishing overly static yet fragile communications plans, divining optimal radar configurations, or communicating haphazardly with coalition partners.
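
One of the Bayesian techniques alluded to above can be sketched as a belief update on channel occupancy: the system keeps a probability that a channel is in use and revises it with each noisy sensing report. The detection and false-alarm probabilities below are assumed values, not measured sensor characteristics:

```python
# Sketch of a Bayesian decision aid for spectrum use: maintain a belief
# P(occupied) for a channel and apply Bayes' rule after each yes/no
# report from an imperfect detector. All probabilities are assumptions.

def update_occupancy_belief(prior, detected, p_detect=0.9, p_false_alarm=0.1):
    """One Bayes update of P(occupied) given a detector's yes/no report."""
    if detected:
        likelihood_occupied, likelihood_idle = p_detect, p_false_alarm
    else:
        likelihood_occupied, likelihood_idle = 1 - p_detect, 1 - p_false_alarm
    numerator = likelihood_occupied * prior
    return numerator / (numerator + likelihood_idle * (1 - prior))

# Three consecutive "energy detected" reports drive the belief up sharply,
# even though any single report could be a false alarm.
belief = 0.5
for report in (True, True, True):
    belief = update_occupancy_belief(belief, report)
print(round(belief, 3))  # -> 0.999
```

The same recursion lets a decision aid weigh unreliable individual detections into a stable picture before recommending whether to transmit, stay silent, or deceive.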
Empowered with AI-enabled automation and decision aids, a more integrated and homogeneous approach using advanced antenna arrays and SDRs to access and sense the spectrum would vastly improve electromagnetic freedom of action and decision superiority. Thus, if the Navy desires to seize sea control when and where she chooses, establishing electromagnetic spectrum control first is a warfighting prerequisite.


All worthwhile visions of the future confront challenges and resistance, and this one is no different. Legacy antennas, components, radios, and architectures litter numerous program offices, each with differing objectives. The Navy must therefore diligently coordinate deliberate, whole-of-Navy modernization schemes that leverage open architecture, emphasize interoperability, and prioritize these technologies in pursuit of this vision’s goals. Beneficially, the Naval Surface Warfare Center Dahlgren Division’s Real Time Spectrum Operations (RTSO) and ONR’s Integrated Topside initiative are laboring toward these ends.17 Various DARPA activities, such as Signal Processing at RF (SPAR), Shared Spectrum Access for Radar and Communications (SSPARC), Communications Under Extreme RF Spectrum Conditions (CommEx), the Advanced Wireless Network System (AWNS), and the Spectrum Collaboration Challenge (SC2), together create a rich portfolio of experience and opportunity awaiting renewed Navy focus and attention.18 Furthermore, it will be critical for the Navy to establish an ecosystem, either contracted as a service or built as a core, in-house function, to support continuous SDR software development and operations (DevOps) and AI algorithm development.19 This will enable the Navy to continually pace electromagnetic congestion and adversary competition.

Agilely designed, open-architecture antenna arrays and RF components, connected to dynamic SDRs and empowered by AI algorithms, can revitalize and ingrain IW and EMW warfighting capabilities across the Navy, allowing the force to confidently seize sea control and win in the future maritime battlespace. Collectively, these capabilities could bring about currently fanciful opportunities, such as a strike group secretly transiting at night through fishing grounds using radio communications imperceptibly different from the fishing trawlers’. Such a strike group could employ both intra-strike group communications and surface search radar while receiving and sending intelligence via recently launched CubeSats transmitting on waveforms indistinguishable from area freighters’ Very Small Aperture Terminal (VSAT) satellite communication links, thus remaining electromagnetically camouflaged while maintaining battlespace awareness and communications. Meanwhile, cognitively networked strike group assets could passively sense and target the adversary’s emissions, enabling distributed but converging fires from distant unmanned platforms across the area of operations. Electromagnetic control establishes the initial conditions for sea control.

Lofty tactics and operations will perform sub-optimally and be disrupted by electronic attack unless the Navy builds a solid foundation of electromagnetic freedom of action. Fortunately, these technologies, creatively combined, will lay the keel of advanced naval warfighting upon which future naval success will be built, launching a powerful, tough, and confident Navy into the turbulent waters of great power competition to seize sea control when and where she chooses.

LCDR Damien Dodge is a U.S. Navy cryptologic warfare officer assigned to the staff of Supreme Allied Commander Transformation, NATO. He welcomes your comments at: damienadodge+essay@gmail.com. These views are his alone and do not necessarily represent any U.S. or Allied government or NATO department or agency.


[1] Joint Operating Environment 2035: The Joint Force in a Contested and Disordered World, Joint Staff, 14 July 2016, pp. 15-20. http://www.jcs.mil/Portals/36/Documents/Doctrine/concepts/joe_2035_july16.pdf?ver=2017-12-28-162059-917

[2] Daniel R. Coats, “Worldwide Threat Assessment of the US Intelligence Community,” 11 May 2017, https://www.dni.gov/files/documents/Newsroom/Testimonies/SSCI%20Unclassified%20SFR%20-%20Final.pdf and, Reuters, “Chinese quantum satellite sends ‘unbreakable’ code,” Reuters.com, 10 August 2017, https://www.reuters.com/article/us-china-space-satellite/chinese-quantum-satellite-sends-unbreakable-code-idUSKBN1AQ0C9 and, Shelly Banjo and David Ramli, “Google to Open Beijing AI Center in Latest Expansion in China,” Bloomberg.com, 12 December 2017, https://www.bloomberg.com/news/articles/2017-12-13/google-to-open-beijing-ai-center-in-latest-expansion-in-china

[3] GEN John R. Allen, USMC (Ret.), and Amir Husain, “On Hyperwar,” U.S. Naval Institute Proceedings 143, no. 7 (July 2017), 30–37.

[4] A Design for Maintaining Maritime Superiority, Chief of Naval Operations Staff, Version 1.0 January 2016. Available at, http://www.navy.mil/cno/docs/cno_stg.pdf

[5] “The Future Navy,” 17 May 2017, http://www.navy.mil/navydata/people/cno/Richardson/Resource/TheFutureNavy.pdf

[6] Sydney J. Freedberg Jr., “Navy Kludges Networks: $1M Per Carrier Strike Group, Per Deployment,” Breaking Defense, 12 February 2018, https://breakingdefense.com/2018/02/navy-kludges-networks-1m-per-carrier-strike-group-per-deployment/?_ga=2.90851354.1645113230.1518436630-2104563909.1489661725

[7] Mike Gruss, “Three tech problems the Navy and Marines are worried about,” C4ISRNET, 8 February 2018, available https://www.c4isrnet.com/show-reporter/afcea-west/2018/02/08/three-tech-problems-the-navy-and-marines-corps-are-worried-about/

[8] Examples include: James J. Komiak, Ryan S. Westafer, Nancy V. Saldanha, Randall Lapierre, and R. Todd Lee “Wideband Monolithic Tile for Reconfigurable Phased Arrays,” available http://www.dtic.mil/dtic/tr/fulltext/u2/1041386.pdf and Benjamin Rohrdantz, Karsten Kuhlmann, Alexander Stark, Alexander Geise, Arne Jacob, “Digital beamforming antenna array with polarisation multiplexing for mobile high-speed satellite terminals at Ka-band,” The Journal of Engineering, 2016, 2016, (6), p. 180-188, DOI: 10.1049/joe.2015.0163 IET Digital Library, http://digital-library.theiet.org/content/journals/10.1049/joe.2015.0163  and Darren J. Hartl, Jeffery W. Baur, Geoffrey J. Frank, Robyn Bradford, David Phillips, Thao Gibson, Daniel Rapking, Amrita Bal, and Gregory Huff, “Beamforming and Reconfiguration of A Structurally Embedded Vascular Antenna Array (Seva2) in Both Multi-Layer and Complex Curved Composites,” Air Force Research Laboratory, AFRL-RX-WP-JA-2017-0481, 20 October 2017, available http://www.dtic.mil/dtic/tr/fulltext/u2/1042385.pdf

[9] GTRI Agile Aperture Antenna Technology Is Tested On An Autonomous Ocean Vehicle … https://www.rfglobalnet.com/doc/gtri-agile-aperture-antenna-technology-autonomous-ocean-vehicle-0001

[10] Aquila is a Facebook project to develop a high-altitude, long-endurance (HALE) solar-powered UAV “that the company envisions one day will provide wireless network connectivity to parts of the world that lack traditional communication infrastructure.” Steven Moffitt and Evan Ladd, “Ensure COMMS: Tap Commercial Innovations for the Military,” U.S. Naval Institute Proceedings 143, no. 12 (December 2017), 54-58.

[11] Bryan Paul, Alex R. Chiriyath, and Daniel W. Bliss, “Survey of RF Communications and Sensing Convergence Research,” IEEE Access, date of publication December 13, 2016, date of current version February 25, 2017, Digital Object Identifier 10.1109/ACCESS.2016.2639038 available http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7782415

[12] Mike Lee, Mike Lucas, Robert Young, Robert Howell, Pavel Borodulin, Nabil El-Hinnawy, “RF FPGA for 0.4 to 18 GHz DoD Multi-function Systems,” Mar 2013, http://www.dtic.mil/dtic/tr/fulltext/u2/a579506.pdf

[13] Helen Tang and Susan Watson, “Cognitive radio networks for tactical wireless Communications,” Defence Research and Development Canada, Scientific Report, DRDC-RDDC-2014-R185, December 2014, available http://www.dtic.mil/dtic/tr/fulltext/u2/1004297.pdf 

[14] Chenguang Shi, Sana Salous, Fei Wang, and Jianjiang Zhou, “Low probability of intercept-based adaptive radar waveform optimization in signal-dependent clutter for joint radar and cellular communication systems,” EURASIP Journal on Advances in Signal Processing, (2016) 2016:111, DOI 10.1186/s13634-016-0411-6, available https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5085998/ 

[15] ITU Journal, ICT Discoveries, First special issue on “The impact of Artificial Intelligence on communication networks and services,” Volume 1, No. 1, March 2018, available, https://www.itu.int/dms_pub/itu-t/opb/tut/T-TUT-ITUJOURNAL-2018-P1-PDF-E.pdf

[16] Jan Oksanen, “Machine learning methods for spectrum exploration and exploitation,” Aalto University publication series, Doctoral Dissertations 169/2016, 21 June 2016, Unigrafia Oy, Helsinki, Finland, 2016, available https://aaltodoc.aalto.fi/bitstream/handle/123456789/21917/isbn9789526069814.pdf?sequence=1 and Helen Tang, et al.

[17] Gregory Tavik, James Alter, James Evins, Dharmesh Patel, Norman Thomas, Ronnie Stapleton, John Faulkner, Steve Hedges, Peter Moosbrugger, Wayne Hunter, Robert Normoyle, Michael Butler, Tim Kirk, William Mulqueen, Jerald Nespor, Douglas Carlson, Joseph Krycia, William Kennedy, Craig McCordic, and Michael Sarcione, “Integrated Topside (InTop) Joint Navy–Industry Open Architecture Study” Naval Research Laboratory, Sponsored by Office of Naval Research, 10 September 2010,  NRL/FR/5008–10-10,198 available http://www.dtic.mil/get-tr-doc/pdf?AD=ADA528790 and, John Joyce, “Navy Expands Electromagnetic Maneuver Warfare for ‘Victory at Sea,’” U.S. Navy, 11/2/2017, Story Number: NNS171102-14, http://www.navy.mil/submit/display.asp?story_id=103165

[18] See DARPA research at https://www.darpa.mil/our-research and, Helen Tang, et al. and John Haystead, “Big Challenges Ahead as DOD Tries to Address EMSO Implementation,” Journal of Electronic Defense, February 2018 pp 22-25; and DARPA’s SC2 site https://spectrumcollaborationchallenge.com

[19] Possibly a sub-ecosystem within OPNAV’s Digital Warfare Office (DWO).

Featured Image: Operations Specialist 2nd Class Matthew Jones, from Victorville, Calif., stands watch in Combat Direction Center aboard the forward-deployed aircraft carrier USS George Washington (CVN 73). (U.S. Navy photo by Chief Mass Communication Specialist Jennifer A. Villalovos/Released)

Unmanned Mission Command, Pt. 1

By Tim McGeehan

The following two-part series discusses the command and control of future autonomous systems. Part 1 describes how we have arrived at the current tendency towards detailed control. Part 2 proposes how to refocus on mission command.


In recent years, the U.S. Navy’s unmanned vehicles have achieved a number of game-changing “firsts.” The X-47B Unmanned Combat Air System (UCAS) executed the first carrier launch and recovery in 2013, first combined manned/unmanned carrier operations in 2014, and first aerial refueling in 2015.1 In 2014, the Office of Naval Research demonstrated the first swarm capability for Unmanned Surface Vehicles (USV).2 In 2015, the NORTH DAKOTA performed the first launch and recovery of an Unmanned Underwater Vehicle (UUV) from a submarine during an operational mission.3 While these successes may represent the vanguard of a revolution in military technology, the larger revolution in military affairs will only be possible with the optimization of the command and control concepts associated with these systems. Regardless of specific mode (air, surface, or undersea), Navy leaders must fully embrace mission command to fully realize the power of these capabilities.

Unmanned History

“Unmanned” systems are not necessarily new. The U.S. Navy’s long history includes the employment of a variety of such platforms. For example, in 1919, Coast Battleship #4 (formerly USS IOWA (BB-1)) became the first radio-controlled target ship to be used in a fleet exercise.4 During World War II, Navy Lieutenant Joe Kennedy (John F. Kennedy’s older brother) was killed participating in an early unmanned aircraft program called PROJECT ANVIL; he was to parachute from his bomb-laden aircraft before it was guided into a German target by radio control.5 In 1946, F6F Hellcat fighters were modified for remote operation and employed to collect data during the OPERATION CROSSROADS atomic bomb tests at Bikini.6 These Hellcat “drones” could be controlled by another aircraft acting as the “queen” from up to 30 miles away. These drones were even launched from the deck of an aircraft carrier, almost 70 years before the X-47B performed that feat.

A Hellcat drone takes flight. Original caption: PILOTLESS HELLCAT (above), catapulted from USS Shangri-La, is clear of the carrier’s bow and climbs rapidly. Drones like this one will fly through the atomic cloud. (All Hands Magazine June 1946 issue)

However, the Navy’s achievements over the last few years were groundbreaking because the platforms were autonomous (i.e. controlled by machine, not remotely operated by a person). The current discussion of autonomy frequently revolves around the issues of ethics and accountability. Is it ethical to imbue these machines with the authority to use lethal force? If the machine is not under direct human control but rather evaluating for itself, who is responsible for its decisions and actions when faced with dilemmas? Much has been written about these topics, but there is a related and less discussed question: what sort of mindset shift will be required for Navy leaders to employ these systems to their full potential?

Command, Control, and Unmanned Systems

According to Naval Doctrine Publication 6 – Command and Control (NDP 6), “a commander commands by deciding what must be done and exercising leadership to inspire subordinates toward a common goal; he controls by monitoring and influencing the action required to accomplish what must be done.”7 These enduring concepts have new implications in the realm of unmanned systems. For example, while a commander can assign tasks to any subordinate (human or machine), “inspiring subordinates” has varying levels of applicability based on whether his units consist of “remotely piloted” aircraft (where his subordinates are actual human pilots) or autonomous systems (where the “pilot” is an algorithm controlling a machine). “Command” also includes establishing intent, distributing guidance on allocation of roles, responsibilities, and resources, and defining constraints on actions.8 On one hand, this could be straightforward with autonomous systems as this guidance could be translated into a series of rules and parameters that define the mission and rules of engagement. One would simply upload the mission and deploy the vehicle, which would go out and execute, possibly reporting in for updates but mostly operating on its own, solving problems along the way. On the other hand, in the absence of instructions that cover every possibility, an autonomous system is only as good as the internal algorithms that control it. Even as machine learning drastically improves and advanced algorithms are developed from extensive “training data,” an autonomous system may not respond to novel and ambiguous situations with the same judgment as a human. Indeed, one can imagine a catastrophic military counterpart to the 2010 stock market “flash crash,” where high-frequency trading algorithms designed to act in accordance with certain, pre-arranged criteria did not understand context and misread the situation, briefly erasing $1 trillion in market value.9
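The notion of translating command guidance “into a series of rules and parameters that define the mission” can be made concrete with a small sketch. Every field and threshold here (the patrol box, the fuel floor, the weapons-release flag) is a hypothetical stand-in for the far richer intent, geometry, and rules of engagement a real order would carry.

```python
from dataclasses import dataclass

@dataclass
class MissionOrder:
    """A mission-type order distilled into machine-checkable parameters.

    All fields are illustrative inventions, not doctrine.
    """
    objective: str
    area: tuple            # (lat_min, lat_max, lon_min, lon_max) patrol box
    weapons_free: bool     # may engage without asking higher authority
    min_fuel_pct: float    # constraint: break off below this fuel level

def action_permitted(order, lat, lon, fuel_pct, wants_to_engage):
    """Check a proposed action against the order's constraints."""
    lat_min, lat_max, lon_min, lon_max = order.area
    in_area = lat_min <= lat <= lat_max and lon_min <= lon <= lon_max
    has_fuel = fuel_pct >= order.min_fuel_pct
    roe_ok = order.weapons_free or not wants_to_engage
    return in_area and has_fuel and roe_ok

order = MissionOrder("Patrol and report", (10.0, 11.0, 120.0, 121.0),
                     weapons_free=False, min_fuel_pct=25.0)
print(action_permitted(order, 10.5, 120.5, 60.0, wants_to_engage=False))  # True
print(action_permitted(order, 10.5, 120.5, 60.0, wants_to_engage=True))   # False: ROE withheld
```

The hard part, as the paragraph above notes, is everything this sketch omits: the novel, ambiguous situations no enumerated rule anticipates.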

“Control” includes the conduits and feedback from subordinates to their commander that allow them to determine if events are on track or to adjust instructions as necessary. This is reasonably straightforward for a remotely piloted aircraft with a constant data link between platform and operator, such as the ScanEagle or MQ-8 Fire Scout unmanned aerial systems. However, a fully autonomous system may not be in positive communication. Even if it is ostensibly intended to remain in communication, feedback to the commander could be limited or non-existent due to emissions control (EMCON) posture or a contested electromagnetic (EM) spectrum. 

Mission Command and Unmanned Systems

In recent years, there has been a renewed focus across the Joint Force on the concept of “mission command.” Mission command is defined as “the conduct of military operations through decentralized execution based upon mission-type orders,” and it lends itself well to the employment of autonomous systems.10 Joint doctrine states:

“Mission command is built on subordinate leaders at all echelons who exercise disciplined initiative and act aggressively and independently to accomplish the mission. Mission-type orders focus on the purpose of the operation rather than details of how to perform assigned tasks. Commanders delegate decisions to subordinates wherever possible, which minimizes detailed control and empowers subordinates’ initiative to make decisions based on the commander’s guidance rather than constant communications.”11

Mission command for an autonomous system would require commanders to clearly confer their intent, objectives, constraints, and restraints in succinct instructions, and then rely on the “initiative” of said system. While this decentralized arrangement is more flexible and better suited to deal with ambiguity, it opens the door to unexpected or emergent behavior in the autonomous system. (Then again, emergent behavior is not confined to algorithms, as humans may perform in unexpected ways too.) 

In addition to passing feedback and information up the chain of command to build a shared understanding of the situation, mission command also emphasizes horizontal flow across the echelon between the subordinates. Since it relies on subordinates knowing the intent and mission requirements, mission command is much less vulnerable to disruption than detailed means of command and control.

However, some commanders today do not fully embrace mission command with human subordinates, much less feel comfortable extending that trust to autonomous systems. They issue explicit instructions to subordinates in a highly centralized arrangement, where volumes of information flow up and detailed orders flow down the chain of command. This may be acceptable in deliberate situations where time is not a major concern, where procedural compliance is emphasized, or where there can be no ambiguity or margin for error. Examples of unmanned systems suited to this arrangement include a bomb disposal robot or a remotely piloted aircraft that requires constant intervention and re-tasking, perhaps to rapidly reposition the platform for a better look at an emerging situation or better discrimination between friend and foe. However, detailed control does not “function well when the vertical flow of information is disrupted.”12 Furthermore, when it comes to autonomous systems, such detailed control would undermine much of the purpose of having an autonomous system in the first place.

A fundamental task of the commander is to recognize which situations call for detailed control or mission command and act appropriately. Unfortunately, the experience gained by many commanders over the last decade has introduced a bias towards detailed control, which will hamstring the potential capabilities of autonomous systems if this tendency is not overcome.

Current Practice

The American military has enjoyed major advantages in recent conflicts due to global connectivity and continuous communications. However, this has redefined expectations and higher echelons increasingly rely on detailed control (for manned forces, let alone unmanned ones). Senior commanders (or their staffs) may levy demands to feed a seemingly insatiable thirst for information. This has led to friction between the echelons of command, and in some cases this interaction occurs at the expense of the decision-making capability of the unit in the field. Subordinate staff watch officers may spend more time answering requests for information and “feeding the beast” of higher headquarters than they spend overseeing their own operations.

It is understandable why this situation exists today. The senior commander (with whom responsibility ultimately resides) expects to be kept well-informed. To be fair, in some cases a senior commander located at a fusion center far from the front may have access to multiple streams of information, giving them a better overall view of what is going on than the commander actually on the ground. In other cases, it is today’s 24-hour news cycle and zero tolerance for mistakes that have led senior commanders to succumb to the temptation to second-guess their subordinates and micromanage their units in the field. A compounding factor in today’s interconnected world is “Fear of Missing Out” (FoMO), which psychologists describe as apprehension or anxiety stemming from the availability of volumes of information about what others are doing (think social media). It leads to a strong, almost compulsive desire to stay continually connected.13

Whatever the reason, this is not a new phenomenon. Understanding previous episodes when leadership has “tightened the reins” and the subsequent impacts is key to developing a path forward to fully leverage the potential of autonomous systems.

Veering Off Course

The recent shift of preference away from mission command toward detailed control appears to echo the impacts of previous advances in the technology employed for command and control in general. For example, when speaking of his service with the U.S. Asiatic Squadron and the introduction of the telegraph before the turn of the 20th century, Rear Admiral Caspar Goodrich lamented “Before the submarine cable was laid, one was really somebody out there, but afterwards one simply became a damned errand boy at the end of a telegraph wire.”14

Later, the impact of wireless telegraphy proved to be a mixed blessing for commanders at sea. Interestingly, the contrasting points of view clearly described how it would enable micromanagement; the difference in opinion was whether this was good or bad. This was illustrated by two 1908 newspaper articles regarding the introduction of wireless in the Royal Navy. One article extolled its virtues, describing how the First Sea Lord in London could direct all fleet activities “as if they were maneuvering beneath his office windows.”15 The other article described how those same naval officers feared “armchair control… by means of wireless.”16 In century-old text that could be drawn from today’s press, the article quoted a Royal Navy officer:

“The paramount necessity in the next naval war will be rapidity of thought and of execution…The innovation is causing more than a little misgiving among naval officers afloat. So far as it will facilitate the interchange of information and the sending of important news, the erection of the [wireless] station is welcomed, but there is a strong fear that advantage will be taken of it to interfere with the independent action of fleet commanders in the event of war.”

Military historian Martin van Creveld related a more recent lesson of technology-enabled micromanagement from the U.S. Army. This time the technology in question was the helicopter, whose widespread use by multiple echelons of command during the Vietnam War drove the shift away from mission command to detailed control:

“A hapless company commander engaged in a firefight on the ground was subjected to direct observation by the battalion commander circling above, who was in turn supervised by the brigade commander circling a thousand or so feet higher up, who in his turn was monitored by the division commander in the next highest chopper, who might even be so unlucky as to have his own performance watched by the Field Force (corps) commander. With each of these commanders asking the men on the ground to tune in his frequency and explain the situation, a heavy demand for information was generated that could and did interfere with the troops’ ability to operate effectively.”17

However, not all historic shifts toward detailed control are due to technology; some are cultural. For example, leadership had encroached so much on the authority of commanders in the days leading up to World War II that Admiral King had to issue a message to the fleet with the subject line “Exercise of Command – Excess of Detail in Orders and Instructions,” voicing his concern over the:

“almost standard practice – of flag officers and other group commanders to issue orders and instructions in which their subordinates are told how as well as what to do to such an extent and in such detail that the Custom of the service has virtually become the antithesis of that essential element of command – initiative of the subordinate.”18

Admiral King attributed this trend to several cultural factors: the anxiety of seniors that any mistake of a subordinate would be attributed to the senior and thereby jeopardize promotion, the activities of staffs infringing on lower-echelon functions, and the habit and expectation of detailed instructions among junior and senior alike. He went on to say that they were preparing for war, when there would be neither time nor opportunity for this method of control, and that it was conditioning subordinate commanders to rely on explicit guidance and depriving them of the chance to learn how to exercise initiative. Now, over 70 years later, as the Navy moves forward with autonomous systems, this technology-enabled and culture-driven drift toward detailed control is again becoming an Achilles’ heel.

Read Part 2 here.

Tim McGeehan is a U.S. Navy Officer currently serving in Washington. 

The ideas presented are those of the author alone and do not reflect the views of the Department of the Navy or Department of Defense.


[1] Northrup Grumman, X-47B Capabilities, 2015, http://www.northropgrumman.com/Capabilities/x47bucas/Pages/default.aspx

[2] David Smalley, The Future Is Now: Navy’s Autonomous Swarmboats Can Overwhelm Adversaries, ONR Press Release, October 5, 2014, http://www.onr.navy.mil/en/Media-Center/Press-Releases/2014/autonomous-swarm-boat-unmanned-caracas.aspx

[3] Associated Press, Submarine launches undersea drone in a 1st for Navy, Military Times, July 20, 2015, http://www.militarytimes.com/story/military/tech/2015/07/20/submarine-launches-undersea-drone-in-a-1st-for-navy/30442323/

[4] Naval History and Heritage Command, Iowa II (BB-1), July 22, 2015, http://www.history.navy.mil/research/histories/ship-histories/danfs/i/iowa-ii.html

[5] Trevor Jeremy, LT Joe Kennedy, Norfolk and Suffolk Aviation Museum, 2015, http://www.aviationmuseum.net/JoeKennedy.htm

[6] Puppet Planes, All Hands, June 1946, http://www.navy.mil/ah_online/archpdf/ah194606.pdf, p. 2-5

[7] Naval Doctrine Publication 6:  Naval Command and Control, 1995, http://www.dtic.mil/dtic/tr/fulltext/u2/a304321.pdf, p. 6

[8] David Alberts and Richard Hayes, Understanding Command and Control, 2006, http://www.dodccrp.org/files/Alberts_UC2.pdf, p. 58

[9] Ben Rooney, Trading program sparked May ‘flash crash’, October 1, 2010, CNN, http://money.cnn.com/2010/10/01/markets/SEC_CFTC_flash_crash/

[10] DoD Dictionary of Military and Associated Terms, March, 2017, http://www.dtic.mil/doctrine/new_pubs/jp1_02.pdf

[11] Joint Publication 3-0, Joint Operations, http://www.dtic.mil/doctrine/new_pubs/jp3_0.pdf

[12] Ibid

[13] Andrew Przybylski, Kou Murayama, Cody DeHaan , and Valerie Gladwell, Motivational, emotional, and behavioral correlates of fear of missing out, Computers in Human Behavior, Vol 29 (4), July 2013,  http://www.sciencedirect.com/science/article/pii/S0747563213000800

[14] Michael Palmer, Command at Sea:  Naval Command and Control since the Sixteenth Century, 2005, p. 215

[15] W. T. Stead, Wireless Wonders at the Admiralty, Dawson Daily News, September 13, 1908, https://news.google.com/newspapers?nid=41&dat=19080913&id=y8cjAAAAIBAJ&sjid=KCcDAAAAIBAJ&pg=3703,1570909&hl=en

[16] Fleet Commanders Fear Armchair Control During War by Means of Wireless, Boston Evening Transcript, May 2, 1908, https://news.google.com/newspapers?nid=2249&dat=19080502&id=N3Y-AAAAIBAJ&sjid=nVkMAAAAIBAJ&pg=470,293709&hl=en

[17] Martin van Creveld, Command in War, 1985, p. 256-257.

[18] CINCLANT Serial (053), Exercise of Command – Excess of Detail in Orders and Instructions, January 21, 1941

Featured Image: An X-47B drone prepares to take off. (U.S. Navy photo)