Category Archives: Future Tech

What is coming down the pipe in naval and maritime technology?

For America and Japan, Peace and Security Through Technology, Pt. 1

By Capt. Tuan N. Pham, USN

This is part one of a two-part series on the urgent need for a bilateral technology roadmap to field and sustain a lethal, resilient, and rapidly adapting technology-enabled Joint Force that can seamlessly conduct high-end maritime operations in the Indo-Pacific – a fitting legacy for former Japanese Prime Minister Shinzo Abe and his successor Yoshihide Suga, staunch champions of the enduring U.S.-Japan Alliance.

In today’s strategic environment of Great Power Competition (GPC), global powers actively vie for preeminence. The competition is particularly acute in the technology domain, as evidenced by the ongoing technology race among them. These powers invest heavily in Fourth Industrial Revolution technologies to build national power, global influence, and international prestige, and to prepare for uncertain economic and security futures.

The United States and Japan are fully committed to national security technological innovation. The 2018 U.S. National Defense Strategy (NDS) and 2020 Defense of Japan (DOJ) White Paper call for harnessing, investing in, and protecting their respective technology bases for competitive advantage. Both nations share the strategic imperative and urgency to develop and sustain a technology-enabled Joint Force (otherwise known in Japan as the Multi-Domain Defense Force) that can conduct synchronized, distributed, and integrated operations across the interconnected and contested battlespaces in furtherance of the alliance’s shared national interests. The changing character of warfare has made warfighting a transregional, multi-domain, and multi-functional activity. The U.S. Navy (USN) and Japan Maritime Self-Defense Force (JMSDF) must, therefore, better leverage emerging maritime technologies and develop concomitant naval warfare concepts and doctrines to adapt to the new way of fighting. Otherwise, the allied navies risk ceding the technology domain – and consequently maritime superiority in the Indo-Pacific – to the competing navies of revisionist China and revanchist Russia: the People’s Liberation Army Navy and the Russian Federation Navy, respectively.

How China and Russia View Technological Competition

For General Secretary of the Chinese Communist Party (CCP) Xi Jinping, technological advancement is not only a means to economic, political, and military power and influence for the CCP; it is also the “Long March” (or way) toward regional hegemony and ultimately global preeminence, and an ideological end in itself: the Chinese Dream of national rejuvenation. The Chinese Dream offers hope for and validation of China as a great rising power after decades of political, economic, and social struggles. The commitment to advanced technologies reflects Beijing’s longing for past imperial glory (the Middle Kingdom), its wishful guarantee against another century of humiliation (19th-century colonialism), and its steadfast ambition to surpass the United States and Europe (a 21st century of Asian preeminence). To that end, China endeavors to become a global leader in every sector and domain and to dominate emerging “game-changing” technologies like artificial intelligence (AI), autonomy, and blockchain in accordance with its Made in China 2025 and Internet Plus policy initiatives. To Xi, technological innovation by any means is necessary to surpass the West, and technological dominance is the path to global preeminence by 2049 – the essence of the Chinese Dream.

Russian President Vladimir Putin likewise understands and appreciates the disruptive potential of technology as he tries to restore Russia to its former greatness. In 2017, he presciently declared that “whoever becomes the leader in this sphere [explicitly AI and implicitly technology at large] will become the ruler of the world.” The bold statement summarizes well the purpose and intent behind the 2017 Strategy for the Development of an Information Society for 2017–2030, one of Putin’s key policy initiatives to return Russia to its past Soviet glory. The technology strategy supplements and complements the broader 2015 National Security Strategy, which reflects a Russia more confident in its ability to defend its sovereignty, resist Western pressure and influence, and realize its great power aspirations.

Bilateral Technology Roadmap

The Department of Defense (DOD) technological advantage depends on a healthy and secure national security innovation base that includes both traditional and non-traditional partners. (2018 U.S. NDS)    

Japan will enhance priority defense capability areas as early as possible – strengthening capabilities necessary for cross-domain operations and core elements of defense capability by reinforcing the human resource base, technology base, and defense industrial base. (2020 DOJ White Paper)

The U.S. NDS and DOJ White Paper call for harnessing, investing in, and protecting their respective national security innovation and technology bases to better respond to the growing challenges to the rules-based liberal international order (LIO) by illiberal powers like China and Russia. Washington and Tokyo both want to develop innovative technological approaches, make targeted and sustained technological investments, and execute disciplined fielding of critical warfighting capabilities to the Joint Force (Multi-Domain Defense Force) – a force that can protect national and allied interests, advance the bilateral military-to-military relationship, strengthen the strategic alliance, promote the Free and Open Indo-Pacific, and uphold the LIO.

Now is the opportune time to build a bilateral technology roadmap to field and sustain a lethal, resilient, and adaptable Joint Force, enabled by technology, that can seamlessly conduct high-end maritime operations in the Indo-Pacific – a predominantly maritime fight in a maritime domain. To do otherwise is a missed opportunity to strengthen the enduring U.S.-Japan alliance, bolster stabilizing regional security, and reinforce the weakening LIO that has provided global security and prosperity for over 70 years.

The technology roadmap should leverage extant USN and JMSDF technology strategies and plans to identify and prioritize joint projects for collaboration across the respective governments, private industries, and academia. By doing so, the allied stakeholders can identify current, proposed, and potential collaborative projects. Stakeholders must assess the cultural, institutional, organizational, and legal challenges of each country to determine how best to promote and incentivize bilateral collaboration. They must also expand the framework to all the joint services, and eventually extend the framework to other key allies and partners in the region and beyond.

Proposed Roadmap Framework

Purpose and Scope: In alignment with the defense strategies of the United States and Japan, the roadmap should examine the strategic environment in the innovative technology domain through the lens of GPC. This roadmap should:

  • Characterize the current state, development, and employment of disruptive technologies across the USN and JMSDF.
  • Envision the future integration of these emerging maritime technologies and developing concomitant naval concepts (doctrines) into the Joint Force.
  • Identify the barriers to realizing that joint future.
  • Outline the proposed actions to overcome those barriers.
  • Leverage the pervasive technological innovations happening in government, private industry, and academia within the United States and Japan.
  • Inform the actions of stakeholders who possess limited resources (human capital, money, and knowledge), incongruent cultures, and sometimes conflicting priorities to effectively and efficiently accelerate the development, fielding, and integration of joint warfighting capabilities in a fiscally constrained budgetary environment across the current U.S. Future Years Defense Program and Japan Mid-Term Defense Program.

Vision and Goals: The USN and JMSDF should contribute to the development and sustainment of a technology-enabled Joint Force. In the near term, both allied navies should develop a bilateral technology roadmap to deliver joint warfighting capabilities and increase joint warfighting capacities to the Multi-Domain Defense Force. In the long term, each allied navy should modify its respective Doctrine, Organization, Training, Materiel Solutions, Leadership and Education, Personnel, Facilities, and Policies (DOTMLPF-P) to provide the infrastructure and systems required to support the development, fielding, integration, and sustainment of these new joint warfighting capabilities and capacities.

The broader U.S. DOD and Japan Ministry of Defense (MOD) should also modernize their respective defense infrastructures (to include ecosystems of technical professionals, research facilities, and partnerships) to better support cutting-edge Science and Technology (S&T), realize the technology-enabled Joint Force, and maintain technological superiority over a rising China and a resurgent Russia, both of which are making rapid technological advancements and incorporating them into their modernized forces. Long-term strategic success requires focused investment in four foundational S&T areas: fundamental research, the technical workforce, defense laboratories, and partnerships with the private sector and key allies and partners.

Objectives: The USN and JMSDF should consider broad and interlocked objectives to realize the aforesaid vision and goals. These include:

  • Define and prioritize emerging maritime technologies and developing concomitant naval concepts (doctrines) to maintain warfighting superiority.
  • Be technically and fiscally capable of fielding and sustaining maritime technologies at will.
  • Be interoperable and cyberspace-secure, and have adequate infrastructure and logistics support in both nations.
  • Be consistent with the programmatic principles of affordability, interoperability, agility, and resiliency.
  • Leverage emerging accelerated acquisition processes to enable the rapid development, demonstration, and fielding of maritime technologies.
  • Develop policies to allow the implementation of new bilateral warfighting capabilities and advance mutual naval interests.
  • Promote joint warfighter’s trust in these new maritime technologies.
  • Build on the Navy-to-Navy technology exchange and collaboration to extend to the other services and expand to other key allies and partners as and when appropriate.

This concludes part one of a two-part series that calls for a bilateral technology roadmap to field and sustain a lethal, resilient, and rapidly adapting technology-enabled Joint Force that can seamlessly conduct high-end maritime operations in the Indo-Pacific. Part two underscores the imperatives to do so and describes the ongoing technology competition within the region through the lens of GPC in the 21st century.    

CAPT Pham is a maritime strategist, strategic planner, naval researcher, and China Hand with 20 years of experience in the Indo-Pacific. He completed a research paper with the Office of Naval Research (ONR) at the U.S. Naval War College (USNWC) in 2020. The articles are derived from the aforesaid paper. The views expressed here are personal and do not reflect the positions of the U.S. Government, USN, ONR or USNWC.

Featured photo: RADM Winter and RADM Saito discuss Science and Technology partnerships between the U.S. and Japan, aboard Japanese JS Izumo (DDH-183). Photo credit: Office of Naval Research, released. https://twitter.com/usnavyresearch/status/743474786643251201

Software-Defined Tactics and Great Power Competition

By LT Sean Lavelle, USN

There are two components to military competency: understanding and proficiency. To execute a task, like driving a ship, one must first understand the fundamentals and theory—the rules of navigation, how the weather impacts performance, how a ship’s various controls impact its movement. Understanding is stable and military personnel forget the fundamentals slowly. Learning those fundamentals, though, does not eliminate the need to practice. Failing to practice tasks like maneuvering the ship in congested waters or evaluating potential contacts of interest will quickly degrade operational proficiency.

In the coming decades, human understanding of warfighting concepts will still be paramount to battlefield success. Realistic initial training and high-end force-on-force exercises will be critical to building that understanding. However, warfighters cross-trained as software developers will make it far easier to retain proficiency without as much rote, expensive practice. Their parent units will train them to make basic applications, and they will use these skills to translate their hard-won combat understanding into a permanent proficiency available to anyone with the most recent software update.

These applications, called software-defined tactics, will alert tacticians to risk and opportunity on the battlefield, ensuring they can consistently hit the enemy’s weak points while minimizing their own vulnerabilities. They will speed force-wide learning by orders of magnitude, create uniformly high-performing units, and increase scalability of conventional forces.

Vignette

Imagine an F-35 section leader commanding two F-35 fighters tasked to patrol near enemy airspace and kill any enemy aircraft that approach. As the F-35s establish a combat air patrol in the assigned area, the jets’ sensors indicate there are two flights of adversary aircraft approaching the formation, one off the nose to the north and the other off the right wing to the east. Each of these flights consists of four bandits that are individually overmatched by the advanced F-35s. Safety is to the south.

These F-35s have enough missiles within the section to reliably kill four enemies, but are facing eight. Since the northern group of bandits is a bit closer, the section leader decides to move north and kill them. The section’s volley of missiles all achieve solid hits, and there are now four fewer enemy aircraft to threaten the larger campaign.

Now out of missiles, the section turns south to head back home. That’s when the section leader realizes the mistake. As the F-35s flowed northward, they traveled farther away from safety while the eastern group of bandits continued to close on the F-35s, cutting off their path home. 

The only options at this point are to try to travel around the bandits or go through them. A path around them would run the fighters out of fuel, so the flight leader goes straight for the four enemy aircraft, hoping that the bandits will have seen their friends shot down and run away in fear.

The gambit fails, however, and the remaining enemy aircraft close with the F-35s and shoot them down. What should have been an easy victory ended in a tactical stalemate, and in a war where the enemy can build their simple aircraft faster than America can build complex F-35s, the 2:1 exchange ratio is in their favor strategically.

This could have gone differently.

Persistent and Available Tactical Lessons 

Somebody in the F-35 fleet had likely made a mistake similar to this example during a training evolution long before the fateful dogfight. They might have even taken a few days out of their schedule to write a thoughtful lessons-learned paper about it. This writing is critically important. It communicates to other pilots the fundamental knowledge required to succeed in combat. However, success in combat demands not just understanding, but proficiency as well. An infantryman who has not fired a rifle in a few years likely still understands how to shoot, but their lack of practice means they will struggle at first.

Under a software-defined tactics regime, in addition to writing a paper, the pilot could have written software that would have alerted future pilots about the impending danger. While those pilots would still need to understand the risk, ever-watching software would alert them to risks in real-time so that a lack of recent practice would not be fatal. A quick software update to the F-35 fleet would have dramatically and permanently reduced the odds of anyone ever making that mistake again.

The program would not have had to be complex. It could have run securely, receiving data from the underlying mission system without transmitting data back to the aircraft’s mission computers. This one-way data pipe would have eliminated the potential for ad-hoc software to accidentally compromise the safety of the aircraft.

The F-35’s mission computer in our example already had eight hostile tracks displayed. The F-35’s computer also knew how many missiles it had loaded in its weapons bay. If that data were pushed to a software-defined tactics application, the coder-pilot could have written a program that executed the following steps:

  1. Determine how many targets can be attacked, given the missiles onboard.
  2. If there are enough missiles to attack them all, recommend attacking them all. If there are more hostile tracks than missiles (or a predefined missile-to-target ratio), run the following logic to determine which targets to prioritize.
  3. Determine all the possible ordered combinations of targets. There are 1,680 combinations in the original example—a small number for a computer.
  4. For each combination, simulate the engagement and determine if an untargeted aircraft could cut off the escape towards home. Store the margin of safety distance.
  5. If a cutoff is effective in a given iteration, reject that combination of targets and test the next one.
  6. Recommend the combination of targets to the flight commander with the widest clear path home. Alert the flight commander if there is no course of action with a clear path home.

This small program would have instantly told the pilot to engage the eastern targets, and that engaging the targets to the north would have allowed the eastern targets to cut off the F-35s’ route to safety. Following this recommendation would have allowed the F-35s to maintain a 4:0 kill ratio and live to fight another day.
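
To make the logic concrete, here is a minimal, hypothetical Python sketch of the six steps above. The geometry, speeds, and data structures are invented for illustration; this is not the F-35’s mission software or any fielded tactical aid, only the general shape a coder-pilot’s program might take.

```python
import itertools
import math
from dataclasses import dataclass

# Hypothetical sketch only: invented geometry, speeds, and data.
# Safety (home) is due south of the section, as in the vignette.

@dataclass
class Track:
    track_id: str
    x: float          # nautical miles east of own section
    y: float          # nautical miles north of own section
    speed: float      # knots

def time_to_engage(own_speed, targets):
    """Crude estimate of time to fly the engagement path through each target in order."""
    t, px, py = 0.0, 0.0, 0.0
    for tgt in targets:
        t += math.hypot(tgt.x - px, tgt.y - py) / own_speed
        px, py = tgt.x, tgt.y
    return t

def escape_margin(own_speed, home_distance, engage_time, leakers):
    """Distance margin (nm) by which the section beats every untargeted bandit to the
    route home. Negative means at least one leaker can cut off the escape (step 4)."""
    own_time_home = engage_time + home_distance / own_speed
    margin = float("inf")
    for bandit in leakers:
        bandit_time = math.hypot(bandit.x, bandit.y + home_distance) / bandit.speed
        margin = min(margin, (bandit_time - own_time_home) * own_speed)
    return margin

def recommend(tracks, missiles, own_speed=500.0, home_distance=200.0):
    # Steps 1-2: if every track can be engaged, recommend engaging them all.
    if missiles >= len(tracks):
        return list(tracks), float("inf")
    best, best_margin = None, float("-inf")
    # Step 3: every ordered combination of engageable targets (8P4 = 1,680 in the vignette).
    for combo in itertools.permutations(tracks, missiles):
        leakers = [t for t in tracks if t not in combo]
        margin = escape_margin(own_speed, home_distance,
                               time_to_engage(own_speed, list(combo)), leakers)
        # Steps 5-6: keep whichever combination leaves the widest clear path home.
        if margin > best_margin:
            best, best_margin = list(combo), margin
    return best, best_margin

if __name__ == "__main__":
    bandits = ([Track(f"N{i}", 10 * i, 80.0, 450.0) for i in range(4)] +
               [Track(f"E{i}", 80.0, 10 * i, 450.0) for i in range(4)])
    plan, margin = recommend(bandits, missiles=4)
    print("engage:", [t.track_id for t in plan], f"(escape margin {margin:.0f} nm)")
    if margin < 0:
        print("Alert: no course of action preserves a clear path home")
```

With these invented numbers, the sketch recommends the eastern group, since any northern engagement leaves the route south cut off; that is precisely the alert the section leader needed.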

A simple version of this program could have been written by two people in a single day—16 man-hours—if they had the right tools. Completing tactical testing in a simulator and ensuring the software’s reliability would take another 40-80 man-hours. 

Alternatively, writing a compelling paper about the situation would take a bit less time: around 20-40 hours. However, a force of 1,000 pilots spending 30 minutes each to read the paper would require 500 man-hours. Totaling these numbers results in 96 man-hours on the high end for software-defined tactics versus 520 man-hours on the low end for writing and reading. While both are necessary, writing software is much more efficient than writing papers.

To truly train the force not to make this mistake without software-defined tactics, every pilot would need to spend around five hours—a typical brief, simulator, and debrief length—in training events that stressed the scenario. That yields an additional 10,000 man-hours, given one student and one instructor for each training event. At that point, all of the training effort might reduce instances of the mistake by about 75%.

To maintain that level of performance, aircrew would need to practice this scenario once every six months in simulators. That is 10,000 hours every six months. Over five years, you’d need to spend more than 100,000 man-hours to maintain proficiency in this skill across the force.

Software-defined tactics applications do not need ongoing practice to maintain currency. They do need to be updated periodically to account for tactical changes and to improve them, though. Budgeting 100 man-hours per year is reasonable for an application of this size. That is 500 man-hours over five years.

Pen-and-paper updates require 100,000 man-hours for a 75% reduction in a mistake. Software-driven updates require 596 man-hours for a nearly 100% reduction. It is not close.
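
For readers who want to check the totals, the arithmetic is simple to reproduce. The short sketch below merely restates the estimates already quoted in this section.

```python
# Restating the man-hour arithmetic above (all figures are this article's estimates).
pilots = 1_000

paper_low = 20 + pilots * 0.5            # write the paper (low end) + 30 minutes of reading apiece -> 520
software_high = 16 + 80                  # write the program + simulator testing (high end)         -> 96

initial_training = pilots * 5 * 2        # 5-hour event, one student and one instructor each        -> 10,000
sustainment = initial_training * 2 * 5   # repeated every six months for five years                 -> 100,000

software_sustained = software_high + 100 * 5   # plus roughly 100 hours of updates per year         -> 596

print(paper_low, software_high, initial_training, sustainment, software_sustained)
```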

When a software developer accidentally creates a bug, they code a test that will alert them if anyone else ever makes that same mistake in the future. In this way, a whole development team learns and gets more reliable with every mistake they make. Software-defined tactics offer that same power to military units.

Software-Defined Tactics in Action

While the F-35 example is hypothetical, software-defined tactics are not. The Navy’s P-8 community has been leveraging a software-defined tactics platform for the last four years to great effect. The P-8 is a naval aircraft primarily designed to hunt enemy submarines. Localization—the process by which a submarine-hunting asset goes from initial detection to an accurate estimate of the target’s position, course, and speed—is among the most challenging parts of prosecuting adversary submarines.

On the P-8, the tactical coordinator decides on and implements the tactics the P-8 will use to localize a submarine. It takes about 18 months of time in their first squadron to qualify as a tactical coordinator and demonstrate reliable proficiency in this task. These months include thousands of hours of study, hundreds of hours in the aircraft and simulator, and dozens of hours defending their knowledge in front of more experienced tacticians.

When examining the data the P-8 community collects, there is a clear and massive disparity in performance between inexperienced and experienced personnel. There is another massive disparity between those experienced tacticians who have been selected to be instructors because of demonstrated talent and those who have not. In other words, there are both experience and innate talent factors with large impacts on performance in submarine localization.

The community’s software-defined tactics platform has made it so that a junior tactician (inexperienced and possibly untalented) with six months in the platform performs exactly as well as an instructor (experienced and talented) with 18 months in the platform. It does this largely by reducing tactician mistakes—alerting them to the opportunities the tactical situation presents and dissuading them from enacting poor tactical responses.

This makes the P-8 force extremely scalable in wartime. In World War II, America beat Japan because it was able to quickly and continually train high-quality personnel. It took nine months to train a basic fighter pilot in 1942. In 2021, it takes two or three years to go from initial flight training to arriving at a fleet squadron. Reducing time to train with software-defined tactics will restore that rapid scalability to America’s modern forces.

The P-8 community has had similar results for many tactical scenarios. It does this, today, with very little integration into the P-8’s mission system. Soon, its user-built applications will be integrated with a one-way data pipe from the aircraft’s mission system that will enable the full software-defined tactics paradigm. A team called the Software Support Activity at the Naval Air Systems Command will manage the security of this system and provide infrastructure support. Another team consisting of P-8 operators at the Maritime Patrol and Reconnaissance Weapons School will develop applications based on warfighter needs.

Technical Implementation

Implementing this paradigm across the US military will yield a highly capable force that can learn at speeds orders of magnitude faster than its adversaries. Making the required technical changes will be inexpensive.

On the P-8, implementing a secure computing environment with one-way data flow was always part of the acquisition plan. That should be the case for all future platform acquisitions. All it requires is an open operating system and a small amount of computing resources reserved for software-defined tactics applications.

Converting legacy platforms will be slightly more difficult. If a platform has no containerized computing environment, though, it is possible to add one. The Air Force recently deployed Kubernetes—a framework that allows securely containerized applications to be inserted into existing computing environments—on a U-2. Feeding mission-system data to this environment and allowing operators to build applications with it will enable software-defined tactics.

If it is possible to securely implement this on the U-2, which was built in 1955, any platform in the U.S. arsenal can be modified to accept software-defined tactics applications.

Human Implementation

From a technical standpoint, implementing this paradigm is trivial. From the human perspective, it is a bit harder. However, investing in operational forces’ technical capabilities without the corresponding human capabilities will result in a force that operates in the way industry believes it should, rather than the way warfighters know it should. A tight feedback loop between the battlefield reality and the algorithms that help dominate that battlefield is essential. Multi-year development cycles will not keep up.

As a first step, communities should work to identify the personnel they already have in their ranks with some ability to develop software. About a quarter of Naval Academy graduates enter the service each year with majors that require programming competency. These officers are a largely untapped resource.

The next step is to provide these individuals with training and tools to make software. An 80-hour, two-week course customized to the individual’s talent level is generally enough to get a new contributor to a productive level on the P-8’s team. A single application pays for this investment many times over. Tools available on the military’s unclassified and secret networks like DI2E and the Navy’s Black Pearl enable good practices for small-scale software development.

Finally, this cadre of tactician-programmers should be detailed to warfare development centers and weapons schools during their non-operational tours. Writing code and staying current with bleeding-edge tactical issues should be their primary job once there. Given the significant contribution this group will make to readiness, this duty should be rewarded at promotion boards to maintain technical competence in senior ranks.

A shortcut to doing this could be to rely on contractors to develop software-defined tactics. To maximize the odds of success, organizations should ensure that these contractors 1) are co-located with experienced operators, 2) are led by a tactician with software-development experience, 3) can deploy software quickly, 4) have at least a few tactically-current, uniformed team members, and 5) are funded operationally vice project-based so they can switch projects quickly as warfighters identify new problems. 

The Stakes

Great power competition is here. China’s economy is now larger than America’s on a purchasing power parity basis. America no longer has the manufacturing capacity advantage that led to victory in World War II, nor the ability to train highly specialized warfighters rapidly. To maintain America’s military dominance in the 21st century, it must leverage the incredible talent already resident in its armed forces.

When somebody in an autocratic society makes a mistake, they hide that mistake since punishment can be severe. The natural openness that comes from living in a democratic society means that American military personnel are able to talk about mistakes they have made, reason about how to stop them from happening again, and then implement solutions. The U.S. military must give its people the tools required to implement better, faster, and more permanent solutions. 

Software-defined tactics will yield a lasting advantage for American military forces by leveraging the comparative advantages of western societies: openness and a focus on investing in human capital. There is no time to waste.

LT Sean Lavelle is an active-duty naval flight officer who instructs tactics in the MQ-4C and P-8A. He leads the iLoc Software Development Team at the Maritime Patrol and Reconnaissance Weapons School and holds degrees from the U.S. Naval Academy and Johns Hopkins University. The views stated here are his own and are not reflective of the official position of the U.S. Navy or Department of Defense.

Featured image: A P-8A Poseidon conducts flyovers above the Enterprise Carrier Strike Group during exercise Bold Alligator 2012. (U.S. Navy photo by Mass Communication Specialist 3rd Class Daniel J. Meshel/Released)

Winning The AI-Enabled War-at-Sea

By Dr. Peter Layton

Artificial intelligence (AI) technology is suddenly important to military forces. Not yet an arms race, today’s competition is better described as an experimentation race, with many AI systems being tested and new research centers established. There may be a considerable first-mover advantage to the country that first understands AI well enough to change its existing human-centered force structures and embrace AI warfighting.

In a new Joint Studies Paper, I explore sea, land and air operational concepts appropriate to fighting near-to-medium term future AI-enabled wars. With much of the underlying narrow AI technology already developed in the commercial sector, this is less of a speculative exercise than might be assumed. Moreover, contemporary AI’s general-purpose nature means its initial employment will be within existing operational level constructs, not wholly new ones.

Here, the focus is the sea domain. The operational concepts mooted are simply meant to stimulate thought about the future and how to prepare for it. In being so aimed, the concepts are deliberately constrained; crucially, they are not joint or combined. In all this, it is important to remember that AI enlivens other technologies. AI is not a stand-alone actor; rather, it works in combination with numerous other digital technologies, providing a form of cognition to them.

AI Overview

In the near-to-medium term, AI’s principal attraction is its ability to quickly identify patterns and detect items hidden within very large data troves. The principal consequence is that AI will make it much easier to detect, localize and identify objects across the battlespace. Hiding will become increasingly difficult. However, AI is not perfect. It has well-known problems: it can be fooled, it is brittle, it cannot transfer knowledge gained in one task to another, and it is dependent on data.

AI’s principal warfighting utility then becomes ‘find and fool’. AI with its machine learning is excellent at finding items hidden within a high-clutter background. In this role AI is better than humans and tremendously faster. On the other hand, AI can be fooled through various means. AI’s great finding capabilities lack robustness.

A broad generic overview is useful to set the scene. The ‘find’ starting point is placing a large number of low-cost Internet of Things (IoT) sensors in the optimum land, sea, air, space and cyber locations in the areas across which hostile forces may transit. From these sensors, a deep understanding can be gained of the undersea terrain, sea conditions, physical environment and local virtual milieu. Having this background data accelerates AI’s detection of any changes and, in particular, of the movement of military forces across the area.

The fixed and mobile IoT edge-computing sensors are connected into a robust cloud to reliably feed data back into remote command support systems. The command system’s well-trained AI could then very rapidly filter the important information from the background clutter. Using this, AI can then forecast adversary actions and predict optimum own-force employment and its combat effectiveness. Hostile forces geolocated by AI can, after approval by human commanders, be quickly engaged using indirect fire, including long-range missiles. Such an approach can engage close or deep targets; the key issues are data on the targets and the availability of firepower of suitable range. The result is that the defended area quickly becomes a no-go zone.
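
As a rough illustration of the baseline-then-detect idea in this paragraph, the Python sketch below learns a quiet-period background for a single sensor and flags readings that depart sharply from it for human review. The sensor data, thresholds, and statistics are invented for illustration; a real system would use trained machine-learning models over far richer, multi-domain data.

```python
import statistics
from dataclasses import dataclass

@dataclass
class Sensor:
    sensor_id: str
    lat: float
    lon: float
    history: list      # quiet-period readings used to characterize the background

def learn_background(sensor):
    """Summarize the background from the sensor's quiet-period history."""
    return statistics.mean(sensor.history), statistics.pstdev(sensor.history) or 1e-6

def detect_changes(sensor, new_readings, threshold=4.0):
    """Flag readings far outside the learned background as candidate contacts."""
    mean, spread = learn_background(sensor)
    return [(sensor.sensor_id, sensor.lat, sensor.lon, reading)
            for reading in new_readings
            if abs(reading - mean) / spread > threshold]

if __name__ == "__main__":
    buoy = Sensor("buoy-17", 12.3, 114.2, history=[0.9, 1.1, 1.0, 0.95, 1.05])
    for contact in detect_changes(buoy, [1.02, 7.8]):
        # Candidate detections are queued for a human commander's approval, not acted on directly.
        print("candidate contact:", contact)
```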

To support the ‘fool’ function, Uncrewed Vehicles (UV) could be deployed across the battlespace equipped with a variety of electronic systems suitable for the Counter Intelligence Surveillance And Reconnaissance And Targeting (C-ISRT) task. The intent is to defeat the adversary’s AI ‘find’ capabilities. Made mobile through AI, these UVs will be harder for an enemy to destroy than fixed jammers would be. Moreover, mobile UVs can be risked and sent close in to approaching hostile forces to maximize jamming effectiveness. Such vehicles could also play a key role in deception, creating a false and misleading impression of the battlefield to the adversary. Imagine a battlespace where there are a thousand ‘valid’ targets, only a few of which are real.

A War-at-Sea Defense Concept

Defense is the more difficult tactical problem during a war-at-sea. Its intent is solely to gain tactical time for an effective attack or counterattack. Wayne Hughes goes so far in his seminal work as to declare that: “All fleet operations based on defensive tactics…are conceptually deficient.”1 The AI-enabled battlefield may soften this assertion.

Accurately determining where hostile ships are in the vast ocean battlefields has traditionally been difficult. A great constant of such reconnaissance is that there never seems to be enough. However, against this, a great trend since the early 20th century is that maritime surveillance and reconnaissance technology is steadily improving. The focus is now not on collecting information but on improving the processing of the large troves of surveillance and reconnaissance data collected.2 Finding the warship ‘needle’ in the sea ‘haystack’ is becoming easier. 

The earlier generic ‘find’ concept envisaged a large distributed IoT sensor field. Such a concept is becoming possible in the maritime domain given AI and associated technology developments.

DARPA’s Ocean of Things (OoT) program aims to achieve maritime situational awareness over large ocean areas through deploying thousands of small, low-cost floats that form a distributed sensor network. Each smart float will have a suite of commercially available sensors to collect environmental and activity data; the latter function involves automatically detecting, tracking and identifying nearby ships and – potentially – close aircraft traffic. The floats use edge processing with detection algorithms and then transmit the semi-processed data periodically via the Iridium satellite constellation to a cloud network for on-shore storage. AI machine learning then combs through this sparse data in real time to uncover hidden insights. The floats are environmentally friendly, have a life of around a year and, in buys of 50,000, have a unit cost of about US$500 each. DARPA’s OoT shows what is feasible using AI.
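
The edge-processing pattern this description implies can be sketched simply. The Python fragment below is hypothetical: the detector, report format, and uplink stub are invented for illustration and are not the actual OoT float software.

```python
import json
import time

def detect_ship(acoustic_level_db, threshold_db=120.0):
    """Stand-in for the float's onboard detection algorithm."""
    return acoustic_level_db > threshold_db

def uplink(batch):
    """Placeholder for a short narrowband satellite burst (e.g. via Iridium)."""
    print("uplink:", json.dumps(batch))

def float_loop(sample_stream, report_interval_s=3600):
    """Detect locally, keep only compact semi-processed reports, and transmit periodically."""
    pending, last_report = [], time.monotonic()
    for timestamp, level_db in sample_stream:
        if detect_ship(level_db):
            pending.append({"t": timestamp, "level_db": level_db})   # no raw acoustic data retained
        if pending and time.monotonic() - last_report >= report_interval_s:
            uplink(pending)
            pending, last_report = [], time.monotonic()

if __name__ == "__main__":
    samples = [(0, 95.0), (60, 131.5), (120, 128.0), (180, 101.0)]
    float_loop(samples, report_interval_s=0)   # report immediately for this demo
```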

In addition to floats, there are numerous other low-cost AI-enabled mobile devices that could noticeably expand maritime situational awareness including: the EMILY Hurricane Trackers, Ocean Aero Intelligent Autonomous Marine Vehicles, Seaglider Autonomous Underwater Vehicles, Liquid Robotics Wave Gliders and Australia’s Ocius Technology Bluebottles.

In addition to mobile low-cost autonomous devices plying the seas there is an increasing number of smallsats being launched by governments and commercial companies into low earth orbit to form large constellations. Most of these will use AI and edge computing; some will have sensors able to detect naval vessels visually or electronically.

All this data from new sources can be combined with that from the existing large array of traditional maritime surveillance systems. The latest system to enter service is the long-endurance MQ-4C Triton uncrewed aerial vehicle, with detection capabilities able to be enhanced through retrofitting AI. The next advance may be the USN’s proposed 8,000-km-range, AI-enabled Medium Unmanned Surface Vessel (MUSV), which could cruise autonomously at sea for two months with a surveillance payload.

With so many current and emerging maritime surveillance systems, the idea of a digital ocean is becoming practical. This concept envisages the data from thousands of persistent and mobile sensors being processed by AI, analyzed through machine learning and then fused into a comprehensive, ocean-spanning, three-dimensional picture. Oceans remain large expanses, making this a difficult challenge. However, a detailed near-real-time digital model of smaller spaces such as enclosed waters like the South China Sea, national littoral zones or limited ocean areas of specific import appears practical using current and near-term technology.

Being able to create a digital ocean model may prove revolutionary. William Williamson of the USN Naval Postgraduate School declares: “On the ‘observable ocean’, the Navy must assume that every combatant will be trackable, with position updates occurring many times per day. …the Navy will have lost the advantages of invisibility, uncertainty, and surprise. …Vessels will be observable in port…[with] the time of departure known to within hours or even minutes. This is true for submarines as well as for surface ships.”3

This means that in a future major conflict, the default assessment by each warship’s captain might be that the adversary probably knows the ship’s location. Defense then moves from being “conceptually deficient” to being the foundation of all naval tactics in an AI-enabled battlespace. The emerging AI-enabled maritime surveillance system of systems will potentially radically change traditional war-at-sea thinking. The ‘attack effectively first’ mantra may need to be rewritten to ‘defend effectively first.’

The digital, ‘observable ocean’ will ensure warships are aware of approaching hostile warships and a consequent increasing risk of attack. In addressing this, three broad alternative ways for the point defense of a naval task group might be considered.

Firstly, warships might cluster together, so as to concentrate their defensive capabilities and avoid any single ship being overwhelmed by a large multi-axis, multi-missile attack. In this, AI-enabled ship-borne radars and sensors will be able to better track incoming missiles amongst the background clutter. Moreover, AI-enabled command systems will be able to much more rapidly prioritize and undertake missile engagements. In addition, nearby AI-enabled uncrewed surface vessels may switch on active illuminator radars, allowing crewed surface combatants to receive reflections to create fire control-quality tracks. The speed and complexity of the attacks will probably mean that human-on-the-loop is the generally preferred AI-enabled ship weapon system control, switching to human-out-of-the-loop as numbers of incoming missiles rise or hypersonic missiles are faced.

Secondly, instead of clustering, warships might scatter so that an attack against one will not endanger others. Crucially, modern technology now allows dispersed ships to fight together as a single package. The ‘distributed lethality’ concept envisages distant warships sharing precise radar tracking data across a digital network, although there are issues of data latency that limit how far apart the ships sharing data for this purpose can be. An important driver of the ‘distributed lethality’ concept is to make adversary targeting more difficult. With the digital ocean, this driver may be becoming moot.

Thirdly, the defense in depth construct offers new potential through becoming AI-enabled, particularly when defending against submarines, although the basic ideas also have value against surface warship threats. In areas submarines may transit through, stationary relocatable sensors like the USN’s Transformational Reliable Acoustic Path System could be employed, backed up by unpowered, long-endurance gliders towing passive arrays. These passive sonars would use automated target recognition algorithms supported by AI machine learning to identify specific underwater or surface contacts.

Closer to the friendly fleet, autonomous MUSVs could use low-frequency active variable depth sonars supplemented by medium-sized uncrewed underwater vehicles (UUV) with passive sonar arrays. Surface warships or the MUSVs could further deploy small UUVs carrying active multistatic acoustic coherent sensors already fielded in expendable sonobuoys. Warships could employ passive sonars to avoid counter-detection and take advantage of multistatic returns from the active variable depth sonars deployed by MUSVs.

Fool Function. The “digital ocean” significantly increases the importance of deception and confusion operations. This ‘fool’ function of AI may become as vital as the ‘find’ function, especially in the defense. In the war-at-sea, the multiple AI-enabled systems deployed across the battlespace offer numerous possibilities for fooling the adversary.

Deception involves reinforcing the perceptions or expectations of an adversary commander and then doing something else. In this, multiple false cues will need seeding, as some will be missed by the adversary and having more than one will only add to the deception’s credibility. For example, a number of uncrewed surface vessels could set sail as the warship leaves port, all actively transmitting a noisy facsimile of the warship’s electronic or acoustic signature. The digital ocean may then suggest to the adversary commander that multiple identical warships are at sea, creating some uncertainty as to which is real or not.

In terms of confusion, the intent might be not to avoid detection, as this might be very difficult, but instead to prevent an adversary from classifying vessels detected as warships or identifying them as a specific class of warship. This might be done using some of the large array of AI-enabled floats, gliders, autonomous devices, underwater vehicles and uncrewed surface vessels to considerably confuse the digital ocean picture. The aim would be to change the empty oceans – or at least the operational area – into a seemingly crowded, cluttered, confusing environment where detecting and tracking the real sought-after warships is problematic and at best fleeting. If AI can find targets, AI can also obscure them.

A War-at-Sea Offense Concept

In a conflict where both sides are employing AI-enabled ‘fool’ systems, targeting adversary warships may become problematic. The ‘attack effectively first’ mantra may evolve to simply ‘attack effectively.’ Missiles that miss represent a significant loss of the task group’s or fleet’s net combat power, and take a considerable time to be replaced. Several alternatives may be viable.

In a coordinated attack, the offense might use a mix of crewed and uncrewed vessels. One option is to use three ship types: a large, well-defended crewed ship that carries considerable numbers of various types of long-range missiles but which remains remote from the high-threat areas; a smaller crewed warship pushed forward into the area where adversary ships are believed to be, both for reconnaissance and to provide targeting for the larger ship’s long-range missiles; and an uncrewed stealthy ship operating still further forward in the highest-risk area, primarily collecting crucial time-sensitive intelligence and passing this back through the smaller crewed warship to the larger ship in the rear.

The intermediate small crewed vessel can employ elevated or tethered systems and uncrewed communications relay vehicles to receive the information from the forward uncrewed vessel and act as a robust gateway to the fleet tactical grid using resilient communications systems and networks. Moreover, the intermediate smaller crewed vessel in being closer to the uncrewed vessel will be able to control it as the tactical situation requires and, if the context changes, adjust the uncrewed vessel’s mission.

This intermediate ship will probably also have small numbers of missiles available to use in extremis if the backward link to the larger missile ship fails. Assuming communications to all elements of the force will be available in all situations may be unwise. The group of three ships should be network enabled, not network dependent, and this could be achieved by allowing the intermediate ship to be capable of limited independent action.

The coordinated attack option is not a variant of the distributed lethality concept noted earlier. The data being passed from the stealthy uncrewed ship and the intermediate crewed vessel is targeting-quality, not fire-control-quality, data. The coordinated attack option involves only loose integration, which is both less technically demanding and more appropriate to operations in an intense electronic warfare environment.

An alternative concept is to have a large crewed vessel at the center of a networked constellation of small and medium-sized uncrewed air, surface and subsurface systems. A large ship offers potential advantages in being able to incorporate advanced power generation to support emerging defensive systems like high energy lasers or rail guns. In this, the large crewed ship would need good survivability features, suitable defensive systems, an excellent command and control system to operate its multitude of diverse uncrewed systems and a high bandwidth communication system linking back to shore-based facilities and data storage services.

The crewed ship could employ mosaic warfare techniques to set up extended kinetic and non-kinetic kill webs through the uncrewed systems to reach the adversary warships. The ship’s combat power is not then in the crewed vessel but principally in its uncrewed systems with their varying levels of autonomy, AI application and edge computing.

The large ship and its associated constellation would effectively be a naval version of the Soviet reconnaissance-strike complex.  An AI-enabled war at sea then might involve dueling constellations, each seeking relative advantage.

Conclusion

The AI-enabled battlespace creates a different war-at-sea. Most obvious are the autonomous systems and vessels made possible by AI and edge computing. The bigger change, though, may be to finally take the steady scouting improvements of the last 100 years or so to their logical conclusion. The age of AI, machine learning, big data, IoT and cloud computing appears set to create the “observable ocean.” From combining these technologies, near-real-time digital models of the ocean environment can be made that highlight the man-made artifacts present.

The digital ocean means warships could become the prey as much as the hunters. Such a perspective brings a shift in thinking about what the capital ship of the future might be. A recent study noted: “Navy’s next capital ship will not be a ship. It will be the Network of Humans and Machines, the Navy’s new center of gravity, embodying a superior source of combat power.” Tomorrow’s capital ship looks set to be the human-machine teams operating on an AI-enabled battlefield.

Dr. Peter Layton is a Visiting Fellow at the Griffith Asia Institute, Griffith University and an Associate Fellow at the Royal United Services Institute. He has extensive aviation and defense experience and, for his work at the Pentagon on force structure matters, was awarded the US Secretary of Defense’s Exceptional Public Service Medal. He has a doctorate from the University of New South Wales on grand strategy and has taught on the topic at the Eisenhower School. His research interests include grand strategy, national security policies particularly relating to middle powers, defense force structure concepts and the impacts of emerging technology. The author of ‘Grand Strategy’, his posts, articles and papers may be read at: https://peterlayton.academia.edu/research.

Endnotes

1. Wayne P. Hughes and Robert Girrier, Fleet Tactics and Naval Operations, 3rd edn. (Annapolis: Naval Institute Press, 2018), p. 33.

2. Ibid., pp.132, 198.

3. William Williamson, ‘From Battleship to Chess’, USNI Proceedings, Vol. 146/7/1,409, July 2020, https://www.usni.org/magazines/proceedings/2020/july/battleship-chess

Featured image: Graphic highlighting Fleet Cyber Command Watch Floor of the U.S. Navy. (U.S. Navy graphic by Oliver Elijah Wood and PO2 William Sykes/Released)

State of War, State of Mind: Reconsidering Mobilization in the Information Age, Pt. 2

By LCDR Robert “Jake” Bebber USN

This article is part two of a two-part piece drawn from a recently completed report by the author that was published by The Journal of Political Risk, and is available in its entirety here.

What Must Be Done?

Part one of this article outlined some of the broad challenges facing American policy-makers and defense planners in the coming years. Part two explores the practical and policy implications of what must be done.

Considering the developments outlined in part one, U.S. mobilization efforts should take the following six steps:

  1. Shift the focus of strategic warning to identifying emerging disruptions and strategic latency.
  2. Develop a strategic intelligence capability to monitor and evaluate sources of U.S. power and identify areas of potential comparative advantage.
  3. Institutionalize a “whole of society” approach to peacetime preparedness.
  4. Reframe warfighting posture toward preparing to survive an initial blow, then transitioning to alternative capabilities that can achieve desired effects.
  5. Integrate allied and U.S. preparedness efforts, to include research and development, technology sharing, coordinated production, and political resiliency.
  6. Understand and educate the American people on the realities of sustained competition and conflict.

Strategic Latency, Warning, and Disruption Futures

Since the Second World War, the idea of “warning” has largely been linked to surprise military attacks. Pearl Harbor, the invasion of South Korea by North Korea, and September 11 stand out as hallmark examples of the types of surprise attacks that most concern policymakers. During the Cold War, this included not only a nuclear first strike, but also a surprise Soviet attack into Western Europe or a resumption of hostilities on the Korean peninsula. Other warning concerns would be events that might have a dramatic impact on the geopolitical landscape, such as coups and revolutions, the outbreak of civil war, the assassination of a world leader, or the outbreak of a war involving a U.S. ally.

The nature of surprise assumes a level of unpreparedness – catching your adversary unprepared is why surprise is usually sought after. The American intelligence community, while it has many roles and functions, exists foremost to prevent surprise and provide strategic warning. 

Cynthia Grabo describes warning as “an intangible, an abstraction, a theory, a deduction, a perception, a belief. It is the product of reasoning or of logic, a hypothesis whose validity can neither be confirmed nor refuted until it is too late [emphasis added].”1 It should not be confused with current intelligence, nor does it necessarily flow from a mere “compilation of facts” or the result of “majority consensus.” Rather it depends on exhaustive research, and usually the kind of holistic approach that the American intelligence community was not originally designed for.2 There are currently 17 federal agencies and military service components devoted to different collection and analysis emphases, each working independently under a broad umbrella agency, the Office of the Director of National Intelligence (ODNI). ODNI was established after the September 11, 2001, terrorist attacks, largely in response to the significant failures of the separate intelligence agencies to work together and share information and analytic expertise.

While anticipating a military surprise attack will remain an enduring requirement for the intelligence community, the emerging global trends and adversary campaigns reshaping the strategic environment will likely matter more in the coming decades. However, the current analytic techniques used by intelligence analysts are inadequate to identify these trends and are likely to result in a strategic warning crisis.3

Strategic latency refers to the potential for technologies to fundamentally shift the military and economic balance of power.4 China (and Russia to a lesser extent) leverages dual-use technologies to exploit commercial and supply chain vulnerabilities and hold critical information and economic “choke points.” Supply chain dominance provides control of the underlying infrastructure of the 21st century economy, from undersea cables to satellites. By controlling the electromagnetic spectrum and supporting supply chains such as media, advertising, entertainment, legal regimes, political lobbying, and public opinion management, China is approaching the point where it can achieve global information superiority, if not dominance. Information control enables population control.

The intelligence community’s inability to detect and anticipate latent disruptions results from the organizational structure of the community, the charges of its component organizations, and its analytic tradecraft. The 17 U.S. intelligence agencies that fall under ODNI’s purview are organized around intelligence disciplines, such as communications intelligence or geospatial intelligence; service warfighting domains (air, land, sea, space); or domestic security and law enforcement functions. ODNI’s core responsibility is the fusion of these different disciplines into larger strategic intelligence support to the President and National Security Council.

Today’s intelligence community organization results from two major events: the 2001 terrorist attacks on New York and Washington, D.C., and the intelligence community’s erroneous assessment in 2002-3 of Iraq’s weapons of mass destruction program. The first represents a failure to detect an impending attack. The second represents a failure to accurately assess the state of an adversary’s capabilities. In both cases, cognitive limitations inherent to dealing with incomplete or ambiguous information led to intelligence and warning failures. Analysts do not approach their trade with a “blank slate,” but start with certain assumptions about foreign capabilities and intentions that have been developed through education, training, and experience. These assumptions form a mindset that influences what the analyst judges to be reliable and relevant. While this is often a strength, it is not error-free.5

However, the intelligence community’s ability to forecast latent disruptions is questionable at best. This places American national security at severe risk, since it directly impacts peacetime strategic competition and mobilization execution in the event of conflict. Yet understanding anticipatory behavior is central to financial asset management firms, and seven of the top ten firms reside in the United States. Before these firms make multi-billion dollar decisions, they perform deep research and analysis, evaluating an immense, diverse array of data sets, from sea level rise predictions to mobile communication use in India. These firms specialize in evaluating risks to capital investment.

Data sets are available almost instantaneously from a growing “Internet of Things” and ubiquitous sensors that constantly monitor human activity. Programmers use these data sets to build and refine predictive algorithms that drive risk management and investment.  This methodological approach suggests humans telegraph their behavior through technology and investment decisions. This “Techno-Financial” intelligence capability is a critical requirement for better anticipating emerging disruptions.6 It is a multidisciplinary approach integrating behavioral economics, neuroscience, demographics, regulatory, legal, and other sectors. Interconnected technologies and complex networks are treated as living organisms, while investment is the fueling force that can predict future organism behavior.7
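
To illustrate in miniature what building such predictive algorithms might look like, the Python sketch below fits a trivial least-squares model relating observable indicators to a later outcome. The indicators, numbers, and model are entirely invented; real firms use far richer data and far more sophisticated methods.

```python
import numpy as np

# Rows: historical observation periods. Columns (assumed indicators):
# [infrastructure spend index, patent-filing growth, shipping-volume growth]
indicators = np.array([
    [1.0, 0.2, 0.5],
    [1.2, 0.3, 0.6],
    [0.8, 0.1, 0.4],
    [1.5, 0.5, 0.9],
])
# Outcome observed one period later (e.g., realized capacity expansion), per row.
outcomes = np.array([0.45, 0.60, 0.30, 0.95])

# Least-squares fit: outcomes ~ indicators @ weights. The weights show which
# indicators most strongly "telegraph" the later behavior in this toy data.
weights, *_ = np.linalg.lstsq(indicators, outcomes, rcond=None)

# Score a new observation period with the fitted weights.
latest = np.array([1.3, 0.4, 0.7])
print("indicator weights:", np.round(weights, 2))
print("anticipated outcome score:", round(float(latest @ weights), 2))
```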

Along with a techno-financial intelligence capability, the intelligence community lacks a comprehensive methodology to “understand the ways individuals perceive and respond to various types of information.” It requires knowledge of how humans communicate with others in groups, and “orient and respond to economic, social and political environments.” To detect these changing patterns in human group behavior, the intelligence community will need massive, diverse, and cross-domain data sets, along with the ability to process this data to yield understanding and prediction.8 Many of these data sets will overlap with techno-financial intelligence, and the two disciplines complement one another.

Intelligence and Investment for the Home Front

Underlying disruptions in the global economy, changing consumer behaviors, and advanced non-kinetic mass disruption attacks have left the American home front vulnerable. In many respects, war in the 21st century will be characterized not only by a lack of “front lines” but also by the absence of any sanctuary. Traditional offensive and defensive operations may not apply, and the “battlefield” may lie in far-off corners of the globe while simultaneously being fought in corporate boardrooms, small-town hall meetings, and even family gatherings.

Mobilization and peacetime preparedness are best informed through a comprehensive program that identifies the sources of American power creation, evaluates changes and coming discontinuities, and conducts predictive analysis. The Department of Defense has been conducting this type of work through organizations such as the Office of Net Assessment and the Defense Science Board, yet for obvious reasons their efforts are mostly confined to understanding the military balance. Other agencies, such as the Department of Labor or Health and Human Services, track data and trends and report within their purview. However, no agency, interagency network, or research institute is tasked with crafting a framework that evaluates the sources of American power, anticipates opportunities to develop comparative advantages or mitigate vulnerabilities, and serves as the basis for policy formation and strategic decision-making. No existing framework captures complex network relationships and evaluates the nation as an organic whole.9

This is not to say that no one has suggested doing so. One such approach, Strategic Advantage by Bruce Berkowitz, argued that in order for the U.S. to remain the global leader in the 21st Century, it must achieve organizational agility, optimally manage risk, better navigate the crosscurrents of economic development and democratic institutions, and use its comparative advantages effectively. This requires a constant evaluation of macro-trends in demographics, economics, commercial use, technology, health, and other factors, and of how those factors shape national power and create opportunities and vulnerabilities. Importantly, there is a pacing element to power creation and sustainment based on economic constraints and the realities of American political support. In a complex threat environment with competing – and sometimes conflicting – interests, the challenge will be developing, selecting, and combining various capabilities (military, economic, diplomatic, etc.), and then recombining them as conditions change, while avoiding becoming so overcommitted to addressing one threat that the nation is unable to address others.10

Six principles guide this framework. The first is to understand the potential scenarios for world events and the important variables (demographic, economic, technological, etc.) that underlie each scenario, and to identify the mileposts that might signal how these scenarios would play out. The second is to recognize the United States’ unique strengths that provide it outsized advantages and to identify how these strengths might be cultivated and exploited. Next, planning must anticipate that changes in the environment occur rapidly, and that assumptions will likely not remain valid for more than three to five years, at best. Planning must also account for constraints on both resources and public opinion. Success will require an organizational approach that accommodates more risk and is agile enough to respond to changes in the environment. Finally, maintaining a strategic advantage will depend on the availability of resources, which underscores the centrality of economic growth to national security, preparedness, and mobilization.11

From Whole-of-Government to Whole-of-Society

For a whole-of-society approach to be truly meaningful, it must reach beyond the federal, state, and local governments, as well as beyond traditional social institutions such as chambers of commerce and trade unions. A few lessons from the mobilization during the Second World War still apply, but none more so than organizing industrial mobilization around industry, rather than government, which was central to the explosive growth in American capacity to provide the bulk of war materiel for all allies. This was only possible because industry and labor led the approach. While the government stepped in to regulate consumption through the rationing of certain goods and services, production always remained voluntary and driven by incentive. As early as 1938, industrial mobilization planning was built around getting ahead of the problem to determine what was needed and when, rather than what American industry had the capacity to produce. This drove a requirements-based process while helping build production momentum.12

A major war in the 21st Century will certainly look much different in the production and employment of war materiel, but what might matter more is how the United States organizes its preparedness and mobilization planning to leverage its comparative advantages.

While it is important for the federal government to organize and sustain the effort, state and local governments must have a role in decision-making on national-level priorities. Key economic sectors in finance, logistics, transportation, health care, manufacturing, retail, telecommunications, and others represent a large source of national power. No less so are public education and institutions of higher learning, training and certification bodies, and community organizations such as the American Red Cross and United Way. Important in the 21st Century is the growing role of social media “influencers” and YouTube stars, as well as bottom-up capital generation platforms like Kickstarter and community activism tools such as Change.org. Non-traditional platforms and organizations can bring innovative ways of thinking and alternative approaches to mobilization and preparedness planning.

Some states are approaching preparedness in novel ways. The Ohio National Guard has created the Ohio Cyber Reserve, teams of trained civilians available to assist municipalities with cybersecurity vulnerabilities and provide recommendations to reduce threats. These teams also provide workforce development training and education services in local schools. This approach can be expanded with government support to create citizen volunteer organizations modeled on the Civil Air Patrol to better utilize the large population of Americans who may not be interested in government or military service but have unique skill sets suited to on-net operations, resiliency testing, critical infrastructure protection roles, and youth mentorship in science, technology, engineering, and the liberal arts.13

“Survive, Then Transition”

The stages of mobilization are traditionally crisis mobilization, tactical mobilization, and strategic mobilization. However, the character of warfare in the information age suggests that adversaries will likely engage in non-kinetic disruption attacks, potentially on a mass scale, to achieve strategic effects well before initiating open hostilities. Disruptive attacks on preparation activities and materiel production will likely thwart or slow the U.S.’s ability to mobilize, marshal forces, and project power. These attacks may go on for months or years in pursuit of long-term weakening: delegitimizing democratic institutions, sowing social discord, or even increasing the use of addictive opioids among the population, thereby rendering those affected not only unfit for military service but also unemployable in most industries. It might be wise to assume that the U.S. is under attack right now for the express purpose of rendering its mobilization and preparedness capability impossible.

As discussed above, policymakers should create a strategic warning regime tailored to detect these types of mass disruptive attacks, while building the intelligence collection capabilities and analytic techniques to support strategic warning. Still, the ability of an adversary to initiate a surprise attack on a global scale, along with the complexity and high tempo of modern combat, suggests that against a peer adversary like China, the United States and its allies could quickly find themselves overwhelmed in one or more theaters. Maintaining credible, forward-deployed combat power is challenging now, and growing more so each day.

This suggests that the United States would have to develop stocks and magazines deep enough to sustain combat forces in the early stages of a conflict (the “staying power” that the Reagan Administration attempted to address). However, the current mix of exquisite and expensive weapon systems has left the resources available for war reserve stocks nearly nonexistent. Therefore, once military forces and the homeland have survived an initial onslaught, the U.S. will face two choices: try to reconstitute and replace forces, or begin a transition to new capabilities that can be fielded rapidly and inexpensively while achieving the required operational and strategic results. The fact that the force design and its supporting defense industrial base cannot be meaningfully expanded to keep up with anticipated attrition levels suggests that new means of rapid capability employment will be required.

The Defense Department has expanded its efforts to go outside the traditional defense industrial base and encourage non-traditional companies to do business with the Pentagon, giving the military access to unique products and services as well as alternative approaches to design, production, and sustainment. Through initiatives like the Defense Innovation Unit and legislative action to expand the use of Other Transaction Authorities, the Defense Department has adapted many commercially available products to military use, from personal communication devices to unmanned systems.

Large companies are investing significantly in autonomy, artificial intelligence, and virtual reality to create new products, improve business logistics and administration, and meet changing consumer demands. Defense leaders should identify and build on lessons from employing non-traditional defense companies in order to transition to innovative and sustainable ways of delivering kinetic and non-kinetic capabilities. For example, a growing hobbyist community is using 3-D printers to build drones, driving innovation in drone design and applications while shortening development timelines and reducing costs. In a strategic competition, actors can apply novel uses of information technology to shape global economies, public diplomacy, and influence campaigns in order to achieve strategic effects.

Integrating Allies into U.S. Preparedness

American security ultimately depends upon collective security, a fact that is often overlooked in preparedness planning. While the U.S. military and State Department have a long history of working with allies, friends, and partners to advance security interests, these efforts may not have the efficacy they once did, as China has aggressively sought to bond itself economically to American allies. Commercial and industrial interests are a strategic vulnerability for the democracies, unlike during the Cold War, when they were an asset. This has caused friction between the U.S. and its allies, especially concerning the use of Chinese companies to build critical infrastructure or operate maritime ports and transportation networks.

At present, only limited efforts exist to evaluate allied and partner nation industrial capacity, defense capabilities, research and development programs, dual-use technology development and applications, sustainment, and political resiliency. There is growing concern that as the gap between U.S. and allied military technology expands, interoperability between allied and coalition forces will become far more difficult. The inability to share resource, sustainment, and logistical burdens would place both U.S. and allied security at risk. The U.S.’s past successes in allied and coalition warfighting have largely been because of early agreement and understanding not only of the strategic objectives but also of partner burden sharing and mutual support. The U.S., given the size of its military, will likely have the largest share of the burden, and allies and partners must be able to receive and use American support.

Coordination of cross-domain operations, including the space, cyber, and electromagnetic domains, will be central to coalition warfighting and to strategic competition campaigns that fall below the warfighting threshold. The U.S. will have allies of varying levels of sophistication, capabilities, and resources. Even allied and partner nations that operate comparable technology, such as Japan, South Korea, Israel, and the U.K., may have structural challenges that make coordination with the U.S. or with each other difficult.

U.S. policy continues to emphasize self-sufficiency and autarky for its defense industrial base. This policy needs to be re-evaluated in light of the increasing use of commercial and dual-use technology, much of which is developed in allied and partner countries.

Fortunately, the U.S. and its allies have a long history of alliance management, cooperation on mutual interests, and integrated command structures. This is especially true for NATO, the “Five Eyes” partners, Japan, and South Korea. NATO established the Partnership Interoperability Initiative in 2014, which was later broadened to include Australia, Finland, Georgia, Jordan, and Sweden. NATO’s experience in the Balkans and Afghanistan highlighted many of the challenges forces faced in standards, doctrine, logistics, and sustainment. The U.S. also maintains combined and joint command structures in Japan and South Korea, including coalition war planning, exercises, basing, and sustainment.

Expanding integration and interoperability is one area of mobilization preparedness that holds a great deal of promise. These efforts should be deepened to include joint research, development, and dual-use technology goals; combined command and control systems; and shared intelligence, surveillance, reconnaissance, and domain awareness capabilities. This may necessitate expanding and improving the ability of the U.S. and its allies to share a common operating picture that enables tactical tracking to find, fix, and finish targets across coalition platforms.
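At the data level, a shared common operating picture ultimately comes down to partners exchanging track reports in a format each can produce and ingest. The sketch below is a hypothetical illustration of such a record; the field names and the JSON serialization are assumptions made for the example, not any fielded coalition message standard.

```python
# Minimal, illustrative sketch only: a hypothetical shared track record of the
# kind a coalition common operating picture might exchange. Field names and
# values are assumptions for illustration, not any fielded message standard.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class CoalitionTrack:
    track_id: str          # unique identifier assigned by the reporting unit
    reporting_nation: str  # which partner contributed the track
    classification: str    # releasability marking for coalition sharing
    latitude: float
    longitude: float
    identity: str          # e.g., "hostile", "neutral", "unknown"
    observed_at: str       # ISO 8601 timestamp

track = CoalitionTrack(
    track_id="JMSDF-0042",
    reporting_nation="JPN",
    classification="REL TO USA, JPN",
    latitude=26.2,
    longitude=127.7,
    identity="unknown",
    observed_at=datetime.now(timezone.utc).isoformat(),
)

# Serialize for exchange over a shared network so any partner can ingest it.
print(json.dumps(asdict(track), indent=2))
```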

Coordinated industrial base expansion, sustainment, tooling, and logistics support will be critical to maximizing the comparative advantages that the alliance system provides. The U.S. and its allies should further develop combined weapons system and platform development capabilities, to include non-traditional and dual-use civilian-military capabilities. This may mean accepting a tradeoff between high-end, exquisite systems and moderately less capable, but still effective, combat and non-kinetic systems that all parties can operate. In a strategic competition or conflict with China, with the immense industrial capacity it can harness, this could be the best option. It frees up a portion of the U.S. information technology and industrial base to develop and produce future high-end systems while spreading out the production of moderately capable systems that can be brought into the competition or conflict more rapidly.

Such an expansion will require a dedicated, regular, and systematic evaluation of allied and partner capabilities, more frequent combined and coalition exercises, and deeper coordination of planning and planning assumptions. Early and frequent allied wargaming, including frank discussions of potential strategic and political goals, will greatly improve planning assumptions and further guide research, development, production, and operational concepts tailored to better meet alliance goals.

Understand and Educate the American People 

To paraphrase former Secretary of Defense Donald Rumsfeld, the U.S. will compete with the population it has, not the one it wants. That is, policymakers must realistically assess the willingness and desire of the American population to support and sustain another indefinite competition, and possibly conflict, with a major power. The fiscal burden of creating and sustaining American power is likely to grow. This will come at a time when it will be incumbent on decision-makers to address the entire scope of national taxation and spending. Hard trade-offs will be required.

Yet fiscal constraints are only one piece of the puzzle. Even if the resources were readily available, it is not entirely clear that the population of 2020 is particularly interested in competing. The Cold War consensus was born out of the Second World War and the early system shocks that forced a reappraisal of U.S. efforts to rebuild the world order while confronting a global communist movement with designs of its own. No comparable consensus exists today.

Part of this is due to the nature of how the Cold War ended and the brief, unipolar moment the United States enjoyed. Little effort was given to recapitalizing the institutions necessary to meet a new, peer challenger. Even conservative, anti-communist stalwarts argued that it was time for America to become a “normal nation,” and shed the burden of global leadership. The lack of an existential threat made such calls even more appealing.

Recent polling suggests that a smaller portion of the younger generations – Generation “Z” and Millennials – views the United States as “better” than all other countries, an idea commonly called “American exceptionalism.” At the same time, significant gaps exist between the younger and older generations on perceived threats to America, with Millennials pointing to “climate change” (62%) as a bigger threat than “the development of China as a world power” (35%), “North Korea’s nuclear program” (55%), or the “rise of authoritarianism around the world” (42%).

To be sure, perceptions of threats will likely change as one ages and experiences the world, and generations do not hold monolithic views etched in stone. Evidence suggests that the public is growing far more wary of China as a threat, and CCP leadership’s complicity in covering up the danger of the COVID-19 pandemic may further stoke the American public’s anger. The vast majority of Americans still believe that a future with U.S. leadership is far better than a world led by Beijing.14

Yet it would be the most profound failure of policy for the United States to execute a grand strategy designed to compete with, and if necessary fight, Communist China without popular consensus. Indeed, it would be disastrous. This is especially true for younger generations, as it is they who will bear most of the sacrifice. The underlying assumption behind competing with China is that the American people are invested in the cause. If that assumption is misplaced, then a competition strategy cannot succeed, and the U.S. is likely to suffer a catastrophic loss.

Implementing a competition strategy will require not only public debate, but also public accountability, and the willingness to craft policy and strategy around the constraints of public opinion. While public opinion can be moved, the case must be made. This must be central to American grand strategy, strategic competition, mobilization, and preparedness planning. The current complacency regarding the public’s declining trust in institutions and America’s role in the world is dangerous. Foreign powers actively engage in strategies to undermine American political legitimacy and resiliency, but they need only accentuate the domestic trends that are already present. 

Preparedness and mobilization planning remain central to America’s ability to defend its interests and the cause of freedom. This is worth fighting for. But it cannot be defended without the support of the people. It is a political case that must be made at all levels of government and society. It will require a renewed effort toward public education and frank, honest debate about the sacrifice required. To best make the case, policymakers have to meet the American public where it is, using terms that convey the gravity of the situation and the stakes involved.

 LCDR Bebber is a Cryptologic Warfare officer assigned to Information Warfare Training Command Corry Station in Pensacola, Florida. The views expressed here do not represent those of the Department of Defense, Department of the Navy or the U.S. government. He welcomes your comments at jbebber@gmail.com.

Endnotes

1 Cynthia M. Grabo, Anticipating Surprise: Analysis for Strategic Warning (Lanham: University Press of America, 2004), 4.

2 Grabo.

3 Maureen Rhemann, “Intelligence Analysis in a Post-Heuer World: Why We Don’t Recognize New Forms of Warfare and 6 Intelligence Take-Aways From Neuroscience” (Reperi Analysis Center, 2020).

4 Celeste Chen, Jacob Andriola, and James Giordano, “Biotechnology, Commercial Veiling, and Implications for Strategic Latency: The Exemplar of Neuroscience and Neurotechnology Research and Development in China,” in Strategic Latency: Red, White, and Blue, ed. Zachary S. Davis and Michael Nacht (Livermore: Lawrence Livermore National Laboratory, 2018).

5 Richards J. Heuer Jr., “Limits of Intelligence Analysis,” Orbis (Winter 2005): 76–77.

6 Techno-Financial Intelligence was pioneered by the Reperi Analysis Center (RAC) in 1999 to detect future disruption by blending leading data sets to identify asymmetric precursors, and was refined with advanced algorithms in 2020. It assumes behavior is telegraphed and uses 7-S/ADP and other processes.

7 Maureen Rhemann, “What We’ve Learned from 20 Years of Techno-Financial Intelligence” (Reperi Analysis Center, 2020).

8 James Giordano and Rachel Wurzman, “Integrative Computational and Neurocognitive Science and Technology for Intelligence Operations: Horizons of Potential Viability, Value and Opportunity,” STEPS 4 (2016): 32–37.

9 For a brief overview of how China approaches this challenge, see the Appendix on Comprehensive National Power which is found in the longer study published at the Journal of Political Risk.

10 Bruce Berkowitz, Strategic Advantage: Challengers, Competitors, and Threats to America’s Future (Washington, D.C.: Georgetown University Press, 2008).

11 Berkowitz, 231–32.

12 Arthur Herman, Freedom’s Forge: How American Business Produced Victory in World War II (New York: Random House, 2012).

13 Robert Bebber, interview with Dr. Peter W. Singer, January 16, 2020.

14 Kat Devlin, Laura Silver, and Christine Huang, “U.S. Views of China Increasingly Negative Amid Coronavirus Outbreak,” Pew Research Center, April 2020.

Featured Image: Fighter aircraft under construction at the Bell Aircraft Corporation plant at Wheatfield, New York. (U.S. National Archives)