
The Surface Navy: Still in Search of Tactics

By Captain Christopher H. Johnson

A month before deployment, the captain of an Oliver Hazard Perry (FFG-7)-class frigate sits quietly in his cabin. With the long process of pre-deployment inspections over and the threatening waters of the Persian Gulf a few short weeks ahead, now, more than ever before, he considers his three line department heads in the context of their impending role as Tactical Action Officers (TAOs) for the ship when it arrives in the Northern Persian Gulf. To this point, these young officers have been measured by their ability to juggle priorities, pass inspections, sustain planned maintenance at acceptable accomplishment levels, keep the squadron staff happy, and perform a number of other administrative tasks. Now they must become tacticians, and a fleeting sense of despair crosses the captain’s mind.

He recalls when he was a lieutenant junior grade serving on a destroyer in the Gulf of Tonkin, and he remembers the officers who taught him. There were operators who could sense what was happening around them with a gut instinct that distinguished them as mariners and naval officers. For a moment, he thinks about his TAOs and realizes that they are different. Yes, the world was simpler when the most complicated weapon on board was a 5-inch/38-caliber gun, but despite new weapons of enormous capability and complexity, today’s officer is better at paperwork than he is at tactics and operations.

The captain also recalls a discouraging afternoon three months ago when the operations officer and three petty officers brought to his cabin every tactical memorandum, tactical note, and Naval Warfare Publication on the ship, as references for new battle orders. Surely, within the tactics library of his ship, there would be the pearls of wisdom he needed for operations in the Persian Gulf.

Instead, he found an endless succession of publications that often dealt with obscure tactical problems and were generally out-of-date, long-winded, rarely insightful, and almost always too complex. As the petty officers packed up the publications and departed, the captain wondered why, after all this effort in tactics, there was so much paper with so little knowledge to show for it.

Now, the same question haunts him again. “I must find a way to make these department heads into tacticians,” he says aloud. “But what are tactics, and how do I prepare a tactician?” His thoughts are interrupted by a knock on his door. “Sorry to bother you, Captain,” booms the executive officer, “but we’ve got to talk about Seaman Jackson and his family problems.”

This captain’s plight is not unusual, but it is dismaying. Where have tactics gone in the modern surface Navy? Perhaps officers are too focused on being managers and administrators, and maybe the emphasis on engineering has diverted them from tactical thinking. Maybe we have accepted the contention that, in an era of overwhelming technical complexity, everything must be reduced to a lifeless, static procedure to be understood. Whatever the cause, the loss of tactics – and the subsequent appearance of hundreds of publications which masquerade as tactics – is a problem that reaches the very heart of our profession. Tactics must be resurrected.

Brilliant success on the battlefield is the object of command as practiced by Spruance, Nimitz, and other great naval tacticians of the past. Such success is not simply the result of perfect methodology, but rather it is rooted in a hierarchy of preparation and thought. First, success requires knowledge of the technical environment in which naval operations take place. Second, it requires specific procedures to guide the operation of combat systems. Third, and most important, it requires tactics.

Tactics build on knowledge and procedure, but go far beyond either. Contrary to the common definition, tactics are not like check-off lists, diagrams, or procedural doctrine. Tactics are the educated process of thought by which a battlefield commander adapts procedure, knowledge, and insight to the situation at hand and molds a winning plan. Tactics, therefore, are characterized by responsive, analytical, and individualized solutions to real-life circumstances. Tactical ideas or procedures may be found in books, publications, or manuals, but tactics rely on ingenuity, instinct, and innovation. Tactics are never a single answer to a generic tactical problem, but a continuous effort to find the right way to undermine, exploit, and beat the enemy.

In the tactician’s mind, the heart of this tactics thought process is his continuous, individual, and deeply personal struggle with an assortment of intangible measurements, including his vision of the mission at hand, its bounds, rules of engagement, sequences, priorities, and urgencies; analysis of the critical capabilities and limitations of own force; experience, courage, and determination; his commitment to the safety of the ship and personnel; an evaluation of the enemy’s frame of mind, liabilities, strength, and mission; and an appreciation of the opportunities provided by geography, environment, or political conditions.

The process has an immediate and an ultimate product. The immediate object of tactics is a real-time vision, or sense of the tactical balance sheet. What are the key opportunities and critical liabilities inherent in the situation? Where are we strong, and where is the enemy weak? What actions will confuse the enemy? How can friendly forces further undermine enemy strength? How can the enemy’s confidence be shaken?

This analysis leads to the ultimate object of tactics: a course of action, springing from inspiration and evaluation of all factors, which will win with minimal cost. To win while taking few losses defines brilliant action and is the indisputable purpose of tactics, inherent in all the greatest naval victories in history. Our country wants us to act boldly and bring our sailors home safely. Sadly, the tactics underpinning this goal have come to be procedures for pitting one weapon against another, rather than a thought process for winning.

It is useful at this point to contrast the tactician with today’s officer who is more accustomed to the role of technician. Technicians live in a world of black and white, focusing exclusively on mechanics and measurements; they are often caught up in an engineering-oriented ethic which asserts that there is a single, discrete solution for every situation. To the technician, combat is a toe-to-toe struggle where the most perfectly designed and operated system wins. Conversely, the tactician sees this technical struggle as essential but subordinate to other vital issues. To him, the engagement is a series of chess moves where the best thinker, the most accomplished facilitator of quick, decisive, and perfectly timed action will win. To the technician, the victory at Midway was fortune; to the tactician, Midway was brilliant tactical instinct reaping its rightful reward.

The tactician also is distinguished from the technician by the breadth of innovative weapons that he brings to bear on the tactical problem. Modern technician-tacticians think in terms of missiles, guns, torpedoes, and mines. These are valid pieces of the tactical problem, but the real tactician also thinks in terms of influences and effects far beyond ordnance. The tactician must consider the aspects of positioning and timing, secrecy, surprise, deception and confusion, demonstration and intimidation, and command and control.

Tacticians strive to anticipate; to be constantly ahead of the enemy; to occupy the high ground; to use land or water conditions to advantage; and never to allow the enemy an open, unobscured, or unambiguous shot. They seek ways to strike first and to preempt the enemy at every juncture. They use weapons envelopes to advantage; they position friendly forces so they can always concentrate fire and support one another while forcing the enemy to scatter his attack. Consider some of the following facets of tactics:

There is nothing as fundamental to warfare as secrecy. The unalerted enemy is an ill-prepared enemy. Without warning, he cannot ready, deploy, instruct, maneuver, position, or effectively command his forces.

Surprise is another quintessential ingredient. The Trojan horse, Washington’s crossing of the Delaware, Pearl Harbor, Midway, Grenada, Libya, and Desert Storm were all overwhelming victories because of surprise, a navy’s greatest force multiplier. Not technologically demanding, not requiring budget in the Future Years Defense Plan, and not necessitating field changes, this aspect of tactics consistently achieves victory with minimum loss.

For deception and confusion, the tactician uses the natural cloak of the sea to misdirect, blind, disrupt, or coax an adversary into apathy. The opportunities are endless, limited only by imagination. Merchant shipping lanes, land, emission control, turn-count masking, zig-zag patterns, and mock radio communications all offer opportunities to keep the enemy off-guard, to delay or unravel his tactical plan.

For years, U.S. aircraft carriers always intercepted foreign aircraft at long ranges from the carrier. Such intercepts conveyed the unmistakable message that aircraft could not approach in wartime and hope to survive. It is a superpower’s privilege to sap an enemy’s will and confidence by repeatedly demonstrating how surely and decisively he can be detected and destroyed. A true tactician showcases his abilities in peacetime as a continual, effectual reminder of his inherent superiority.

Perfectly anticipated, precisely controlled action is another mark of the tactician. He collects the right pieces of information to predict the enemy’s next move, and he consistently develops the ability to act more quickly and with more precision than his opponent.

Commanding officers and their key subordinates must embrace these aspects of tactics. Regrettably, the technician has generally eclipsed the tactician, especially in the case of TAOs, who sit on the crease between two powerful interpretations of their role. On one hand, it is fashionable to view the TAO as an automaton whose role is to react to threats with machine-like, button-pushing precision. On the other hand, the TAO’s real purpose is to be the intelligent being who measures the evolving situation and takes every conceivable step to win and keep the ship safe.

If the TAO’s purpose is simply to direct scripted action, then the technician will suffice; if the TAO is there to guide action intelligently and to find resourceful ways to win, however, he must be a tactician first and foremost. With the technician, the CO enters the combat information center (CIC) and sees a TAO bent over the scope, immersed in the mechanics. With the tactician, the CO should see an officer rising above the details with every option in mind, ready to act in ways that are both sure and insightfully adapted to the situation.

Is it possible that modern technology has made tactics irrelevant? Are today’s operations so linked to technical issues or foreordained by combat system mechanics that there is no place for tactics? No, the opposite is true. The advent of modern technology makes greater, not lesser, demands for superb tacticians.

Consider a single navy ship on a critical mission that will take it through a strait guarded by an adversary. On the west side of the strait at least one conventional submarine is on patrol; on the east shore are truck-mounted, anti-ship cruise missiles. In these days of modern weapons, this scenario may seem like a simple matchup of combat systems. Torpedoes, helicopters, and sonars against the submarine; missiles, guns, and electronic warfare against the cruise missiles. The prudent CO will be assured that these weapons are ready and that the procedures for using them are optimized, in place, and practiced.

The tactician, of course, will go one enormous step farther. He will employ tactics. He will measure the situation carefully, looking for opportunities to exploit. Should he transmit on electronic sensors or remain passive? Should he challenge the enemy or avoid him? In what ways should he confuse, delay, deceive, or surprise the enemy? What pieces of tactical information does he require to anticipate the enemy’s moves, and exactly how will he control his ship’s weapons to assure lightning-quick yet accurate responses?

On the west side of the strait, this tactician will probably “attack” the submarine by using merchant shipping lanes, darkness, and darken ship to hide his approach. He will use speed and maneuver to disrupt any track a submarine might gain. He will take his ship through shallow water to confound and outmaneuver the submarine. He will cover his close-in weapon system mount with gray herculite, remove white windscreens, and paint out distinctive white hull numbers to take away any visual cue of his identity. Finally, he will use helicopters to search for periscopes and masts and drive the submarine to depth.

On the other side of the strait, he might avoid the enemy’s attempts to find him by mixing with merchants or by land shadowing; he could shut down his electronic emissions to prevent identification and classification; he might use oil platforms, or other natural obstructions, as shields against an attack; conceivably communications jamming or deception might be used to misdirect or confuse the enemy’s targeting reports.

In this example the tactician dramatically alters the battle equation. More than simply preparing his ship to repel any attack, through tactics he shields his ship from even becoming a target. He achieves the successful transit without confrontation, without having to pit one weapon against another. He has in essence opened up a panorama of tactical options that improves the probability of success and significantly reduces the levels of risk.

Tactics free commanders from slavish adherence to preconceived or formalized procedures. With tactics, the logistics or amphibious ship is not inherently defenseless in these straits, nor should the Aegis cruiser feel compelled by its mystique or its combat system to transit the straits openly, daring the enemy to react.

In this hypothetical situation, as in virtually all offensive and defensive tactical scenarios, the tactician opens a larger sphere of thought and action – and he guarantees success more assuredly than either the warrior or the technician.

Tactics are more vital now to the U.S. Navy than at any time in the past 20 years. Operations in the littoral areas of the world will put navy ships at great risk. At the edge of the sea, detection of modern antiship cruise missiles, mines, and conventional submarines will be difficult, and reaction times will be compressed. Defense in depth, the doctrine of the past, will be impossible so close to shore, and the dwindling number of carriers will reduce the combat power that has so frequently been just over the horizon. Survival will rest increasingly, therefore, on ingenuity, secrecy, deception, speed, and positioning.

Tactics must return to the forefront as a critical element of our profession. Tactics are our highest calling, and ought to be the focus of preparation for our officers, but today they are not. Tactical savvy is no longer our strong point; we have largely become a Navy of technicians and managers instead of tacticians. Reviving tactical proficiency does not require more money, more people, or a new doctrine command. It requires a dedicated, well-organized, and redirected return to the basics of knowledge, procedure, and tactics.

While naval tactics organizations have long pursued tactical knowledge and procedures, their search has been flawed in many significant ways. Efforts routinely mistake information for knowledge and persistently fail to extract from our tactical and technical experience the penetrating insights that support tactical decision making. To a great extent, our tactical procedures, as embodied in current tactical memorandums, tactical notes, and doctrines, lack coherence and essence. They are like having 50 street maps for various American cities without a map of the interstate system to describe how to get from one to another.

They are often unexecutable in a practical scenario and are frequently too complex to be internalized and fully understood by the lieutenant TAOs who must execute them. They fill a vault with their volume yet provide so little satisfaction to the captain. Despite decades of commitment and work, much remains to be done and undone in the area of communicating knowledge and designing procedure.

These well-intentioned efforts, though, are flawed not by lack of dedication but rather by lack of definition and expectation. We are a Navy largely focused on maintenance and are too comfortable with technical details, parameters, and procedures. Accordingly, we are generally satisfied with descriptions of how a combat system operates technically instead of insisting on knowing how a system performs tactically.

We understand, for example, how various modes of the SPS-49 affect the moving target indicator circuits or make the antenna scan faster, but we do not see the necessity of knowing explicitly how these modes change the radar’s performance against an incoming missile. We know in detail how much power the radar should have without a clear notion of how much power is enough to see targets of interest at suitable ranges. We have failed to extract the concise and meaningful insights required by tacticians to make correct decisions on the battlefield.

In the area of tactical procedures, the story is similar. Efforts at developing tactical procedures, apparently unaware of the tactician’s ultimate role in defining tactics, often overstep the logical bounds of procedure, resulting in procedures that are too long, too intricate, and too numerous to be absorbed and understood by operators in the fleet. Moreover, the procedures fall out of date quickly as conditions, assumptions, and intelligence estimates change.

Finally, development and support of the tactics thinking process are even more adrift. As a rule we do not understand the nature of tactics; we do not perceive the essence. We neither nurture this tactical art in our careers nor explain or support it in “tactics” publications. Seniors do not groom it in juniors and frequently fail to employ sound tactics themselves.

The resurrection of tactics – today buried in procedure and cloaked by fundamental misunderstandings of their essential nature – now requires an extraordinary effort. It is essential that the surface community find the few real tacticians in its ranks – not the ones who claim to be tacticians because of their total recall of threat matrices or their superb dexterity on combat system consoles – but the innovative deep thinkers of our time.

These tacticians must be brought together and given a mandate to redesign the entire structure of our tactics effort. They must identify the essential pieces of tactical knowledge which truly support tactical decision making, and they must design a compact and useful system for conveying that information to the fleet. They must sift through the vaults of current tactical publications and identify the quintessential procedures that are the bedrock of effective tactical action. Then, they must distill them into knowable, concise, and simple guidance.

Finally, the core of these tacticians must form a tactics institute for the surface Navy. The institute must become a think-tank charged with exploring the science of tactical operations. It must investigate the envelope of tactical thought to include advancing new concepts of data fusion, analysis, command and control, maneuvering, targeting, positioning, deception, surprise, secrecy, mutual support, and teamwork. Through this institute the surface Navy can begin to ensure that the art of tactics formulation is nurtured in its officers, that suitable curricula for officers in the surface warfare training continuum are developed and supported, and that the commanding officer’s role as a bona fide tactician is established and solidified within the fabric of surface warfare. If we truly want to preserve tactics and tacticians from extinction, we must take radical steps and take them quickly.

As the frigate pulls away from the pier, the captain waves to his wife and family. The deployment has begun, but he agonizes because he is no closer to building tacticians than he was three weeks ago. He sees before him young officers who have been “methodologized,” consumed by the mechanical and procedural tasks which are properly the domain of senior enlisted men. He tries to make them think on their own, to make decisions, to have a vision, but it is slow progress.

He wonders, “Have we gone too far? Can we turn back the tide of administrators and managers and revive tacticians?”

His thoughts are interrupted by a knock on the door. “Trouble, Captain,” says the XO. “We forgot to send in our monthly retention report.”


This article originally appeared in the September 1993 issue of USNI Proceedings; read it in its original form here. Reprinted from U.S. Naval Institute Proceedings magazine with permission; Copyright © U.S. Naval Institute/www.usni.org.


Captain Johnson is the program manager for the Advanced Research Projects Agency’s Maritime Systems Technology Office. His sea duty includes tours as executive officer of USS Ramsey (FFG-2) and commanding officer of USS Vandegrift (FFG-48), where he served as antiair warfare coordinator for the Persian Gulf during the Iraqi invasion of Kuwait. His last shore assignment was Director, Prospective Commanding Officer Course at the Surface Warfare School, Newport, Rhode Island.

Featured Image: PACIFIC OCEAN (June 25, 2018) The guided-missile destroyer USS Dewey (DDG 105) transits the Pacific Ocean while underway conducting operations in the U.S. 3rd Fleet area of operations. (U.S. Navy photo by Mass Communication Specialist 2nd Class Devin M. Langer/Released)

Did We Learn Anything From That Exercise? Could We?

The following article originally appeared in the July-August 1982 edition of The Naval War College Review and is republished with permission.

By Frederick Thompson

Exercises are a source of information on tactics, force capabilities, scenario outcomes, and hardware systems effectiveness. But they distort battle operations in ways which prevent the immediate application of their results to real world situations. For example, because they are artificial, the force capabilities demonstrated need not exactly portray capabilities in actual battle. Further, our analysis process is imperfect. Our data can be incomplete or erroneous, the judgments we make during reconstruction and data refinement may be contentious, and our arguments linking the evidence to our conclusions may be incorrect. Still, exercises are among the most realistic operations we conduct. Our investigations of what really happened in an exercise yield valuable insights into problems, into solutions, and into promising tactical ideas.

The Nature of Exercises

How do naval exercises differ from real battles? Clearly the purpose of each is different. Exercises are opportunities for military forces to learn how to win real battles. During exercises, emphasis is on training people and units in various aspects of warfare, practicing tactics and procedures, and coordinating different force elements in complex operations. Ideally, the exercise operations and experiences would be very much like participating in a real battle. For obvious reasons, exercises fall short of this ideal, and thereby distort battle operations. These distortions are called “exercise artificialities.” An understanding of the different kinds of exercise artificialities is essential to understanding exercise analysis results. The exercise artificialities fall loosely into three classes: those which stem from the process of simulating battle engagements; those which stem from pursuit of a specific exercise goal; and those stemming from gamesmanship by the players.

Engagement Simulation. Obviously real ordnance is not used in an exercise. As a result, judging the accuracy of weapon delivery and targeting, force attrition, and damage assessment become problems in an exercise. If a real SAM is fired at an incoming air target, the target is either destroyed or it is not. There is no corresponding easy solution to the engagement in an exercise. Somehow, the accuracy of the fire control solution must be judged, and an umpire must determine whether the warhead detonates and the degree of destruction it causes.

What is the impact of this simulation on battle realism? Suppose the SAM is judged a hit and the incoming target destroyed. The incoming target will not disappear from radar screens. It may, in fact, continue to fly its profile (since it won’t know it’s been destroyed). So radar operators will continue to track it and the target will continue to clutter the air picture. A cluttered air picture naturally consumes more time of operators and decision makers. Now suppose the SAM misses the incoming target. If time permitted, the SAM ship would fire again, thereby depleting SAM inventories. However, the judgment process is not quick enough to give the SAM ship feedback to make a realistic second firing. In fact, AAW engagement resolution may not occur until the post-exercise analysis.

Now suppose the SAM misses the incoming missile, but the missile hits a surface combatant. Then the problem is to figure out how much damage was done to the combatant. An umpire will usually roll dice to determine damage probabilistically; a real explosion wreaks destruction instantaneously. As a result, there will be some delay in determining damage, and even then that damage may be unrealistic.

It is easy to see how the flow of exercise events may become distorted, given the delay between engagement and engagement resolution during an exercise. Other examples of distortion abound. For example, it may happen that a tactical air strike is launched to take out an opposing surface group armed with long-range antiship missiles, but only after those missiles have already dealt a crippling blow to the CV from which the air strike comes. In another case, aircraft will recover on board CVs carrying simulated damage just as they do on board CVs still fully operational. In general, it has so far been impossible to effect in exercises the real-time force attrition of an actual battle so that battle flow continues to be realistic after the first shots are fired.

Such artificialities make some aspects of the exercise battle problem more difficult than in a real battle; others make it less difficult. Because destroyed air targets don’t disappear from radar screens, the air picture becomes more complicated. On the other hand, a SAM ship will seldom expend more than two SAMs on a single target and therefore with a given inventory can engage more incoming missiles than she would be able to in reality. Further, the entire AAW force remains intact during a raid (as does the raid usually) as opposed to suffering progressive attrition and thereby having to fight with less as the raid progresses. It is unclear exactly what the net effect of these artificialities is on important matters like the fraction of incoming targets effectively engaged.

Safety restrictions also distort exercise operations and events. For example, the separation of opposing submarines into nonoverlapping depth bands affects both active and passive sonar detection ranges, especially in submarine vs. submarine engagements. Here realism is sacrificed for a reduced probability of a collision. For the same sorts of reasons, aircraft simulating antiship missiles stay above a fixed altitude in the vicinity of the CV, unless they have prior approval from the CV air controller, which distorts the fidelity of missile profiles. In other examples, surface surveillance aircraft may not use flares to aid night time identification. Tactical air strike ranges may be reduced to give the strike aircraft an extra margin of safety in fuel load. The degree of battle group emission control, especially with regard to CV air control communications and navigation radars, is determined partially by safety consideration. Quietness is often sacrificed in favor of safety.

The point is that safety is always a concern in an exercise, whereas in an actual battle the operators would probably push their platforms to the prudent limits of their capabilities. These safety restrictions impart another artificiality to exercise operations. Constraining the range of Blue’s operational options makes his problem more difficult than the real-world battle. Constraining Orange’s operational options makes Orange’s problem harder, and hence Blue’s problem easier, than in the real world.

Another source of distortion is the use of our own forces as the opposition. US naval ships, aircraft, and submarines differ from those of potential enemies. It is probable that enemy antiship missiles can be launched from further away, fly faster, and present a more difficult profile than can be simulated by manned aircraft in an exercise. The simulated antiship missile in an exercise thus presents an easier target in this regard. Customarily, Orange surveillance has fewer platforms with less on-station time than do some potential enemies, say the Soviets. In ASW, there may be differences between US submarine noise levels and potential enemy submarine noise levels. All these differences in sensors and weapon systems distort detection, identification, and engagement in exercises and thereby make aspects of exercise operations artificial.

A more subtle distortion occurs when US military officers are cast in the role of enemy decision makers. The US officers are steeped in US naval doctrine, tactics, and operating procedures. It is no doubt difficult to set aside these mind-sets and operate according to enemy doctrine, tactics, and procedures. Add to this the fact that one has only a perception of enemy doctrine, tactics, and procedures to work from, and the operating differences between an actual enemy force and a simulated enemy force become more disparate. With this sort of distortion, it is difficult to identify exactly how the exercise operations will be different from those they try to simulate. But the distortions are real, and are at work throughout the exercises.

Exercise Scenarios. The goal of an exercise can drive the nature of the exercise operations; this is a familiar occurrence in all fleets. The degree of distortion depends upon the nature of the goal. Consider two examples.

First, consider an exercise conducted for the express purpose of examining a particular system’s performance in coordinated operations. It is likely to involve a small patch of ocean, repeated trials in a carefully designed scenario, dictated tactics, and most importantly a problem significantly simpler than that encountered in a real battle. At best, the battle problem in this controlled exercise will be a subtask from a larger battle problem. Participants know the bounds of the problem and they can concentrate all of their attention and resources on solving it. Now such exercises are extremely valuable, both in providing a training opportunity to participants and in discovering more about the system in question. But the exercise results apply only in the limited scenario which was employed; in this sense the goal of the exercise distorts the nature of the operations. Exercise operations in these small, canned, controlled exercises are artificially simple as compared to those in a real battle.

Next consider a large, multi-threat free-play exercise which is conducted partially for training, perhaps the most realistic type of exercise conducted. The exercise area will still have boundaries but will probably include a vast part of the ocean. Commercial aircraft traffic and shipping may well be heavier than would be the case in a hot war environment. As the exercise unfolds there will be a tendency for the controlling authority to orchestrate interactions. By doing this, the options are constrained unrealistically for both sides. Blue or Orange may not be able to pick the time and place to fight the battle. Both sides know that a simulated battle will be fought, and higher authority may hasten the interaction so that the participants can fight longer. Clearly this is a case where trade-offs must be made and it is important to understand this when exercise results are being interpreted.

In both kinds of exercises, artificialities are necessary if the goals are to be met. Partly as a result, the operations are not exact duplicates of those likely to occur in the same scenario in a real battle. Aside from recognizing that forced interactions distort an exercise battle, little work has been done to learn how these distortions affect the resulting operations.

Gamesmanship and Information. A separate class of artificialities arises when exercise participants are able to exploit the rules of play. Consider a transiting battle group. It may be possible to sail so close to the exercise area boundary that from some directions the opponent could attack only from outside the area, which is prohibited. Thus, the battle group would reduce the potential threat axes and could concentrate its forces only along axes within the operating area. Clearly, the tactical reasoning which leads to such a decision is valuable training for the participants, and exploiting natural phenomena such as water depth and island masking is a valid tactic. But exploiting an exercise boundary to make the tactical problem easier distorts operations in the scenario and is a kind of gamesmanship.

Consider another situation. In exercises, both sides usually know the opposition’s exact order of battle. So participants have more information than they are likely to have in a real battle, and that information is known to be reliable. Blue also knows the operating capabilities of the ships simulating the enemy, and may be able to deduce associated operating constraints from them. For example, he knows more about U.S. submarine noise levels and operating procedures than he does about likely opposition submarines. He also knows how many Orange submarines are actually participating in the exercise, and as he engages them, he may be able to estimate the size of the remaining threat by examining time and distance factors.

Classes of Exercise Artificiality

I. Battle Simulation Artificiality

  • No Real Ordnance
  • Safety Restrictions on Operations
  • Simulated Opposition Platforms
  • Imperfect Portrayal of Enemy Doctrine, Tactics, and Procedures

II. Scenario Artificiality

  • Forced Interaction
  • Focus on Small Piece of Battle Problem

III. Gamesmanship and Information

  • Exact Knowledge of Enemy OOB
  • Exact Knowledge of Enemy Platform Capabilities
  • Exploitation of Exercise Rules
  • Tactical Information Feedback Imperfections

A final exercise artificiality is the poor information flow from battle feedback. With real ordnance, undetected antiship missiles hit targets, explode, and thereby let the force know it is under attack. This does not occur in exercises. A force may never know it has been located and attacked. As a result, the force may continue to restrict air-search radar usage when, in a real battle, all radars would have been lit off shortly after the first explosion. The force may never establish a maximum readiness posture, whereas in a real battle there would have been plenty of tactical information cueing the force that it is time to relax emission control. This kind of exercise artificiality affects both the engagement results and the flow of battle events.

In spite of these artificialities, exercises still provide perhaps the only source of operational information from an environment which even attempts to approximate reality. Though artificial in many ways, exercises on the whole are about as realistic as one can make them, short of staging a real war. This is especially true in the case of large, multi-threat, free-play fleet exercises. The only time a battle group ever operates against opposition may be during these exercises. So for lack of something better, exercises become a most important source of information.

The Nature of the Analysis Process

The analytical conclusions drawn from examining exercise operations are the output of a sequence of activities collectively called the exercise analysis process. While there is only one truly analytical step in the sequence, it has become common to refer to the entire sequence as an analysis process. The individual steps are (1) data collection, (2) reconstruction, (3) data reduction and organization, (4) analysis, and (5) reporting. It is of immense value to understand how the body of evidence supporting exercise results and conclusions is developed. We will examine the activities which go on in each step of a typical large, multi-threat, free-play fleet exercise, and end with some comments on how the analyses of other kinds of exercises may differ.

Data Collection. The first step is to collect data on the events that occur during an exercise. Think of the exercise itself as a process. All the people in the exercise make decisions and operate equipment, and so create a sequence of events. The data which are collected are like measurements of certain characteristics of the process, taken at many different times during the exercise. The data are of several different types.

One type is keyed to particular events which occur during the exercise: a detection of an incoming missile, the order to take a target under fire, the act of raising a periscope on a submarine, a change in course, and so on. This sort of data is usually recorded in a log along with the time of its occurrence. Another kind of data is the perceptions of various participants during the exercise. These data are one person’s view of the state of affairs at one point in time. The individual could be an OTC, a pilot, or a civilian analyst observer. Another type of data is the evaluative report of a major participant, usually filed after the exercise is over. These provide the opinions of key participants on the exercise, on a particular operation and what went wrong, on deficiencies, etc. Finally, the memories of participants and observers also are a source of data. Their recollections of what went on during a particularly important period of the exercise may often be valuable.

There are two kinds of imperfections attendant to all this. The first is imperfections in the data collected: they don’t reflect accurately what they were intended to reflect. That is, some data elements are erroneous. The second imperfection stems from having taken measurements only at discrete points in time, and having only partial control over the points in time for which there will be data. A commander in the press of fighting a battle may not have the time to record an event, or his rationale for a crucial decision. An observer may likewise miss an important oral exchange of information or an important order. After the exercise is over, memories may fade and recollections become hazy. So the record of what went on during the exercise, the raw data, is imperfect.

Once most of the raw recorded data are gathered in one place, reconstruction begins. In general, gross reconstruction provides two products: geographical tracks of ships and aircraft over time, and a chronology of important exercise events: time and place of air raids, submarine attacks, force disposition changes, deception plan executions and so forth. Tentative identification of important time periods is made at this time. These periods may become the object of finer grained reconstruction later as new questions emerge which the gross reconstruction is unable to answer. The table below lists the primary products of gross reconstruction. The major event chronology includes the main tactical decisions such as shifts in operating procedures, shifts in courses of action, executions of planned courses of action, and all others which might have affected what went on.

Reconstruction is arguably the most important step in the exercise analysis. Many judgments are made at this level of detail which affect both the overall picture of what went on during the exercise and the validity of the results and conclusions. It is much the same kind of laboratory problem scientists face in trying to construct a database from a long, costly series of experiments. The basic judgments concern resolving conflicts among the data, identifying errors in data entries, and interpreting incomplete data. Each small judgment seems minor, but collectively these judgments have a profound effect on the picture of exercise operations which emerges. The meticulous sifting which is required demands people knowledgeable in each area of naval operations as well as people possessed of a healthy measure of common sense. Judgments made during reconstruction permeate the remainder of the exercise analysis process, and they constitute yet another way for errors to enter the process.

Data Reduction and Organization. The line between reconstruction and data reduction and organization is blurred. At some point, most of the reconstruction is done and summary tables of information begin to emerge. In anti-air warfare, for example, tables will show the time of the air raid, raid composition, number detected, percent effectively engaged, and by whom. An antisubmarine warfare summary table might show contacts, by whom detected, validity of detection, attack criteria achievement, and validity of any attacks conducted. Other products based upon the reconstruction are tailored to the specific analysis objective or the specific question of interest. In command and control, for example, a detailed history of the flow of particular bits of information from their inception to their receipt by a weapon system operator might be constructed. In surface warfare, the exact sequences of detections and weapon firings are other examples.

Two important acts occur during this step. First, certain data are selected as being more useful than other data, and the individual bits are aggregated. Second, the aggregate data are organized into summary presentations (tables, figures, graphs, and so on) so that relations among the data can be examined. Obviously, the way in which data are aggregated involves judgments as to what data to include and what to exclude. These choices, and the selection of the form of the presentation itself, involve important judgments. As before, the judgments comprise another potential source of error.

Analysis. Analysis is the activity of testing hypotheses against the body of evidence, constructing new hypotheses, and eventually rejecting some and accepting others according to the rules of logic. While reconstructing, reducing, and organizing data, analysts begin to identify problem areas, speculate upon where answers to questions might lie, and formulate a first set of hypotheses concerning exercise operations. It is now time to examine systematically the body of evidence to ascertain whether the problems are real, whether answers to questions can indeed be constructed, and whether the evidence confirms or refutes the hypotheses. Arguments must be constructed from the evidence, i.e., from the summary presentations already completed, from others especially designed for the hypothesis in question, or from the raw data itself. The construction of such logical arguments is the most time-consuming step in the process and the most profitable. Yet the pressure from consumers for quick results, a justifiable desire, may severely cut down on the time available. In such situations, hypotheses may emerge from this step as apparently proven results and conclusions, without the benefit of close scrutiny. This is an all too common occurrence.

One kind of shortcut is to examine only evidence which tends to confirm a hypothesis. The analyst uses the time he has to construct as convincing an argument as he can in support of a contention; given additional time, an equally persuasive argument refuting the contention might have been developed. Errors may also enter the analysis in the course of judging the relative strength of opposing bodies of evidence. Where such judgments are made, conventional wisdom would have both bodies of evidence appear along with an argument why one body seems stronger. In these ways the analysis step may introduce additional uncertainty into the analysis process.

Reporting. The final step in the analysis process is reporting. It is during this step that analysts record the fruits of their analytical labors. There are four basic categories of reports, some with official standing, some without. It is worth defining them, both to give some idea of the amount of analysis which underlies the results and to present the reports most likely to be encountered.

One kind of exercise report is for a higher-level commander. It details for him those exercise objectives which were met and those which were not. It is a post-operation report to a superior. Customarily it will describe training objectives achieved (i.e., did the assigned forces complete the designated training evolutions?), the resulting increase in readiness ratings for individual units, and an overview of exercise play and events. There is little if any analysis of exercise events to learn of problem areas, tactical innovations, or warfighting capabilities.

Another kind of exercise report is a formal documentation of the product of the analysis process. It concentrates on the flow of battle events in the exercise instead of the “training events.” These reports may or may not include mention of training objectives achieved and changes in unit readiness. A report might begin with a narrative description of battle events and results for different warfare areas. Summary tables, arguments confirming or refuting hypotheses, and speculations about problems needing further investigation form the bulk of the warfare sections. Conclusions and supporting rationale in the form of evidence from exercise operations may also be present. Bear in mind that the analysis process preceding the report may have been incomplete. In this case the report will include the narrative and customarily a large collection of reconstruction data and summary tables, but will fall short of marshaling evidence into arguments for, or against, hypotheses. Such reports are really record documents of raw and processed exercise data.

It can be difficult to distinguish between these two types of report if the latter also includes items called “conclusions.” Beware if there is an absence of argumentation, or if great leaps of faith are necessary for the arguments to be valid. Sometimes one gets the reconstruction plus analysis, other times just the reconstruction.

Units participating in exercises often submit their own message reports, called “Post-ex Reports” or “Commander’s Evaluation.” These reports seldom include any analytical results or conclusions. They do venture the unit commander’s professional opinions on exercise events and operations. These opinions, tempered by years of operational experience, as well as firsthand operational experience during the exercise, are a valuable source of information. They provide the perspective of a particular player on perceived problems, suspected causes, reasons for tactical decisions, and possibly even some tentative conclusions. Statements in these reports should be tested against the data for confirmation. Sometimes the messages also contain statements entitled “Lessons Learned.” Since such judgments are based upon the limited perspective of one unit, these lessons learned require additional verification, too. The unit CO probably will base this report on some of the data collected by his own unit. So the CO’s post-exercise report is a view of the exercise based upon a partial reconstruction using one unit’s data.

Finally, the Navy Tactical Development and Evaluation (TACD&E) program sanctions reports of exercise results and analyses as a formal Lessons Learned. NWP-0 defines a Lessons Learned as “…statements based on observation, experience, or analysis which indicates the state of present or proposed tactics.” Note that a Lessons Learned is specific to a tactic or group of tactics. Evidence developed in an exercise often provides the analytical basis for such statements. NWP-0 goes on to state that “…the most useful Lessons Learned are brief case studies which tell what happened and why certain key outcomes resulted.” Exercise operations can often provide the “cases” and exercise analysis can provide the “why” certain things happened. Again it is necessary to examine carefully the argumentation in Lessons Learned, to be sure the analysis process applied to the individual cases hasn’t been curtailed after the reduction and organization step.

Variations. The analysis process for a small, specialized exercise has a slightly different manifestation from that in a large, free-play fleet exercise. Consider an exercise designed to test tactics for the employment of a new sonar and to train units to execute those tactics. It might involve three or four ships outfitted with the sonar pitted against a submarine in a controlled scenario. If there is high interest in innovative ways to employ the system tactically, data collection might be better than average, since many hands can be freed from other warfare responsibilities for data collection. The operating area might be an instrumented range on which very precise ship tracks can be recorded automatically. If the planning is thorough, the design of the exercise (the particular pattern of repeated engagements, with careful varying of each important factor) allows just the right data to be collected, enabling analysts to sort among the different tactics. The data collected would then leave fewer holes relative to the exact questions of interest. So one might end up with fewer errors in the data and, simultaneously, less missing data.

The quality of reconstruction will still depend on the skill of the reconstructors. With only a few ships to worry about and good data, however, not many people are required to do a good job; the job is small. If the exercise was designed carefully to shed light on specific questions, data reduction and organization work smoothly toward pre-identified goals: specific summary tables, graphs, or figures. In fact, from the analytical viewpoint, the whole exercise may as well have been conducted to generate reliable numbers to go into the tables and graphs. The analysis step is more likely to proceed smoothly too, since the evidence has been designed specifically to answer the questions of interest.

The analysis process of other exercises will likely fall between these two extremes. The degree to which exercise play is controlled and constrained by the operating area’s size and by various units’ tactical autonomy will determine the ease with which the analysts and data collectors can finish their work. Normally, the analysis is best in the small, controlled exercises designed to answer specific questions or to train units in specific tactics. As the exercise grows in size and more free-play is allowed, it is harder to collect data to answer the host of questions which may become of interest.

Limitations on the Use of Exercise Analysis

The reason for analyzing exercise operations is to learn from them. One learns about tactics, readiness levels of units and groups, hardware operational capabilities, and advantages or disadvantages we may face in certain scenarios. Let us see how exercise artificialities and an imperfect analysis process limit what we can learn.

Hardware operational capabilities can be dispensed with quickly. Special exercises are designed to measure how closely systems meet design specifications. The measures are engineering quantities such as watts per megahertz, time delay in a switching mechanism, sensitivity, and so on. As the human element enters either as the operator of the equipment or in a decision to use the system in a particular way, one moves into the realm of tactics.

Warfare Capabilities. One problem in learning about warfare capabilities from exercises lies in translating the exercise results into those one might expect in an actual battle. Setting aside the measurement errors which may crop up in the analysis process, consider the exercise artificialities. Suppose a battle group successfully engages 70 percent of the incoming air targets. This does not mean that the force would successfully engage 70 percent of an air attack in a real battle. Assuming identical scenarios and use of the same tactics, some artificialities make the exercise problem easier, and others harder, than the real-world battle problem. There is no known accurate way of adjusting for these artificialities; in fact, only recently has it been generally accepted that the artificialities both help and hinder. A second problem is the lack of a baseline expected performance level for given forces in a given scenario. A baseline would describe how well one expected a specific force to do, on average, against a given opposition in a given scenario. One would compare exercise results with baseline expectations to conclude that the exercise force is worse or better than expected. But no such baseline exists; that is, there are no models of force warfare which can predict the outcome of an exercise battle. Thus, we don’t know where the “zero” of the warfare effectiveness index lies; neither do we know the forms of the adjustments necessary to translate exercise results into corresponding real-world results.

One might speculate that it would at least be possible to establish trends in warfare effectiveness from exercises. However, this too is difficult. The exercise scenarios as well as the forces involved will change over time. In any particular exercise, the missions, the geography, the forces (e.g., a CV rather than a CVN), and the threat simulation are likely to differ from those in any other exercise. Some scenarios may be particularly difficult, while others are easy. Comparing across exercises requires a way of adjusting for these differences. It requires knowing how a given force’s capabilities change with each of these factors, and right now we don’t know how. Of course, solving the problems of adjusting for exercise artificialities and of establishing an expected performance level for given battle problems would be a move in the right direction. But imperfections in the steps of the analysis process compound these conceptual difficulties. Recall that the data are imperfect to begin with, and errors enter during reconstruction and data reduction and organization. The numbers built from these data then have some error associated with them. These are the numbers which appear in summary tables and graphs depicting warfare effectiveness during an exercise. They are imprecise. This means that changes over time, even in exercises with roughly equivalent scenarios, must be large to be significant; otherwise, such differences might only be statistical variations. Exactly how large they have to be is still not clear, but “big” differences bear further investigation.

What then is the usefulness of such numbers? They are useful because they result from examining the exercise from different viewpoints, and they allow judgment to be employed in a systematic manner. Without them one is completely in the dark. Clearly it is better to merge many different perspectives on how the operations went than to rely on just one. The analysis process does this by examining objectively data collected from many different positions. It provides a framework for systematic employment of professional judgment concerning the effect of artificialities on exercise results. Recognizing each artificiality, professional judgment can assess the influence of each individually rather than of the group as a whole. While obviously imprecise, the numbers appearing in the summary presentations, together with an understanding of the artificialities, the contextual factors, and the measurement errors, are better than a blind guess.

Evaluating an individual unit’s warfighting capability (as opposed to a group’s) is not easy either. The normal measures of unit readiness which come out of an exercise are at a lower mission level. An air squadron may have high sortie rates, and may be able to get on and off the carrier with ease, but the question of interest may be how effectively they contributed to the AAW defense. The link between task group AAW effectiveness and high sortie rates or pilot proficiency is not well understood. So while measurements at that level may be more precise than those at a higher level, and while the individual actions are more like actions in a real battle, it is not clear how measures of effectiveness at this level contribute to success at the group or force level. There is a need to research this crucial link between unit performance of low level mission actions and group mission effectiveness.

Tactics. As a vehicle for evaluating tactics, exercise analysis fares pretty well. Exercise artificialities and the analysis process still limit what we conceivably could learn and, practically, what we do learn.

The main artificiality to be careful of is threat simulation. Generally there are situations of short duration in an exercise which closely approximate those occurring in real battles, some in crucial respects. It is possible, then, to test a tactic in a specific situation which, except for the threat simulation, is realistic. The tactic may work well in the situation, but would it work against a force composed of true enemy platforms? This may be more problematic.

The limitations due to the analysis process stem more from improper execution than from flaws in the process itself. To date, exercise analysis has failed to distinguish regularly between problems of tactical theory and problems of tactical execution. If the analysis concludes that the employment of a tactic failed to achieve a desired result, it seldom explains why: there is no systematic treatment of whether the tactic was ill-conceived, employed in the wrong situation, or executed clumsily. The idea of the tactic may be fine, yet it may have been applied in the wrong situation or executed poorly. In the event that a tactic does work, that is, the overall outcome is favorable, scant explicit attention is paid to the strength of the tactic’s contribution to the outcome. The outcome might have been favorable with almost any reasonable tactic because, say, one force was so much stronger than the other. Remember too that the data upon which the tactical evaluation is based are the same imperfect data as before. It is true that in some evaluations the conclusion may be so clear as to swamp any reasonable error level in the data. Even if the error is 30 percent (say, in detection range or success ratio), the conclusion still might hold.

There are certain analytical standards which are achievable for tactics evaluation in exercises. The tactic or procedure should be defined clearly. The analysis should address whether the tactic was executed correctly and whether it was employed in the appropriate situation. It should answer the question of whether the influence of other contextual factors (aspects of the scenario for example) dominated the outcome. It should identify whether the tactic will only work when some factor is present. It should address whether the tactic integrates easily into coordinated warfare. Even if all these conditions are satisfied, the exercise may only yield one or two trials of the tactic. Definitive tests require more than one or two data points.

Scenarios. Judging how well Blue or Orange does in a scenario depends on the accuracy of the warfare capability assessments, the fidelity of the threat simulation, and the skill with which exercise results can be translated into real world expectations. It is clear from previous discussions on each of these topics that there are problems associated with each. Consequently, what we can learn about a scenario from playing it in an exercise is limited.

At best one can make gross judgments; an example might be “a CVTG cannot long operate from a Modloc in this specific area of the world without more than the usual level of ASW assets.” The exercise will provide an especially fertile environment for brainstorming about the scenario, and in a systematic way. The kinds of tactical encounters which are likely to cause problems will surface. Those engagements or operations which are absolutely crucial to mission success may also become clear. Serious thorough consideration of many courses of action may only occur in the highly competitive environment of an exercise. This can lead to the discovery of unanticipated enemy courses of action.

There are pitfalls of course in making even these gross assessments. For example, care must be taken to recognize very low readiness levels by exercise participants as a major contributor to the exercise outcome. But on the whole it should be possible to identify scenarios which are prohibitively difficult and should, therefore, be avoided. It may be possible to confirm what forces are essential for mission success and the rough force levels required.

What kinds of things might one reasonably expect to learn from exercises? First and foremost, the product of exercise analysis is well suited to correcting misperceptions about what happened during the exercise. It provides a picture of the exercise which is fashioned logically from data taken from many key vantage points instead of just one or two. As such, it is likely to be closer to the truth than a sketchy vision based on the experience of a single participant. Second, there is a capability to make some quantitative comment on warfare effectiveness. All the caveats developed earlier in the essay still apply, of course. It is safest to assume that there is a large error in the measures of effectiveness which are used. And a single exercise usually provides but a single data point of warfare effectiveness; extrapolation from a single such point is very risky.

Exercises are a very good vehicle for identifying any procedural difficulties which attend tactical execution. The exercise and analysis also provide a fertile opportunity to rethink the rationale underlying a tactic. More definitive evidence can be developed on ill-conceived tactics if the tactic was executed correctly and employed appropriately. The exercise and analysis also present an opportunity to observe the performance of the people and the systems. Examination may uncover areas where more training is needed, where operating procedures are not well understood, or where explicit operating and coordination procedures are absent.

Sweeping conclusions and strong, definitive judgments of capabilities, tactical effectiveness, and scenario advantages should be warning flags to exercise report readers. The reader should reassure himself that the exercise scenario, the exercise goal, and the tactical context are amenable to drawing such conclusions. For example, battle group tactical proficiency cannot be easily investigated in small, controlled exercises. Nor do capabilities demonstrated in easy battle problems imply like capabilities in harder, more realistic battle problems. The message is to read exercise reports with caution, continuously testing whether it makes sense that such results and conclusions could be learned from the exercise.

Dr. Thompson was the CNA field representative to the Commander, Sixth Fleet, from 1981 to 1984. He is currently a principal research scientist at CNA.

Featured Image: At sea aboard USS John F. Kennedy (CV 67) Mar. 18, 2002 — Air Traffic Controller 1st Class Michael Brown monitors other controlmen in the ship’s Carrier Air Traffic Control Center (CATCC) as aircraft conduct night flight operations. (U.S. Navy photo by Photographer’s Mate 2nd Class Travis L. Simmons.)

Publication Release: Alternative Naval Force Structure

By Dmitry Filipoff

From October 3 to October 7, 2016, CIMSEC ran a topic week where contributors proposed alternative naval force structures to spur thinking on how the threat environment is evolving, what opportunities for enhancing capability can be seized, and how navies should adapt accordingly. Contributors had the option to write about any nation’s navy across a variety of political contexts, budgetary environments, and time frames.

Relevant questions include asking what is the right mix of platforms for a next-generation fleet, how should those platforms be employed together, and why will their capabilities endure? All of these decisions reflect a budgetary context that involves competing demands and where strategic imperatives are reflected in the warships a nation builds. These decisions guide the evolution of navies.

In a modern age defined by rapid change and proliferation, we must ask whether choices made decades ago about the structure of fleets remain credible in today’s environment. Navies will be especially challenged to remain relevant in such an unpredictable era. A system where an average of ten years of development precedes the construction of a lead vessel, where ships are expected to serve for decades, and where classes of vessels are expected to serve through most of a century is more challenged than ever before.

Authors:
Steve Wills
Javier Gonzalez
Tom Meyer 
Bob Hein
Eric Beaty
Chuck Hill
Jan Musil
Wayne P. Hughes Jr.

Editors:
Dmitry Filipoff
David Van Dyk
John Stryker

Download Here

Articles:

The Perils of Alternative Force Structure by Steve Wills

“Even the best alternative force structure that meets strategic needs, is more affordable than previous capabilities, and outguns the enemy could be subject to obsolescence before most of its units are launched. These case studies in alternative force structure suggest that such efforts are often less than successful in application.”

Unmanned-Centric Force Structure by Javier Gonzalez

“The conundrum and implied assumption, with this or similar future force structure analyses, is that the Navy must have at least a vague understanding of an uncertain future. However, there is a better way to build a superior and more capable fleet—by continuing to build manned ships based on current and available capabilities while also fully embracing optionality (aka flexibility and adaptability) in unmanned systems.”

Proposing A Modern High Speed Transport – The Long Range Patrol Vessel by Tom Meyer

Is the U.S. Navy moving from an era of exceptional “ships of the line” – including LHAs and LPDs, FFGs, CGs, DDGs, SSNs, and CVNs – to one filled with USVs, UAVs, LCSs, CVs, SSKs, and perhaps something new – Long Range Patrol Vessels (LRPVs)? But what in the world is an LRPV? The LRPV represents the 21st century version of the WWII APD – High Speed Transports.

No Time To Spare: Drawing on History to Inspire Capability Innovation in Today’s Navy by Bob Hein

“Designing and building new naval platforms takes time we don’t have, and there is still abundant opportunity to make the most of existing force structure. Fortunately for the Navy, histories of previous wars are a good guide for future action.”

Enhancing Existing Force Structure by Optimizing Maritime Service Specialization by Eric Beaty

“Luckily, the United States has three maritime services—the Navy, Coast Guard, and Marine Corps—with different core competencies covering a broad range of naval missions. Current investments in force structure can be maximized by focusing the maritime services on their preferred missions.”

Augment Naval Force Structure By Upgunning The Coast Guard by Chuck Hill

“The Navy should consider investing high-end warfighting capability in the Coast Guard to augment existing force structure and provide a force multiplier in times of conflict. A more capable Coast Guard will also be better able to defend the nation from asymmetrical threats.”

A Fleet Plan for 2045: The Navy the U.S. Ought to be Building by Jan Musil

“2045 is a useful target date, as there will be very few of our Cold War era ships left by then, therefore that fleet will reflect what we are building today and will build in the future. This article proposes several new ship designs and highlights enduring challenges posed by the threat environment.”

Closing Remarks on Changing Naval Force Structure by CAPT Wayne P. Hughes Jr., USN (Ret.)

“The biggest deficiencies in reformulating the U. S. Navy’s force structure are (1) a failure to take the shrinking defense budget into account which (2) allows every critic or proponent to be like the blind men who formulated their description of an elephant by touching only his trunk, tail, leg, or tusk. To get an appreciation of the size of the problem you have to describe the whole beast, and what is even harder, to get him to change direction by hitting him over the head repeatedly.”

Dmitry Filipoff is CIMSEC’s Director of Online Content. Contact him at Nextwar@cimsec.org.

Featured Image: PACIFIC OCEAN (Oct. 27, 2017) Ships from the Theodore Roosevelt Carrier Strike Group participate in a replenishment-at-sea with the USNS Guadalupe (hull number). (U.S. Navy photo by Mass Communication Specialist Seaman Morgan K. Nall/Released)

The Navy’s New Fleet Problem Experiments and Stunning Revelations of Military Failure

By Dmitry Filipoff

Losing the Warrior Ethos

“…despite the best efforts of our training teams, our deploying forces were not preparing for the high-end maritime fight and, ultimately, the U.S. Navy’s core mission of sea control.” –Admiral Scott Swift 1

Today, virtually every captain in the U.S. Navy has spent most of his or her career in the post-Cold War era where high-end warfighting skills were de-emphasized. After the Soviet Union fell, there was no navy that could plausibly contest control of the open ocean against the U.S. In taking stock of this new strategic environment, the Navy announced in the major strategy concept document …From the Sea (1992) a “change in focus and, therefore, in priorities for the Naval Service away from operations on the sea toward power projection.”2 This change in focus was toward missions that made the Navy more relevant in campaigns against lower-end threats such as insurgent groups and rogue nations (Iran, Iraq, North Korea, Libya) that were the new focus of national security imperatives. None of these competitors fielded modern navies.

The relatively simplistic missions the U.S. Navy conducted in this power projection era included striking inland targets with missile strikes and airpower, presence through patrolling in forward areas, and security cooperation through partner development engagements. The focus on this skillset has led to an era of complacence where the high-end warfighting skills that were de-emphasized actually atrophied to a significant degree. This possibility was forewarned in another Navy strategy document that sharpened thinking on adapting for a power projection era, Forward…from the Sea (1994): “As we continue to improve our readiness to project power in the littorals, we need to proceed cautiously so as not to jeopardize our readiness for the full spectrum of missions and functions for which we are responsible.”3

Now the strategic environment has changed decisively. Most notably, China is aggressively rising, challenging international norms, and rapidly building a large, modern navy. Because of the predominantly maritime nature of the Pacific theater, the U.S. Navy may prove the most important military service for deterring and winning a major war against this ascendant and destabilizing superpower. If things get to the point where offensive sea control operations are needed and the fleet is gambled in high-end combat, then it is very likely that the associated geopolitical stakes of victory or defeat will be historic. The sudden rise of a powerful maritime rival is coinciding with the atrophy of high-end warfighting skills and the introduction of exceedingly complex technologies, making the recent stunning revelations about how the U.S. Navy has failed to prepare for great power war especially chilling.

Admiral Scott Swift, who leads U.S. Pacific Fleet (the U.S. Navy’s largest and most prioritized operational command), candidly revealed that the Navy was not realistically practicing high-end warfighting skills and operations, including sinking modern enemy fleets, until only two years ago. Ships were not practicing against other ships in the realistic, free-play environments necessary to train and refine tactics and doctrine to win in great power war.

In a recent U.S. Naval Institute Proceedings article, Admiral Swift detailed training and experimentation events occurring in a series of “Fleet Problems.” These events take their name and inspiration from a years-long series of interwar-period fleet experiments and exercises that profoundly influenced how the Navy transformed itself in the run-up to World War Two. While ships practiced against ships in the interwar-period Fleet Problems, the modern version began with the creation of a specialized “Red” team well-versed in wargaming concepts and competitor thinking born from intelligence insights. This Red team is pitted against the Navy’s frontline commanders in Fleet Problem scenarios that simulate high-end warfare through the command of actual warships. What makes their creation an admission of grave institutional failure is that this Red team is leading the first series of realistic high-threat training events at sea in recent memory.

The Navy’s units should be able to practice high-end warfighting skills against one another without the required participation of a highly-specialized Red team adversary to present a meaningful challenge. But Adm. Swift strikingly admits that the Navy’s current system of certifying warfighting skills is not representative of real high-end capability because the Navy “never practiced them together, in combination with multiple tasks, against a free-playing, informed, and representative Red.” Furthermore, “individual commanders rarely if ever [emphasis added] had the opportunity to exercise all these complex operations against a dynamic and thoughtful adversary.”

Core understanding on what makes training realistic and meaningful was absent. Warfighting truths were not being discovered and necessary skills were not being practiced because ships were not facing off against other ships in high-end threat scenarios to test their abilities under realistic conditions. If the nation sent the Navy to fight great power war tomorrow, it would amount to a coach sending a team that “rarely if ever” did practice games to a championship match.

These exercises are not just experiments that push the limits of what is known about modern war at sea. They are also experimental in that they are now figuring out if the U.S. Navy can even do what it has said it could do, including the ability to sink enemy fleets and establish sea control. According to Adm. Swift, the Navy had “never performed” a “critical operational tactic that is used routinely in exercises and assumed to be executable by the fleet [emphasis added]” until it was recently tested in a Fleet Problem. The unsurprising insight: “having never performed the task together at sea, the disconnect” between what the Navy thought it could perform and what it could actually do “never was identified clearly.” Adm. Swift concludes “It was not until we tried to execute under realistic, true free-play conditions that we discovered the problem’s causal factors…” In the Fleet Problems, training and experimentation have become one and the same.

Why did the Navy assume it could confidently execute critical operational tactics it had never actually tried in the first place? And if the Navy assumed it could do it, then maybe the rest of the defense establishment and other nations thought so, too. Does this profound disconnect also hold true for foreign and allied navies? Is the unique tactical and doctrinal knowledge being represented by the specialized Red team an admission that competitors are training their units and validating their warfighting concepts through more realistic practice? Even though it is impossible to truly simulate all the chaos of real combat, only now are important ground truths of high-end naval warfare just being discovered which could prompt major reassessments of what the Navy can really contribute in great power war.

The entirety of the train, man, and equip enterprise that produces ready military forces for deployment must be built upon a coherent vision of how real war works. The advent of the Fleet Problems suggests that if one were to ask the Navy’s unit leaders what their real-world vision is of how to fight modern enemy warships as part of a distributed and networked force their responses would have little in common. If great power war breaks out tomorrow, the Navy’s frontline commanders could be forced to improvise warfighting fundamentals from the very beginning. Simple lessons would be learned at great cost in blood and treasure.

Many of the major revelations coming from the Fleet Problems are not unique innovations, but rather symptoms of deep neglect for a core element of preparing for war – pitting real-life units against one another to test people, ideas, and technology under realistic conditions. Adm. Swift surprisingly describes using a Red team to connect intelligence insights, wargaming concepts, training, and real-life experimentation as “new ground.” Swift also noted that as the Navy attempted its purported concepts of operations in the Fleet Problems “it became apparent there were warfighting tasks that were critical to success that we could not execute with confidence.” In a normal context, it would not always be noteworthy for a military to invalidate concepts or realize it can’t do something well. What makes these statements revelations is that the process of testing concepts and people in realistic conditions simulating great power war has only just begun.

This is a failure with profound implications. The insight that comes from training and experimenting against realistic threats forms a critical foundation for the rest of the military enterprise. Realistic experimentation and training is indispensable for developing meaningful doctrine, tactics, and operational art. Much of the advanced concept development on great power war by the Navy hasn’t been validated by real-world testing. The creation of the new Fleet Problems is fundamentally an admission that not only is the Navy unsure of its ability to execute core missions, but that major decisions about its future development were built on flaws. While the Fleet Problems are finally injecting much needed realism into the Navy’s thinking, their creation reveals that the entire defense establishment has suffered a major disconnect from the real character of modern naval warfare. The Fleet Problems have likely invalidated years of planning and numerous basic assumptions.

The Navy must now account for how many years it did not practice its forces in meaningful, high-end threat training in order to understand just how widespread this lack of realistic experience has penetrated its ranks. There should be no doubt that this has skewed decision-making at senior levels of leadership. How many leaders making important decisions about capability development, training, and requirements have zero firsthand experience commanding forces in high-end threat training? Could the fleet commanders operate networked and distributed formations if war breaks out? Has best military advice on the value of naval power for the nation’s national security interests been predicated on untested warfighting assumptions?

To Train the Fleet for What?

“The department directs that a board of officers, qualified by experience, be ordered to prepare a manual of torpedo tactics which will be submitted by the department to the War College, and after such discussion and revision as may be necessary, will be printed and issued to the torpedo officers of the service for trial. This order has not been complied with. If it had been, it would doubtless have resulted in a sort of tentative doctrine which, though it might well have been better than the flotilla’s first attempt, could not have been as complete or as reliable as one developed through progressive trials at sea; and it might well have contained very dangerous mistakes.” –William S. Sims 4

Adm. Swift reveals that it was even debated whether free-play elements should play a role at all in certifying units to be combat ready: “there was concern in some circles that adding free-play elements to the limited time in the training schedule would come at the cost of unit certification. Others contended it was unrealistic and unfair to ask units that were not yet certified to perform our most difficult warfighting tasks.” The degree of certification is moot. Sailors are failing anyway because the shift in warfighting focus toward great power competition has not been matched by new training standards and therefore has not penetrated down to the unit level.

Adm. Swift notes startling lessons: “In some scenarios, we learned that the ‘by the book’ procedure can place a strike group at risk simply because our standard operating procedures were written without considering a high-end wartime environment.” This is a direct result of the change in focus toward power projection missions against threats without modern navies. According to Adm. Swift the regular exercise schedule consisted of missions including “maritime interdiction operations, strait transits, and air wings focused on power projection from sanctuary” which meant that forces were “not preparing for the high-end maritime fight and, ultimately, the U.S. Navy’s core mission of sea control.” In this new context of a high-end fight in a Fleet Problem, according to Adm. Swift, “If we presented an accurate—which is to say hard—problem, there was a high probability the forces involved were going to fail. In our regular training events, that simply does not happen at the rate we assess will occur in war.” The Fleet Problems are revealing that Navy units are not able to confidently execute high-end warfighting operations regardless of the state of their training certifications. 

These revelations demonstrate that the way the Navy certifies its units as ready for war is broken. A profound disconnect exists between the Navy’s certification and training processes for various warfighting skills and what is actually required in war. Entire sets of training certifications and standard operating procedures born of the post-Cold War era are inadequate for gauging the Navy’s ability to fight great power conflict.

Mentally Absent in the Midst of the Largest Technological Revolution

“The American navy in particular has been fascinated with hardware, esteems technical competence, and is prone to solve its tactical deficiencies with engineering improvements. Indeed, there are officers in peacetime who regard the official statement of a requirement for a new piece of hardware as the end of their responsibility in correcting a current operational deficiency. This is a trap.” –Capt. Wayne P. Hughes, Jr. (Ret.) 5

Regardless of a major shift in national security priorities toward lower-end threats, the astonishing pace of technological change constitutes an extremely volatile factor in the strategic environment that needs to be constantly paced by realistic training and experimentation under free-play conditions. The modern technological foundation upon which to devise tactics and doctrine is built on sand.

The advent of the information age has unlocked an unprecedented degree of flexibility for the conduct of naval warfare as platforms and payloads can be connected in real-time in numerous ways across great distances. This has resulted in a military-technical revolution as marked as when iron and steam combined to overtake wooden ships of sail. A single modern destroyer fully loaded with network-enabled anti-ship missiles has enough firepower to singlehandedly sink the entirety of the U.S. Navy’s WWII battleship and fleet carrier force.6 On the flipside, another modern destroyer could field the defensive capability to stop that same missile salvo.

Warfighting fundamentals are being reappraised in an information-focused context. The process by which forces find, target, and engage their opponents, known as the kill chain, is enabled by information at each individual step of the sequence. A key obstacle is meeting that burden of information in order to advance to the next step. This challenge is exacerbated by the great distances of open-ocean warfare and the difficulty of getting timely information to where it needs to be while the adversary seeks to deceive and degrade the network. Technological advancement means the kill chain’s information burdens can be increasingly met and interfered with.

The threshold of information needed for the archer to shoot decreases the smarter the arrow gets. Information-age advancements have therefore wildly increased the power of the most destructive conventional weapon ever put to sea, the autonomous salvo of swarming anti-ship missiles.

The next iteration of these missiles will have a robust suite of onboard sensors, datalinks, jamming capability, and artificial intelligence. These capabilities will combine to build resilience into the kill chain by containing as much of that process as possible within the missile itself. More and more of the need for the most up-to-date information will be met by the missile swarm’s own sensors and decided upon by its artificial intelligence. Once fired, these missiles are on a one-way trip, allowing them to discard survivability for the sake of seizing more opportunities to collect and pass information. Unlike most other information-gaining assets, these missiles will be able to close with potential targets to resolve lingering concerns of deception and identification. The missile’s infrared and electro-optical capabilities in particular will provide undetectable, jam-resistant sensors for final identification that will prove challenging to deceive with countermeasures. On final approach, the missile will pick a precise point on the ship to guarantee a kill, such as where ammunition is stored. 

The fiercest enemy in naval warfare has taken the form of autonomous networked missile salvos, where the Observe, Orient, Decide, and Act (OODA) decision cycle will transpire within the swarm at machine speeds. Is the Navy ready to use and defend against these decisive weapons?

The Navy may feel inclined to say yes to the latter question sooner because shooting things out of the sky has been a special focus of the Surface Navy and naval aviation since WWII. The latest technology that will take this capability into the 21st century, the Naval Integrated Fire Control – Counter-Air (NIFC-CA) networking capability, will help unite the sensors and weapons of the Navy’s ships and aircraft. Aircraft will be able to use a warship’s missiles to shoot down threats the ship can’t see itself. This is decisive because anti-ship missiles will make their final approach at low altitudes below the horizon where they can’t be detected by a ship’s radar. Modern warships can be forced to wait until the final seconds to bring most of their defensive firepower to bear on a supersonic inbound missile salvo unless a networked aircraft can cue their fires with accurate sensor information from high above.

This makes mastering NIFC-CA perhaps the most important defensive capability the fleet needs to train for, but this will involve a steep learning curve. Speaking on the challenges of making this capability a reality, then-Captain Jim Kilby remarked that it involves “a level of coordination we’ve never had to execute before and a level of integration between aircrews and ship crews.” Is the Navy truly practicing and refining this capability in realistic environments? At least three years before the Fleet Problems started, the Chief of Naval Operations reported that concepts of operation were established for NIFC-CA.8

There should be little confidence that naval forces have a deep comprehension of how information has revolutionized naval warfare and how modern fleet combat will play out because there was a lapse in necessary realistic experimentation at sea. The way the Navy thought it would operate may not actually make sense in war, a key insight that experimentation will reveal as it did in the interwar period.

Training and Experimentation for Now and Tomorrow

“If…the present system fails to anticipate and to adequately provide for the conditions to be expected during hostilities of such nature, it is obviously imperative that it be modified; wholly regardless of the effect of such change upon administration or upon the outcome of any peace activity whatsoever.” –Dudley W. Knox 9

The extent to which the Navy’s current capabilities have been tested by meaningful real-world training and experimentation is now in doubt. This doubt naturally extends to things that the Navy has just fielded or is about to introduce to the fleet. Yet Adm. Swift revealed a fatal flaw in the Fleet Problems that is not in keeping with a high-velocity learning or warfighting-first mindset: “We are not notionally employing systems and weapons that are not already deployed in the fleet. Each unit attacks the problem using what it has on hand (physically and intellectually) today.”

It is a mistake not to train forces to use future weapons. Units must absolutely attempt to experiment with capabilities not yet in the fleet to stay ahead of the ever-quickening pace of change. Realism should be occasionally sacrificed to anticipate the basic parameters of capabilities that are about to be fielded. Sailors should be thinking about how to employ the advanced anti-ship missiles about to hit the fleet that feature hundreds of miles of range, such as the Long Range Anti-Ship Missile (LRASM), Standard Missile 6, and the Maritime Strike Tomahawk. These capabilities are far more versatile than the Navy’s only current ship-to-ship missile, the very short-range and antiquated Harpoon, which the Navy first fielded over 40 years ago and which cannot even be carried in its launch cells. Getting sailors to think about weapons before their introduction will mentally prepare them for new capabilities and warfighting realities.

Information-enabled capabilities have come to dominate every facet of offense, defense, and decision. Do naval aviators know how to retarget friendly salvos of networked missiles amidst a mass of deception and defensive counter-air capabilities while leveraging warship capabilities to target enemy missile salvos simultaneously? Do fleet commanders know how to maneuver numerous aerial network nodes to fuse sensors and establish flows of critical information that react to emerging threats and opportunities? Can commanders effectively manage and verify enormous amounts of information while the defense establishment and industrial base are being aggressively hacked by a great power? According to the Navy’s current service strategy document, A Cooperative Strategy for 21st Century Seapower, warfare concept development should involve efforts to “…re-align Navy training, tactics development, operational support, and assessments with our warfare mission areas to mirror how we currently organize to fight.”10

Despite all the enormous effort and long wait times that accompany the introduction of a new system, the Fleet Problems remind the defense establishment that the Navy can’t be expected to know how to use it simply because it is fielded. New warfighting certifications are in order and must be rapidly redefined and benchmarked by the Fleet Problems in order to pace technology and make the Navy credible. This will require that a significant amount of time be dedicated to real-world experimentation.

So How Does the Navy Spend Its Time?

“Our forward presence force is the finest such force in the world. But operational effectiveness in the wrong competitive space may not lead to mission success. More fundamentally, has the underlying rule set changed so that we are now in a different competitive space? How will we revalue the attributes in our organization?” –Vice Admiral Arthur K. Cebrowski and John J. Garstka 11

These severe experimentation and training shortfalls are not due to a lack of funding, but rather to faulty decisions about what is actually important for Sailors to focus their time on and what naval forces should be used for in the absence of great power war. Meanwhile, the power projection era has featured extreme deployment rates that have run the Navy into the ground.

The Government Accountability Office states that 63 percent of the Navy’s destroyers, 83 percent of its submarines, and 86 percent of its aircraft carriers experienced maintenance overruns from FY 2011-2016 that resulted in almost 14,000 lost operational days – days where ships were not available for operations.12 How much of this monumental deployment effort went toward aggressively experimenting and training for great power conflict instead of performing lower-end missions? Hardly any, if any at all, because Adm. Swift termed the idea of using a unit’s deployment time for realistic experimentation an “epiphany.”

In order to more efficiently meet insatiable operational demand and slow the rate of material degradation the Navy implemented the Optimized Fleet Response Plan (OFRP) that reforms the cycle by which the Navy generates ready forces through maintenance, training, and sustainment phases.13 But Adm. Swift alleges that this major reform has caused the Navy to improperly invest its time:

“Commanders were busy following the core elements in our Optimized Fleet Response Plan (OFRP) training model, going from event to event and working their way through the list of training objectives as efficiently as possible. Rarely did we create an environment that allowed them to move beyond the restraints of efficiency to the warfighting training mandate to ensure the effectiveness of tactics, techniques, and procedures. We were not creating an environment for them to develop their own warfighting creativity and initiative.”

A check-in-the-box culture has been instituted to cope with crushing deployment rates at the expense of fostering leaders who embody the true warfighter ethos of imaginative tacticians and operational commanders. The OFRP cycle is under so much tension from insatiable demand and run-down equipment that Adm. Swift described it as a “Swiss watch—touching any part tended to cause the interlocking elements to bind, to the detriment of the training audience.” But as Adm. Swift already noted, pre-deployment training wasn’t even focused on preparing for the high-end fight anyway.

Every single deployment is an opportunity to practice and experiment. Simply teaching unit leaders to make time for such events will be valuable training itself as they figure out how to delegate responsibilities in an environment that more closely approximates wartime conditions. After all, if units are currently straining on 30 hours of sleep a week performing low-end missions and administrative tasks, how can we be sure they know how to make time to fight a high-stakes war while also maintaining a ship that’s falling apart?

Being a deckplate leader of a warship has always been an enormously busy job and there is always something a warship can do to be relevant. But it is a core competence of leaders at all levels to know what to make time for and how to delegate accordingly. From the sailor checking maintenance tasks to the combatant commander tasking ships for partner development engagements, a top-to-bottom reappraisal of what the Navy needs to spend its time doing is in order. Are Sailors performing tasks really needed to win a war? Are the ships being deployed on missions that serve meaningful priorities?

Major reform will be necessary in order to reestablish priorities to make large amounts of time for realistic training and experimentation. In addition to making enough time, it is also a question of having enough forces on hand when the fleet is stretched thin. Adm. Swift described a carrier strike group (CSG) being used in a Fleet Problem where “the entire CSG was OpFor [Red team] – an enormous investment that yielded unique and valuable lessons.” Does this mean that aircraft carriers, the Navy’s largest and most expensive warships, are especially hard-pressed to secure time for realistic experimentation and training? Can the Navy assemble more than a strike group’s worth of ships to simulate a competitor’s naval forces?

The recent deployment of three strike groups to the Pacific shows it is possible. Basic considerations include asking whether the Navy has enough ships on hand to simulate a distributed fleet and enough units to simulate great power adversaries that have the advantages of time, space, and numbers. But given where deployment priorities currently stand, the Navy may not have enough time or ships on hand to regularly simulate accurate scenarios.

A Credibility Crisis in the Making

“…there are many, many examples of where our ships, their commanding officers, their crews are doing very well, but if it’s not monitored on a continuous basis these skills can atrophy very quickly.” –Chief of Naval Operations Admiral John Richardson [14]

When great power conflict last broke out in World War II, the war at sea was won by admirals like Ernest King, Chester Nimitz, and Raymond Spruance, whose formative career experiences were greatly influenced by the interwar-period Fleet Problems. That tradition of excellence based on realism is in doubt today.

What is clear is that business as usual cannot go on. The fundamental necessity of free-play elements for ensuring warfighting realism is beyond dispute. The reemergence of competition between the world’s greatest powers in a maritime theater is making many of the Navy’s power projection skillsets less and less relevant to geopolitical reality. New deployment priorities must privilege realistic training and experimentation to make up for lost ground in concept development, accurately inform planning, understand the true limits and potential of technology, and test the mettle of frontline units.

The recent pair of collisions challenged numerous assumptions about how the Navy operates and how it maintains its competencies. Tragic as those events were, they thankfully stimulated an energetic atmosphere of reflection and reform. But the competencies those reforms target include things like navigation, seamanship, and ship-handling—basic maritime skills that have existed for thousands of years. What is far newer, endlessly more complex, and absolutely vital to deterring and winning wars is the ability to employ networked and distributed naval forces in great power conflict. The collisions cost real lives, but countless more sailors are dying virtual deaths in the Fleet Problems that are revealing shocking deficiencies in how the Navy prepares for war. Short of horrifying losses in real combat, there is no greater wake-up call.

Dmitry Filipoff is CIMSEC’s Director of Online Content. Contact him at Nextwar@cimsec.org.

References

[1] Admiral Scott H. Swift, “Fleet Problems Offer Opportunities,” U.S. Naval Institute Proceedings, March 2018. https://www.usni.org/magazines/proceedings/2018-03/fleet-problems-offer-opportunities

[2] Forward…From the Sea, U.S. Department of the Navy, 1994. https://www.globalsecurity.org/military/library/policy/navy/forward-from-the-sea.pdf 

[3] Ibid., 8. 

[4] William S. Sims, “Naval War College Principles and Methods Applied Afloat,” U.S. Naval Institute Proceedings, March-April 1915. https://www.usni.org/magazines/proceedings/1915-03/naval-war-college-principles-and-methods-applied-afloat

[5] Wayne P. Hughes, Jr., Fleet Tactics: Theory and Practice, Second Edition, pg. 33, Naval Institute Press, 1999.

[6] Can be inferred from official U.S. Navy ship counts of battleships and aircraft carriers and from near-term anti-ship capabilities.

[7] Sam LaGrone, “The Next Act for Aegis,” U.S. Naval Institute News, May 7, 2014. https://news.usni.org/2014/05/07/next-act-aegis

[8] CNO’s Position Report 2013, U.S. Department of the Navy. http://www.navy.mil/cno/131121_PositionReport.pdf

[9] Dudley W. Knox, “The Role of Doctrine in Naval Warfare,” U.S. Naval Institute Proceedings, March-April 1915. https://www.usni.org/magazines/proceedings/1915-03/role-doctrine-naval-warfare

[10] A Cooperative Strategy for 21st Century Seapower, U.S. Department of the Navy, 2015. http://www.navy.mil/local/maritime/150227-CS21R-Final.pdf

[11] Vice Admiral Arthur K. Cebrowski and John J. Garstka, “Network-Centric Warfare: Its Origin and Future,” U.S. Naval Institute Proceedings, January 1998. https://www.usni.org/magazines/proceedings/1998-01/network-centric-warfare-its-origin-and-future

[12] John H. Pendleton, “Navy Readiness: Actions Needed to Address Persistent Maintenance, Training, and Other Challenges Affecting the Fleet,” Testimony Before the Committee on Armed Services, U.S. Senate, Government Accountability Office, September 19, 2017. https://www.gao.gov/assets/690/687224.pdf

[13] “What is the Optimized Fleet Response Plan and What Will It Accomplish?” U.S. Fleet Forces Command, Navy Live, January 15, 2014. http://navylive.dodlive.mil/2014/01/15/what-is-the-optimized-fleet-response-plan-and-what-will-it-accomplish/

[14] Department of Defense Press Briefing by Adm. Richardson on results of the Fleet Comprehensive Review and investigations into the collisions involving USS Fitzgerald and USS John S. McCain, November 2, 2017. https://www.defense.gov/News/Transcripts/Transcript-View/Article/1361655/department-of-defense-press-briefing-by-adm-richardson-on-results-of-the-fleet/ 

Featured Image: SASEBO, Japan (Feb. 28, 2018) Operations Specialist 2nd Class Megann Helton practices course plotting during a fast cruise onboard the amphibious assault ship USS Wasp (LHD 1). (U.S. Navy photo by Mass Communication Specialist 3rd Class Levingston Lewis/Released)