Did We Learn Anything From That Exercise? Could We?

The following article originally appeared in the July-August 1982 edition of The Naval War College Review and is republished with permission.

By Frederick Thompson

Exercises are a source of information on tactics, force capabilities, scenario outcomes, and hardware systems effectiveness. But they distort battle operations in ways which prevent the immediate application of their results to real world situations. For example, because they are artificial, the force capabilities demonstrated need not exactly portray capabilities in actual battle. Further, our analysis process is imperfect. Our data can be incomplete or erroneous, the judgments we make during reconstruction and data refinement may be contentious, and our arguments linking the evidence to our conclusions may be incorrect. Still, exercises are among the most realistic operations we conduct. Our investigations of what really happened in an exercise yield valuable insights into problems, into solutions, and into promising tactical ideas.

The Nature of Exercises

How do naval exercises differ from real battles? Clearly the purpose of each is different. Exercises are opportunities for military forces to learn how to win real battles. During exercises, emphasis is on training people and units in various aspects of warfare, practicing tactics and procedures, and coordinating different force elements in complex operations. Ideally, the exercise operations and experiences would be very much like participating in a real battle. For obvious reasons, exercises fall short of this ideal, and thereby distort battle operations. These distortions are called “exercise artificialities.” An understanding of the different kinds of exercise artificialities is essential to understanding exercise analysis results. The exercise artificialities fall loosely into three classes: those which stem from the process of simulating battle engagements; those which stem from pursuit of a specific exercise goal; and those stemming from gamesmanship by the players.

Engagement Simulation. Obviously real ordnance is not used in an exercise. As a result, judging the accuracy of weapon delivery and targeting, force attrition, and damage assessment become problems in an exercise. If a real SAM is fired at an incoming air target, the target is either destroyed or it is not. There is no corresponding easy solution to the engagement in an exercise. Somehow, the accuracy of the fire control solution must be judged, and an umpire must determine whether the warhead detonates and the degree of destruction it causes.

What is the impact of this simulation on battle realism? Suppose the SAM is judged a hit and the incoming target destroyed. The incoming target will not disappear from radar screens. It may, in fact, continue to fly its profile (since it won’t know it’s been destroyed). So radar operators will continue to track it and the target will continue to clutter the air picture. A cluttered air picture naturally consumes more time of operators and decision makers. Now suppose the SAM misses the incoming target. If time permitted, the SAM ship would fire again, thereby depleting SAM inventories. However, the judgment process is not quick enough to give the SAM ship feedback to make a realistic second firing. In fact, AAW engagement resolution may not occur until the post-exercise analysis.

Now suppose the SAM misses the incoming missile, but the missile hits a surface combatant. Then the problem is to figure out how much damage was done to the combatant. An umpire will usually roll dice to determine damage probabilistically; a real explosion wreaks destruction instantaneously. As a result, there will be some delay in determining damage, and even then that damage may be unrealistic.
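
A minimal sketch in Python makes the adjudication mechanics concrete. The damage categories and probabilities below are invented for illustration; actual umpire rules and damage tables vary.

```python
import random

# Illustrative outcome table for a simulated missile hit. The categories
# and probabilities are invented for this sketch, not actual umpire rules.
DAMAGE_TABLE = [
    ("no damage", 0.20),
    ("light damage", 0.35),
    ("heavy damage", 0.30),
    ("sunk", 0.15),
]

def adjudicate_hit(table=DAMAGE_TABLE):
    """Roll once against a cumulative probability table, much as an umpire
    rolls dice to assess damage from a simulated hit."""
    roll = random.random()
    cumulative = 0.0
    for outcome, probability in table:
        cumulative += probability
        if roll < cumulative:
            return outcome
    return table[-1][0]  # guard against floating-point round-off

print(adjudicate_hit())
```

The structure, not the numbers, is the point: the adjudicated outcome is probabilistic and arrives only after the roll is made, whereas a real warhead delivers its verdict instantly.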

It is easy to see how the flow of exercise events may become distorted, given the delay between engagement and engagement resolution during an exercise. Other examples of distortion abound. For example, it may happen that a tactical air strike is launched to take out an opposing surface group armed with long-range antiship missiles, but only after those missiles have already dealt a crippling blow to the CV from which the air strike comes. In another case, aircraft will recover on board CVs carrying simulated damage just as they will on board CVs still fully operational. In general, it has so far been impossible to effect in exercises the real-time force attrition of an actual battle, so that battle flow remains realistic after the first shots are fired.

Such artificialities make some aspects of the exercise battle problem more difficult than in a real battle; others make it less difficult. Because destroyed air targets don’t disappear from radar screens, the air picture becomes more complicated. On the other hand, a SAM ship will seldom expend more than two SAMs on a single target and therefore with a given inventory can engage more incoming missiles than she would be able to in reality. Further, the entire AAW force remains intact during a raid (as does the raid usually) as opposed to suffering progressive attrition and thereby having to fight with less as the raid progresses. It is unclear exactly what the net effect of these artificialities is on important matters like the fraction of incoming targets effectively engaged.

Safety restrictions also distort exercise operations and events. For example, the separation of opposing submarines into nonoverlapping depth bands affects both active and passive sonar detection ranges, especially in submarine vs. submarine engagements. Here realism is sacrificed for a reduced probability of a collision. For the same sorts of reasons, aircraft simulating antiship missiles stay above a fixed altitude in the vicinity of the CV, unless they have prior approval from the CV air controller, which distorts the fidelity of missile profiles. In other examples, surface surveillance aircraft may not use flares to aid nighttime identification. Tactical air strike ranges may be reduced to give the strike aircraft an extra margin of safety in fuel load. The degree of battle group emission control, especially with regard to CV air control communications and navigation radars, is determined partially by safety considerations. Quietness is often sacrificed in favor of safety.

The point is that safety is always a concern in an exercise, whereas in an actual battle, the operators would probably push their platforms to the prudent limits of their capabilities. These safety restrictions impart another artificiality to exercise operations. Constraining the range of Blue’s operational options makes his problem more difficult than the real-world battle problem; constraining Orange’s options makes Orange’s problem harder, and hence Blue’s easier, than in the real world.

Another source of distortion is the use of our own forces as the opposition. US naval ships, aircraft, and submarines differ from those of potential enemies. It is probable that enemy antiship missiles can be launched from farther away, fly faster, and present a more difficult profile than can be simulated by manned aircraft in an exercise. The simulated antiship missile in an exercise thus presents an easier target in this regard. Customarily, Orange surveillance has fewer platforms with less on-station time than do some potential enemies, say the Soviets. In ASW, there may be differences between US submarine noise levels and potential enemy submarine noise levels. All these differences in sensors and weapon systems distort detection, identification, and engagement in exercises and thereby make aspects of exercise operations artificial.

A more subtle distortion occurs when US military officers are cast in the role of enemy decision makers. The US officers are steeped in US naval doctrine, tactics, and operating procedures. It is no doubt difficult to set aside these mind-sets and operate according to enemy doctrine, tactics, and procedures. Add to this the fact that one has only a perception of enemy doctrine, tactics, and procedures to work from, and the operating differences between an actual enemy force and a simulated enemy force become more disparate. With this sort of distortion, it is difficult to identify exactly how the exercise operations will be different from those they try to simulate. But the distortions are real, and are at work throughout the exercises.

Exercise Scenarios. The goal of an exercise can drive the nature of the exercise operations; this is a familiar occurrence in all fleets. The degree of distortion depends upon the nature of the goal. Consider two examples.

First, consider an exercise conducted for the express purpose of examining a particular system’s performance in coordinated operations. It is likely to involve a small patch of ocean, repeated trials in a carefully designed scenario, dictated tactics, and most importantly a problem significantly simpler than that encountered in a real battle. At best, the battle problem in this controlled exercise will be a subtask from a larger battle problem. Participants know the bounds of the problem and they can concentrate all of their attention and resources on solving it. Now such exercises are extremely valuable, both in providing a training opportunity to participants and in discovering more about the system in question. But the exercise results apply only in the limited scenario which was employed; in this sense the goal of the exercise distorts the nature of the operations. Exercise operations in these small, canned, controlled exercises are artificially simple as compared to those in a real battle.

Next consider a large, multi-threat, free-play exercise conducted partially for training, perhaps the most realistic type of exercise there is. The exercise area will still have boundaries but will probably include a vast part of the ocean. Commercial air traffic and shipping may well be heavier than would be the case in a hot-war environment. As the exercise unfolds, there will be a tendency for the controlling authority to orchestrate interactions, which unrealistically constrains the options of both sides. Blue or Orange may not be able to pick the time and place to fight the battle. Both sides know that a simulated battle will be fought, and higher authority may hasten the interaction so that the participants can fight longer. Clearly this is a case where trade-offs must be made, and it is important to understand this when exercise results are being interpreted.

In both kinds of exercises, artificialities are necessary if the goals are to be met. Partly as a result the operations are not exact duplicates of those likely to occur in the same scenario in a real battle. Aside from recognizing that forced interactions distort an exercise battle, little work has been done to learn more about how these distortions affect the resulting operations.

Gamesmanship and Information. A separate class of artificialities arises when exercise participants are able to exploit the rules of play. Consider a transiting battle group. It may be possible to sail so close to the exercise area boundary that, from some directions, the opponent could attack only from outside the area, which is prohibited. Thus, the battle group would reduce the potential threat axes and could concentrate its forces along the axes remaining within the operating area. Clearly, the tactical reasoning which leads to such a decision is valuable training for the participants, and exploiting natural phenomena such as water depth, island masking, and so on is a valid tactic. But exploiting an exercise boundary to make the tactical problem easier distorts operations in the scenario and is a kind of gamesmanship.

Consider another situation. In exercises, both sides usually know the opposition’s exact order of battle. So participants have more information than they are likely to have in a real battle, and that information is known to be reliable. Blue also knows the operating capabilities of the ships simulating the enemy, and may be able to deduce associated operating constraints from them. For example, he knows more about US submarine noise levels and operating procedures than he does about likely opposition submarines. He also knows how many Orange submarines are actually participating in the exercise, and as he engages them, he may be able to estimate the size of the remaining threat by examining time and distance factors.

Classes of Exercise Artificiality

I. Battle Simulation Artificiality

  • No Real Ordnance
  • Safety Restrictions on Operations
  • Simulate Opposition Platforms
  • Imperfect Portrayal of Enemy Doctrine, Tactics, and Procedures

II. Scenario Artificiality

  • Forced Interaction
  • Focus on Small Piece of Battle Problem

III. Gamesmanship and Information

  • Exact Knowledge of Enemy OOB
  • Exact Knowledge of Enemy Platform Capabilities
  • Exploitation of Exercise Rules
  • Tactical Information Feedback Imperfections

A final exercise artificiality is the poor information flow from battle feedback. With real ordnance, undetected antiship missiles hit targets, explode, and thereby let the force know it is under attack. This does not occur in exercises. A force may never know it has been located and has been under attack. As a result, the force may continue to restrict air search radar usage when in a real battle, all radars would have been lit off shortly after the first explosion. The force may never be able to establish a maximum readiness posture. In a real battle, there would have been plenty of tactical information to cue the force that it is time to relax emission control. This kind of exercise artificiality affects both the engagement results and the flow of battle events.

In spite of these artificialities, exercises still provide perhaps the only source of operational information from an environment which even attempts to approximate reality. Though artificial in many ways, exercises on the whole are about as realistic as one can make them, short of staging a real war. This is especially true in the case of large, multi-threat, free-play fleet exercises. The only time a battle group ever operates against opposition may be during these exercises. So for lack of something better, exercises become a most important source of information.

The Nature of the Analysis Process

The analytical conclusions drawn from examining exercise operations are the output of a sequence of activities which collectively are called the exercise analysis process. While there is only one real analytical step in the process, it has become common to refer to the entire sequence as an analysis process. The individual steps themselves are (1) data collection, (2) reconstruction, (3) data reduction and organization, (4) analysis, and (5) reporting. It is of immense value to understand how the body of evidence supporting exercise results and conclusions is developed. We will examine the activities which go on in each step of a typical large, multi-threat, free-play fleet exercise, and end with some comments to make clear how the analyses of other kinds of exercises may be different.

Data collection. The first step is to collect data on the events that occur during an exercise. Think of the exercise itself as a process. All the people in the exercise make decisions and operate equipment and so create a sequence of events. The data which are collected are like measurements of certain characteristics of the process, taken at many different times during the exercise. The data are of several different types.

One type is keyed to particular events which occur during the exercise: a detection of an incoming missile, the order to take a target under fire, the act of raising a periscope on a submarine, a change in course, and so on. This sort of data is usually recorded in a log along with the time of its occurrence. Another kind of data is the perceptions of various participants during the exercise. These data are one person’s view of the state of affairs at one point in time. The individual could be an OTC, a pilot, or a civilian analyst observer. Another type of data is the evaluative report of a major participant, usually filed after the exercise is over. These provide the opinions of key participants on the exercise, on a particular operation and what went wrong, on deficiencies, etc. Finally, the memories of participants and observers also are a source of data. Their recollections of what went on during a particularly important period of the exercise may often be valuable.

There are two kinds of imperfections attendant to all this. The first is imperfections in the data collected: they don’t reflect accurately what they were intended to reflect. That is, some data elements are erroneous. The second imperfection stems from having taken measurements only at discrete points in time, and having only partial control over the points in time for which there will be data. A commander in the press of fighting a battle may not have the time to record an event, or his rationale for a crucial decision. An observer may likewise miss an important oral exchange of information or an important order. After the exercise is over, memories may fade and recollections become hazy. So the record of what went on during the exercise, the raw data, is imperfect.

Reconstruction. Once most of the raw recorded data are gathered in one place, reconstruction begins. In general, gross reconstruction provides two products: geographical tracks of ships and aircraft over time, and a chronology of important exercise events: times and places of air raids, submarine attacks, force disposition changes, deception plan executions, and so forth. Tentative identification of important time periods is made at this time. These periods may become the object of finer grained reconstruction later, as new questions emerge which the gross reconstruction is unable to answer. The major event chronology includes the main tactical decisions such as shifts in operating procedures, shifts in courses of action, executions of planned courses of action, and any others which might have affected what went on.
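
Gross reconstruction can be pictured as merging many unit-level logs into a single time-ordered chronology. The sketch below shows the idea in Python; the unit names, times, and events are hypothetical.

```python
import heapq

# Hypothetical per-unit event logs from the data collection step. Names,
# times, and events are invented; each log is assumed sorted by time.
logs = {
    "CV":  [("0710Z", "air raid detected bearing 045"),
            ("0715Z", "CAP vectored to intercept")],
    "DDG": [("0712Z", "SAM fired at target T-3"),
            ("0714Z", "splash assessed on T-3")],
    "SSN": [("0650Z", "periscope raised"),
            ("0705Z", "passive contact S-1 gained")],
}

# Merge the sorted unit logs into one chronology. Zero-padded same-day
# timestamps compare correctly as strings.
unit_streams = [
    ((time, unit, event) for time, event in log)
    for unit, log in logs.items()
]
for time, unit, event in heapq.merge(*unit_streams):
    print(f"{time}  {unit:<4} {event}")
```

In an actual reconstruction the merge is the easy part; the labor lies in resolving conflicts, errors, and gaps among the merged entries.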

Reconstruction is arguably the most important step in the exercise analysis. Many judgments are made at this level of detail which affect both the overall picture of what went on during the exercise as well as the validity of the results and conclusions. It is much the same kind of laboratory problem scientists face in trying to construct a database from a long, costly series of experiments. The basic judgments concern resolving conflicts among the data, identifying errors in data entries, and interpreting incomplete data. Judging each small case seems minor. However, the enormous number of small judgments collectively have a profound effect on the picture of exercise operations which emerges. The meticulous sifting which is required demands knowledgeable people in each area of naval operations as well as people possessed of a healthy measure of common sense. Judgments made during reconstruction permeate the remainder of the exercise analysis process. These judgments constitute yet another way for errors to enter the process.

Data Reduction and Organization. The line between reconstruction and data reduction and organization is blurred. At some point, most of the reconstruction is done and summary tables of information begin to emerge. In anti-air warfare, for example, tables will show the time of the air raid, raid composition, number detected, percent effectively engaged, and by whom. An antisubmarine warfare summary table might show contacts, by whom detected, validity of detection, attack criteria achievement, and validity of any attacks conducted. Other products based upon the reconstruction are tailored to the specific analysis objective or the specific question of interest. For example, in command and control, a detailed history of the flow of particular bits of information from their inception to their receipt by a weapon system operator might be constructed. In surface warfare, the exact sequence of detections and weapon firings is another example.
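
To make the reduction step concrete, the sketch below builds a miniature AAW summary table from hypothetical reconstructed records. The field names and figures are invented and follow no actual reporting format.

```python
from collections import defaultdict

# Hypothetical reconstructed AAW engagement records: raid identifier, time,
# raid size, number of incoming targets detected, number effectively
# engaged. All values are invented for illustration.
records = [
    {"raid": "R1", "time": "0712Z", "size": 8, "detected": 7, "engaged": 5},
    {"raid": "R2", "time": "1304Z", "size": 12, "detected": 10, "engaged": 8},
    {"raid": "R3", "time": "2158Z", "size": 6, "detected": 6, "engaged": 3},
]

# Reduce the records to one summary line per raid, plus overall totals.
print(f"{'Raid':<6}{'Time':<8}{'Size':>6}{'Det':>6}{'Eng':>6}{'% Eng':>8}")
totals = defaultdict(int)
for r in records:
    pct = 100.0 * r["engaged"] / r["size"]
    print(f"{r['raid']:<6}{r['time']:<8}{r['size']:>6}{r['detected']:>6}"
          f"{r['engaged']:>6}{pct:>7.0f}%")
    for key in ("size", "detected", "engaged"):
        totals[key] += r[key]
overall = 100.0 * totals["engaged"] / totals["size"]
print(f"{'All':<6}{'':<8}{totals['size']:>6}{totals['detected']:>6}"
      f"{totals['engaged']:>6}{overall:>7.0f}%")
```

Even in this toy version, judgments are embedded throughout: which fields to keep, how to aggregate the raids, and what to total.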

Two important acts occur during this step. First, certain data are selected as being more useful than other data, and then the individual bits are aggregated. Second, the aggregate data are organized into summary presentations (in the form of tables, figures, graphs, and so on) so that relations among the data can be examined. Obviously, the way in which data are aggregated involves judgments as to what data to include and what to exclude. These choices, and the selection of the form of the presentation itself, involve important judgments. As before, the judgments constitute another potential source of error.

Analysis. Analysis is the activity of testing hypotheses against the body of evidence, constructing new hypotheses, and eventually rejecting some and accepting others according to the rules of logic. While reconstructing, reducing, and organizing data, analysts begin to identify problem areas, speculate upon where answers to questions might lie, and formulate a first set of hypotheses concerning exercise operations. It is now time to examine systematically the body of evidence to ascertain whether the problems are real, whether answers to questions can indeed be constructed, and whether the evidence confirms or refutes the hypotheses. Arguments must be constructed from the evidence, i.e., from the summary presentations already completed, from others especially designed for the hypothesis in question, or from the raw data itself. The construction of such logical arguments is the most time-consuming step in the process and the most profitable. Yet the pressure from consumers for quick results, a justifiable desire, may severely cut down on the time available. In such situations, hypotheses may emerge from this step as apparently proven results and conclusions, without the benefit of close scrutiny. This is an all too common occurrence.

One kind of shortcut is to examine only evidence which tends to confirm a hypothesis. The analyst uses the time he has to construct as convincing an argument as he can in support of a contention. Given additional time, an equally persuasive argument refuting the contention might have been developed. Errors may also enter the analysis in the course of judging the relative strength of opposing bodies of evidence. Where such judgments are made, conventional wisdom would have both bodies of evidence appear along with an argument why one body seems stronger. In these ways the analysis step may introduce additional uncertainty into the analysis process.

Reporting. The final step in the analysis process is reporting. It is during this step that analysts record the fruits of their analytical labors. There are four basic categories of reports, some with official standing, some without. It is worth defining them, both to give some idea of the amount of analysis which underlies the results and to present the reports most likely to be encountered.

One kind of exercise report is for a higher-level commander. It details for him those exercise objectives which were met and those which were not. It is a post-operation report to a superior. Customarily it will describe training objectives achieved (i.e., did the assigned forces complete the designated training evolutions?), the resulting increase in readiness ratings for individual units, and an overview of exercise play and events. There is little if any analysis of exercise events to learn of problem areas, tactical innovations, or warfighting capabilities.

Another kind of exercise report is a formal documentation of the product of the analysis process. It concentrates on the flow of battle events in the exercise instead of the “training events.” These reports may or may not mention training objectives achieved and changes in unit readiness. A report might begin with a narrative description of battle events and results for different warfare areas. Summary tables, arguments confirming or refuting hypotheses, and speculations about problems needing further investigation form the bulk of the warfare sections. Conclusions and supporting rationale in the form of evidence from exercise operations may also be present. Bear in mind that the analysis process preceding the report may have been incomplete. In this case the report will include the narrative and customarily a large collection of reconstruction data and summary tables. The report will fall short of marshaling evidence into arguments for, or against, hypotheses. These reports are really record documents of raw and processed exercise data.

It can be difficult to distinguish between these two types of report if the latter also includes items called “conclusions.” Beware if there is an absence of argumentation, or if great leaps of faith are necessary for the arguments to be valid. Sometimes one gets the reconstruction plus analysis, other times just the reconstruction.

Units participating in exercises often submit their own message reports, called “Post-ex Reports” or “Commander’s Evaluations.” These reports seldom include any analytical results or conclusions. They do venture the unit commander’s professional opinions on exercise events and operations. These opinions, tempered by years of operational experience as well as by firsthand observation during the exercise, are a valuable source of information. They provide the perspective of a particular player on perceived problems, suspected causes, reasons for tactical decisions, and possibly even some tentative conclusions. Statements in these reports should be tested against the data for confirmation. Sometimes the messages also contain statements entitled “Lessons Learned.” Since such judgments are based upon the limited perspective of one unit, these lessons learned require additional verification, too. The unit CO probably will base this report on some of the data collected by his own unit. So the CO’s post-exercise report is a view of the exercise based upon a partial reconstruction using one unit’s data.

Finally, the Navy Tactical Development and Evaluation (TACD&E) program sanctions reports of exercise results and analyses as a formal Lessons Learned. NWP-0 defines a Lessons Learned as “…statements based on observation, experience, or analysis which indicates the state of present or proposed tactics.” Note that a Lessons Learned is specific to a tactic or group of tactics. Evidence developed in an exercise often provides the analytical basis for such statements. NWP-0 goes on to state that “…the most useful Lessons Learned are brief case studies which tell what happened and why certain key outcomes resulted.” Exercise operations can often provide the “cases” and exercise analysis can provide the “why” certain things happened. Again it is necessary to examine carefully the argumentation in Lessons Learned, to be sure the analysis process applied to the individual cases hasn’t been curtailed after the reduction and organization step.

Variations. The analysis process for a small specialized exercise has a slightly different manifestation from that in a large, free-play fleet exercise. Consider an exercise designed to test tactics for the employment of a new sonar and to train units to execute those tactics. It might involve three or four ships outfitted with the sonar pitted against a submarine in a controlled scenario. If there is high interest in innovative ways to employ the system tactically, data collection might be better than average, since many hands can be freed from other warfare responsibilities for data collection. The operating area might be an instrumented range on which very precise ship tracks can be recorded automatically. If the planning is thorough, the design of the exercise (the particular pattern of repeated engagements, with careful varying of each important factor) ensures that just the right data are collected to let analysts sort among the different tactics. The data collected would then leave fewer holes relative to the exact questions of interest. So one might end up with fewer errors in the data and, simultaneously, less missing data.

The quality of reconstruction will still depend on the skill of the reconstructors. With only a few ships to worry about and good data, however, not many people are required to do a good job; the job is small. If the exercise was designed carefully to shed light on specific questions, data reduction and organization work smoothly toward pre-identified goals: specific summary tables, graphs, or figures. In fact, from the analytical viewpoint, the whole exercise may as well have been conducted to generate reliable numbers to go into the tables and graphs. The analysis step is more likely to proceed smoothly too, since the evidence has been designed specifically to confirm or refute the hypotheses of interest.

The analysis process of other exercises will likely fall between these two extremes. The degree to which exercise play is controlled and constrained by the operating area’s size and by various units’ tactical autonomy will determine the ease with which the analysts and data collectors can finish their work. Normally, the analysis is best in the small, controlled exercises designed to answer specific questions or to train units in specific tactics. As the exercise grows in size and more free-play is allowed, it is harder to collect data to answer the host of questions which may become of interest.

Limitations on the Use of Exercise Analysis

The reason for analyzing exercise operations is to learn from them. One learns about tactics, readiness levels of units and groups, hardware operational capabilities, and advantages or disadvantages we may face in certain scenarios. Let us see how exercise artificialities and an imperfect analysis process limit what we can learn.

Hardware operational capabilities can be dispensed with quickly. Special exercises are designed to measure how closely systems meet design specifications. The measures are engineering quantities such as watts per megahertz, time delay in a switching mechanism, sensitivity, and so on. As the human element enters either as the operator of the equipment or in a decision to use the system in a particular way, one moves into the realm of tactics.

Warfare Capabilities. One problem in learning about warfare capabilities from exercises lies in translating the exercise results into those one might expect in an actual battle. Setting aside the measurement errors which may crop up in the analysis process, consider the exercise artificialities. Suppose a battle group successfully engages 70 percent of the incoming air targets. This does not mean that the force would successfully engage 70 percent of an air attack in a real battle. Assuming identical scenarios and use of the same tactics, some artificialities make the exercise problem easier, others harder, than the real-world battle problem. There is no known accurate way of adjusting for these artificialities. In fact, only recently has there been general acceptance of the fact that the artificialities both help and hinder. A second problem is the lack of a baseline expected performance level for given forces in a given scenario. A baseline level would describe how well one expected a specific force to do, on average, against a given opposition in a given scenario. One would compare exercise results with baseline expectations to conclude that the exercise force is worse or better than expected. But no such baseline exists; that is, there are no models of force warfare which can predict the outcome of an exercise battle. Thus, we don’t know where the “zero” of the warfare effectiveness index lies; neither do we know the forms of the adjustments necessary to translate exercise results into corresponding real-world results.

One might speculate that it would at least be possible to establish trends in warfare effectiveness from exercises. However, this too is difficult. The exercise scenarios as well as the forces involved will change over time. In any particular exercise, the missions, the geography, the forces (e.g., a CV rather than a CVN), and the threat simulation are likely to be different from those in any other exercise. Some scenarios may be particularly difficult, while others are easy. Comparing across exercises requires a way of adjusting for these differences. It requires knowing how a given force’s capabilities change with each of these factors, and right now we don’t know how. Of course, solving the problems of adjusting for exercise artificialities and of establishing an expected performance level for given battle problems would be a move in the right direction. But imperfections in the steps of the analysis process compound these conceptual difficulties. Recall that the data are imperfect to begin with, and errors enter during reconstruction and data reduction and organization. The numbers built from these data then have some error associated with them. These are the numbers which appear in summary tables and graphs depicting warfare effectiveness during an exercise. They are imprecise. This means that changes over time, even in exercises with roughly equivalent scenarios, must be large to be significant. Otherwise, such differences might only be statistical variations. Exactly how large they have to be is still not clear, but “big” differences bear further investigation.
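
A back-of-the-envelope calculation illustrates why. The sketch below assumes a simple binomial model and invented raid sizes, and computes an approximate 95 percent confidence interval around an observed 70 percent engagement fraction.

```python
import math

def engagement_interval(engaged, total, z=1.96):
    """Approximate 95 percent confidence interval for an observed
    engagement fraction, via the normal approximation to the binomial."""
    p = engaged / total
    half_width = z * math.sqrt(p * (1.0 - p) / total)
    return max(0.0, p - half_width), min(1.0, p + half_width)

# Invented sample sizes: 70 percent of incoming targets engaged, out of
# small and larger total raid counts.
for total in (10, 30, 100):
    engaged = round(0.7 * total)
    low, high = engagement_interval(engaged, total)
    print(f"{engaged}/{total} engaged: {low:.0%} to {high:.0%}")
```

With ten raiders, the interval runs from roughly 40 to nearly 100 percent: a single exercise cannot distinguish a force that would really engage half the raid from one that would engage nearly all of it.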

What then is the usefulness of such numbers? They are useful because they result from examining the exercise from different viewpoints, and they allow judgment to be employed in a systematic manner. Without them one is completely in the dark. Clearly it is better to merge many different perspectives on how the operations went than to rely on just one. The analysis process does this by examining objectively data collected from many different positions. It provides a framework for systematic employment of professional judgment concerning the effect of artificialities on exercise results. Recognizing each artificiality, professional judgment can be applied to assess the influence of each individually as opposed to the group as a whole. While obviously imprecise, the numbers appearing in the summary presentations, together with an understanding of the artificialities, the contextual factors, and the measurement errors, are better than a blind guess.

Evaluating an individual unit’s warfighting capability (as opposed to a group’s) is not easy either. The normal measures of unit readiness which come out of an exercise are at a lower mission level. An air squadron may have high sortie rates, and may be able to get on and off the carrier with ease, but the question of interest may be how effectively it contributed to the AAW defense. The link between task group AAW effectiveness and high sortie rates or pilot proficiency is not well understood. So while measurements at that level may be more precise than those at a higher level, and while the individual actions are more like actions in a real battle, it is not clear how measures of effectiveness at this level contribute to success at the group or force level. There is a need to research this crucial link between unit performance of low-level mission actions and group mission effectiveness.

Tactics. As a vehicle for evaluating tactics, exercise analysis fares pretty well. Exercise artificialities and the analysis process still limit what we conceivably could learn and, practically, what we do learn.

The main artificiality to be careful of is threat simulation. Generally there are situations of short duration in an exercise which closely approximate those occurring in real battles, some in crucial respects. It is possible, then, to test a tactic in a specific situation which, except for the threat simulation, is realistic. The tactic may work well in the situation, but would it work against a force composed of true enemy platforms? This may be more problematic.

The limitations due to the analysis process stem more from improper execution than from flaws in the process itself. To date, exercise analysis has failed to distinguish regularly between problems of tactical theory and those of tactical execution. If the analysis concludes that the employment of a tactic failed to achieve a desired result, it seldom explains why. There is no systematic treatment of whether the tactic was ill-conceived, employed in the wrong situation, or executed clumsily. The idea of the tactic may be fine; it may simply have been employed in the wrong situation or executed poorly. In the event that a tactic does work, that is, the overall outcome is favorable, scant explicit attention is paid to the strength of the tactic’s contribution to the outcome. The outcome might have been favorable with almost any reasonable tactic because, say, one force was so much stronger than the other. Remember too that the data upon which the tactical evaluation is based are the same imperfect data as before. It is true that in some evaluations the conclusion may be so clear as to swamp any reasonable error level in the data. Even if the error is 30 percent (say in detection range, or success ratio), the conclusion still might hold.

There are certain analytical standards which are achievable for tactics evaluation in exercises. The tactic or procedure should be defined clearly. The analysis should address whether the tactic was executed correctly and whether it was employed in the appropriate situation. It should answer the question of whether the influence of other contextual factors (aspects of the scenario for example) dominated the outcome. It should identify whether the tactic will only work when some factor is present. It should address whether the tactic integrates easily into coordinated warfare. Even if all these conditions are satisfied, the exercise may only yield one or two trials of the tactic. Definitive tests require more than one or two data points.

Scenarios. Judging how well Blue or Orange does in a scenario depends on the accuracy of the warfare capability assessments, the fidelity of the threat simulation, and the skill with which exercise results can be translated into real world expectations. It is clear from previous discussions on each of these topics that there are problems associated with each. Consequently, what we can learn about a scenario from playing it in an exercise is limited.

At best one can make gross judgments; an example might be “a CVTG cannot long operate from a Modloc in this specific area of the world without more than the usual level of ASW assets.” The exercise will provide an especially fertile environment for brainstorming about the scenario, and in a systematic way. The kinds of tactical encounters which are likely to cause problems will surface. Those engagements or operations which are absolutely crucial to mission success may also become clear. Serious thorough consideration of many courses of action may only occur in the highly competitive environment of an exercise. This can lead to the discovery of unanticipated enemy courses of action.

There are pitfalls of course in making even these gross assessments. For example, care must be taken to recognize very low readiness levels by exercise participants as a major contributor to the exercise outcome. But on the whole it should be possible to identify scenarios which are prohibitively difficult and should, therefore, be avoided. It may be possible to confirm what forces are essential for mission success and the rough force levels required.

What kinds of things might one reasonably expect to learn from exercises? First and foremost, the product of exercise analysis is well suited to correcting misperceptions about what happened during the exercise. It provides a picture of the exercise which is fashioned logically from data taken from many key vantage points instead of just one or two. As such, it is likely to be closer to the truth than a sketchy vision based on the experience of a single participant in the exercise. Second, there is a capability to make some quantitative comment on warfare effectiveness. All the caveats developed earlier in the essay still apply, of course. It is safest to assume that there is a large error in the measures of effectiveness which are used. And a single exercise usually provides but a single data point of warfare effectiveness; extrapolation from a single such point is very risky.

Exercises are a very good vehicle for identifying any procedural difficulties which attend tactical execution. The exercise and analysis also provide a fertile opportunity to rethink the rationale underlying a tactic. More definitive evidence can be developed on ill-conceived tactics if the tactic was executed correctly and employed appropriately. The exercise and analysis also present an opportunity to observe the performance of the people and the systems. Examination may uncover areas where more training is needed, where operating procedures are not well understood, or where explicit operating and coordination procedures are absent.

Sweeping conclusions and strong, definitive judgments of capabilities, tactical effectiveness, and scenario advantages should be warning flags to exercise report readers. The reader should reassure himself that the exercise scenario, the exercise goal, and the tactical context are amenable to drawing such conclusions. For example, battle group tactical proficiency cannot be easily investigated in small, controlled exercises. Nor do capabilities demonstrated in easy battle problems imply like capabilities in harder, more realistic battle problems. The message is to read exercise reports with caution, continuously testing whether it makes sense that such results and conclusions could be learned from the exercise.

Dr. Thompson was the CNA field representative to the Commander, Sixth Fleet, from 1981 to 1984. He is currently a principal research scientist at CNA.

Featured Image: At sea aboard USS John F. Kennedy (CV 67) Mar. 18, 2002 — Air Traffic Controller 1st Class Michael Brown monitors other controlmen in the ship’s Carrier Air Traffic Control Center (CATCC) as aircraft conduct night flight operations. (U.S. Navy photo by Photographer’s Mate 2nd Class Travis L. Simmons.)

Publication Release: Alternative Naval Force Structure

By Dmitry Filipoff

From October 3 to October 7, 2016, CIMSEC ran a topic week in which contributors proposed alternative naval force structures to spur thinking on how the threat environment is evolving, what opportunities for enhancing capability can be seized, and how navies should adapt accordingly. Contributors had the option to write about any nation’s navy across a variety of political contexts, budgetary environments, and time frames.

Relevant questions include: what is the right mix of platforms for a next-generation fleet, how should those platforms be employed together, and why will their capabilities endure? All of these decisions reflect a budgetary context that involves competing demands and in which strategic imperatives are reflected in the warships a nation builds. These decisions guide the evolution of navies.

In a modern age defined by rapid change and proliferation, we must ask whether choices made decades ago about the structure of fleets remain credible in today’s environment. Navies will be especially challenged to remain relevant in such an unpredictable era. A system where an average of ten years of development precedes the construction of a lead vessel, where ships are expected to serve for decades, and where classes of vessels are expected to serve through most of a century is more challenged than ever before.

Authors:
Steve Wills
Javier Gonzalez
Tom Meyer 
Bob Hein
Eric Beaty
Chuck Hill
Jan Musil
Wayne P. Hughes Jr.

Editors:
Dmitry Filipoff
David Van Dyk
John Stryker

Download Here

Articles:

The Perils of Alternative Force Structure by Steve Wills

“Even the best alternative force structure that meets strategic needs, is more affordable than previous capabilities, and outguns the enemy could be subject to obsolescence before most of its units are launched. These case studies in alternative force structure suggest that such efforts are often less than successful in application.”

Unmanned-Centric Force Structure by Javier Gonzalez

“The conundrum and implied assumption, with this or similar future force structure analyses, is that the Navy must have at least a vague understanding of an uncertain future. However, there is a better way to build a superior and more capable fleet—by continuing to build manned ships based on current and available capabilities while also fully embracing optionality (aka flexibility and adaptability) in unmanned systems.”

Proposing A Modern High Speed Transport – The Long Range Patrol Vessel by Tom Meyer

Is the U.S. Navy moving from an era of exceptional “ships of the line” – including LHA’s & LPD’s, FFG’s, CG’s, DDG’s, SSN’s and CVN’s – to one filled with USV’s, UAV’s, LCS’s, CV’s, SSK’s and perhaps something new – Long Range Patrol Vessels (LRPV’s)? But what in the world is an LRPV? The LRPV represents the 21st century version of the WWII APD – High Speed Transports.

No Time To Spare: Drawing on History to Inspire Capability Innovation in Today’s Navy by Bob Hein

“Designing and building new naval platforms takes time we don’t have, and there is still abundant opportunity to make the most of existing force structure. Fortunately for the Navy, histories of previous wars are a good guide for future action.”

Enhancing Existing Force Structure by Optimizing Maritime Service Specialization by Eric Beaty

“Luckily, the United States has three maritime services—the Navy, Coast Guard, and Marine Corps—with different core competencies covering a broad range of naval missions. Current investments in force structure can be maximized by focusing the maritime services on their preferred missions.”

Augment Naval Force Structure By Upgunning The Coast Guard by Chuck Hill

“The Navy should consider investing high-end warfighting capability in the Coast Guard to augment existing force structure and provide a force multiplier in times of conflict. A more capable Coast Guard will also be better able to defend the nation from asymmetrical threats.”

A Fleet Plan for 2045: The Navy the U.S. Ought to be Building by Jan Musil

“2045 is a useful target date, as there will be very few of our Cold War era ships left by then, therefore that fleet will reflect what we are building today and will build in the future. This article proposes several new ship designs and highlights enduring challenges posed by the threat environment.”

Closing Remarks on Changing Naval Force Structure by CAPT Wayne P. Hughes Jr., USN (Ret.)

“The biggest deficiencies in reformulating the U. S. Navy’s force structure are (1) a failure to take the shrinking defense budget into account which (2) allows every critic or proponent to be like the blind men who formulated their description of an elephant by touching only his trunk, tail, leg, or tusk. To get an appreciation of the size of the problem you have to describe the whole beast, and what is even harder, to get him to change direction by hitting him over the head repeatedly.”

Dmitry Filipoff is CIMSEC’s Director of Online Content. Contact him at [email protected].

Featured Image: PACIFIC OCEAN (Oct. 27, 2017) Ships from the Theodore Roosevelt Carrier Strike Group participate in a replenishment-at-sea with the USNS Guadalupe. (U.S. Navy photo by Mass Communication Specialist Seaman Morgan K. Nall/Released)

The Navy’s New Fleet Problem Experiments and Stunning Revelations of Military Failure

By Dmitry Filipoff

Losing the Warrior Ethos

“…despite the best efforts of our training teams, our deploying forces were not preparing for the high-end maritime fight and, ultimately, the U.S. Navy’s core mission of sea control.” –Admiral Scott Swift 1

Today, virtually every captain in the U.S. Navy has spent most of his or her career in the post-Cold War era, when high-end warfighting skills were de-emphasized. After the Soviet Union fell, there was no navy that could plausibly contest control of the open ocean against the U.S. In taking stock of this new strategic environment, the Navy announced in the major strategy concept document …From the Sea (1992) a “change in focus and, therefore, in priorities for the Naval Service away from operations on the sea toward power projection.”2 This change in focus was toward missions that made the Navy more relevant in campaigns against lower-end threats such as insurgent groups and rogue nations (Iran, Iraq, North Korea, Libya) that were the new focus of national security imperatives. None of these competitors fielded modern navies.

The relatively simplistic missions the U.S. Navy conducted in this power projection era included striking inland targets with missiles and airpower, maintaining presence through patrolling in forward areas, and conducting security cooperation through partner development engagements. The focus on this skillset has led to an era of complacency in which the de-emphasized high-end warfighting skills actually atrophied to a significant degree. This possibility was anticipated in another Navy strategy document that sharpened thinking on adapting for a power projection era, Forward…from the Sea (1994): “As we continue to improve our readiness to project power in the littorals, we need to proceed cautiously so as not to jeopardize our readiness for the full spectrum of missions and functions for which we are responsible.”3

Now the strategic environment has changed decisively. Most notably, China is aggressively rising, challenging international norms, and rapidly building a large, modern navy. Because of the predominantly maritime nature of the Pacific theater, the U.S. Navy may prove the most important military service for deterring and winning a major war against this ascendant and destabilizing superpower. If things get to the point where offensive sea control operations are needed and the fleet is gambled in high-end combat, then it is very likely that the associated geopolitical stakes of victory or defeat will be historic. The sudden rise of a powerful maritime rival is coinciding with the atrophy of high-end warfighting skills and the introduction of exceedingly complex technologies, making the recent stunning revelations about how the U.S. Navy has failed to prepare for great power war especially chilling.

Admiral Scott Swift, who leads U.S. Pacific Fleet (the U.S. Navy’s largest and most prioritized operational command), candidly revealed that the Navy was not realistically practicing high-end warfighting skills and operations, including sinking modern enemy fleets, until only two years ago. Ships were not practicing against other ships in the realistic, free-play environments necessary to train and refine tactics and doctrine to win in great power war.

In a recent U.S. Naval Institute Proceedings article, Admiral Swift detailed training and experimentation events occurring in a series of “Fleet Problems.” These events take their name and inspiration from a years-long series of interwar-period fleet experiments and exercises that profoundly influenced how the Navy transformed itself in the run-up to World War Two. While ships practiced against ships in the interwar-period Fleet Problems, the modern version began with the creation of a specialized “Red” team well-versed in wargaming concepts and competitor thinking born from intelligence insights. This Red team is pitted against the Navy’s frontline commanders in Fleet Problem scenarios that simulate high-end warfare through the command of actual warships. What makes their creation an admission of grave institutional failure is that this Red team is leading the first series of realistic high-threat training events at sea in recent memory.

The Navy’s units should be able to practice high-end warfighting skills against one another without the required participation of a highly-specialized Red team adversary to present a meaningful challenge. But Adm. Swift strikingly admits that the Navy’s current system of certifying warfighting skills is not representative of real high-end capability because the Navy “never practiced them together, in combination with multiple tasks, against a free-playing, informed, and representative Red.” Furthermore, “individual commanders rarely if ever [emphasis added] had the opportunity to exercise all these complex operations against a dynamic and thoughtful adversary.”

Core understanding on what makes training realistic and meaningful was absent. Warfighting truths were not being discovered and necessary skills were not being practiced because ships were not facing off against other ships in high-end threat scenarios to test their abilities under realistic conditions. If the nation sent the Navy to fight great power war tomorrow, it would amount to a coach sending a team that “rarely if ever” did practice games to a championship match.

These exercises are not just experiments that push the limits of what is known about modern war at sea. They are also experimental in that they are now figuring out whether the U.S. Navy can even do what it has said it could do, including the ability to sink enemy fleets and establish sea control. According to Adm. Swift, the Navy had “never performed” a “critical operational tactic that is used routinely in exercises and assumed to be executable by the fleet [emphasis added]” until it was recently tested in a Fleet Problem. The unsurprising insight: “having never performed the task together at sea, the disconnect” between what the Navy thought it could perform and what it could actually do “never was identified clearly.” Adm. Swift concludes, “It was not until we tried to execute under realistic, true free-play conditions that we discovered the problem’s causal factors…” In the Fleet Problems, training and experimentation have become one and the same.

Why did the Navy assume it could confidently execute critical operational tactics it had never actually tried in the first place? And if the Navy assumed it could do it, then maybe the rest of the defense establishment and other nations thought so, too. Does this profound disconnect also hold true for foreign and allied navies? Is the unique tactical and doctrinal knowledge being represented by the specialized Red team an admission that competitors are training their units and validating their warfighting concepts through more realistic practice? Even though it is impossible to truly simulate all the chaos of real combat, only now are important ground truths of high-end naval warfare just being discovered which could prompt major reassessments of what the Navy can really contribute in great power war.

The entirety of the train, man, and equip enterprise that produces ready military forces for deployment must be built upon a coherent vision of how real war works. The advent of the Fleet Problems suggests that if one were to ask the Navy’s unit leaders what their real-world vision is of how to fight modern enemy warships as part of a distributed and networked force their responses would have little in common. If great power war breaks out tomorrow, the Navy’s frontline commanders could be forced to improvise warfighting fundamentals from the very beginning. Simple lessons would be learned at great cost in blood and treasure.

Many of the major revelations coming from the Fleet Problems are not unique innovations, but rather symptoms of deep neglect for a core element of preparing for war – pitting real-life units against one another to test people, ideas, and technology under realistic conditions. Adm. Swift surprisingly describes using a Red team to connect intelligence insights, wargaming concepts, training, and real-life experimentation as “new ground.” Swift also noted that as the Navy attempted its purported concepts of operations in the Fleet Problems, “it became apparent there were warfighting tasks that were critical to success that we could not execute with confidence.” In a normal context, it would not always be noteworthy for a military to invalidate concepts or realize it can’t do something well. What makes these statements revelations is that the process of testing concepts and people in realistic conditions simulating great power war has only just begun.

This is a failure with profound implications. The insight that comes from training and experimenting against realistic threats forms a critical foundation for the rest of the military enterprise. Realistic experimentation and training is indispensable for developing meaningful doctrine, tactics, and operational art. Much of the Navy’s advanced concept development on great power war has not been validated by real-world testing. The creation of the new Fleet Problems is fundamentally an admission that not only is the Navy unsure of its ability to execute core missions, but that major decisions about its future development were built on flawed assumptions. While the Fleet Problems are finally injecting much-needed realism into the Navy’s thinking, their creation reveals that the entire defense establishment has suffered a major disconnect from the real character of modern naval warfare. The Fleet Problems have likely invalidated years of planning and numerous basic assumptions.

The Navy must now account for how many years it did not exercise its forces in meaningful, high-end threat training in order to understand just how deeply this lack of realistic experience has penetrated its ranks. There should be no doubt that this has skewed decision-making at senior levels of leadership. How many leaders making important decisions about capability development, training, and requirements have zero firsthand experience commanding forces in high-end threat training? Could the fleet commanders operate networked and distributed formations if war breaks out? Has best military advice on the value of naval power for the nation’s security interests been predicated on untested warfighting assumptions?

To Train the Fleet for What?

“The department directs that a board of officers, qualified by experience, be ordered to prepare a manual of torpedo tactics which will be submitted by the department to the War College, and after such discussion and revision as may be necessary, will be printed and issued to the torpedo officers of the service for trial. This order has not been complied with. If it had been, it would doubtless have resulted in a sort of tentative doctrine which, though it might well have been better than the flotilla’s first attempt, could not have been as complete or as reliable as one developed through progressive trials at sea; and it might well have contained very dangerous mistakes.” –William S. Sims 4

Adm. Swift reveals that it was even debated whether free-play elements should play a role at all in certifying units to be combat ready: “there was concern in some circles that adding free-play elements to the limited time in the training schedule would come at the cost of unit certification. Others contended it was unrealistic and unfair to ask units that were not yet certified to perform our most difficult warfighting tasks.” The degree of certification is moot. Sailors are failing anyway because the shift in warfighting focus toward great power competition has not been matched by new training standards, and has therefore not penetrated down to the unit level.

Adm. Swift notes startling lessons: “In some scenarios, we learned that the ‘by the book’ procedure can place a strike group at risk simply because our standard operating procedures were written without considering a high-end wartime environment.” This is a direct result of the change in focus toward power projection missions against threats without modern navies. According to Adm. Swift, the regular exercise schedule consisted of missions including “maritime interdiction operations, strait transits, and air wings focused on power projection from sanctuary,” which meant that forces were “not preparing for the high-end maritime fight and, ultimately, the U.S. Navy’s core mission of sea control.” In this new context of a high-end fight in a Fleet Problem, according to Adm. Swift, “If we presented an accurate—which is to say hard—problem, there was a high probability the forces involved were going to fail. In our regular training events, that simply does not happen at the rate we assess will occur in war.” The Fleet Problems are revealing that Navy units are not able to confidently execute high-end warfighting operations regardless of the state of their training certifications.

These revelations demonstrate that the way the Navy certifies its units as ready for war is broken. A profound disconnect exists between the Navy’s certification and training processes for various warfighting skills and what is actually required in war. Entire sets of training certifications and standard operating procedures born of the post-Cold War era are inadequate for gauging the Navy’s ability to fight great power conflict.

Mentally Absent in the Midst of the Largest Technological Revolution

“The American navy in particular has been fascinated with hardware, esteems technical competence, and is prone to solve its tactical deficiencies with engineering improvements. Indeed, there are officers in peacetime who regard the official statement of a requirement for a new piece of hardware as the end of their responsibility in correcting a current operational deficiency. This is a trap.” –Capt. Wayne P. Hughes, Jr. (Ret.) 5

Even setting aside the major shift in national security priorities toward lower-end threats, the astonishing pace of technological change is an extremely volatile factor in the strategic environment, one that must be constantly paced by realistic training and experimentation under free-play conditions. Without such testing, the modern technological foundation upon which tactics and doctrine are devised is built on sand.

The advent of the information age has unlocked an unprecedented degree of flexibility for the conduct of naval warfare, as platforms and payloads can be connected in real time in numerous ways across great distances. This has resulted in a military-technical revolution as marked as when iron and steam combined to overtake wooden ships of sail. A single modern destroyer fully loaded with network-enabled anti-ship missiles has enough firepower to singlehandedly sink the entirety of the U.S. Navy’s WWII battleship and fleet carrier force.6 On the flip side, another modern destroyer could field the defensive capability to stop that same missile salvo.

Warfighting fundamentals are being reappraised in an information-focused context. The process by which forces find, target, and engage their opponents, known as the kill chain, is enabled by information at each individual step of the sequence. A key obstacle at each step is meeting that information burden in order to advance to the next. This challenge is exacerbated by the great distances of open-ocean warfare and the difficulty of getting timely information to where it needs to be while the adversary seeks to deceive and degrade the network. Technological advancement means the kill chain’s information burdens can increasingly be both met and interfered with.

The smarter the arrow gets, the less information the archer needs before shooting (a toy sketch of this tradeoff follows below). Information-age advancements have therefore wildly increased the power of the most destructive conventional weapon ever put to sea, the autonomous salvo of swarming anti-ship missiles.
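
The archer-and-arrow tradeoff can be made concrete with a toy model. The sketch below is a minimal illustration only, assuming hypothetical stage names and thresholds rather than doctrinal values: it treats the kill chain as a series of information gates and shows how greater seeker autonomy lowers the information burden the shooter must satisfy before firing.

```python
# Toy model of a kill chain as a sequence of information gates.
# Stage names and numeric thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    required_info: float  # information needed to advance (0.0 to 1.0)

KILL_CHAIN = [
    Stage("find", 0.3),
    Stage("fix", 0.5),
    Stage("target", 0.8),
    Stage("engage", 0.9),
]

def can_launch(available_info: float, seeker_autonomy: float) -> bool:
    """Whatever the weapon can resolve on its own is subtracted from
    each stage's burden, so a smarter 'arrow' lets a less-informed
    'archer' shoot."""
    for stage in KILL_CHAIN:
        burden = max(0.0, stage.required_info - seeker_autonomy)
        if available_info < burden:
            return False  # the chain stalls at this stage
    return True

# The same shooter-supplied picture fails for a dumb weapon but
# suffices once the seeker can close part of the gap itself.
print(can_launch(available_info=0.6, seeker_autonomy=0.0))   # False
print(can_launch(available_info=0.6, seeker_autonomy=0.35))  # True
```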

The next iteration of these missiles will have a robust suite of onboard sensors, datalinks, jamming capability, and artificial intelligence. These capabilities will combine to build resilience into the kill chain by containing as much of that process as possible within the missile itself. More and more of the need for the most up-to-date information will be met by the missile swarm’s own sensors and decided upon by its artificial intelligence. Once fired, these missiles are on a one-way trip, allowing them to discard survivability for the sake of seizing more opportunities to collect and pass information. Unlike most other information-gaining assets, these missiles will be able to close with potential targets to resolve lingering concerns of deception and identification. The missile’s infrared and electro-optical capabilities in particular will provide undetectable, jam-resistant sensors for final identification that will prove challenging to deceive with countermeasures. On final approach, the missile will pick a precise point on the ship to guarantee a kill, such as where ammunition is stored. 

The fiercest threat in naval warfare now takes the form of the autonomous networked missile salvo, where the Observe, Orient, Decide, and Act (OODA) decision cycle will run within the swarm at machine speed. Is the Navy ready to use and defend against these decisive weapons?

The Navy may feel inclined to say yes to the latter question sooner because shooting things out of the sky has been a special focus of the Surface Navy and naval aviation since WWII. The latest technology that will take this capability into the 21st century, the Naval Integrated Fire Control – Counter-Air (NIFC-CA) networking capability, will help unite the sensors and weapons of the Navy’s ships and aircraft. Aircraft will be able to use a warship’s missiles to shoot down threats the ship can’t see itself. This is decisive because anti-ship missiles will make their final approach at low altitudes below the horizon where they can’t be detected by a ship’s radar. Modern warships can be forced to wait until the final seconds to bring most of their defensive firepower to bear on a supersonic inbound missile salvo unless a networked aircraft can cue their fires with accurate sensor information from high above.
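
The geometry behind this can be sketched with the standard radar-horizon approximation. The altitudes and speed below are illustrative assumptions, not actual system parameters, but they show why an elevated sensor buys minutes of warning against a sea-skimmer where a mast-mounted radar buys only seconds.

```python
# Radar-horizon arithmetic using the common 4/3-earth approximation:
# horizon distance in km ~= 4.12 * sqrt(height in meters).
# All altitudes and speeds are assumptions chosen for illustration.
from math import sqrt

def detection_range_km(sensor_height_m: float, target_height_m: float) -> float:
    return 4.12 * (sqrt(sensor_height_m) + sqrt(target_height_m))

SKIMMER_ALT_M = 5.0       # assumed sea-skimming missile altitude
MISSILE_SPEED_MS = 680.0  # roughly Mach 2 at sea level (assumption)

ship_km = detection_range_km(20.0, SKIMMER_ALT_M)   # mast-mounted radar
air_km = detection_range_km(7600.0, SKIMMER_ALT_M)  # high-flying aircraft

print(f"shipboard radar sees the skimmer at ~{ship_km:.0f} km "
      f"(~{ship_km * 1000 / MISSILE_SPEED_MS:.0f} s before impact)")
print(f"elevated sensor sees it at ~{air_km:.0f} km "
      f"(~{air_km * 1000 / MISSILE_SPEED_MS / 60:.1f} min of warning)")
```

Under these assumed numbers the airborne sensor extends warning time by roughly an order of magnitude, which is the case for netting aircraft sensors to shipboard weapons in miniature.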

This makes mastering NIFC-CA perhaps the most important defensive capability the fleet needs to train for, but it will involve a steep learning curve. Speaking on the challenges of making this capability a reality, then-Captain Jim Kilby remarked that it involves “a level of coordination we’ve never had to execute before and a level of integration between aircrews and ship crews.”7 Is the Navy truly practicing and refining this capability in realistic environments? At least three years before the Fleet Problems started, the Chief of Naval Operations reported that concepts of operation were established for NIFC-CA.8

Because there was a lapse in necessary realistic experimentation at sea, there should be little confidence that naval forces deeply comprehend how information has revolutionized naval warfare and how modern fleet combat will play out. The way the Navy thought it would operate may not actually make sense in war, a key insight that experimentation will reveal, as it did in the interwar period.

Training and Experimentation for Now and Tomorrow

“If…the present system fails to anticipate and to adequately provide for the conditions to be expected during hostilities of such nature, it is obviously imperative that it be modified; wholly regardless of the effect of such change upon administration or upon the outcome of any peace activity whatsoever.” –Dudley W. Knox 9

The extent to which the Navy’s current capabilities have been tested by meaningful real-world training and experimentation is now in doubt. This doubt naturally extends to things that the Navy has just fielded or is about to introduce to the fleet. Yet Adm. Swift revealed a fatal flaw in the Fleet Problems that is not in keeping with a high-velocity learning or warfighting-first mindset: “We are not notionally employing systems and weapons that are not already deployed in the fleet. Each unit attacks the problem using what it has on hand (physically and intellectually) today.”

It is a mistake not to train forces to use future weapons. Units must experiment with capabilities not yet in the fleet to stay ahead of the ever-quickening pace of change. Realism should occasionally be sacrificed to anticipate the basic parameters of capabilities that are about to be fielded. Sailors should be thinking about how to employ the advanced anti-ship missiles about to hit the fleet that feature hundreds of miles of range, such as the Long Range Anti-Ship Missile (LRASM), the Standard Missile 6, and the Maritime Strike Tomahawk. These capabilities are far more versatile than the Navy’s only current ship-to-ship missile, the very short-range and antiquated Harpoon, which the Navy first fielded over 40 years ago and which cannot even be carried in its vertical launch cells. Getting sailors to think about weapons before their introduction will mentally prepare them for new capabilities and warfighting realities.

Information-enabled capabilities have come to dominate every facet of offense, defense, and decision-making. Do naval aviators know how to retarget friendly salvos of networked missiles amidst a mass of deception and defensive counter-air capabilities while simultaneously leveraging warship capabilities to target enemy missile salvos? Do fleet commanders know how to maneuver numerous aerial network nodes to fuse sensors and establish flows of critical information that react to emerging threats and opportunities? Can commanders effectively manage and verify enormous amounts of information while the defense establishment and industrial base are being aggressively hacked by a great power? According to the Navy’s current service strategy document, A Cooperative Strategy for 21st Century Seapower, warfare concept development should involve efforts to “…re-align Navy training, tactics development, operational support, and assessments with our warfare mission areas to mirror how we currently organize to fight.” 10

Despite all the enormous effort and long wait times that accompany the introduction of a new system, the Fleet Problems remind the defense establishment that the Navy cannot be expected to know how to use a system simply because it is fielded. Warfighting certifications must be rapidly redefined and benchmarked by the Fleet Problems in order to pace technology and keep the Navy credible. This will require that a significant amount of time be dedicated to real-world experimentation.

So How Does the Navy Spend Its Time?

“Our forward presence force is the finest such force in the world. But operational effectiveness in the wrong competitive space may not lead to mission success. More fundamentally, has the underlying rule set changed so that we are now in a different competitive space? How will we revalue the attributes in our organization?” –Vice Admiral Arthur K. Cebrowski and John J. Garstka 11

These severe experimentation and training shortfalls are not at all due to a lack of funding, but rather to faulty decisions about what is actually important for Sailors to focus their time on and what naval forces should be used for in the absence of great power war. Meanwhile, the power projection era has featured extreme deployment rates that have run the Navy into the ground.

The Government Accountability Office states that 63 percent of the Navy’s destroyers, 83 percent of its submarines, and 86 percent of its aircraft carriers experienced maintenance overruns from FY 2011-2016 that resulted in almost 14,000 lost operational days – days where ships were not available for operations.12 How much of this monumental deployment effort went toward aggressively experimenting and training for great power conflict instead of performing lower-end missions? Hardly any, if any at all: Adm. Swift termed the very idea of using a unit’s deployment time for realistic experimentation an “epiphany.”

In order to more efficiently meet insatiable operational demand and slow the rate of material degradation, the Navy implemented the Optimized Fleet Response Plan (OFRP), which reforms the cycle by which the Navy generates ready forces through maintenance, training, and sustainment phases.13 But Adm. Swift argues that this major reform has caused the Navy to improperly invest its time:

“Commanders were busy following the core elements in our Optimized Fleet Response Plan (OFRP) training model, going from event to event and working their way through the list of training objectives as efficiently as possible. Rarely did we create an environment that allowed them to move beyond the restraints of efficiency to the warfighting training mandate to ensure the effectiveness of tactics, techniques, and procedures. We were not creating an environment for them to develop their own warfighting creativity and initiative.”

A check-in-the-box culture has been instituted to cope with crushing deployment rates at the expense of fostering leaders who embody the true warfighter ethos of imaginative tacticians and operational commanders. The OFRP cycle is under so much tension from insatiable demand and run-down equipment that Adm. Swift described it as a “Swiss watch—touching any part tended to cause the interlocking elements to bind, to the detriment of the training audience.” But as Adm. Swift already noted, pre-deployment training wasn’t even focused on preparing for the high-end fight anyway.

Every single deployment is an opportunity to practice and experiment. Simply teaching unit leaders to make time for such events will be valuable training itself as they figure out how to delegate responsibilities in an environment that more closely approximates wartime conditions. After all, if units are currently straining on 30 hours of sleep a week performing low-end missions and administrative tasks, how can we be sure they know how to make time to fight a high-stakes war while also maintaining a ship that’s falling apart?

Being a deckplate leader of a warship has always been an enormously busy job, and there is always something a warship can do to be relevant. But it is a core competence of leaders at all levels to know what to make time for and how to delegate accordingly. From the sailor checking maintenance tasks to the combatant commander tasking ships for partner development engagements, a top-to-bottom reappraisal of what the Navy needs to spend its time doing is in order. Are Sailors performing tasks really needed to win a war? Are ships being deployed on missions that serve meaningful priorities?

Major reform will be necessary in order to reestablish priorities to make large amounts of time for realistic training and experimentation. In addition to making enough time, it is also a question of having enough forces on hand when the fleet is stretched thin. Adm. Swift described a carrier strike group (CSG) being used in a Fleet Problem where “the entire CSG was OpFor [Red team] – an enormous investment that yielded unique and valuable lessons.” Does this mean that aircraft carriers, the Navy’s largest and most expensive warships, are especially hard-pressed to secure time for realistic experimentation and training? Can the Navy assemble more than a strike group’s worth of ships to simulate a competitor’s naval forces?

The recent deployment of three strike groups to the Pacific suggests it is possible. Basic considerations include asking whether the Navy has enough ships on hand to simulate a distributed fleet and enough units to simulate great power adversaries that have the advantages of time, space, and numbers. But given where deployment priorities currently stand, the Navy may not have enough time or ships on hand to regularly simulate accurate scenarios.

A Credibility Crisis in the Making

“…there are many, many examples of where our ships, their commanding officers, their crews are doing very well, but if it’s not monitored on a continuous basis these skills can atrophy very quickly.” –Chief of Naval Operations Admiral John Richardson 14

When great power conflict last broke out in WWII the war at sea was won by admirals like Ernest King, Chester Nimitz, and Raymond Spruance whose formative career experiences were greatly influenced by the interwar-period Fleet Problems. This tradition of excellence based on realism is in doubt today.

What is clear is that business as usual cannot go on. The fundamental necessity of free-play elements for ensuring warfighting realism is beyond dispute. The reemergence of competition between the world’s greatest powers in a maritime theater is making many of the Navy’s power projection skillsets less and less relevant to geopolitical reality. New deployment priorities must prioritize realistic training and experimentation to make up for lost ground in concept development, accurately inform planning, understand the true limits and potential of technology, and test the mettle of frontline units.

The recent pair of collisions challenged numerous assumptions about how the Navy operates and how it maintains its competencies. Tragic as those events were, they thankfully stimulated an energetic atmosphere of reflection and reform. But the competencies those reforms target include things like navigation, seamanship, and ship-handling – basic maritime skills that have existed for thousands of years. What is far newer, endlessly more complex, and absolutely vital to deter and win wars is the ability to employ networked and distributed naval forces in great power conflict. While the collisions cost real lives, countless more sailors are dying virtual deaths in the Fleet Problems, which are revealing shocking deficiencies in how the Navy prepares for war. Short of horrifying losses in real combat, there is no greater wake-up call.

Dmitry Filipoff is CIMSEC’s Director of Online Content. Contact him at [email protected].

References

[1] Admiral Scott H. Swift, “Fleet Problems Offer Opportunities” U.S. Naval Institute Proceedings, March 2018.  https://www.usni.org/magazines/proceedings/2018-03/fleet-problems-offer-opportunities

[2] Forward…From the Sea, U.S. Department of the Navy, 1994. https://www.globalsecurity.org/military/library/policy/navy/forward-from-the-sea.pdf 

[3] Ibid., 8. 

[4] William S. Sims, “Naval War College Principles and Methods Applied Afloat” U.S. Naval Institute Proceedings, March-April 1915. https://www.usni.org/magazines/proceedings/1915-03/naval-war-college-principles-and-methods-applied-afloat

[5] Wayne P. Hughes, Jr., Fleet Tactics: Theory and Practice, Second Edition, pg. 33, Naval Institute Press, 1999.

[6] Can be inferred from official U.S. Navy ship counts of battleships and aircraft carriers and near-term anti-ship missile capabilities.

[7] Sam LaGrone, “The Next Act for Aegis”, U.S. Naval Institute News, May 7, 2014. https://news.usni.org/2014/05/07/next-act-aegis

[8] CNO’s Position Report 2013, U.S. Department of the Navy. http://www.navy.mil/cno/131121_PositionReport.pdf

[9] Dudley W. Knox, “The Role of Doctrine in Naval Warfare.” U.S. Naval Institute Proceedings, March-April 1915. https://www.usni.org/magazines/proceedings/1915-03/role-doctrine-naval-warfare

[10] A Cooperative Strategy for 21st Century Seapower. http://www.navy.mil/local/maritime/150227-CS21R-Final.pdf

[11] Vice Admiral Arthur K. Cebrowski and John J. Garstka, “Network-Centric Warfare: Its Origin and Future,” U.S. Naval Institute Proceedings, January 1998. https://www.usni.org/magazines/proceedings/1998-01/network-centric-warfare-its-origin-and-future

[12] John H. Pendleton, “Testimony Before the Committee on Armed Services, U.S. Senate, Navy Readiness: Actions Needed to Address Persistent Maintenance, Training, and Other Challenges Affecting the Fleet,” Government Accountability Office, September 19, 2017. https://www.gao.gov/assets/690/687224.pdf

[13] “What is the Optimized Fleet Response Plan and What Will It Accomplish?” U.S. Fleet Forces Command, Navy Live, January 15, 2014. http://navylive.dodlive.mil/2014/01/15/what-is-the-optimized-fleet-response-plan-and-what-will-it-accomplish/

[14] Department of Defense Press Briefing by Adm. Richardson on results of the Fleet Comprehensive Review and investigations into the collisions involving USS Fitzgerald and USS John S. McCain, November 2, 2017. https://www.defense.gov/News/Transcripts/Transcript-View/Article/1361655/department-of-defense-press-briefing-by-adm-richardson-on-results-of-the-fleet/ 

Featured Image: SASEBO, Japan (Feb. 28, 2018) Operations Specialist 2nd Class Megann Helton practices course plotting during a fast cruise onboard the amphibious assault ship USS Wasp (LHD 1). (U.S. Navy photo by Mass Communication Specialist 3rd Class Levingston Lewis/Released)

Game-Changing Unmanned Systems for Naval Expeditionary Forces

By George Galdorisi

Perspective

In 2018 the United States remains engaged worldwide. The 2017 National Security Strategy addresses the wide range of threats to the security and prosperity of the United States.1 These threats range from high-end peer competitors such as China and Russia, to rogue regimes such as North Korea and Iran, to the ongoing threat of terrorism represented by such groups as ISIL. In a preview of the National Security Strategy at the December 2017 Reagan National Defense Forum, National Security Advisor General H.R. McMaster highlighted these threats and reconfirmed the previous administration’s “4+1” strategy, naming the four countries – Russia, China, Iran, and North Korea – and the “+1” – terrorists, particularly ISIL – as urgent threats that the United States must deal with today.2

The U.S. military is dealing with this threat landscape by deploying forces worldwide at an unprecedented rate. And in most cases, it is naval strike forces, represented by carrier strike groups centered on nuclear-powered aircraft carriers, and expeditionary strike groups built around large-deck amphibious ships, that are the forces of choice for dealing with crises worldwide.

For decades, when a crisis emerged anywhere on the globe, the first question a U.S. president asked was, “Where are the carriers?” Today, that question is still asked, but increasingly it has morphed into, “Where are the expeditionary strike groups?” The reasons for this focus on expeditionary strike groups are clear. These naval expeditionary formations have been used extensively for a wide array of missions short of war, from anti-piracy patrols, to personnel evacuation, to humanitarian assistance and disaster relief. And where tensions lead to hostilities, these forces are the only ones that give the U.S. military a forcible entry option.

During the past decade-and-a-half of wars in the Middle East and South Asia, the U.S. Marine Corps was used extensively as a land force and did not frequently deploy aboard U.S. Navy amphibious ships. Now the Marine Corps is largely disengaged from those conflicts and is, in the words of a former commandant of the U.S. Marine Corps, “Returning to its amphibious roots.”3 As this occurs, the Navy-Marine Corps team is looking to new technology to complement and enhance the capabilities its amphibious ships bring to the fight. 

Naval Expeditionary Forces: Embracing Unmanned Vehicles

Because of their “Swiss Army Knife” utility, U.S. naval expeditionary forces have remained relatively robust even as the size of the U.S. Navy has shrunk from 594 ships in 1987 to 272 ships in early 2018. Naval expeditionary strike groups comprise a substantial percentage of the U.S. Navy’s current fleet. And the blueprint for the future fleet the U.S. Navy is building maintains, and even increases, that percentage of amphibious ships.4

However, ships are increasingly expensive, and U.S. Navy-Marine Corps expeditionary forces have been proactive in looking to new technology to add capability to their ships. One of the most promising technologies in this regard is unmanned systems. The reasons for embracing unmanned systems stem from their ability to reduce the risk to human life in high-threat areas, to deliver persistent surveillance over areas of interest, and to provide options to warfighters that derive from the inherent advantages of unmanned technologies, especially their ability to operate autonomously.

The importance of unmanned systems to the U.S. Navy’s future has been highlighted in a series of documents, ranging from the 2015 A Cooperative Strategy for 21st Century Seapower, to the 2016 A Design for Maintaining Maritime Superiority, to the 2017 Chief of Naval Operations’ The Future Navy white paper. The Future Navy paper presents a compelling case for the rapid integration of unmanned systems into the Navy Fleet, noting, in part:

“There is no question that unmanned systems must also be an integral part of the future fleet. The advantages such systems offer are even greater when they incorporate autonomy and machine learning….Shifting more heavily to unmanned surface, undersea, and aircraft will help us to further drive down unit costs.”5

The U.S. Navy’s commitment to and growing dependence on unmanned systems is also seen in the Navy’s official Force Structure Assessment of December 2016, as well as in a series of “Future Fleet Architecture Studies.” In each of these studies—one by the Chief of Naval Operations staff, one by the MITRE Corporation, and one by the Center for Strategic and Budgetary Assessments—the proposed Navy future fleet architecture had large numbers of air, surface, and subsurface unmanned systems as part of the Navy force structure. Indeed, these reports highlight the fact that the attributes unmanned systems can bring to the U.S. Navy Fleet circa 2030 have the potential to be truly transformational.6

The Navy Project Team, Report to Congress: Alternative Future Fleet Platform Architecture Study is an example of the Navy’s vision for the increasing use of unmanned systems. This study notes that under a distributed fleet architecture, ships would deploy with many more unmanned surface (USV) and air (UAV) vehicles, and submarines would employ more unmanned underwater vehicles (UUVs). The distributed Fleet would also include large, self-deployable independent USVs and UUVs, increasing unmanned deployed presence to approximately 50 platforms.

This distributed Fleet study calls out specific numbers of unmanned systems that would complement the manned platforms projected to be part of the U.S. Navy inventory by 2030:

  • 255 Conventional take-off UAVs
  • 157 Vertical take-off UAVs
  • 88 Unmanned surface vehicles
  • 183 Medium unmanned underwater vehicles
  • 48 Large unmanned underwater vehicles

By any measure, the number of air, surface, and subsurface unmanned vehicles envisioned in the Navy alternative architecture studies represents not only a step-increase over the number of unmanned systems in the Fleet today, but also vastly more unmanned systems than current Navy plans call for (a quick tally below conveys the scale). But it is one thing to state the aspiration for more unmanned systems in the Fleet, and quite another to develop and deploy them. There are compelling reasons why naval expeditionary forces have been proactive in experimenting with emerging unmanned systems.
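
As a quick arithmetic check of that step-increase, the sketch below sums the study’s own figures; the 272-ship figure is the fleet size cited earlier in this article.

```python
# Summing the distributed-fleet study's projected unmanned platforms.
unmanned_2030 = {
    "conventional take-off UAVs": 255,
    "vertical take-off UAVs": 157,
    "unmanned surface vehicles": 88,
    "medium UUVs": 183,
    "large UUVs": 48,
}
total = sum(unmanned_2030.values())  # 731 platforms
manned_fleet_2018 = 272              # ship count cited above
print(f"{total} unmanned platforms vs. {manned_fleet_2018} manned ships")
print(f"roughly {total / manned_fleet_2018:.1f} unmanned platforms per manned hull")
```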

Testing and Evaluating Unmanned Systems

While the U.S. Navy and Marine Corps have incorporated unmanned systems of all types into their force structures, and a wide range of studies looking at the makeup of the Sea Services in the future have endorsed this shift, it is the Navy-Marine Corps expeditionary forces that have been the most active in evaluating a wide variety of unmanned systems in various exercises, experiments, and demonstrations. Part of the reason for this accelerated evaluation of emerging unmanned systems is the fact that, unlike carrier strike groups, which have access to unmanned platforms such as the MQ-4C Triton and MQ-8 Fire Scout, expeditionary strike groups are not similarly equipped.

While several such exercises, experiments, and demonstrations occurred in 2017, two of the most prominent, based on the scope of the events, as well as the number of new technologies introduced, were the Ship-to-Shore Maneuver Exploration and Experimentation (S2ME2) Advanced Naval Technology Exercise (ANTX), and Bold Alligator 2017. These events highlighted the potential of unmanned naval systems to be force-multipliers for expeditionary strike groups.

S2ME2 ANTX provided an opportunity to demonstrate emerging, innovative technology that could be used to address gaps in capabilities for naval expeditionary strike groups. As there are few missions that are more hazardous to the Navy-Marine Corps team than putting troops ashore in the face of a prepared enemy force, the experiment focused specifically on exploring the operational impact of advanced unmanned maritime systems on the amphibious ship-to-shore mission. 

For the amphibious assault mission, UAVs are useful – but extremely vulnerable to enemy air defenses. UUVs are useful as well, but the underwater medium makes control of these assets at distance problematic. For these reasons, S2ME2 ANTX focused heavily on unmanned surface vehicles to conduct real-time ISR (intelligence, surveillance, and reconnaissance) and IPB (intelligence preparation of the battlespace) missions. These are critical missions that have traditionally been performed by our warfighters, but ones that put them at extreme risk.

Close up of USV operating during S2ME2; note the low-profile and stealthy characteristics (Photo courtesy of Mr. Jack Rowley).

In an October 2017 interview with U.S. Naval Institute News, the deputy assistant secretary of the Navy for research, development, test and evaluation, William Bray, stressed the importance of using unmanned systems in the ISR and IPB roles:

“Responding to a threat today means using unmanned systems to collect data and then delivering that information to surface ships, submarines, and aircraft. The challenge is delivering this data quickly and in formats allowing for quick action.”7

During the assault phase of S2ME2 ANTX, the expeditionary commander used a USV to thwart enemy defenses: an eight-foot man-portable MANTAS USV (one of a family of stealthy, low-profile USVs) that swam undetected into the “enemy harbor” (the Del Mar Boat Basin on the Southern California coast) and relayed information to the amphibious force command center using its TASKER C2 system. Once this ISR mission was complete, the MANTAS USV was driven to the surf zone to provide IPB on obstacle location, beach gradient, water conditions, and other information crucial to planners.

Unmanned surface vehicle (MANTAS) operating in the surf zone during the S2ME2 exercise (Photo courtesy of Mr. Jack Rowley).

Carly Jackson, SPAWAR Systems Center Pacific’s director of prototyping for Information Warfare and one of the organizers of S2ME2, explained the key element of the exercise was to demonstrate new technology developed in rapid response to real-world problems facing the Fleet:

“This is a relatively new construct where we use the Navy’s organic labs and warfare centers to bring together emerging technologies and innovation to solve a very specific fleet force fighting problem. It’s focused on ‘first wave’ and mainly focused on unmanned systems with a big emphasis on intelligence gathering, surveillance, and reconnaissance.”8

The CHIPS interview article discussed the technologies on display and in demonstration at the S2ME2 ANTX event, especially networked autonomous air and maritime vehicles and ISR technologies. Tracy Conroy, SPAWAR Systems Center Pacific’s experimentation director, noted, “The innovative technology of unmanned vehicles offers a way to gather information that ultimately may help save lives. We take less of a risk of losing a Marine or Navy SEAL.”

S2ME2 ANTX was a precursor to Bold Alligator 2017, the annual Navy-Marine Corps expeditionary exercise. Bold Alligator 2017 was a live, scenario-driven exercise designed to demonstrate maritime and amphibious force capabilities, and was focused on planning and conducting amphibious operations, as well as evaluating new technologies that support the expeditionary force.9

Bold Alligator 2017 encompassed a substantial geographic area in the Virginia and North Carolina OPAREAS. The mission command center was located at Naval Station Norfolk, Virginia. The amphibious force and other units operated eastward of North and South Onslow Beaches, Camp Lejeune, North Carolina. For the littoral mission, some expeditionary units operated in the Intracoastal Waterway near Camp Lejeune.

The Bold Alligator 2017 scope was modified in the wake of Hurricanes Harvey, Irma and Maria, as many of the assets scheduled to participate were used for humanitarian assistance and disaster relief. The exercise featured a smaller number of amphibious forces but did include a carrier strike group.10 The 2nd Marine Expeditionary Brigade (MEB) orchestrated events and was embarked aboard USS Arlington (LPD-24), USS Fort McHenry (LSD-43), and USS Gunston Hall (LSD-44).

The 2nd MEB used a large (12-foot) MANTAS USV, equipped with a Gyro Stabilized SeaFLIR230 EO/IR Camera and a BlueView M900 Forward Looking Imaging Sonar, to provide ISR and IPB for the amphibious assault. The sonar was employed to provide bottom imaging of the surf zone, looking for objects and obstacles, especially mine-like objects, that could pose a hazard to the landing craft (LCACs and LCUs) as they moved through the surf zone and onto the beach.

The early phases of Bold Alligator 2017 were dedicated to long-range reconnaissance. Operators at the exercise command center at Naval Station Norfolk drove the six-foot and 12-foot MANTAS USVs off North and South Onslow Beaches, as well as up and into the Intracoastal Waterway. Both MANTAS USVs streamed live, high-resolution video and sonar images to the command center. The video images showed vehicles, personnel, and other objects on the beaches and in the Intracoastal Waterway, and the sonar images provided surf-zone bottom analysis and located objects and obstacles that could pose a hazard during the assault phase.

Bold Alligator 2017 underscored the importance of surface unmanned systems to provide real-time ISR and IPB early in the operation. This allowed planners to orchestrate the amphibious assault to ensure that the LCACs or LCUs passing through the surf zone and onto the beach did not encounter mines or other objects that could disable—or even destroy—these assault craft. Providing decision makers not on-scene with the confidence to order the assault was a critical capability and one that will likely be evaluated again in future amphibious exercises such as RIMPAC 2018, Valiant Shield 2018, Talisman Saber 2018, Bold Alligator 2018 and Cobra Gold, among others.

Navy Commitment to Unmanned Maritime Systems

One of the major challenges to the Navy making a substantial commitment to unmanned maritime systems is the fact that they are relatively new and their development has been “under the radar” for all but a few professionals in the science and technology (S&T), research and development (R&D), requirements, and acquisition communities. This lack of familiarity creates a high bar for unmanned naval systems in particular. A DoD Unmanned Systems Integrated Roadmap provided a window into the magnitude of this challenge:

“Creation of substantive autonomous systems/platforms within each domain will create resourcing and leadership challenges for all the services, while challenging their respective warfighter culture as well…Trust of unmanned systems is still in its infancy in ground and maritime systems….Unmanned systems are still a relatively new concept….As a result, there is a fear of new and unproven technology.”11

In spite of these concerns—or maybe because of them—the Naval Sea Systems Command and Navy laboratories have been accelerating the development of USVs and UUVs. The Navy has partnered with industry to develop, field, and test a family of USVs and UUVs such as the Medium Displacement Unmanned Surface Vehicle (“Sea Hunter”), MANTAS next-generation unmanned surface vessels, the Large Displacement Unmanned Underwater Vehicle (LDUUV), and others.

Indeed, this initial prototype testing has been so successful that the Department of the Navy has begun to provide increased support for USVs and UUVs and has established program guidance for many of these systems important to the Navy and Marine Corps. This programmatic commitment is reflected in the 2017 Navy Program Guide as well as in the 2017 Marine Corps Concepts and Programs publications. Both show a commitment to unmanned systems programs.12

In September 2017, Captain Jon Rucker, the program manager of the Navy program office (PMS-406) with stewardship over unmanned maritime systems (unmanned surface vehicles and unmanned underwater vehicles), discussed his programs with USNI News. The title of the article, “Navy Racing to Test, Field, Unmanned Maritime Vehicles for Future Ships,” captured the essence of where unmanned maritime systems will fit in tomorrow’s Navy, as well as the Navy-after-next. Captain Rucker shared:

“In addition to these programs of record, the Navy and Marine Corps have been testing as many unmanned vehicle prototypes as they can, hoping to see the art of the possible for unmanned systems taking on new mission sets. Many of these systems being tested are small surface and underwater vehicles that can be tested by the dozens at tech demonstrations or by operating units.”13

While the Navy is committed to several programs of record for large unmanned maritime systems such as the Knifefish UUV, the Common Unmanned Surface Vehicle (CUSV), the Large Displacement UUV (LDUUV) and Extra Large UUV (XLUUV), and the Anti-Submarine Warfare Continuous Trail Unmanned Vessel (ACTUV, since renamed the Medium Displacement USV [MDUSV] and also called Sea Hunter), the Navy also sees great potential in expanding the scope of unmanned maritime systems testing:

“Rucker said a lot of the small unmanned vehicles are used to extend the reach of a mission through aiding in communications or reconnaissance. None have become programs of record yet, but PMS 406 is monitoring their development and their participation in events like the Ship-to-Shore Maneuver Exploration and Experimentation Advanced Naval Technology Exercise, which featured several small UUVs and USVs.”14

The ship-to-shore movement of an expeditionary assault force remains the most hazardous mission for any navy. Real-time ISR and IPB will spell the difference between victory and defeat. For this reason, the types of unmanned systems the Navy and Marine Corps should acquire are those systems that directly support our expeditionary forces. This suggests a need for unmanned surface systems to complement expeditionary naval formations. Indeed, USVs might well be the bridge to the Navy-after-next.

Captain George Galdorisi (USN – retired) is a career naval aviator whose thirty years of active duty service included four command tours and five years as a carrier strike group chief of staff. He began his writing career in 1978 with an article in U.S. Naval Institute Proceedings. He is the Director of Strategic Assessments and Technical Futures at the Navy’s Command and Control Center of Excellence in San Diego, California. 

The views presented are those of the author, and do not reflect the views of the Department of the Navy or Department of Defense.

Correction: Two pictures and a paragraph were removed by request. 

References

[1] National Security Strategy of the United States of America (Washington, D.C.: The White House, December 2017) accessed at: https://www.whitehouse.gov/wp-content/uploads/2017/12/NSS-Final-12-18-2017-0905-2.pdf.

[2] There are many summaries of this important national security event. For one of the most comprehensive, see Jerry Hendrix, “Little Peace, and Our Strength is Ebbing: A Report from the Reagan National Defense Forum,” National Review, December 4, 2017, accessed at: http://www.nationalreview.com/article/454308/us-national-security-reagan-national-defense-forum-offered-little-hope.

[3] Otto Kreisher, “U.S. Marine Corps Is Getting Back to Its Amphibious Roots,” Defense Media Network, November 8, 2012, accessed at: https://www.defensemedianetwork.com/stories/return-to-the-sea/.

[4] For a comprehensive summary of U.S. Navy shipbuilding plans, see Ron O’Rourke, Navy Force Structure and Shipbuilding Plans: Background and Issues for Congress (Washington, D.C.: Congressional Research Service, November 22, 2017).

[5] The Future Navy (Washington, D.C.: Department of the Navy, May 2017) accessed at: http://www.navy.mil/navydata/people/cno/Richardson/Resource/TheFutureNavy.pdf. See also, 2018 U.S. Marine Corps S&T Strategic Plan (Quantico, VA: U.S. Marine Corps Warfighting Lab, 2018) for the U.S. Marine Corps emphasis on unmanned systems, especially man-unmanned teaming.

[6] See, for example, Navy Project Team, Report to Congress: Alternative Future Fleet Platform Architecture Study, October 27, 2016, MITRE, Navy Future Fleet Platform Architecture Study, July 1, 2016, and CSBA, Restoring American Seapower: A New Fleet Architecture for the United States Navy, January 23, 2017.

[7] Ben Werner, “Sea Combat in High-End Environments Necessitates Open Architecture Technologies,” USNI News, October 19, 2017, accessed at: https://news.usni.org/2017/10/19/open-architecture-systems-design-is-key-to-navy-evolution

[8] Patric Petrie, “Navy Lab Demonstrates High-Tech Solutions in Response to Real-World Challenges at ANTX17,” CHIPS Magazine Online, May 5, 2017, accessed at http://www.doncio.navy.mil/CHIPS/ArticleDetails.aspx?id=8989.

[9] Information on Bold Alligator 2017 is available on the U.S. Navy website at: http://www.navy.mil/submit/display.asp?story_id=102852.

[10] Phone interview with Lieutenant Commander Wisbeck, Commander, Fleet Forces Command, Public Affairs Office, November 28, 2017.

[11] FY 2009-2034 Unmanned Systems Integrated Roadmap, pp. 39-41.

[12] See, 2017 Navy Program Guide, accessed at: http://www.navy.mil/strategic/npg17.pdf, and 2017 Marine Corps Concepts and Programs accessed at:  https://marinecorpsconceptsandprograms.com/.

[13] Megan Eckstein, “Navy Racing to Test, Field, Unmanned Maritime Vehicles for Future Ships,” USNI News, September 21, 2017, accessed at: https://news.usni.org/2017/09/21/navy-racing-test-field-unmanned-maritime-vehicles-future-ships

[14] “Navy Racing to Test, Field, Unmanned Maritime Vehicles for Future Ships.”

Featured Image: Marines with 3rd Battalion, 5th Marine Regiment prepare a Weaponized Multi-Utility Tactical Transport vehicle for a patrol at Marine Corps Base Camp Pendleton, Calif., July 13, 2016. (USMC photo by Lance Cpl. Julien Rodarte)