
The Bilge Pumps 2 – The Sequel

By Alex Clarke

We are The Bilge Pumps, a podcast crew of three naval geeks, with occasional guests, who squeak a lot about naval stuff and sometimes when moving. We include myself – Alex Clarke – known as much for my addiction to Irn Bru as for my PhD and NavalHistoryLive YouTube channel; Alex ‘Drach’ Pocklington, also known for his love of Irn Bru and for being an engineering savant, but mainly for the excellent Drachinifel YouTube channel; and Jamie Seidel, a journalist with a passion for armored carriers so great that he has set up a website, a YouTube channel, and a Twitter feed all about them. Join us for our second episode to enjoy an informative, but also humorous, take on current-ish affairs in the maritime and naval world.

These roughly 70-minute podcasts will hopefully make you smile as much as they make you think, so please listen, enjoy, and feel free to send us topic suggestions via our Twitter feeds; just make sure to include #Bilgepumps when you do.

Download The Bilge Pumps 2 – The Sequel!


Alex Clarke is the producer of The Bilge Pumps podcast.

Contact the CIMSEC podcast team at seacontrol@cimsec.org

Mapping Gray Maritime Networks for Hybrid Warfare

By Chris Callaghan, Rob Schroeder, and Dr. Wayne Porter

Introduction 

In light of the current National Security Strategy and the 2018 National Defense Guidance, the impact of hybrid warfare and ‘gray-zone’1 maritime activity in support of great power competition among nations has become an increasing area of concern. This includes the need for an increased focus on the identification and tracking of vessels of interest (VOI) and their associated owners, operators, and activities. Traditionally, maritime domain awareness (MDA) has consisted of intelligence, surveillance, and reconnaissance of activities at sea with limited cross-domain link analysis2 of events, carriers, and sponsors (Wallace & Mesko, 2013). While this methodology enables analysts and operators to sift and structure vast data from increasingly complex systems, it fails to consider how ties between similar entities create gray (non-transparent) shipping networks capable of supporting state-directed hybrid warfare. 

This is not to say that a network perspective has been absent from the maritime domain. Researchers from diverse analytic disciplines have conceptualized various constructs as networks, such as historic trade routes (Rivers, Evans, & Knappett, 2016; Wang, Notteboom, & Yang, 2016), global shipping patterns (Ducruet, Rozenblat, & Zaidi, 2010), cruise ship itineraries (Rodrigue & Notteboom, 2014), and logistics involved in global shipping (Ducruet & Lugo, 2013). Yet, much of the focus behind this work has been on understanding transparent (licit) networks.3 For their part, network researchers leveraged social network analysis to gain an understanding of dark networks – that is, covert and/or illicit organizations (Raab & Milward, 2003). This has included, for example, the study of terrorist groups (Krebs, 2002; Roberts & Everton, 2011), narcotic distribution networks (Morselli & Petit, 2007), street gangs (Papachristos, Hureau, & Braga, 2013), and cyber criminals on the dark web (Dupont, 2014) to name a few. 

We drew on network analysis (NA) to examine gray maritime networks (alternately operating licitly and illicitly) in relation to two NATO-led exercises in 2018: BALTOPS and Exercise Trident Juncture. As previously demonstrated through research focused on mapping gray maritime networks in the South China Sea (Porter, et al., 2019), NA methods can be leveraged to develop longitudinal network depictions of vessels loitering in sensitive or disputed areas. Here, we leverage commercially available geo-temporal data, open-source databases, and home range detection algorithms to generate depictions of the subgroups of owners and operators associated with gray activities.

Although methodologically driven, this research was not intended solely as an academic contribution but also as a demonstration of how NA can improve real-time awareness and tracking for operational purposes. The methods and analysis presented here should enable a rich discussion of current and future methods for enhanced MDA. As such, we begin with a description of our data collection and methods, then proceed to discuss findings and practical implications for MDA. Finally, we conclude with a series of recommendations for further research.

Generating Networks: Data and Methods 

We use commercially available ship tracking data as the cornerstone of our analysis, specifically for identifying and tracking VOIs. Our team collected the feeds from commercial automatic identification system (AIS) transceivers from 13 March 2018 through 7 January 2019.4 These data points are particularly salient because AIS transmitters are required as navigation and anti-collision systems on all vessels exceeding 300 gross tonnage operating internationally, any vessels exceeding 500 gross tonnage not conducting international voyages, and all passenger ships regardless of size. To narrow the scope of our data set, we geofenced our data to include the Baltic Sea and the North Atlantic Ocean. The resulting daily AIS tracking logs provided both spatial and temporal variables relevant to our analysis; namely, a VOI’s date and time of transmission, maritime mobile service identity (MMSI) number, speed over ground, longitude, and latitude.
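The geofencing step described above can be sketched as a simple bounding-box filter over decoded AIS position reports. This is an illustrative sketch only: the field names and the box coordinates are assumptions, not taken from the authors' pipeline.

```python
# Hypothetical geofence: keep only AIS position reports falling inside
# rough bounding boxes for the Baltic Sea and the North Atlantic study
# area. Coordinates and record fields are illustrative assumptions.

# (min_lat, max_lat, min_lon, max_lon) per region -- approximate boxes
REGIONS = {
    "baltic":         (53.0, 66.0,   9.0, 30.0),
    "north_atlantic": (55.0, 75.0, -30.0, 15.0),
}

def in_region(lat, lon):
    """Return True if a position falls inside any study-area box."""
    return any(
        lo_lat <= lat <= hi_lat and lo_lon <= lon <= hi_lon
        for lo_lat, hi_lat, lo_lon, hi_lon in REGIONS.values()
    )

def geofence(records):
    """Filter decoded AIS records (dicts with mmsi/lat/lon/sog/ts)."""
    return [r for r in records if in_region(r["lat"], r["lon"])]

sample = [
    {"mmsi": 123456789, "lat": 59.3, "lon": 18.1, "sog": 1.2, "ts": "2018-06-01T00:00Z"},
    {"mmsi": 987654321, "lat": 35.0, "lon": -40.0, "sog": 12.0, "ts": "2018-06-01T00:00Z"},
]
kept = geofence(sample)  # only the fix inside the Baltic box survives
```

In practice the boxes would be replaced by proper polygon geometries, but the filtering logic is the same.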

Once the data was decoded and filtered, we proceeded to explore traffic patterns using the Time Local Convex Hull (T-LoCoH) method, originally developed for the study of movement patterns in GPS-tracked ranging animals. T-LoCoH integrates time with space into the construction of local hulls (geometric shapes containing a location distribution within a home range) while accounting for an individual animal’s speed, which facilitates the use of metrics for revisitation and loitering duration (Lyons, Turner, & Getz, 2013). In our work, the AIS data that tracks vessel traffic over time and space is analogous to the GPS data used to analyze ranging animals. As such, we leveraged the application of this method to identify spatio-temporal patterns of ships loitering in areas proximal to NATO-led military exercises.

To reduce traffic noise, we only included AIS transmissions from non-NATO-nation commercial vessels transponding with a speed over ground of two knots or less. We then generated spatial loitering polygons, which may represent ports, anchorages, or other areas where a VOI loitered during the window of research (see Figure 1). As expected, areas exhibited differing loitering densities, with some being dense (depicted in yellow in Figure 1) and others less dense (depicted in red). These loitering polygons served as the basis for developing a list of VOIs, using their MMSI identification numbers as unique identifiers.
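The slow-speed filter and density step can be illustrated with a much cruder stand-in than T-LoCoH: bin fixes with speed over ground of two knots or less into 0.1-degree grid cells and count them. This is a simplified proxy for intuition only (the authors' actual method builds time-local convex hulls); all record values below are invented.

```python
from collections import Counter

# Crude stand-in for the loitering-polygon step: slow fixes (SOG <= 2 kt)
# are binned into 0.1-degree grid cells; dense cells approximate the
# loitering areas that T-LoCoH would delineate as hulls. Illustrative only.

def loiter_cells(records, max_sog=2.0):
    """Count slow-speed AIS fixes per 0.1-degree lat/lon grid cell."""
    counts = Counter()
    for r in records:
        if r["sog"] <= max_sog:
            # integer cell index at 0.1-degree resolution
            counts[(int(r["lat"] * 10), int(r["lon"] * 10))] += 1
    return counts

fixes = [
    {"mmsi": 1, "lat": 59.34, "lon": 18.14, "sog": 0.4},   # loitering
    {"mmsi": 1, "lat": 59.36, "lon": 18.16, "sog": 0.0},   # loitering
    {"mmsi": 2, "lat": 59.33, "lon": 18.13, "sog": 1.8},   # loitering
    {"mmsi": 3, "lat": 59.35, "lon": 18.15, "sog": 10.0},  # transiting, dropped
]
cells = loiter_cells(fixes)  # one dense cell with three slow fixes
```

Dense cells from a pass like this could then seed the polygon construction, with the MMSIs observed in each cell forming the VOI list.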

Figure 1. Loitering isopleths during BALTOPS (click to expand)

Matching loitering isopleths with the original AIS transmissions used to generate them yielded a ship-to-loitering location table (see Table 1) with a ship’s unique identifier, the AIS message date and time, and the loitering polygon identity. 

MMSI  Date-time  Polygon 
123456789  T=1  Polygon A 
987654321  T=1  Polygon A 
123456789  T=2  Polygon B 
123456789  T=3  Polygon C 

Table 1. Sample ship-to-loitering location table

From this table, we extracted a location-to-location network in which loitering areas were interconnected if a VOI traveled from one location to the other. Next, to examine the underlying organizations linked to the VOIs, the team gathered open-source information on the companies that own and/or operate these ships using the LexisNexis Advance research database. This corporate information was then joined to the ship data and used to create connections between companies if they were tied to the same ship, one was a subsidiary of the other, one had a major financial stake in the other, they shared the same physical address, or they had members of their boards of directors in common. The findings and analysis of these data follow in the subsequent section.
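The location-to-location extraction can be sketched directly from the sample rows of Table 1: for each vessel, sort its loitering records by time and link each polygon to the next one the vessel visited. A minimal sketch, assuming the table rows as plain tuples:

```python
from collections import defaultdict

# Sketch of deriving the location-to-location network from the
# ship-to-loitering-location table (Table 1): two polygons are linked
# when the same vessel loiters in one and then the other.

rows = [  # (MMSI, time step, polygon) -- the sample rows of Table 1
    (123456789, 1, "A"),
    (987654321, 1, "A"),
    (123456789, 2, "B"),
    (123456789, 3, "C"),
]

def location_network(rows):
    """Return directed polygon-to-polygon edges implied by vessel moves."""
    visits = defaultdict(list)
    for mmsi, t, poly in rows:
        visits[mmsi].append((t, poly))
    edges = set()
    for seq in visits.values():
        seq.sort()  # chronological order per vessel
        for (_, a), (_, b) in zip(seq, seq[1:]):
            if a != b:
                edges.add((a, b))
    return edges

edges = location_network(rows)  # {("A", "B"), ("B", "C")}
```

Vessel 123456789 moves A to B to C, producing the two edges; vessel 987654321 only loiters in A and contributes none.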

Analysis: Shedding Light on Gray Maritime Networks

From the AIS data on ship movements we extracted two location-to-location networks for further analysis: one of loitering areas observed during BALTOPS (31 May 2018 through 16 June 2018) and one of loitering areas observed during Exercise Trident Juncture (22 October 2018 through 25 November 2018). Most of the VOI activity was concentrated within the Baltic Sea (see Figure 2). These findings are to be expected considering the geographic range of operations. While most VOIs in the sample set remained in the Baltic Sea, a few were also observed loitering off the coast of Norway during NATO Exercise Trident Juncture.

Figure 2. Location-to-location networks during BALTOPS (left) and Operation Trident Juncture (right) (click to expand)

Upon closer examination, the VOIs active off the coast of Norway during Trident Juncture appear to have loitered near sensitive military locations and displayed abnormal movement patterns. For instance, Figure 3 illustrates the movements of two VOIs with abnormal tracking patterns. The first is an oil tanker owned by the Russian government and operated by a registered shipping company in that country. The second is a commercial chemical products tanker registered in the Marshall Islands, a country often used as a flag of convenience, shown loitering north of Norway.

Figure 3. Abnormal shipping patterns off the coast of northern Norway during Operation Trident Juncture, a Russian owned oil tanker (left) and chemical products tanker registered to the Marshall Islands (right) (click to expand)

Finally, Figure 4 is a network representation of connections between the companies associated with identified VOIs. In this graph, we see that many of the companies are related to each other, with the three largest components colored in blue, green, and orange. For instance, the large blue cluster on the right-hand side of the sociogram contains many small companies, all operating from the same address in northern Russia, each with connections to at most a few ships. The large orange component on the bottom left contains clusters of VOI-associated companies interconnected by sharing some of the same board members. In the green component, shipping companies associated with VOIs are connected by sharing parent, subsidiary, or holding companies. Companies occupying an apparent position of structural brokerage are depicted by larger nodes. One such shipping company (highlighted with an arrow), for instance, was connected to the broader family of like-companies, while also being linked to a large multinational oil company through partial ownership ties (Schelle, 2018).
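The component and brokerage analysis behind Figure 4 can be illustrated on a toy company graph. This sketch uses BFS for connected components and a crude ego-network brokerage score (the fraction of a node's neighbor pairs that are not themselves tied) rather than the centrality measure the authors used for node sizing; all company names are invented.

```python
from collections import deque

# Toy version of the company-to-company analysis: undirected ties,
# connected components via BFS, and a crude "brokerage" proxy -- the
# share of a node's neighbor pairs that are not directly connected.
# Illustrative only; not the authors' actual measure or data.

edges = [("A", "B"), ("B", "C"), ("A", "C"),   # cluster one (a triangle)
         ("X", "A"), ("X", "D"),               # X bridges the clusters
         ("D", "E"), ("E", "F"), ("D", "F"),   # cluster two (a triangle)
         ("G", "H")]                           # separate small component

adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def components(adj):
    """Connected components by breadth-first search."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            n = queue.popleft()
            if n in comp:
                continue
            comp.add(n)
            queue.extend(adj[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def brokerage(node):
    """Fraction of neighbor pairs with no direct tie (0 if < 2 neighbors)."""
    nbrs = sorted(adj[node])
    pairs = [(a, b) for i, a in enumerate(nbrs) for b in nbrs[i + 1:]]
    if not pairs:
        return 0.0
    return sum(1 for a, b in pairs if b not in adj[a]) / len(pairs)

comps = components(adj)           # the 7-node bridged cluster plus {G, H}
broker = max(adj, key=brokerage)  # "X", the node spanning the two clusters
```

Node X scores highest because its two neighbors sit in otherwise unconnected clusters, mirroring the structurally brokering shipping company highlighted in Figure 4.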

Figure 4. Company-to-company network. The three largest components are colored and nodes are sized by brokerage potential.

Conclusions and recommendations for further research 

This analysis highlights the value of NA in real-time awareness and tracking of stakeholders associated with suspected gray maritime activities in a strategic era of great power competition. Using commercially available geospatial data, our team identified 56 VOIs loitering in areas proximal to NATO-led exercises in the Baltic Sea and North Atlantic. These vessels were then linked to over 196 state-owned and private companies/entities. Analysis such as this provides insight into a network of stakeholders that may support hybrid warfare, or so-called gray-zone activities, not directly attributable to a specific nation.

The use of the network analysis methodologies discussed here and the tools developed at the Naval Postgraduate School to identify, map, and track gray maritime networks can be applied to any number of threats. While our earlier research into Chinese reef enhancement activity in the South China Sea has already been cited, Maritime Operations Center (MOC) operators and MDA analysts could adapt this toolset to track and assess maritime and terrestrial networks associated with narcotics trafficking, terrorism, illegal, unreported, and unregulated (IUU) fishing, arms and human trafficking, and other security concerns. Integrating these tools into existing MDA systems would also provide for enhanced awareness of how these networks overlap in multiple geographic areas and in malign activities. Further, and perhaps most significantly, they could provide operators with timely and actionable information.

Our research is not without room for improvement. Future iterations of this work should include a richer dataset of state/corporate linkages. This should include a deeper dive into state-sponsored (and military supported) parent-subsidiary company relationships and board memberships, or proximal geographic associations among companies, offices, and ships. Further research is also being considered through the application of system dynamics modeling, wargaming, campaign analysis, and discrete events modeling. 

Acknowledgment  

The authors would like to acknowledge that this research benefited immensely from the partnership between the Common Operational Research Environment (CORE) Lab and Littoral Operations Center at the Naval Postgraduate School, with the Norwegian Defense Research Establishment (Forsvarets Forskningsinstitutt, FFI). This research builds on a joint effort to integrate network analysis methodologies into the maritime domain, which won the 2019 NCI Agency’s Defense Innovation Challenge aimed at accelerating technological solutions in support of NATO C4ISR and cyber capabilities.

With more research and interest, these methods can help us better understand the non-linear relationships and feedback mechanisms that contribute to the complexity of great power competition and its manifestations in the maritime domain.

Chris Callaghan is a Research Associate in the Defense Analysis Department’s CORE Lab at the NPS. His work leverages open-source data analytics for understanding and modeling a variety of national and homeland security problems. 

Rob Schroeder is a Faculty Associate for Research in the CORE Lab within the Defense Analysis Department and a PhD Student in the Information Sciences Department at the Naval Postgraduate School (NPS). He is currently researching how to use open-source information gathered largely from social media in order to understand and map the changing dynamics in conflict areas and exploring the use of network analysis to analyze maritime traffic patterns. He has presented some of this research at conferences (INFORMS and INSNA).

Dr. Wayne Porter, CAPT, USN (ret.) is a Senior Lecturer in the Defense Analysis and Systems Engineering Departments of the Naval Postgraduate School, where he also serves as Co-Director of the CORE Lab and Director of the Littoral Operations Center. He holds a Ph.D. in Information Sciences and two Masters of Science degrees – in Computer Science and Joint C4I Systems Technology – from the Naval Postgraduate School. Military duty included Japan, England, Italy, the Balkans, Bahrain (COMFIFTHFLT ACOS Intelligence and MOC Deputy of Operations in the Persian Gulf/East Africa), and three tours on the personal staff of ADM Mike Mullen, including Special Assistant for Strategy to both the Chief of Naval Operations (N00Z) and Chairman of the Joint Chiefs. He subsequently served as Chair, Systemic Strategy and Complexity at the Naval Postgraduate School in Monterey, California, and retired from the Navy in July 2014 after 28 years of active service. Dr. Porter has contributed to a number of DoD and USN strategy projects, including serving as systems analyst for the SECNAV’s Strategic Readiness Review.

The views expressed in this paper are those of the authors and do not reflect the official position or policies of the United States Navy or the Department of Defense.

Endnotes

1. The opaque area in which illicit or malign activity coexists with licit activity.

2. An analytical method for interactively curating and querying relational databases (Cunningham, Everton, & Murphy, 2016). In a link diagram, different types of entities (e.g., ports, events, ships, operators, and personnel to name a few) are tied to each other explicitly with the goal of describing the environment.

3. Those operating overtly and legally.

4. All collected AIS logs were encoded in AIVDM (data received from other vessels)/AIVDO (own vessel information) sentences and required decoding for further analysis.

References

Cunningham, D., Everton, S. F., & Murphy, P. (2016). Understanding Dark Networks: A strategic framework for the use of social network analysis. Lanham: Rowman & Littlefield.

Ducruet, C., & Lugo, I. (2013). Structure and dynamics of transportation networks: models, methods and applications. In J. Rodrigue, T. Notteboom, & J. Shaw, The SAGE Handbook of Transport Studies (pp. 347-364). London: SAGE Publications, Ltd. Retrieved from: http://sk.sagepub.com/reference/hdbk_transportstudies/n20.i1734.xml

Ducruet, C., Rozenblat, C., & Zaidi, F. (2010). Ports in multi-level maritime networks: evidence from the Atlantic (1996-2006). Journal of Transport Geography, 18(4), 508-518. Retrieved from: https://www.sciencedirect.com/science/article/abs/pii/S0966692310000372

Dupont, B. (2014). Skills and Trust: A Tour Inside the Hard Drives of Computer Hackers. In C. Morselli, Crime and Networks (pp. 195-217). New York, N.Y.: Routledge. Retrieved from: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2154952

Krebs, V. E. (2002). Mapping networks of terrorist cells. Connections, 24(3), 43-52. Retrieved from: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.192.4165&rep=rep1&type=pdf

LexisNexis. (2019, September 1). Lexis Advance research. Retrieved from LexisNexis: https://advance.lexis.com/

Lyons, A., Turner, W., & Getz, W. (2013). Home range plus: A space-time characterization of movement over real landscapes. Movement Ecology, 1(2). Retrieved from: https://movementecologyjournal.biomedcentral.com/articles/10.1186/2051-3933-1-2

Morselli, C., & Petit, K. (2007). Law-enforcement disruption of a drug importation network. Global Crime, 8(2), 109-130. Retrieved from: https://www.tandfonline.com/doi/full/10.1080/17440570701362208

Papachristos, A., Hureau, D., & Braga, A. (2013). The Corner and the Crew: The Influence of Geography and Social Networks on Gang Violence. American Sociological Review, 78(3), 417-447. Retrieved from: https://journals.sagepub.com/doi/10.1177/0003122413486800

Porter, W., Schroeder, R., Callaghan, C., Barreto, A., Bussell, S., Young, B., . . . von Eiff, J. (2019). Mapping Gray Maritime Networks. Connections, 39(1). Retrieved from: https://www.exeley.com/connections/doi/10.21307/connections-2019-006

Raab, J., & Milward, H. B. (2003). Dark networks as problems. Journal of Public Administration Research and Theory, 13(4), 413-439. Retrieved from: https://arizona.pure.elsevier.com/en/publications/dark-networks-as-problems

Rivers, R., Evans, T., & Knappett, C. (2016). From oar to sail: The role of technology and geography in the evolution of Bronze Age Mediterranean networks. In C. Ducruet, Maritime Networks: Spatial structures and time dynamics (pp. 63-76). New York: Routledge.

Roberts, N., & Everton, S. (2011). Strategies for Combating Dark Networks. Journal of Social Structure, 12(2). Retrieved from: https://www.cmu.edu/joss/content/articles/volume12/RobertsEverton.pdf

Rodrigue, J., & Notteboom, T. (2014). The geography of cruises: itineraries, not destinations. Applied Geography, 38(1), 31-34. Retrieved from: https://www.sciencedirect.com/science/article/abs/pii/S0143622812001373?via%3Dihub

Schelle, S. (2018). Kartlegging av maritime hybride trusler: Kan bruk av stordata og sosial nettverksanalyse bidra til økt maritim situasjonsbevissthet? [Survey of maritime hybrid threats: Use of big data and social network analysis to help increased maritime situational awareness?]. Retrieved May 06, 2020, from https://fhs.brage.unit.no/fhs-xmlui/bitstream/handle/11250/2583966/2018%20Masteroppgave%20Schnelle%20Stian.pdf?sequence=1  

Wallace, T., & Mesko, F. (2013, September 30). The Odessa Network: mapping facilitators of Russian and Ukrainian Arms Transfers. Retrieved 09 2019, from C4ADS.org: https://static1.squarespace.com/static/566ef8b4d8af107232d5358a/t/56af8a2dd210b86520934e62/1454344757606/The+Odessa+Network.pdf 

Wang, L., Notteboom, T., & Yang, L. (2016). British and Japanese Maritime Networks in China in the 1920s. In C. Ducruet, Maritime Networks: Spatial structures and time dynamics (pp. 112-133). New York: Routledge.

Featured Image: OSLO, Norway (Nov. 13, 2018) Sailors and Marines man the rails as the Wasp-class amphibious assault ship USS Iwo Jima (LHD 7) arrives in Oslo, Norway, for a scheduled port visit Nov. 13, 2018. (U.S. Navy photo by Mass Communication Specialist 3rd Class Daniel C. Coxwest/Released)

The Bilge Pumps 1 – Launch Episode!

By Alex Clarke

So it’s happened: we did three episodes, the listeners have spoken – at least the ones we’ve heard from – and it was a big thumbs up. So we’re here to stay; we even have our own channel – which is lovely and currently squeaky clean, but give us five minutes and we’ll have it full of ship pictures, models, books, and probably some comfy recliners. After all, we have to make ourselves at home!

We are The Bilge Pumps, a podcast crew of three naval geeks, with occasional guests, who squeak a lot about naval stuff and sometimes when moving. We include myself – Alex Clarke – known as much for my addiction to Irn Bru as for my PhD and NavalHistoryLive YouTube channel; Alex ‘Drach’ Pocklington, also known for his love of Irn Bru and for being an engineering savant, but mainly for the excellent Drachinifel YouTube channel; and Jamie Seidel, a journalist with a passion for armored carriers so great that he has set up a website, a YouTube channel, and a Twitter feed all about them. Join us for an informative, but also humorous, take on current-ish affairs in the maritime and naval world.

These roughly 70-minute podcasts will hopefully make you smile as much as they make you think, so please listen, enjoy, and feel free to send us topic suggestions via our Twitter feeds; just make sure to include #Bilgepumps when you do.

Download The Bilge Pumps 1 – Launch Episode!


Alex Clarke is the producer of The Bilge Pumps podcast.

Contact the CIMSEC podcast team at seacontrol@cimsec.org

War is a Learning Competition: How a Culture of Debrief Can Improve Multi-Domain Operations

The following article was originally published on Over the Horizon and is republished with permission. Read it in its original form here.

By Tim “Diesel” Causey

Executive Summary

The Multi-Domain Operations (MDO) community continues to evolve and progress. MDO is, and will be, the fundamental enabler of Joint All-Domain Command and Control (JADC2) and of the way our nation fights future wars. As the maturing community integrates new concepts and processes, Multi-Domain Operators must identify and ingrain the valuable lessons learned along the way. Creating a set of standards to capture feedback and drive improvement is vital for development in any organization. The debrief culture of the U.S. Air Force fighter community, among others, is well-known for its direct, highly effective feedback and learning methods. This type of focused feedback is important to the fighter community because the debrief is where the majority of learning takes place. The MDO community would benefit greatly by utilizing this debrief culture as a model from which to develop its own unique culture of consistent, iterative improvement. Because a standard day, or sortie-equivalent, is not yet fully fleshed out for Multi-Domain Operators, the purpose of this paper is to convey the necessity of debriefing lessons learned and to provide best practices in their current form. The ultimate objective is to create a foundation for the MDO community to adapt these practices as the details and nuance of its daily execution become more specific and clear.

Introduction

War is a learning competition; therefore professional learning—continuing education—is fundamental to winning wars. As the international strategic environment becomes increasingly complex, the Department of Defense (DoD) must synchronize efforts across domains to maintain its advantage. Achieving this goal requires planning and executing strategic response options utilizing a Multi-Domain Operations (MDO) framework. To become the world-leading standard in this complex environment, the MDO community must develop efficiencies to respond and innovate more rapidly and effectively. The first step to enable this advancement is instilling a culture of debrief, direct feedback, and constructive learning within the MDO community.

Many communities across the United States Air Force (USAF) embrace a debrief culture, though some have unique formats and standards to tailor learning to respective needs. The debrief is designed to focus analysis on either the accomplishment or failure to accomplish desired learning objectives (DLOs) and/or mission objectives. Mission objectives drive the planning or execution items that must be accomplished to be successful and therefore expose the areas of individual, crew, or team performance that must be addressed to correct for future iterations. Regardless of distinctive design, any effective debrief identifies errors and provides fixes for those errors, while also allowing those who did not directly commit a given error the advantage of learning from others’ mistakes. Since there is not enough time for each operator to make all the mistakes, this type of learning creates efficiency by reducing repetitive errors across the group that is present for a given debrief. Now, multiply this effect across entire communities.

The fighter aviation community has refined its debrief process over several decades; it is fundamental to fighter culture. Any organization can utilize fighter debrief concepts as a reference—or even baseline—to develop its own culture of debrief. Being composed of personnel from many different career fields and backgrounds, the MDO community must be deliberate about, and dedicated to, the development of appropriate debrief formats and standards. Since the MDO process is still early in its development, it is critical to build the foundation of this debrief culture in the Multi-Domain Warfare Officer schoolhouse (known as 13O/13Oscar schoolhouse) and Air Command and Staff College’s Multi-Domain Operational Strategist (MDOS) concentration (soon to become JADS – Joint All-Domain Strategists). One way to achieve this is for the 13O schoolhouse and MDOS to leverage the proven fighter debrief process in establishing an MDO debrief methodology. This can inform the MDO community’s initial, essential steps in developing a format and standards for efficient and effective feedback.

Fighter Debrief Culture

To understand fighter debrief culture in a way that helps the MDO community relate it to the eventual structure of an MDO day, or an MDO mission, it is important to describe that fighter culture in its native context. Debrief has always been an important part of fighter aviation culture, facilitating honest and direct feedback on every mission element. As Combat Air Force (CAF) flying hours continue to decrease, debrief has become even more important to ensuring everyone receives required training. Additionally, work-life balance and operations tempo require debriefs to be direct and succinct, due to the limited time available after mission planning, briefing, and flying the mission. By the time the debrief starts, aircrew likely have already been at work for a full day.

To maintain focus and aid efficiency, debriefers commonly use the mantra “Plan, Products, Brief, Administration, Tactical Admin, and Execution” to address all portions of the mission. At the beginning of the debrief, it is helpful to keep sections like “Brief” as simple as possible by asking, “was there anything from the brief negatively affecting your execution today or that you have questions on?” Directing this question to the room allows the debriefer to quickly address pre-execution issues, and then move to the mission itself. However, the brief may have negatively affected execution in a way that remains to be determined in debrief, so it should also be considered during the debrief focus point (DFP) development. Utilizing this debrief structure, the debriefer quickly addresses issues in each pre- and post-execution section with the flight participants until arriving at mission execution. Mission execution review is designed to focus the debrief so each person can improve for the next mission. This does not mean each person gets individually debriefed, but rather that those who made errors most impactful to mission success or failure have those errors identified and corrected in a way everyone can learn from them. All participants should leave understanding how to better execute the mission. The succinct, direct nature of fighter debrief is equally applicable to the MDO community.

An additional key to ensuring efficient and effective debrief is withholding personal feelings and ensuring rank does not impede instruction for correctable mistakes. Debrief attendees should behave professionally, and critiques of execution should not be personal in nature, nor taken personally by flight participants. Aircrew must avoid defensive attitudes and cannot make excuses for poor performance. To this end, mission reconstruction should focus on facts, so instructional fixes can be objective corrections to demonstrated errors. If crews take debrief points personally, or if pride stands in the way of learning, valuable lessons are lost. The person running the debrief sets rules of engagement (ROE), which are designed to help avoid hurt feelings and pride issues. ROE can vary depending on the squadron and the person in charge of the debrief. Below is an example of debrief ROE, developed over several years of flying fighter aircraft. Although not all-inclusive, it provides a good starting point.

Different communities have passed down similar rules throughout the years, and everyone has their favorite – or most important – rule. Another helpful source is an article written for the Judge Advocate General’s (JAG) Corps by Major Mark Perry (an F-15C pilot) and Major Benjamin Martin (a JAG officer). Their five key rules offer great insight into a portion of the debrief process. They lay an initial foundation that helps underpin the essence of the debrief: an investigation into the errors made. The overall goal is to show the facts of what occurred in order to ascertain, prove, and teach the fix (i.e., a “lesson learned”) for everyone to internalize from the debrief. This type of debrief is only possible in the limited time available if everyone is honest about mistakes and is ready to learn.

Another critical facet to making this type of debrief possible is careful selection of who runs each debrief. It is important to develop a community standard. As a general rule, whoever established the desired learning objectives (DLOs) which drive the mission objectives should run the debrief. This is usually the same person who prepared the mission and gave the briefing. Ideally this is an instructor, unless someone is being upgraded, but it does not have to be. Especially important is the maxim that rank has nothing to do with who runs the debrief. The squadron commander—or the wing commander—may be in the formation, but the day’s lead or instructor is the most appropriate to lead assessment of facts and fixes. In that same vein, there is no rank in the debrief. Per the ROE, this does not mean one can say whatever he/she wants. Always remain professional. This helps establish a respectful balance, while taking advantage of the reality that learning can come from anyone, regardless of rank.

The Process

The USAF Weapons School (i.e., Weapons Instructor Course, or WIC) uses a debrief standard across all of the school’s platforms. The mission analysis process assesses accomplishment of the DLOs. If a formation fails to accomplish a specific DLO, the process then identifies the errors that led to the failure. These errors become DFPs or learning points (LPs), the former having a more significant impact on mission success than the latter. Once the debriefer identifies the DFPs or LPs, he or she categorizes each into one of three areas: perception, decision, or execution. After categorizing the error, the debriefer provides an instructional fix to maximize learning and to ensure those present can make a tangible correction for future missions. Combining a DFP with an instructional fix results in a lesson learned, the critical element of community improvement.
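The mission analysis flow described above (identify errors, classify each as DFP or LP, categorize it as perception, decision, or execution, and pair it with an instructional fix to form a lesson learned) can be sketched as a small data model. The following Python sketch is purely illustrative; the type names, fields, and sample strings are invented for this illustration and do not reflect any official tool or format.

```python
from dataclasses import dataclass
from enum import Enum

class ErrorCategory(Enum):
    # The P/D/E model used to categorize each error
    PERCEPTION = "perception"
    DECISION = "decision"
    EXECUTION = "execution"

@dataclass
class DebriefError:
    description: str
    category: ErrorCategory
    caused_dlo_failure: bool  # True -> DFP, False -> LP

@dataclass
class LessonLearned:
    focal_point: str       # the error being debriefed
    kind: str              # "DFP" or "LP"
    category: str          # perception / decision / execution
    instructional_fix: str

def analyze_mission(errors, fixes):
    """Pair each significant error with an instructional fix,
    producing the lessons learned for the debrief."""
    lessons = []
    for err in errors:
        kind = "DFP" if err.caused_dlo_failure else "LP"
        lessons.append(LessonLearned(
            focal_point=err.description,
            kind=kind,
            category=err.category.value,
            instructional_fix=fixes.get(err.description, "TBD"),
        ))
    return lessons
```

The point of the sketch is the structure, not the code: every error is forced into a kind (DFP or LP), a P/D/E category, and a fix, so no debrief point leaves the room without an actionable correction.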

DFPs and LPs should be the focal point of the debrief because they distill vast amounts of data into concise and effective lessons for each participant. If the debriefer does not identify the DFP or LP, untargeted analysis of minutiae can dilute the debrief’s focus, and those listening can lose interest or get confused. A debriefer who identifies every minor error someone makes not only wastes valuable time, but can also browbeat an individual, often leading to mental shutdown and an inability to actually learn. Instead, DFPs developed from the DLOs prevent aimless rambling and give the debrief focus. The debriefer identifies the DFPs during the reconstruction portion of the debrief. Whereas DFPs are failures to meet mission or tactical objectives (i.e., DLOs), learning points arise when the formation accomplishes the DLO in spite of significant mistakes, or in a non-traditional way (e.g., the formation completed the mission but made significant errors worth debriefing). Learning can come from successes, using LPs, or from failures, identifying LPs and/or DFPs. In any of these cases, the DFPs and/or LPs provide a common reference point and keep the debrief focused and succinct.

While the fighter community uses the mantra “Plan, Products, Brief, Admin, Tactical Admin, and Execution” to ensure all portions of the mission are addressed, another simple process applicable to any type of event is the five questions Air Force pilot Bill Crawford discusses in his 2015 TEDx Talk. These questions outline an easy-to-remember checklist to guide debriefs:

  1. What happened?
  2. What went right?
  3. What went wrong?
  4. Why?
  5. What are the Lessons Learned?

Step one: “What happened” is the process of validating the mission and tactical objectives. In other words, did the flight accomplish the DLOs?

Step two: “What went right” is an important part of the debrief process for two reasons. First, a debrief should not be purely negative; and second, this step shows the group how things are supposed to look: it is motivating, reinforces good habits, and gives people something to replicate. Additionally, optimal execution is sometimes achieved without recognition or through unintentional action, and it should be highlighted to ensure it is understood and can be applied in the future.

Steps three and four: “What went wrong” and “why” are where the debrief loop, discussed below, is used. Step three is not merely focused on “who made the mistake.” Similarly, step four asks “why,” not “who.” Per the aforementioned debrief ROE, do not make the debrief personal.

Step five: “What are the lessons learned” relates back to DFP and LP development; however, as Bill Crawford describes in his TEDx Talk, this discussion should be carried further: incorporate the lessons learned into the next execution cycle’s planning process. This allows a wider group of people to learn from the debrief, growing the community as a whole.
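The five-question checklist lends itself to a simple, repeatable structure: walk the questions in order, require an answer to each, and feed the step-five lessons into the next planning cycle. The Python sketch below is illustrative only; the function names and example wording are invented for this sketch, not taken from any checklist tool.

```python
FIVE_QUESTIONS = [
    "What happened?",
    "What went right?",
    "What went wrong?",
    "Why?",
    "What are the lessons learned?",
]

def run_debrief(answers):
    """Walk the five questions in order; an unanswered question
    means the debrief is incomplete."""
    record = {}
    for q in FIVE_QUESTIONS:
        if q not in answers or not answers[q]:
            raise ValueError(f"Debrief incomplete: no answer for {q!r}")
        record[q] = answers[q]
    return record

def feed_forward(record, next_cycle_inputs):
    """Carry step-five lessons into the next execution cycle's planning
    inputs, so the wider community benefits from the debrief."""
    next_cycle_inputs.extend(record["What are the lessons learned?"])
    return next_cycle_inputs
```

The design choice worth noting is that the loop refuses to complete if any question goes unanswered, mirroring the point that skipping a step (most often the “why”) is what causes lessons to be lost.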

When used properly, the debrief loop ensures DFPs and LPs are identified and fixed. Air Force then-Captain David Deptula formally described the debrief loop in his Weapons School Paper “Fundamentals of the Instructional Debrief.”

The Debrief Loop (Captain David Deptula, “Fundamentals of the Instructional Debrief,” USAF Weapons School student paper, F-22A Class 12BIN, December 2012).

Determining why an error occurred is a vital part of the debrief and is unfortunately where most debriefers have trouble. The tendency is to make an assumption about why someone made an error and then give a fix to that assumption. However, when the person running the debrief uses the third step of the debrief loop correctly, he or she asks direct questions of the person who made the mistake to get to the “why” of the error. Here it is especially important that all participants adhere to rules four and five of the debrief ROE.

When determining the “why,” the debrief loop recommends the P/D/E model: perception, decision, and execution. Using this model, the debriefer asks questions that categorize the error as one of perception, decision, or execution, and then uses that information to deliver an instructional fix (IF). An IF should be easy to follow and easy to implement in future missions.

Debrief for the MDO Community 

There are many articles, books, and even TEDx Talks on the subject of debrief. Although all are useful, their target audiences are corporations, lawyers, and doctors; and while certain communities within the Air Force utilize very effective debrief methodologies, none of these directly address operations or planning in the MDO environment. There will be an initial hurdle in developing an accepted debrief standard for the MDO community, as it is built from a diverse pool drawn from around the DoD. Many people may not be familiar with the previously described “fighter” debrief style, may find the direct feedback too personal in nature, or may misconstrue the feedback as an official report instead of seeing it as simply a way to improve future efforts. These differences in background, and in conceptions of feedback, make it even more important for the MDO community to establish a standard for debrief.

In conjunction with introducing the MDO community to the debrief process and etiquette, the community would also benefit from identifying the mission areas most appropriate for applying the debrief process. Five areas from the planning and execution stages are regularly occurring processes ripe for iterative learning; applying debrief methodology to them would ultimately reduce execution errors.

Potential MDO Debrief Areas

When the MDO community formally develops a debrief methodology, it should review the following five areas. These areas are not a definitive answer for how to develop a debrief; rather, they are ideas intended to spark discussion and drive development in the MDO community.

The first area the MDO community could benefit from debriefing is planning process assumptions. It does not matter whether the planning process is for a wargame, a staff-level task, or an MDO mission. When executing the planning process, it is important to identify the assumptions made about the task at hand. Assumptions allow the team to maintain forward progress by focusing effort, but they also carry varying degrees of inherent risk. This risk depends on multiple factors, including how the assumption was derived, the confidence level of the assessment, and the gravity of the consequences if the assumption turns out to be partially or entirely invalid. It is imperative to document these assumptions for all to see and for the team to revisit periodically. Putting them on a whiteboard in the room is a great technique to enable constant review and to allow mission partners or late arrivals to catch up to the group. Listing assumptions in plain view has the additional benefit of ensuring all participants can read, validate, or (in some cases) challenge an assumption during the planning process. If a late arrival or the commander highlights an invalid assumption, the team can make immediate and early adjustments to the scope and scale of the planning. However, if an assumption is invalid and not caught, it can affect the overall mission and could result in a failure to accomplish a tactical objective. In this case, the team should treat it like a DFP: “Why was assumption #8 incorrect, and how did that affect the overall outcome of the planning process?”

Additionally, when the planning team arrives at the end of its process and briefs the plan, it should avoid assuming that the assumptions were correct simply because the commander selected the planners’ recommendation. Assumptions are validated as execution unfolds and they prove valid or invalid in real time. Because of this, it is best to validate assumptions after execution and capture the results of the debrief for future planning efforts. While some assumptions will ultimately be affected by enemy decision-making, a formal debrief will identify those factors the planning team could have predicted in the planning phase. It may also identify whether planners were cognizant of the risks to assumptions that depended on enemy decisions, which should have been a significant factor in contingency planning.
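The assumption-tracking practice described above (number each assumption, keep it in plain view, mark it valid or invalid as execution unfolds, and turn invalid assumptions into debrief points) can be sketched as a minimal log. This Python sketch is a purely illustrative model under those assumptions; the class names, fields, and risk labels are invented for the example and do not come from any planning system.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Assumption:
    number: int
    text: str
    risk: str                     # e.g., "low", "medium", "high"
    valid: Optional[bool] = None  # unknown until execution proves it out

@dataclass
class AssumptionLog:
    assumptions: List[Assumption] = field(default_factory=list)

    def add(self, text: str, risk: str) -> Assumption:
        """Record an assumption, numbered in the order it was made."""
        a = Assumption(len(self.assumptions) + 1, text, risk)
        self.assumptions.append(a)
        return a

    def validate(self, number: int, valid: bool) -> None:
        """Mark an assumption valid or invalid as execution unfolds."""
        self.assumptions[number - 1].valid = valid

    def debrief_points(self) -> List[str]:
        """Invalid assumptions become candidate debrief points."""
        return [f"Why was assumption #{a.number} ({a.text!r}) incorrect?"
                for a in self.assumptions if a.valid is False]
```

The whiteboard serves the same purpose as this log: a single, visible, numbered record that the team and any late arrival can challenge during planning and that the debrief can query afterward.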

Risk is a second area in which to apply the debrief process, as risk is vital to commanders at all levels. To facilitate this type of debrief, risk should be categorized into risk of mission failure, risk to force, and risk to timing and tempo. The risk accepted in a decision is a large assumption made during the planning process, so comparing the risk planners deemed acceptable to the risk the commander wanted mitigated can be an additional factor to debrief. For example, LP 1: “Why did the planning team assume a higher risk than the commander was willing to accept?” Once developed, these risk lessons can feed the planning cycle to inform better future risk mitigation. Risk is not the same in every scenario, and every commander’s risk tolerance differs, but understanding allowable risk in a complex environment is a great place to debrief.

A third area where the debrief methodology is appropriate is following wargame execution. Given the time and monetary investment a wargame requires, it is vital to execute the wargame process as correctly and effectively as possible. When developing courses of action for the commander, the MDO community can use wargames to identify modifications or to allow the commander to select the best course of action. As a result, wargaming can also benefit from a formal debrief process; for example, debrief ROE 3-6 can help ensure an effective and timely wargame. It is human nature to leave an experience like a wargame either patting yourself on the back or being angry at the other side for negating your opinion or planning. Executing a debrief at the end of the wargame can identify lessons learned for blue mission planners and ensure all participants leave with a shared, clear understanding of the outcome. This helps identify which modifications to the plan are necessary. Since the red team has immersed itself in the enemy’s decision-making process, it should use the five questions to provide details the blue team can use in executing the debrief loop.

A fourth area for the MDO community to leverage the debrief methodology is during flexible deterrent option (FDO) and strategic response option (SRO) development. The MDO planning cycle can be time-consuming, as it consists of developing observed and desired systems, executing center of gravity and decisive point analysis, building a logic map, and filling out a decision support matrix, a decision support template, and a synchronization matrix to build the SRO. It may take three to six months to validate an SRO, delaying feedback to the planners and potentially losing lessons over time. By adopting a debrief culture, the MDO community could generate lessons learned during the process and incorporate them into current and future planning cycles, thereby reducing errors and increasing effectiveness across the entire community.

The final area in which the MDO community could apply a community-wide debrief methodology is exercises at the Air Operations Center (AOC) level. The tendency is to run the exercise, brief a “3 up and 3 down” slide, and then return to standard business. A 3-up/3-down debrief highlights only three positives and three negatives from the entire exercise. This type of wave-top after-action assessment does not maximize the learning and growth such an exercise can produce. Executing a robust exercise at the MDO level requires a great deal of time, effort, and resources; it therefore deserves a debrief methodology that fully captures the lessons learned. There are many ways to accomplish this, whether at the completion of each air tasking order (ATO) day or at the completion of the entire exercise. Establishing a standard that facilitates root-cause analysis and open discussion of errors among key participants is crucial to moving the MDO community forward. Preventing recurring mistakes in the five recommended areas is the ultimate benefit of a well-developed debrief process. This is why it is important for the MDO community to develop its debrief methodology (with appropriate ROE) and identify the areas in the community where it should be applied.

Conclusion

The MDO community currently lacks a standardized debrief process that would enable the growth required to be effective in future MDO environments. There is no better time to establish a standard process of feedback than in the early stages of growth. The MDO community can leverage the debrief culture of the USAF fighter community: a proven system that allows effective and efficient feedback throughout a mission, a unit, and the entire community. Debrief culture requires buy-in from all levels of the MDO community, and it requires all participants to follow a standard set of rules to ensure the process is followed; multiple ROE examples have been given to facilitate this. The MDO community should develop new ROE to fit its expanding environment. If the MDO community does not establish some type of formal feedback system in the early stages of development, it will lose many lessons and be forced to reinvent the wheel, losing valuable time and potentially falling behind the adversary in its ability to anticipate, adapt, and react to enemy actions.

Recommendation

For the MDO community to evolve, it needs to establish and internalize a common trust and understanding that allows feedback to be passed effectively and efficiently between MDO planning cells and staffs. This critical feedback mechanism will ensure lessons are derived from errors and implemented in future planning and execution cycles. By establishing a culture of debrief and following the above debrief ROE, the MDO community can help ensure success as it moves into the future environment. To codify a debrief methodology and engender the required debrief culture for the benefit of the entire DoD, the schoolhouses must establish the standard. Therefore, the 13O schoolhouse and the MDOS should work together to develop the desired debrief methodology and ensure the enemy does not gain the intellectual high ground in an evolving and complex strategic environment.

Major Tim “Diesel” Causey is an Instructor Weapon Systems Officer and Weapons School Graduate with over 1700 hours in the F-15E. He is an MDOS graduate and is currently on the faculty of the Joint All-Domain Strategist Concentration at the Air Command and Staff College.

The views expressed are those of the author and do not necessarily reflect the official policy or position of the Department of the Air Force or the U.S. Government.

Featured Image: An F-15E Strike Eagle flies over Iraq May 5, 2018. (U.S. Air Force photo by Staff Sgt. Corey Hook)