The Unique Intelligence Challenges of Countering Naval Asymmetric Warfare

Naval Intelligence Topic Week

By CDR (ret.) Dr. Eyal Pinko

Introduction

“Hezbollah has the best missile boats in the world. He has many missiles, but he can’t drown.”—Major General Eli Sharvit, Israeli Navy Commander, January 2018

Navies increasingly must deal with asymmetric naval forces, whether they are the naval arms of terrorist organizations, pirates, or asymmetric forces operated by states as part of a comprehensive and integrated naval strategy. This integration is seen in the Chinese Navy, with its maritime militia and missile boat force, as well as in the Iranian Revolutionary Guards Navy, which operates alongside the regular Iranian Navy.

The characteristics of asymmetric naval warfare, including its force development, civilian integration, and combat doctrine of multidimensional attacks or gray zone operations, pose many challenges to navies. This is especially true for those naval intelligence organizations assigned to understand these asymmetric forces and their methods.

Naval intelligence organizations seeking to understand asymmetrical naval forces must promote a culture of creativity, daring, pluralism, and deep cultural knowledge in order to best understand how the asymmetric adversary behaves and operates. One pertinent example of how naval intelligence was stressed against asymmetric naval forces can be found in the Israeli Navy’s experience against Hezbollah in the Second Lebanon War.

The Second Lebanon War at Sea

On July 12, 2006, Hezbollah, the Lebanese terror organization, launched a surprise offensive attack against an Israeli Defense Force (IDF) unit guarding and patrolling Israel’s northern border. The Hezbollah force killed three Israeli soldiers and captured two others. At the same time, Hezbollah fired rockets toward northern Israeli cities. The IDF’s decision to respond was unexpected, and so the Second Lebanon War broke out without warning.

During the war the Israeli Navy was assigned several missions. The missile boat flotilla carried out several essential roles, including intelligence gathering, naval gunfire support, missile attacks against significant targets on the Lebanese coast, and artillery support to Israeli ground forces. The submarine flotilla carried out special operations, and the 13th Flotilla, the Naval Commandos, carried out assault operations from the sea against the Lebanese coast. The Navy’s most important mission was to impose a naval blockade on Lebanese shores and prevent maritime trade.

On Friday, July 14, at 8:42 PM, Hezbollah Secretary-General Hassan Nasrallah went on the air. With a video running in the background, he excitedly recounted that his organization had attacked an Israeli Navy missile boat sailing off Beirut’s coast and that the Israeli missile boat was sinking.

A day later, it became clear that the Lebanese terrorist organization’s naval force, with the help of the Iranian Revolutionary Guards Quds Force, had fired C-802 anti-ship missiles at Israeli missile boats. One of the missiles hit the crane of the INS Hanit. The ship survived the hit, was towed back to its home port, and returned to operational missions after about three weeks. However, four Israeli crewmembers lost their lives.

The INS Hanit (Photo via Tsahi Ben-Ami /Flash90)

Israeli intelligence agencies had been collecting details regarding Iranian anti-ship missiles delivered to Hezbollah for about two years before the ship was hit. However, the assessment of naval intelligence in 2004 was that rockets (not missiles) had reached Hezbollah’s naval force, and it estimated that Hezbollah would attempt to detect Israeli Navy ships using an array of coastal radars and fire the rockets at them.

A better understanding of the nature of asymmetric naval warfare and its associated force development could have better prepared the Israeli Navy for this attack. However, naval asymmetric warfare poses unique complexities that can strain the ability of naval intelligence to comprehend it.

Naval Asymmetric Warfare

Naval asymmetric warfare is warfare conducted at sea and from the sea, between two adversaries, one of whom has a significant superiority over the other in quantity, quality, combat doctrine, or technology. Asymmetric naval warfare is usually employed by the weaker side to challenge the maneuvering, warfighting, and command and control capabilities of the stronger side, thereby enabling the weaker side to offset the stronger side’s conventional superiority.

Asymmetric naval warfare is based on several principles:

Passive warfare: A set of complementary efforts and means, not necessarily involving combat or the use of combat systems, intended to prevent equipment losses, damage to civilian and military infrastructure, and as much personal injury as possible. Passive warfare can include the use of deception, psychological warfare, and cyber warfare.

Assimilation: Employing nontraditional combatant platforms and means and integrating them into civilian infrastructure (such as civilian fishing boats or innocent civilian fishing ports). This can include embedding combat operatives among the civilian population.

Swarm attack tactics: Swarm attacks are based on attacking a target with numerous attacking vessels from different directions, simultaneously and multidimensionally. The multidimensional attack can feature surface, subsurface, aerial, and shore-based attacks, and be carried out by employing missiles and rockets, naval mines, torpedoes, unmanned aerial vehicles, and more. Swarm attacks attempt to impose a saturation dilemma on the target’s defenses: the attacked vessel’s ability to defend itself can be disrupted to a degree out of proportion to the threat itself because of how the threat is presented (see the sketch after this list). The asymmetric force secures temporary warfighting advantages by concentrating for swarm attacks and then subsequently disperses to mitigate exposure to counter responses.

The use of maritime topography: Topographical features such as coastal routes, coves, inlets, islands and more can be used to hide, conceal, resupply, insert special operations forces, and generally operate against the opponent.

Expansive target sets: The asymmetric naval force will often operate against both military and civilian infrastructure, such as oil platforms and seaborne economic assets.
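To make the saturation dilemma concrete, the following is a minimal toy model in Python, an illustration only with invented parameter values, not any real engagement model or published salvo equation. A defender that can attempt only a fixed number of intercepts against a simultaneous salvo sees leakers grow sharply once the salvo exceeds that capacity.

import random

# Toy saturation model (illustration only, not a real engagement model): the defender
# can attempt at most `capacity` intercepts against a simultaneous salvo, each
# succeeding with probability `p_kill`; attackers beyond capacity are never engaged.
def expected_leakers(attackers: int, capacity: int, p_kill: float, trials: int = 10_000) -> float:
    leaked = 0
    for _ in range(trials):
        intercepts = min(attackers, capacity)
        kills = sum(random.random() < p_kill for _ in range(intercepts))
        leaked += attackers - kills
    return leaked / trials

print(expected_leakers(attackers=4, capacity=6, p_kill=0.8))   # roughly 0.8 leakers
print(expected_leakers(attackers=12, capacity=6, p_kill=0.8))  # roughly 7.2 leakers

The numbers are placeholders; the point is the step change in leakers once the attacker concentrates beyond the defender’s engagement capacity, which is why the asymmetric force concentrates briefly and then disperses.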

Hezbollah’s Asymmetric Warfare

Hezbollah’s naval force was established several years before the Second Lebanon War in 2006, but it began to develop significantly after the Israeli Navy ship was hit. Naval force development was further accelerated by an understanding that fighting in the maritime dimension has significant implications for a future battlespace in Lebanon, for Israeli force maneuvering at sea and on land, and for the Israeli home front’s capability to sustain prolonged fighting.

The Iranian Revolutionary Guards Quds Force was the entity that helped set up the Hezbollah naval unit, worked to equip it with quality Iranian naval weapons, and trained the unit at Iranian Revolutionary Guards bases and training camps. An Iranian officer describing Hezbollah’s naval unit capabilities said it has a team of naval commando divers and Chinese-made speedboats, trained to attack Israeli Navy ships using Iranian swarm attack tactics. Another potential operational mode is the use of high-speed boats to carry out suicide attacks on Israeli vessels.

The Iranian officer added that Hezbollah enlisted Iranian and North Korean experts to build a 25-kilometer-long defensive strip along the Lebanese shore. Underground outposts were constructed, connected by canals, and allowed easy and quick passage from one position to another. This infrastructure enables Hezbollah to operate and execute combined attacks from different directions, to move information and weapons from place to place, all while defending itself by hiding in the folds of the ground and beneath it.

The Revolutionary Guards built warehouses for Hezbollah in the Bekaa area of eastern Lebanon, where rockets, missiles, and ammunition are stored. The warehouses, managed by Revolutionary Guards officers and Hezbollah operatives, enable all Hezbollah units, including the naval force, to have a continuous supply of weapons and access to the logistical and technological backbone.

The force development of the organization’s naval arm is based on insights and lessons learned by Hezbollah with the help of Iran, especially lessons learned from the First Lebanon War (1982), the Second Lebanon War (2006), and the IDF’s military operations in the Gaza Strip. Hezbollah and Iran understood the IDF’s and Israel’s vulnerabilities, mainly Israel’s strategic infrastructure vulnerabilities (at sea and on the coastline), including gas rigs, ammonia storage facilities, seaports, energy facilities, water desalination plants, and more.

Moreover, Hezbollah perceived the hit on the Israeli Navy ship during the Second Lebanon War, and the Israeli Navy’s subsequent withdrawal of its ships from the Lebanese coastal area, as a groundbreaking event that disrupted Israeli naval operations and perceptions. Hezbollah understood that if its missiles hit ships (whether naval or merchant ships), this would inhibit seaborne commercial movement to and from Israel. This could create a naval blockade and a state of sea denial which Israel would not be able to tolerate in the long run.

According to those insights, Hezbollah began to strengthen its naval forces, building long-range detection capabilities, coastal missile batteries, and well-developed naval commando forces. The coastal missile batteries include the 300-km-range Russian Yakhont missiles, which were apparently transferred from the Syrian Navy (with or without the knowledge of Russia), and Iranian coastal anti-ship missiles of various models and ranges (such as the Noor, Ghader, Ghadir, and C-802 missiles). These will enable Hezbollah in future armed conflicts to hit Israeli naval ships operating off the coast of Lebanon. More than that, some of these capabilities will enable Hezbollah to hit the ports of Haifa and Ashdod, which lie within missile range, and even Israel’s gas infrastructure, thereby moving the campaign into Israeli territory. A naval blockade on its ports would disrupt maritime commerce routes to and from Israel and paralyze its energy supplies.


A maritime barrier constructed north of Gaza to guard against infiltration by Hamas naval commandos. (Footage via Haaretz/Israel Ministry of Defense)

Beyond the naval blockade and sea denial capabilities, the Hezbollah naval unit under Iranian auspices is developing capabilities to carry out seaborne commando raids and sea mine attacks on Israeli ports using Iranian midget submarines and diver transportation vehicles. These can be launched from Hezbollah’s permanent bases along the Lebanese coast or even from merchant ships.

During the Syrian campaign that began in 2011, Hezbollah’s naval forces, along with Syrian and Iranian Revolutionary Guards naval forces, were stationed in the Syrian city of Latakia with fast patrol boats. These could be used, if necessary, to attack rebel forces, and against Israel in case of direct Israeli intervention in the Syrian campaign.

In summary, Hezbollah’s asymmetric naval force is developing in several dimensions:

  • Coastal-launched long-range anti-ship missiles. Some of these missiles can also be used against seaports, coastal infrastructure, and gas rigs.
  • Detection measures and intelligence systems that enable the missile operators to build a real-time, accurate maritime operating picture used to identify and mark targets for missile attack.
  • Rapid attack capabilities using high-speed boats, enabling the naval unit to carry out commando operations, attack Israeli ships, gather intelligence, and attack strategic infrastructure.
  • Unmanned surface and undersea vehicles for intelligence collection operations and suicide attacks.
  • Undersea warfare capabilities, including midget submarines and swimmer delivery vehicles (SDVs).

The Role of Naval Intelligence

The role of naval intelligence is to formulate and present an intelligence picture about rivals and activities in the maritime arena. Naval intelligence provides a strategic and tactical intelligence picture, enabling decision-makers and operational units to make decisions regarding force development, combat doctrine, force allocation, operations, and defining ongoing missions. Naval intelligence is required to present an intelligence picture, based on processes of collection, evaluation, research, and assimilation on several levels.

Opponents’ capabilities and infrastructure: Technological intelligence focuses on the opponent’s combat systems capabilities, performance, parameters, advantages, and vulnerabilities, along with the number of combat systems in the opponent’s possession and its procurement and force development processes. This intelligence also includes information about the opponent’s combat systems’ level of maintenance, readiness, and operational competence. Technological intelligence and the assessment of the opponents’ capabilities influence the strategic processes of force development, procurement, and developing combat doctrines.

The opponent’s intentions: Strategic intelligence can focus on the opponent’s intentions and propensities to launch military campaigns and offensive actions and to proliferate weapons. This intelligence can allow the operating forces to receive an early warning about the opponent’s operational activity and set in motion a preventive or defensive response.

Operations Intelligence: Tactical and operational intelligence can support planning and executing peacetime operations, operations between military campaigns, or during them. Operational naval intelligence is required to support sea operations (at sea and from the sea), including surface operations, submarine operations, unmanned aerial vehicle employment, and special commando operations. This intelligence includes target intelligence, enabling one to identify an opponent’s units’ infrastructure, headquarters, and operations, which may be targeted for attacks.

Exposure Intelligence: Intelligence can help one appreciate the exposure of one’s own forces’ operational activities, technological capabilities, force development processes, combat doctrine, and other information. Such intelligence is required as a decision-making support tool in carrying out naval operations to minimize force exposure and prevent casualties. It is also used to develop strategic and tactical options for deception operations.

Naval Intelligence Challenges in the Face of Asymmetric Forces

Asymmetrical naval forces impose several significant challenges on naval intelligence. The first challenge is the collection challenge, which includes efforts to gather information about asymmetric naval forces at the strategic and tactical levels.

At the strategic level, an intelligence challenge is to understand force development and procurement. The force development efforts of regular navies are usually visible, to an extent, to the public and through open-source media. But for asymmetric naval forces, force development is usually covert and clandestine. Weapons are secretly procured, and identifying covert proliferation is a significant challenge, without which it is not possible to build ideal responses and operational plans. Without intelligence on proliferation, actions cannot be taken to thwart and stop deliveries. The entities that deliver these weapons and the related training, whether they be arms traffickers or state-affiliated actors like the Quds Force, often take extensive measures to conceal their deliveries and associations.

A display of weapons and other military equipment confiscated from the MV Karine A vessel, used in an attempt to smuggle weapons to the Gaza Strip. (Photo via Wikimedia Commons)

Moreover, technological intelligence regarding the performance and capabilities of combat systems in the hands of the asymmetric opponent requires not only very complex continuous intelligence gathering, but also profound and intimate research, including understanding of the technological limitations of the opponent’s combat systems, especially when software-based combat systems can be modified or upgraded. Technological intelligence makes it possible to examine the capability of electronic warfare systems and hard-kill defense systems to defend against present and future threats that the asymmetric adversary is likely to use.

At the tactical level, the collection challenge concerns the information needed to accurately recognize and identify the naval asymmetrical adversary forces, which can be embedded among civilian infrastructure and an innocent population.

After the collection challenge, the next major hurdle is the assessment challenge, which also concerns two levels that aim to build an accurate picture of the opponent and complete the intelligence puzzle.

At the strategic level, the challenge is to understand the asymmetric opponent’s capabilities, the capabilities and performance of its combat systems, and its operational doctrines. Perhaps the most significant challenge is to recognize its training and operating routines. Knowing the opponent’s routines and operational patterns makes it possible to assess when a change occurs and to perceive its intention to shift into an attack or a different operating pattern.

The tactical-level challenge, which involves identifying suspicious maritime activity among fishing boats, fishers, and merchant ships, is very complex. The tactical intelligence challenge requires identifying abnormal signs and distinguishing with a high level of confidence between innocent civilians and vessels and naval or terrorist activities. The covert force development processes, assimilation among civilians, and camouflage in civilian infrastructure can make it very challenging to assess intelligence and indicate changes in activity and operational patterns.

The final challenge in this context is the assimilation challenge. This challenge involves intelligence officers’ and organizations’ roles in changing perceptions, combat doctrine, and force buildup and procurement processes among decision-makers and commanding officers. Intelligence assimilation among decision-makers is also required to carry out operations and ongoing operational activities during wartime and peacetime. Fighting against asymmetric naval forces, operating from a civilian population, in coordinated multidimensional swarm attacks, using different combat systems, requires different naval combat concepts than those applied on the classical naval battlefield.

Intelligence officers are required to move beyond the “walls” of information security and source security, and to maintain a clear and open dialogue with the operational levels, without exposing sources or incurring risk.

For example, intelligence officers will almost always choose to present threats or the opponent’s operational activity using probability levels. The operational level and decision-makers, however, want a threat classified as “yes” or “no,” along with its severity level, for example, whether a suspected boat belongs to an innocent citizen or a terrorist.

Intelligence officers’ role is to overcome this cultural gap and present a precise, up-to-date, and reliable intelligence picture that will influence decision-making to make optimum decisions while saving lives, even if they encounter personal resistance and doubts from the operational level.

Conclusion

Since the attack on the Israeli missile boat, further asymmetrical naval forces have developed in the Middle East and Asia, most of them inspired by Iranian naval doctrine, including Hezbollah, Hamas naval units, and the Houthi rebels in Yemen. It should be noted that Iran and China also use asymmetric naval forces, which operate alongside their traditional navies.

Asymmetric naval forces pose highly complex intelligence challenges to the navies that may need to confront them, and require different intelligence gathering, research, and assessment techniques. Yet even in the asymmetric context the purpose of naval intelligence will remain the same: to understand rival navy capabilities, intentions, and operations.

Eyal Pinko served in the Israeli Navy for 23 years in operational, technological, and intelligence duties. He served for almost five more years as the head of a division at the Prime Minister’s Office. He holds Israel’s Security Award, the Prime Minister’s Decoration of Excellence, the DDR&D Decoration of Excellence, and the IDF Commander in Chief Decoration of Excellence. Eyal was a senior consultant at the Israeli National Cyber Directorate. He holds a bachelor’s degree with honors in Electronics Engineering and master’s degrees with honors in International Relations, Management, and Organizational Development. Eyal holds a Ph.D. from Bar-Ilan University (Defense and Security Studies).

Featured Image: Palestinian divers from the al-Qassam Brigades, Hamas’ armed wing, take part in a parade marking the 27th anniversary of the Islamist movement’s creation on December 14, 2014 in Gaza City. (Photo via AFP/ Mahmud Hams)

Trustable AI: A Critical Challenge for Naval Intelligence

Naval Intelligence Topic Week

By Stephen L. Dorton & Samantha Harper

With a combination of legitimate potential and hype, artificial intelligence and machine learning (AI/ML) technologies are often considered the future of naval intelligence. More specifically, AI/ML technologies promise to not only increase the speed of analysis, but also deepen the quality of insights generated from large datasets.1 One can readily imagine numerous applications for AI/ML in naval intelligence at tactical, operational, and strategic levels: threat detection and tipping and cueing from signals intelligence (SIGINT) or electronic intelligence (ELINT), target classification with acoustic intelligence (ACINT) or imagery intelligence, using AI/ML to predict enemy movements for anti-submarine warfare, and many others.

The government, industrial, and academic sectors will continue to work fervently on challenges such as data collection, preprocessing, and storage, the development of better algorithms, and the development of infrastructure for storage and compute resources. However, even the best performing AI/ML technologies are moot if the analyst or downstream decision maker cannot trust the outputs. Given the gravity of decisions driven by naval intelligence, AI/ML outputs must be readily interpretable, and not only provide the what, but the why (i.e. why the answer is what it is) and the how (i.e. how specifically the AI/ML arrived at the answer). 

The Challenge: Trust in AI

To illustrate this challenge, consider the following hypothetical scenario: a watch supervisor on an aircraft carrier is doing pre-deployment qualifications off the coast of Virginia. After making a brief head call, they come back to find that one of their junior watchstanders has reported a dangerous, but unlikely, aerial threat to the Tactical Action Officer (TAO). After nearly putting the ship in general quarters, the TAO realizes that, based on the operating area, the considerable range from the threat, and other intelligence on the threat’s location, it is impossible for that threat to be there. Further inspection shows that the AI/ML in the system was programmed to automatically classify tracks as the most dangerous possible entity that could not be ruled out, but the junior watchstander was unaware of this setting. Unfortunately, the AI/ML did not explain why it classified the track as a high-threat contact, what signatures or parameters it considered, or how it generated a list of possible tracks, so the junior watchstander made a bad call based on an incomplete understanding of the AI system.
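A minimal sketch of the classification logic described in this scenario may help illustrate the problem. It is purely hypothetical, with invented track types and threat rankings, and does not represent any fielded combat system: a classifier that defaults to the most dangerous track type it cannot rule out will surface an alarming label with no accompanying rationale.

# Hypothetical illustration only: a classifier that defaults to the most dangerous
# track type it cannot rule out. Track types and threat rankings are invented.
THREAT_RANK = {"commercial air": 0, "helicopter": 1, "fighter": 2, "anti-ship missile": 3}

def classify_worst_case(candidates):
    """Return the most dangerous candidate classification not yet ruled out."""
    viable = [c for c in candidates if not c["ruled_out"]]
    return max(viable, key=lambda c: THREAT_RANK[c["type"]])

# A track that is probably a helicopter, but cannot yet be ruled out as a missile,
# is reported as a missile. The operator sees only the worst-case label, with no
# explanation of why it was chosen or what evidence was considered.
candidates = [
    {"type": "helicopter", "ruled_out": False},
    {"type": "anti-ship missile", "ruled_out": False},
    {"type": "fighter", "ruled_out": True},
]
print(classify_worst_case(candidates))  # {'type': 'anti-ship missile', 'ruled_out': False}

Such a conservative default may be a defensible design choice, but without exposing the why and the how, its output invites exactly the miscall described here.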

The problem is that this is not a purely hypothetical scenario, but is a real event that happened several years ago, as recounted during an ongoing study to investigate the role of trust and AI/ML in intelligence. While one may easily dismiss this and say “no harm, no foul,” that would be myopic. First, if this same scenario happened in contested waters or with a less experienced TAO, there could have been serious ramifications (such as another Vincennes incident, in which an Iranian airliner was shot down by a U.S. Navy cruiser). Second, this “boy who cried wolf” scenario caused the TAO to lose trust in the watchstander, the supervisor, and the entire section. Not only was the watchstander afraid to make decisive calls after the event, but it took nearly half of the deployment, making correct calls and answering requests for information, to regain the trust of the TAO. This lack of trust might have caused the TAO to hesitate to act on their reports if a real threat were to be identified. These kinds of delays and second-guessing can cost lives.

This example highlights another dimension to the challenge facing employment of AI/ML in naval intelligence. The goal is not to simply develop systems that sailors and analysts trust as much as possible. Having too much trust in AI/ML can result in misuse of the system (e.g. immediately accepting its outputs without considering the other available intelligence). Conversely, having too little trust can result in disuse of the system (missing out on genuine benefits of the system). Therefore, the pressing challenge for the future of naval intelligence is to develop AI/ML capabilities that allow operators to rapidly develop and calibrate their trust to appropriate levels in the right contexts and scenarios, the same way they would with their human teammates.

What is Trust? What Affects It?

The experimental psychology community has studied trust for years, defining it as “the attitude that an agent will help achieve an individual’s goals in a situation characterized by uncertainty and vulnerability.”2 In other words, trust is the degree to which one is willing to make oneself vulnerable, or put oneself in the hands of another agent (e.g. a person, or an AI/ML system). It is critical to understand what makes people gain or lose trust, as trust greatly impacts the adoption of new systems, and can make or break the performance in a human-machine team. This is especially challenging in the context of naval intelligence, where uncertainty and vulnerability are always present.

Designing AI/ML systems to engender trust is a complicated affair, due in no small part to the fact that trust is a complex, highly dimensional phenomenon. There are roughly a dozen factors that affect trust, including the following:3

  • Reputation: The AI/ML has received endorsement or reviews from others. 
  • Usability: The AI/ML is easy to interact with.
  • Predictability: The ability to predict the actions of the AI/ML.
  • Security: The importance of operational safety and data security to the AI/ML.
  • Utility: The usefulness of the AI/ML in a task.
  • Goal Congruence: The extent to which the AI/ML’s goals align with the user’s.
  • Reliability: The AI/ML is reliable and consistent in functioning over time.
  • Understandability/Explainability/Feedback: The extent to which one can understand what the AI/ML is doing, why it is doing it, and how it is doing it, either implicitly or explicitly.
  • Trialability: There is opportunity to interact with the AI/ML prior to accepting or adopting it for operational use.
  • Job Replacement: There is concern about the AI/ML taking one’s job.
  • Errors/False Alarms: Information provided by the AI/ML does not contain errors or false alarms.

A Naturalistic Study of Trust, AI, and Naval Intelligence: Early Findings

We are currently conducting a study to test the factors and better understand how trust is gained or lost in the context of naval intelligence, using a naturalistic decision making approach. Naturalistic decision making is the study of how people use their experiences in naturalistic settings, rather than in a controlled laboratory environment.4 This approach allows us to understand how these factors affect trust and decision making in the chaos of real world operations, complicated by missing information and time pressure.

More specifically, we used the Critical Incident Technique, a structured and repeatable means to collect data on prior incidents to solve practical problems.5 We recruited participants who had experience in intelligence, including planning, collection, analysis, or even military decision making as an active consumer of intelligence. Those in naval intelligence had experience in different intelligence fields, including ACINT, SIGINT, ELINT, GEOINT, and all-source intelligence, although most of their experiences were in tactical intelligence or operations using AI/ML that exploits intelligence. 

Participants were asked to identify an AI/ML technology they worked with in the context of intelligence, and then to think of any defining event (or series of events) that made them gain or lose trust in that technology. This resulted in a sample of nine stories about trust in AI/ML in the context of naval intelligence: four about gaining trust, and five about losing trust. These stories were similar to the earlier story about the junior watchstander reporting an impossible threat. A research team coded each story for the presence or absence of each trust factor, allowing insights to be gained from the data. So, what factors affected trust in AI/ML in naval intelligence?
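To illustrate the coding-and-tallying step just described, a minimal sketch follows. The factor names come from the list above, but the story entries are invented placeholders rather than the study’s actual data; the analysis itself can be as simple as counting factor mentions across gain and loss stories.

from collections import Counter

# Placeholder data in the shape of the coded stories: each story is coded for its
# outcome (gain or loss of trust) and the trust factors judged to be present.
stories = [
    {"outcome": "gain", "factors": {"Understandability/Explainability/Feedback", "Utility", "Reputation"}},
    {"outcome": "loss", "factors": {"Understandability/Explainability/Feedback", "Errors/False Alarms"}},
    # ...the remaining coded stories would follow...
]

gained = Counter(f for s in stories if s["outcome"] == "gain" for f in s["factors"])
lost = Counter(f for s in stories if s["outcome"] == "loss" for f in s["factors"])
print("Cited when gaining trust:", gained.most_common())
print("Cited when losing trust:", lost.most_common())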

Explainability and Utility are Paramount

Understandability/Explainability/Feedback was the most common factor in gaining or losing trust, appearing in eight of the nine examples. It was present in all five stories about losing trust, where a lack of explainability manifested itself in multiple ways. A lack of understanding of how the AI/ML generated results prevented the captain of a ship from knowing if they could safely override navigation recommendations from a GEOINT tool. In another case, it prevented search and rescue planners from even knowing if there were errors or limitations in another GEOINT product: “they put garbage in and got garbage out… but our people didn’t understand the theory behind what the machine was doing, so they couldn’t find [the] errors [in the first place].” In stories about gaining trust, analysts said that understanding the underlying algorithms enabled them to trust the AI/ML, because even when the outputs were wrong, they knew why. This knowledge enabled a SIGINT collector to adapt their workflow based on their understanding of the strengths and weaknesses of their AI/ML system, capitalizing on its strengths (as a tipper) and mitigating its weaknesses (as a classifier): “ultimately I was happy with the system… it gave me good enough advice as a tipper that a human could have missed.”

Utility, or the usefulness of the AI/ML in completing tasks, was the second-most commonly cited factor in gaining or losing trust. It was present in three stories about gaining trust, and three stories about losing trust. Ultimately, if the AI/ML helps someone do their job successfully, then it is trusted, and the inverse is true if it makes success more difficult. As an all-source analyst said of one of their AI/ML tools, “it’s an essential part of my job… if I can’t use this tool it’s a mission failure for me.” Conversely, another all-source analyst lost trust in an AI/ML tool because its capabilities were so limited that it did not help them complete their tasking, “When I first heard of it I thought it was going to be useful… then I learned it was built on bad assumptions… then I saw the answers [it produced]…”

Other Findings and Factors

Reputation, or endorsement from others, was cited in half of the stories about gaining trust, but never as a factor in losing trust. Because of the immense interpersonal trust required in naval intelligence, endorsement from another analyst can carry significant weight: “the team was already using the tool and I trusted the team I was joining… that made me trust the tool a bit before even using it.” Interestingly, predictability of the AI/ML was not cited as a factor in gaining or losing trust. One participant seemed to explain that the operational domain is rife with uncertainty, so one cannot expect predictability in an inherently unpredictable environment: “I’m smart enough to know that the [AI/ML tools] are taking data and making estimates… the nature of submarine warfare is dealing with ambiguous information…”

Finally, errors and false alarms were cited in three of the five stories with a loss of trust in AI/ML, but were never cited as factors for gaining trust. It seems plausible that this may be because a lack of errors may manifest itself as utility or reliability (it functions consistently over time), or it could be because of the previous sentiment: there will always be errors in an inherently uncertain domain such as naval intelligence, so there is no reasonable expectation of error-free AI/ML.

Conclusions

AI/ML tools will become more ubiquitous in naval intelligence across a wide variety of applications. Several factors affect trust in AI/ML, and this naturalistic investigation identified factors, such as explainability and utility, that play a central role in gaining or losing trust in these systems. Appropriately calibrated trust, based on an understanding of the capabilities and limitations of AI/ML, is critical. Even in cases where the AI/ML does not produce a correct answer, operators will adapt their workflows and reasoning processes to use it for the limited cases or tasks for which they do trust it.

Unfortunately, AI/ML capabilities are often developed with good intentions, but fall into disuse and fail to provide value if they do not consider the human element of analysis. Analyst reasoning and sensemaking is one such component of the human element,6 but trust is another component that must be considered in the development of these systems, particularly in regard to explainability. Greatly complicating the matter of trust, but not addressed adequately yet, is that AI/ML can be deceived.7 Our potential adversaries are well aware of this weakness, so developing an understanding of how our AI/ML systems can be deceived and ultimately protected from deception is crucial.

If an analyst were asked how they arrived at their findings and their response was simply “.79” the commander would likely not trust their findings enough to make a high-stakes decision from them, so why would that be acceptable output from AI/ML? Developing trustable AI/ML technologies is one of the greatest challenges facing the future of naval intelligence.

Steve Dorton is a Human Factors Scientist and the Director of Sonalysts’ Human-Autonomy Interaction Laboratory. He has spent the last decade conducting RDT&E of complex human-machine systems for the Navy and other sponsors. More recently, his research has focused on human interactions with AI/ML and applying crowdsourcing in the context of intelligence analysis.  

Samantha Harper is a Human Factors Engineer in Sonalysts’ Human-Autonomy Interaction Laboratory, who has experience in the design, execution, analysis, and application of user-centered research across various technical domains, including intelligence analysis, natural language processing, undersea warfare, satellite command and control, and others.

Acknowledgments

This work was supported in part by the U.S. Army Combat Capabilities Development Command (DEVCOM) under Contract No. W56KGU-18-C-0045. The views, opinions, and/or findings contained in this report are those of the authors and should not be construed as an official Department of the Army position, policy, or decision unless so designated by other documentation. This document was approved for public release on 10 March 2021, Item No. A143.

Endnotes

[1] McNeese, N. J., Hoffman, R. R., McNeese, M. D., Patterson, E. S., Cooke, N. J., & Klein, G. (2015). The human factors of intelligence analysis. Proceedings of the Human Factors and Ergonomics Society 59th Annual Meeting, 59(1), 130-134.

[2] Lee, J. & See, K. (2004). Trust in Automation: Designing for Appropriate Reliance. Human Factors, 46, 50-80. 10.1518/hfes.46.1.50.30392. 

[3] Siau, K. & Wang, W. (2018). Building trust in artificial intelligence, machine learning, and robotics. Cutter Business Technology Journal, 31, 2. 

Muir, B. M. (1994). Trust in automation: Part I. Theoretical issues in the study of trust and human intervention in automated systems. Ergonomics, 37(11), 1905-1922.

Rempel, J. K., Holmes, J. G., & Zanna, M. P. (1985). Trust in close relationships. Journal of Personality and Social Psychology, 49(1), 95–112. https://doi.org/10.1037/0022-3514.49.1.95

Balfe, N., Sharples, S., & Wilson, J. R. (2018). Understanding is key: An analysis of factors pertaining to trust in a real-world automation system. Human Factors, 60(4), 477–495. 

Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407–434.

[4] Klein, G. (2017). Sources of Power: How People Make Decisions (20th Anniversary Edition). Cambridge, MA: MIT Press.

[5] Flanagan, J.C. (1954). The Critical Incident Technique. Psychological Bulletin, 51, 327-358. doi: http://dx.doi.org/10.1037/h0061470

[6] Moon, B. M. & Hoffman, R. R. (2005). How might “transformational” technologies and concepts be barriers to sensemaking in intelligence analysis, Proceedings of the Seventh International Naturalistic Decision Making Conference, J. M. C. Schraagen (Ed.), Amsterdam, The Netherlands, June 2005.

[7] Brennan, M. & Greenstadt, R. (2009). Practical attacks against authorship recognition techniques. Proceedings of the Twenty-First Innovative Applications of Artificial Intelligence Conference, 60-65.

Featured image: Lt. Jon Bielar, and tactical action officer Lt. Paul O’Brien call general quarters from inside the combat information center during the total ship’s survivability exercise aboard the Ticonderoga-class guided-missile cruiser USS Antietam (CG 54).  (U.S. Navy photo by Mass Communication Specialist 3rd Class Walter M. Wayman/Released)

Sea Control 236 – SEA SHANTIES!!! YARGGGGH!

By Jared Samuelson

We have an all-star cast to discuss (and SING!) some sea shanties! Two of the faces of the Shantytok movement, Frank and Promise Uzowulu, join us to discuss how they discovered sea shanties and why they appreciate the genre. Craig Edwards breaks down the history of the shanty, and John Bromley and Craig perform a series of shanties for us! Please see the links for some of our favorites!

Download Sea Control 236 – SEA SHANTIES!!! YARGGGGH!

Links

3. The Wellerman (featuring Promise & Frank Uzowulu)

8. FiddleCraig.com

Jared Samuelson is Executive Producer and Co-Host of the Sea Control podcast. Contact him at [email protected].

Calling in Thunder: Naval Intelligence Enabling Precision Long-Range Fires

Naval Intelligence Topic Week

By Lieutenant Commander Gerie Palanca, USN

“The essential foundation of all naval tactics has been to attack effectively by means of superior concentration, and to do so first, either with longer-range weapons, an advantage of maneuver, or shrewd timing based on good scouting.”—Captain Wayne P. Hughes, U.S. Navy

Rear Admiral Michael McDevitt states in his 2020 Proceedings article that by 2035 the People’s Liberation Army Navy (PLAN) will have approximately 430 ships. Former Pacific Fleet Chief of Intelligence Capt. (ret.) Jim Fanell called the span between 2020 and 2030 a “decade of concern” – Chinese Communist Party leaders likely assess 2030 as their last opportunity to militarily “reunite” Taiwan and mainland China. By that time the PLAN fleet will dwarf the estimated U.S. Navy fleet size of 355 ships. This imbalance in fleet size will likely embolden China’s regional efforts to deny American presence within the 9-dash line, China’s territorial claim in the South China Sea. By 2035, the PLAN will not only have a larger maritime force, but it will also procure anti-surface weapons and supporting capabilities that will either match or outshine estimated U.S. capabilities. To characterize this scenario, the Congressional Research Service report on precision-guided munitions highlighted that the current anti-access/area denial weapon systems deployed along China’s coast and afloat outrange U.S. weapon systems, with ranges of almost 1,000 nautical miles, creating a need for U.S. ships and aircraft to engage the adversary at longer ranges in order to maintain survivability. According to Fleet Tactics, increasing a weapon’s range squares the scouting (i.e. intelligence) requirement for that system.1
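One rough way to see the squared relationship Hughes describes, as a back-of-the-envelope sketch that treats the scouting problem as a simple circle of ocean around the shooter: the area that must be searched to find targets for a weapon of range R grows with R squared, so extending reach from 100 to 1,000 nautical miles multiplies the search area a hundredfold.

import math

# Illustrative geometry only: the sea area (in square nautical miles) that scouting
# must cover if targets may lie anywhere within weapon range of the shooter.
def search_area_nm2(range_nm: float) -> float:
    return math.pi * range_nm ** 2

print(search_area_nm2(100))    # ~31,416 nm^2
print(search_area_nm2(1000))   # ~3,141,593 nm^2, one hundred times the area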

This rapid, squared growth in the need for scouting to support fires is underscored in the congressional report on intelligence, surveillance, and reconnaissance (ISR) design for great power competition (GPC). The report describes the need for a system that embraces disruptive technology and the importance of operational integration, specifically in the form of “sensor-to-shooter.” For the U.S. to maintain its information advantage and dominate in a long-range fight, the military will need to adopt an information warfare approach that is rapid enough to operate within the adversary’s decision cycle. To achieve this effectively, the U.S. will need to find, fix, and disseminate targets to the warfighter at a speed far greater than ever before.

In 2020, Vice Admiral White, Commander, Fleet Cyber Command, emphasized that the Navy Information Warfare Community (NIWC), including the Naval Intelligence Community, must provide warfighting capabilities enabling precision long-range strike, and that the community must normalize these critical capabilities with urgency. GPC has added a dimension that creates a greater requirement to support the tactical decision makers executing these fires. With the adversary’s adoption of long-range weapons to combat U.S. carrier strike groups, the decision space and tempo of traditional ISR are obsolete.

In the long-range fight, rapid, actionable, targetable information is now the center of gravity. For the NIWC to execute an ISR construct that supports this evolving nature of warfare effectively, the community will need to develop a tailored artificial intelligence (AI) capability. Scouting in support of maritime fires is a culture shift for the NIWC, but it is not the only change that needs to happen. For over a decade, NIWC has been primarily focused on either supporting the Global War on Terror and combating violent extremist organizations or tracking global civilian shipping. While these focused efforts have been immensely important, it is time for a pivot.

GPC with China may depart from previous examples, such as the Cold War, by resulting in an open conflict between great powers. An escalation with China would involve weapon systems that are designed to engage the adversary at greater than 1,000 NM. This paradigm is not new, with the Navy’s reliance on the BGM-109 Tomahawk cruise missile since 1983, but future attacks against unplanned, mobile targets at that range, while at risk from the adversary’s long-range anti-ship ballistic and cruise missiles, are an altogether novel and unfamiliar challenge.

To address this issue by 2035, the NIWC will need to regain information superiority. Rapid acquisition of realistic AI will be only one of many tools required to accomplish this. The NIWC will need swift development of doctrine and tactics, techniques, and procedures (TTPs) to execute AI-supported intelligence and modern support to precision long-range fires. Additionally, new TTPs that do not address the high-end fight will serve little purpose. The NIWC will need to support rapid acquisition of the capabilities it needs to fight tonight.

Speed up the decision cycle by defining the role of the analyst in a world of mature AI

The director of the National Geospatial Intelligence Agency (NGA) stated that “if trends hold, intelligence organizations could soon need more than eight million imagery analysts to analyze the amount of data collected, which is more than five times the total number of people with top secret clearances in all of government.” The DoD noticed this trend and established the Joint AI Center to develop products across “operations intelligence fusion, joint all-domain command and control (JADC2), accelerated sensor-to-shooter timelines, autonomous and swarming systems, target development, and operations center workflows.” Under these architectures, AI is the only technological way that the IC, and NIWC, will be able to use all the data available to support the warfighter in precision long-range fires. 

AI is a force multiplier for the NIWC, and the integration of the technology is a matter of when, not if. The NIWC must identify the role of the operator and analyst when augmented with AI. According to the 2019 Deloitte article titled “The future of intelligence analysis,” the greatest benefit from automation and AI blooms when human workers use technology to increase how much value they bring to the fight. This newfound productivity allows analysts to spend more time performing tasks that have a greater benefit to the NIWC, instead of focusing on writing detailed intelligence reports or spending twelve hours creating a daily intelligence brief. If nurtured and trained now and over the next decade, by 2035 AI will have the ability to make timely, relevant and predictive briefs for commanders, freeing analysts to provide one of those most valuable analytical tools: recommendations.

Since AI is inextricably linked to the rapid analytic cycle required to enable long-range fires, the NIWC needs to determine the ideal end state of the analyst-AI relationship. One of the biggest misunderstandings about AI in the IC is the fear of losing the intelligence analyst. The opposite is more probable: the IC will fail to incorporate and use AI to its fullest potential to solve its hardest problems. As AI matures, the NIWC will need to integrate AI afloat and ashore to allow analysts to focus on tracking hard targets, elevating predictive analysis, and collaborating across the strike group and IC, while communicating the results effectively to the warfighter.

The advancement of AI alone will not ensure the NIWC’s success in conflict. The process and outputs of intelligence must be refocused to effectively enable fires and fully integrate into operations. The Navy and Air Force have both invested heavily in the smart, network-enabled AGM-158 Long-Range Anti-Ship Missile, and the Navy and Marine Corps have invested in the RGM-184 Naval Strike Missile. With the estimated flight times of these weapons measured in minutes, coupled with their impressive ranges, target intelligence generated by non-organic sensors must get directly to the end user. The IC provides a majority of target intelligence to the warfighter and is woefully unprepared for this new paradigm. The IC needs to embrace a fully informed, holistic intelligence picture for the DoD to effectively execute a long-range fight in GPC. These advanced weapons also require machine-speed intelligence to keep up with the timeline of engagement and the pace of dynamic targeting. Machine-to-machine systems are not new in the DoD, but AI is the avenue connecting those systems to modern missions. Because of traditional hurdles due to stovepipes and information security, the IC and DoD have the arduous chore of ensuring AI does not become an empty technology hindered by issues of classification and policy, ultimately minimizing the inputs into the algorithms. AI is also idealized as the savior of all hard intelligence problems. The NIWC needs to use AI for what it can actually do now. To that point, the IC is the key player in ensuring the NIWC has the data it needs to develop this capability.
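A back-of-the-envelope sketch can show why those flight times matter for targeting mobile contacts. The speeds, ranges, and timings below are assumed values for illustration, not published performance figures for any specific weapon or platform.

# All numbers are illustrative assumptions, not actual weapon or ship performance data.
def uncertainty_radius_nm(target_speed_kn: float, fix_age_min: float,
                          shot_range_nm: float, missile_speed_kn: float) -> float:
    """Radius of the circle a moving target could occupy by the time the weapon arrives."""
    flight_time_hr = shot_range_nm / missile_speed_kn
    total_hr = fix_age_min / 60 + flight_time_hr
    return target_speed_kn * total_hr

# A 30-knot target, a 10-minute-old fix, and a 200 nm shot at an assumed 500-knot
# missile speed put the target anywhere within roughly a 17 nm radius at weapon
# arrival, which is why targeting data must move to the shooter at machine speed.
print(uncertainty_radius_nm(target_speed_kn=30, fix_age_min=10,
                            shot_range_nm=200, missile_speed_kn=500))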

Speed up the development of doctrine 

For the NIWC, operating alongside AI and supporting precision long-range fires are doctrine gaps. In GPC, the most important intelligence will be actionable intelligence: the fight will progress at a tempo where information must be available directly to the shooter and provide confidence in the target within seconds to minutes. Moreover, the target will most likely be well outside the reach of any organic sensors on the distributed platforms. The NIWC is not famous for disseminating information within seconds to minutes directly to the warfighter, but the right doctrine will enable this construct. This doctrine must be developed rapidly, meet the needs of the future war, and be promulgated to the Fleet for feedback based on the environment.

According to JP 1-02, the DoD Dictionary of Military and Associated Terms, doctrine is the fundamental principle that guides the actions of military forces or elements thereof in support of national objectives. The national objectives in this case are decision superiority and long-range fires (LRF). These require a culture shift away from intelligence support to military operations and toward intelligence-driven operations. This culture is not new. An example can be seen in the effectiveness of special operations forces (SOF). In the SOF community, intelligence is the foundation of the mission plan and deliberately phased execution. To tackle this modern adversary in a dynamic maritime environment, we need to adopt this culture. Within the SOF skillset, GWOT-type targets were time-sensitive targets of opportunity. The intelligence team supporting these targets needed rapid processing and dissemination so the operators could engage on a compressed timeline. While cutting-edge technology played a part, the culture was the clear distinction from traditional intelligence operations. New doctrine needs to be developed to support and sustain this large culture shift within the NIWC for intelligence driving long-range fires.

In addition to a culture shift, pushing the authority to engage targets down to lower levels will enable the speed required for decision superiority and LRF. The PROJECT CONVERGENCE exercise, the Army’s contribution to the JADC2 joint warfighting construct, highlighted that improvements must be made in mission command and command and control (C2). These improvements can be technology-based, but many modifications can happen at the TTP and doctrine level. To make this possible, the NIWC will have to develop TTPs within battlespace awareness, assured C2, and integrated fires that inform, ensure, and synchronize mission command in precision long-range fires. Doctrine is how the NIWC will standardize this tradecraft, allowing ashore C2 and afloat mission command even in a contested environment.

While these sound like simple changes, those in the doctrine community may not be comfortable with rapid doctrine development and dissemination, especially on topics that are continually evolving, like intelligence collaboration with AI and IW support to precision long-range fires. The risk of incomplete and insufficient TTPs on nascent capabilities is real for the warfighter. Unfortunately, there are countless anecdotes of systems delivered to platforms without proper doctrine and training for the sailors to be able to use or integrate the systems into current operations. With the ability to win a GPC fight hanging on the ability to rapidly integrate emerging and disruptive technology into present operations, the NIWC cannot operate without doctrine any longer, nor can it wait for the archaic doctrine process to catch up.

Speed up the acquisition process 

VADM White prioritized delivering warfighting capabilities and effects as the third goal for Fleet Cyber Command: specifically, delivering warfighting capabilities that enable movement, maneuver, and fires using emerging concepts and technologies. A rapid acquisition culture that allows for risk is the only way to achieve VADM White’s desire for persistent engagement, which will allow the USN to compete during day-to-day operations, especially in support of holding the adversary at risk via long-range kinetic capabilities.2 This concept raises a concern: speed should not be the only goalpost. The NIWC needs to ensure that it buys what is needed for the future war. If it buys the programs that are already in process, only faster, it might not actually solve any problems. It must create a culture that is able to let go of programs that do not meet the growing threat. The NIWC also needs a strategy that integrates with the joint warfighting concept for supporting precision long-range fires.

According to the Director of the Space Development Agency, Dr. Derek Tournear, speaking on the Sea-Air-Space 2020 Modern Warfighter panel, U.S. adversaries execute an acquisition timeline of about three to five years at the longest. By contrast, the U.S. acquisition cycle is about 10 years at the shortest. While this comment was focused on space systems, the reality rings true across the DoD acquisition system. This means any capability gap between the U.S. and an adversary will be short-lived. Dr. Tournear also highlighted that this issue is not particular to the DoD acquisition program, but is a culture and process issue within the acquisition community. The DoD acquisition community is currently designed around not taking risks and overdesigning against any issues that could impact program progress. A technique to combat this culture issue would be iterative designs that embrace 80% solutions on compressed timescales, allowing feedback loops. Getting something to the fleet that addresses today’s problems without having to wait for full deliveries would drastically increase lethality, while real-world operator feedback would improve the end-state acquisition delivery. This is one example of a solution that would complement others, such as digital acquisition and open architectures.

Conclusion

The NIWC has been a cornerstone of every decisive point in every major naval battle in history. Despite this pedigree, GPC has presented the NIWC with an exciting challenge. To deter and win a GPC fight in 2035 and beyond, the NIWC must evolve to meet the challenge. To embrace the problems of the future, the NIWC must build a force that can integrate with the most important disruptive technologies like AI, train the force to quickly integrate and employ those technologies, and acquire those technologies at the right pace.

LCDR Gerie Palanca is a Cryptologic Warfare Officer and Information Warfare – Warfare Tactics Instructor specializing in intelligence operations and maritime space operations. His tours include department head at NIOC Colorado, signals warfare officer on USS Lassen (DDG 82), and submarine direct support officer deployed to the western Pacific. LCDR Palanca also attended the Naval Postgraduate School and received a M.Sc. in space system operations.

Endnotes

[1] CAPT W. P. Hughes, RADM R. P. Girrier, and ADM J. Richardson. Fleet Tactics and Naval Operations 3rd Edition, US Naval Institute Press. 2018. Annapolis, Maryland.  

[2] Congressional Research Service. “Intelligence, Surveillance, and Reconnaissance Design for Great Power Competition,” 04 June 2020. Available at: https://crsreports.congress.gov/product/pdf/R/R46389.

Featured image:  USS Gabrielle Giffords (LCS 10) launches a Naval Strike Missile (NSM) during exercise Pacific Griffin.  (U.S. Navy photo by Mass Communication Specialist 3rd Class Josiah J. Kunkle/Released)
