In Fall 2021 CIMSEC will host the seventh annual CIMSEC Forum for Authors and Readers (CFAR), an event where our readers and the public get to select the top CIMSEC authors of the preceding year, and engage with them on their work and topics of interest. The evening will provide a chance to engage your favorite CIMSEC contributors, hear their thoughts on how their pieces have held up, and explore their predictions.
Thanks to the generous partnership of the Center for Naval Analyses, we are pleased to offer a professional conference on a range of maritime security issues. We will also hold CFAR virtually via Zoom, so you can join in the discussion no matter where in the world you are!
Event Details
August 25–September 1: Nominations open
September 6–9: Voting on finalists
September 15: Winners and speakers lineup announced
How will the speakers be chosen? All CIMSEC readers are welcome to submit nominations for articles, with the only criterion being that the nominated article must have been published on CIMSEC on or after June 8th, 2020. After nominations close, CIMSEC members will vote on the selected pieces and the finalists will receive invites to speak at CFAR. Not yet a member? Consider joining CIMSEC for free!
Submit your nominations using the form below.
We hope you can join us for an exciting event where authors chosen by CIMSEC readers will present on their writing and research. See you in the fall!
Jimmy Drennan is the President of CIMSEC. Contact him at [email protected].
The Advent of Laser Weapon Systems Presents a Highly Complex Decision Space
The Navy is advancing rapidly with the development and integration of high energy laser (HEL) weapon systems onto ships to support the ship self-defense mission. HEL systems offer novel hard-kill and soft-kill engagement options, combining targeting accuracy and narrowly focused speed-of-light lasing with a relatively low cost per shot. HEL hard-kill engagements provide a more traditional weapon function of burning through the target to cause enough damage to render the threat useless. HEL soft-kill engagements offer “softer” options of blinding threat sensors and optics, rather than complete destruction.
HEL systems differ significantly from traditional kinetic shipboard weapon systems. Laser weapons concentrate a very highly focused beam of coherent energy on targets at a distance. They must have line of sight with the threat target. Although the laser beam travels at the speed of light, the beam must “dwell” on the target long enough to induce soft or hard kill effects. Environmental and atmospheric effects can greatly affect laser beams, diminishing the amount of irradiance that makes it to the threat. Laser weapons require significant amounts of power, and when facing threat situations that require longer dwell times or multiple engagements, operators must ensure that sufficient power is available.
Figure 1 – Laser Weapon Factors of Complexity. Click to expand.
Operating laser weapons is a complex endeavor. Figure 1 identifies the many characteristics of HEL operations that lead to complexity in this decision space. At the outset, tactical operations for defensive missions have inherent complexity: threats are often unexpected and offer a very limited reaction time; situational awareness is often incomplete and uncertain; the environment is dynamic and changing rapidly; human operators can become overwhelmed with information, uncertainty, and decision options; and the consequences can be dire.
Laser weapon systems contribute additional complexity to the operator’s decision space. The operator must weigh many factors within the dynamic threat situation to choose a soft-kill or hard-kill option, select an effective target aimpoint, calculate the required laser power-in-the-bucket (amount of actual laser irradiance per area that makes it to the target) and calculate the required dwell time. The operator must consider environmental effects and must determine if enough power is available to support the engagement. The operator may also decide to use an existing kinetic weapon system instead of a HEL system depending on a comparative prediction of kill success.
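The power-in-the-bucket and dwell-time arithmetic described above can be sketched in a few lines of Python. The Beer-Lambert attenuation model, the kill-fluence threshold, and every number below are illustrative assumptions for the sketch, not actual system parameters.

```python
import math

def power_in_the_bucket(laser_kw, range_m, spot_area_cm2,
                        atten_per_km=0.1, bucket_fraction=0.5):
    """Irradiance (kW/cm^2) delivered to the target aimpoint, using a
    simple Beer-Lambert model for atmospheric attenuation."""
    transmitted_kw = laser_kw * math.exp(-atten_per_km * range_m / 1000.0)
    return bucket_fraction * transmitted_kw / spot_area_cm2

def required_dwell_s(kill_fluence_kj_cm2, irradiance_kw_cm2):
    """Dwell time needed to deposit the kill fluence at this irradiance."""
    return kill_fluence_kj_cm2 / irradiance_kw_cm2

# Notional 150 kW laser against a target 2 km away
irradiance = power_in_the_bucket(laser_kw=150, range_m=2000, spot_area_cm2=10)
dwell = required_dwell_s(20, irradiance)    # seconds to deliver 20 kJ/cm^2
```

The same structure makes the operator's tradeoff visible: as range grows, transmitted power falls exponentially, so required dwell time grows quickly.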
During combat operations, a ship’s warfare operators will make critical kill chain (weapon engagement) decisions under highly time-critical and uncertain conditions. Figure 2 illustrates an example of a ship’s tactical operations picture in a situation involving UAV threats. In this scenario, the operators must weigh what is known about the threat with what the ship’s defensive weapon systems are capable of. In this example, the operators must predict and compare how successful the Sea Sparrow, the laser weapon system (LaWS), and the Phalanx CIWS will be against the threat UAVs. The threat’s proximity and incoming speed will dictate how much time the operators have to make these comparative predictions. In many cases, the human operators may be well-served with an automated decision support system that can quickly calculate preferred weapon options based on the situation, such as doctrine statements. The emerging capabilities of artificial intelligence can be leveraged to enable automated decision aids for laser weapons—thus creating a cognitive laser approach for laser weapon systems.
Figure 2 – Complex Decisions for Naval Weapons Operator. Click to expand. (Source: Blickley et al, 2021)
Combining Emerging Technologies: Laser Weapons and Artificial Intelligence
Two emerging technologies lead to the cognitive laser concept: laser weapon technology and artificial intelligence. The Navy has been researching laser technologies for decades and lasers have recently matured to the point where they are being integrated and tested on ships for operational use. In parallel with this evolution, there have been significant advances in artificial intelligence (AI)—particularly in the development of intelligent computer systems that can support complex decision-making. The marriage of these two emerging technologies is the genesis of the proposed cognitive laser concept.
The use of laser weapon systems in the defense of naval ships presents a complex decision space for human tactical warfare operators, one that requires the assistance of AI to process, fuse, and make sense of large amounts of data and information in short timeframes, and to develop and evaluate effective courses of action involving complex systems (including laser weapons). The laser weapon kill chain requires the intuitive, adaptive, and creative cognitive skills of humans as well as the abilities of automated systems to rapidly fuse large amounts of disparate data, construct and assess vast permutations of options, predict performance, and deal with uncertainty. Automation, artificial intelligence, and machine learning can provide a human-machine teaming cognitive solution.
November 26, 2014 — Chief of Naval Operations (CNO) Adm. Jonathan Greenert gets a firsthand look at the directed energy Laser Weapon System (LaWS) operator’s console aboard the interim afloat forward staging base USS Ponce (AFSB(I) 15) (U.S. Navy photo by Chief Mass Communication Specialist Peter D. Lawlor/Released)
Cognitive Laser Concept
Graduate students at the Naval Postgraduate School (NPS) have been studying various aspects of the cognitive laser concept. A systems engineering capstone team developed Figures 3 and 4 while designing a conceptual automated decision aid to support laser weapon engagement decisions for a naval shipboard HEL system (Blickley et al, 2021). Figure 3 presents a context diagram illustrating how the decision aid might retrieve threat information and laser resource information from onboard sensors and weapons scheduling in order to develop engagement recommendations and provide these to HEL operators.
Figure 3 – Cognitive Laser Context Diagram. Click to expand. (Source: Blickley et al, 2021)
The capstone team performed a functional analysis of the conceptual cognitive laser decision aid. Figure 4 contains a functional flow diagram from this analysis. It highlights some of the decision factors involved in determining whether or not to fire an HEL system: if there is sufficient time, if atmospheric conditions are favorable, if there is sufficient power, if the threat’s material composition can be effectively lased, and if there are no deconfliction issues (if there is no risk of friendly fire in the path of the laser beam).
Figure 4 – Cognitive Laser – Flow Diagram. Click to expand. (Source: Blickley et al, 2021)
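A minimal sketch of the go/no-go logic from the functional analysis might look like the following. The field names and thresholds are hypothetical, not taken from the capstone design; the point is only that each decision factor becomes an explicit, machine-checkable condition.

```python
from dataclasses import dataclass

@dataclass
class EngagementPicture:
    """Snapshot of the tactical situation relevant to one HEL engagement."""
    time_to_impact_s: float
    required_dwell_s: float
    slew_settle_s: float
    atmospheric_transmission: float   # 0..1
    available_energy_kj: float
    required_energy_kj: float
    material_lasable: bool
    beam_path_clear: bool             # deconfliction check result

def hel_engagement_feasible(p: EngagementPicture,
                            min_transmission: float = 0.3) -> bool:
    """All five checks must pass: time, atmosphere, power,
    target material, and deconfliction."""
    return (p.time_to_impact_s > p.required_dwell_s + p.slew_settle_s
            and p.atmospheric_transmission >= min_transmission
            and p.available_energy_kj >= p.required_energy_kj
            and p.material_lasable
            and p.beam_path_clear)

picture = EngagementPicture(20, 4, 2, 0.6, 900, 600, True, True)
go = hel_engagement_feasible(picture)
```

An automated aid built this way can also report which check failed, which is exactly the kind of explanation a human operator needs to trust a "do not fire" recommendation.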
NPS SE thesis students are studying other aspects of the cognitive laser concept. One study is widening the scope of the problem beyond laser weapon system decisions (Carr 2021). This study asks the broader question: how do warfare operators on ships determine which weapon to select when they have both kinetic weapons and laser weapons to choose from? For this higher-level kill chain function, the operator needs to be able to compare the predicted performance of the kinetic weapon with that of the laser weapon for a given threat scenario. The threat is not stationary: as it moves, the range between the weapon and target changes, and therefore the amount of “atmosphere” that the laser beam must traverse changes. Real-time changes in the threat’s proximity and kinematics continuously affect the projected performance of the two types of weapon systems differently. Weapon operators will be more familiar with when and how to engage a dynamic threat with kinetic weapons. They may be less familiar with the intricacies of engaging a dynamic threat with a laser weapon. The required laser dwell time and power needs will change as the threat moves and maneuvers. The complexity of predicting and comparing performance across two different types of weapons warrants the use of AI and automated decision aids to support this decision space.
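The kinetic-versus-laser comparison can be sketched as a pair of range-dependent kill-probability predictors feeding a recommendation. The curves and every number below are purely illustrative stand-ins for real weapon performance models.

```python
import math

def laser_pk(range_m, atten_per_km=0.15):
    """Illustrative laser kill probability: degrades exponentially
    with the atmospheric path length the beam must traverse."""
    return 0.95 * math.exp(-atten_per_km * range_m / 1000.0)

def kinetic_pk(range_m, r_min=500, r_max=8000):
    """Illustrative kinetic envelope: roughly flat Pk inside the
    weapon's engagement zone, zero outside it."""
    return 0.7 if r_min <= range_m <= r_max else 0.0

def recommend_weapon(range_m):
    """Compare the two predictions and recommend the higher-Pk option."""
    lp, kp = laser_pk(range_m), kinetic_pk(range_m)
    return ("laser", lp) if lp >= kp else ("kinetic", kp)
```

Under these toy curves the laser wins at short range and the kinetic weapon at long range, and the crossover point shifts as atmospheric conditions change the attenuation term, which is exactly why the comparison is hard to do mentally in real time.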
As threats advance in complexity, naval operators will need to use laser weapon systems in more sophisticated and complex operations. NPS is studying the use of laser weapons to defend against future swarms of drones (Taylor 2021). The study is first characterizing possible drone swarms—their configuration, the number of drones, and the types of drones. The study is exploring the capabilities of laser weapons to address the swarms—soft-kills, hard-kills, and engagement timelines to understand how many drones can be addressed in a given situation. The study is developing strategies to apply different engagement logic to different threat scenarios—a series of soft-kills, or strategic hard-kills, or combinations of lasing and using kinetic weapons, as examples. The rapid development of effective laser weapon engagement logic in such complex tactical situations will require a cognitive laser approach to aid laser operators.
May 16, 2020 — USS Portland (LPD-27) successfully disables an unmanned aerial vehicle (UAV) with a Solid State Laser. (Video via USNI News)
Tactical energy management, as illustrated in Figure 5, is a cognitive laser concept for allowing laser weapon operators to understand and manage the dynamic energy resources during tactical operations. Laser weapons require significant amounts of energy when they are fired, and energy is a constrained resource on ships. This concept taps into the power sources on a ship to give laser operators insight into how much power is available and to determine how much power will be required to defeat specific threats as they are encountered.
Figure 5 – Tactical Energy Management. Click to expand. (Source: Armentrout et al, 2017)
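A tactical energy management aid might track stored energy and recharge rate along the following lines. The class and its numbers are an illustrative sketch, not a model of any actual ship power architecture.

```python
class EnergyBudget:
    """Track the energy available to the laser during an engagement."""

    def __init__(self, stored_kj, recharge_kw):
        self.stored_kj = stored_kj      # energy currently banked
        self.recharge_kw = recharge_kw  # rate the ship can replenish it

    def can_engage(self, beam_kw, dwell_s):
        """Is there enough stored energy for this shot?"""
        return self.stored_kj >= beam_kw * dwell_s

    def fire(self, beam_kw, dwell_s):
        """Deduct the energy consumed by one engagement."""
        if not self.can_engage(beam_kw, dwell_s):
            raise RuntimeError("insufficient stored energy")
        self.stored_kj -= beam_kw * dwell_s

    def recharge(self, seconds):
        """Bank energy while the laser is idle."""
        self.stored_kj += self.recharge_kw * seconds

budget = EnergyBudget(stored_kj=1000, recharge_kw=50)
budget.fire(beam_kw=150, dwell_s=3)   # one 3-second shot costs 450 kJ
```

Even this toy bookkeeping surfaces the operator's core question during a multi-threat raid: can the next shot be taken now, or must it wait for recharge?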
Machine learning is an AI method that involves computers “learning” effective solutions or answers by training them using great amounts of data or scenarios. Recent research projects at NPS have been studying the use of machine learning approaches for determining the required dwell time based on the properties of the material composition of targets (Blickley et al 2021) and for target selection and engagement strategies against drone swarm threats (Edwards 2021). From the operator’s perspective, a machine learning algorithm would enhance a real-time decision aid by providing an expert-level laser weapon system knowledge base as shown in Figure 6. As real-time sensor data provides information about the threat—its location (or locations for a swarm threat), kinematics, and characteristics, the decision aid can assess and predict the target type, location of components (fuselage, sensors, seekers, etc.), material composition and thickness. This information is compared with the machine learning knowledge base which produces accurate recommendations for engagement strategy, aimpoint selection, and laser dwell time.
Figure 6 – Machine Learning for the Cognitive Laser (Source: Blickley et al, 2021)
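As a toy illustration of the idea, the following k-nearest-neighbor regressor "learns" dwell time from past engagements. The features, the ground-truth relation, and the data are all synthetic inventions for the sketch; a fielded system would train on real lethality test data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic training set: features are [range_m, thickness_mm]; the label
# is the dwell time (s) that achieved a hard kill in (simulated) testing.
X = rng.uniform([500, 1], [5000, 10], size=(2000, 2))
y = 0.001 * X[:, 0] + 0.5 * X[:, 1]      # invented ground-truth relation

def predict_dwell(query, k=5):
    """k-nearest-neighbor regression: average the dwell times of the
    k most similar past engagements (features standardized first)."""
    scaled = (X - X.mean(0)) / X.std(0)
    q = (np.asarray(query) - X.mean(0)) / X.std(0)
    nearest = np.argsort(((scaled - q) ** 2).sum(1))[:k]
    return y[nearest].mean()

dwell = predict_dwell([2000, 4])         # query: 2 km range, 4 mm skin
```

The knowledge-base framing in Figure 6 is essentially this lookup at scale: the model interpolates expert-level answers from a large library of labeled engagements faster than an operator could.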
Laser weapon operations pose a friendly fire risk. Lethal laser beams can unintentionally harm nearby friendly forces (aircraft, ships, etc.) or civilian entities in the vicinity. Deconfliction planning is a critical function in the laser weapon kill chain to ensure that the “coast is clear” and the path of the laser beam is free of friendly and civilian assets. NPS studies are developing concepts for ensuring and managing deconfliction for different military laser weapon applications (Kee et al. 2020, Clayton et al. 2021). In time-critical tactical operations, laser weapon operations will require a cognitive laser approach to ensure proper deconfliction.
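A two-dimensional angular cone check gives the flavor of an automated deconfliction test. This is only a sketch: a real system would work in three dimensions, handle beam overshoot past the target explicitly, and fuse track data from multiple sensors.

```python
import math

def beam_path_clear(ship, target, assets, safety_angle_deg=2.0):
    """Conservatively treat the beam as a narrow cone from the ship
    toward (and beyond) the target; flag any asset inside the cone."""
    beam = math.atan2(target[1] - ship[1], target[0] - ship[0])
    for ax, ay in assets:
        off = abs(math.atan2(ay - ship[1], ax - ship[0]) - beam)
        off = min(off, 2 * math.pi - off)   # wrap angle difference
        if math.degrees(off) < safety_angle_deg:
            return False                    # asset too close to beam line
    return True

ship, target = (0.0, 0.0), (10.0, 0.0)
blocked = not beam_path_clear(ship, target, [(5.0, 0.1)])  # near the beam
```

Running this continuously against the full track picture is the kind of rote, fast, geometry-heavy check that automation performs far more reliably than a time-pressured human.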
The realization of a cognitive laser requires advances in human-machine teaming research to ensure the effective and safe employment of AI methods. Several studies at NPS are researching different aspects of applying AI to the tactical domain. Jones et al (2020) studied the air and missile defense kill chain to show that human-machine teaming arrangements can adapt in response to the threat situation timeline. The threat will dictate how much time the operator has to react, and this can be incorporated into the design of AI-enabled automated decision aids. Burns et al (2021) are embarking on a research project to map specific AI methods to the specific functions of the kill chain. Tactical kill chains (including laser weapon kill chains) require a variety of cognitive skills and decisions. These include data fusion, assessment, knowledge discovery, addressing uncertainty, developing course-of-action alternatives, predicting system performance, weighing risks, and gaming second- and third-order strategies.
A wide variety of AI methods will be needed to support these kill chain functions. Cruz et al (2021) are studying the potential safety risks and failure modes that may be introduced as AI and automation is adopted in the tactical domain. Safety risks may be inherent to the AI systems and their decision recommendations, or they may come in the form of cyber vulnerabilities as AI is introduced into tactical systems, or they may arise from the interactions of humans with intelligent machines. Peh (2021) is taking a deep dive into the complex dynamics of trust between humans and AI systems by researching methods to engineer AI systems for tactical operations. Peh’s research mission is to engineer AI systems as tactical decision aids that are trustworthy and achieve an effective trust balance to avoid both over-trust (humans blindly trusting AI) and under-reliance (humans disregarding AI).
Conclusion
Two emerging technologies are pairing up to provide new capabilities for the warfighter of the future: laser weapons and AI. Laser weapons are becoming an operational reality for defending ships and fleets, but they also pose an operational challenge in the form of decision complexity. AI is the necessary companion that can tackle this decision complexity and support effective human-machine teaming to operate laser weapons effectively and safely. A cognitive laser solution marries these two emerging technologies. The cognitive laser concept opens a diverse and challenging field of research for innovations in the application of AI methods to both laser weapon operations and the military tactical domain in general.
Dr. Bonnie Johnson is a senior lecturer of systems engineering at the Naval Postgraduate School. She was previously a senior systems engineer in the defense industry from 1995–2011 working on naval and joint air and missile defense systems. A graduate of Virginia Tech with a bachelor of science in physics and a graduate of Johns Hopkins with a master of science degree in systems engineering, Dr. Johnson received her PhD in systems engineering from the Naval Postgraduate School.
References
Armentrout, A., Behre, C., Ngo, T., Rowney, D., Schroder, E., and Stopper M., 2017. “Objective architecture for tactical energy management of directed energy weapons,” Naval Postgraduate School Capstone Report, March 2017.
Blickley, W., Carlson, J., Magana, M., Pacheco, A., and Roscher J., 2021. “Cognitive laser – automated decision aid for a system of laser weapon systems,” Naval Postgraduate School Capstone Report, March 2021.
Burns, G., Collier, T., Cornish, R., Curley, K., Freeman, A., and Spears, J., 2021. “Evaluating artificial intelligence methods for use in kill chain functions,” Naval Postgraduate School Capstone Proposal, April 2021.
Carr, A. 2020. “A proposed model for a shipboard high energy laser and kinetic weapons system automated decision aid,” Naval Postgraduate School Thesis Proposal, October 2020.
Clayton, B., Scott, M., Shelton, J., Williamson, J., and Vermillion, M., 2021. “Highway to HEL – USMC expeditionary employment of a high energy laser to counter drone threats,” Naval Postgraduate School Capstone Proposal, July 2021.
Cruz, L., Hoopes, A., Pappa, R., Shilt, S., and Wuornos, S., 2021. “Evaluation of the safety risks of developing and implementing automated battle management aids for air and missile defense,” Naval Postgraduate School Capstone Proposal, May 2021.
Edwards, D. 2020. “Application of machine learning for a laser weapon system aimpoint selection decision aid in support of a cognitive laser.” Naval Postgraduate School Thesis Proposal, August 2020.
Jones, J., Kress, R., Newmeyer, W., and Rahman, A., 2020. “Leveraging artificial intelligence for air and missile defense: an outcome-oriented decision aid,” Naval Postgraduate School Capstone Report, September 2020.
Kee, R., Lutz, T., Schwitzing, M., Murray, E., 2020. “Impact on shipboard power generation and storage when utilizing high energy laser systems to counter anti-ship cruise missiles in fleet defense scenarios,” Naval Postgraduate School Capstone Report, September 2020.
Peh, M., 2021. “Developing a trust metric in engineering an artificial intelligence enabled air and missile defense system,” Naval Postgraduate School Thesis Proposal, November 2020.
Taylor, A. 2021. “Shipboard laser weapon system automated decision aid: countering unmanned aerial vehicle swarm threats,” Naval Postgraduate School Thesis Proposal, January 2021.
Featured Image: Dahlgren, VA – ARABIAN GULF (Nov. 16, 2014) The Afloat Forward Staging Base (Interim) USS Ponce (ASB(I) 15) conducts an operational demonstration of the Office of Naval Research (ONR)-sponsored Laser Weapon System (LaWS) while deployed to the Arabian Gulf. (U.S. Navy photo by John F. Williams/Released)
Defined by their remoteness and extreme climate, the polar regions present an array of tactical and operational challenges to US forces, as sea icing, repeated thawing and freezing cycles, permafrost, and frequent storms can complicate otherwise simple operations. Often overlooked, however, are the challenges to communications, which are critical to Navy and Coast Guard vessels operating in the polar regions. Perhaps once possible to ignore, these challenges are becoming more pressing as the Marine Corps, Navy, and Coast Guard increase their operations at higher latitudes, place more emphasis on the arctic, and face growing calls to send Marines and soldiers there for training and presence. In order for US naval forces to compete in the polar regions and fight if needed, the military needs to invest in persistent and reliable communications capabilities. One solution is high-altitude balloons.
Arctic experts have long understood the difficulty of communicating in the arctic, noting that “While communicating today might be easier than it was for Commodore Perry 111 years ago, it’s not that much better.” Arctic communications are especially difficult for a number of reasons. Satellite-based options are limited or nonexistent because the vast majority of satellites maintain equatorial orbits, which means the polar region’s extreme latitudes fall outside satellite range. Though a few satellites follow non-equatorial orbits, there are simply not enough to provide continuous connectivity at the bandwidth needed for modern operations.
There are also natural barriers to communications in the arctic. The ionosphere covering the polar regions has a high-level of electron precipitation, which is the same characteristic that produces the Northern Lights. However, this interferes with and degrades the high-frequency (HF) radios that the military normally uses for long-range communications in the absence of satellites. Additionally, the extreme climate and cold weather in the arctic presents another challenge to communications infrastructure such as antennas and ground stations. Arctic conditions make it harder to access and maintain ground arrays, batteries expire faster in colder temperatures, and equipment can easily be buried by falling snow and lost.
Finally, the near complete lack of civilian infrastructure complicates arctic communications. The polar regions comprise about eight percent of the earth’s surface, accounting for over 10 million square miles of land on which only about 4 million people live. Most are clustered in small communities, resulting in sparse commercial communications infrastructure across the region. However, persistent and reliable communications are absolutely essential for the successful employment of maritime forces in the arctic.
One solution is for naval forces to use high-altitude balloons that provide temporary communications capabilities. Balloons are far cheaper than satellites and much more responsive. They can be quickly deployed where coverage is needed and fitted with communications payloads specific to the mission. They are also low-cost and effective enough that they can be used not only in operations but also in training at austere locations.
Balloons offer a degree of flexibility critical for operations in remote environments like the arctic. Differently sized balloons can be fitted with specific capacities for mission-tailored requirements and priorities. The size of payload, loiter time, and capabilities are primarily a function of balloon size. Large balloons and stratospheric airships can stay aloft for months, while smaller “zero pressure” balloons might last hours or a few days. Given their diverse uses and capabilities, high-altitude balloons have already been used to provide communications in hard-to-access environments by organizations such as NASA, the US Air Force, and Google. For example, researchers at the Southwest Research Institute and NASA have supported atmospheric balloon flights over the poles that lasted up to a month – more than enough time to meet operational needs.
Though there are various ways to launch and lift high-altitude balloons, recent advances show that hydrogen gas is the best candidate. Researchers at the Massachusetts Institute of Technology’s Lincoln Laboratory recently discovered a new way to generate hydrogen with aluminum and water. With this new ‘MIT process,’ researchers have already demonstrated the ability to fill atmospheric balloons with hydrogen in just minutes – a fraction of the time it takes using other methods. The MIT process promises to be not just faster, but also cheaper and safer than other methods of hydrogen generation. It also means that units can generate hydrogen at the point of use – obviating the need to store or transport the volatile gas or other compressed gases. The researchers have demonstrated effective hydrogen generation with scrap and recycled aluminum and with non-purified water including coffee, urine, and seawater.
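The appeal of aluminum-water hydrogen generation is easy to quantify with basic stoichiometry (2 Al + 6 H2O → 2 Al(OH)3 + 3 H2). The back-of-envelope calculation below uses textbook constants, assumes ideal-gas volume near sea level, and ignores reaction efficiency, so the outputs are rough illustrations only.

```python
AL_MOLAR_G = 26.98     # molar mass of aluminum, g/mol
H2_PER_AL = 1.5        # mol H2 produced per mol Al (from the reaction)

def h2_volume_m3(aluminum_kg, molar_volume_l=22.4):
    """Hydrogen volume (m^3, near STP) produced from a mass of aluminum."""
    mol_al = aluminum_kg * 1000 / AL_MOLAR_G
    return mol_al * H2_PER_AL * molar_volume_l / 1000

def sea_level_lift_kg(volume_m3, rho_air=1.225, rho_h2=0.0899):
    """Net buoyant lift of that hydrogen volume at launch."""
    return volume_m3 * (rho_air - rho_h2)

vol = h2_volume_m3(10)           # ~12.5 m^3 from 10 kg of aluminum
lift = sea_level_lift_kg(vol)    # ~14 kg of lift at sea level
```

Roughly a kilogram and a half of lift per kilogram of scrap aluminum helps explain why point-of-use generation from recycled metal and unpurified water is operationally attractive.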
The deployment of balloons utilizing this new hydrogen generation process would be extremely simple. A balloon system could conceivably be developed where the system is simply dropped into the ocean from a ship, airplane, or helicopter with a mechanism that causes it to self-deploy when it comes into contact with seawater. This single system – one that does not require stores of compressed gas or an electrolyzer to generate hydrogen – would also take up far less space than other balloons and the associated equipment required to get them aloft. Balloons full of hydrogen gas could also act as giant batteries as the hydrogen can also be used to power communications equipment or sensors.
So far, the US Coast Guard has been leading the way with arctic communications. The service has highlighted improving communications in the arctic as part of its first line of effort in the 2019 Arctic Strategic Outlook and as a key initiative in its 2015 Arctic Strategy Implementation Plan. Along with the Marine Corps, the service has also been experimenting with Lockheed’s Mobile User Objective System (MUOS), a next-generation satellite communication constellation intended to replace the constellation that the Pentagon relies on today. But even the system’s creators are clear that in extreme polar regions, MUOS may only offer eight hours of coverage per day. Constellations of small and cheap cube satellites might also be a partial fix for the communications dead zones, but hundreds or thousands would be required to cover a region as large as the arctic. The Army and the Air Force are also interested and intend to invest $50 million each toward arctic communications. The Army has previously experimented with using high-altitude balloons to support multi-domain operations and might be a key partner in developing an arctic communications capability, and the Air Force is looking at using commercial broadband satellites to meet service and joint communications needs in the arctic.
Communications issues are a consequence of the polar operating environment and an obstacle for the military services operating there. But just because the environment is difficult does not mean that US forces have to go without persistent and reliable communications. High-altitude balloons could plug the communications gap not just for maritime forces but also for the Army and special operations units operating in these extreme latitudes. Developing and deploying high-altitude communications balloons, lifted by hydrogen gas generated by the MIT process, offers near-term capability for US forces operating in polar regions with underdeveloped communications infrastructure.
Walker D. Mills is a U.S. Marine Corps officer serving as an exchange officer in Cartagena, Colombia, the 2021 Military Fellow with Young Professionals in Foreign Policy, a non-resident WSD-Handa Fellow at Pacific Forum, and a Non-Resident Fellow with the Brute Krulak Center for Innovation and Future War.
The views expressed are his alone and do not represent the United States government, the Colombian government, the United States military, or the United States Marine Corps.
Feature Image: A NASA long duration balloon is prepared for launch on Antarctica’s Ross Ice Shelf near McMurdo Station in 2004. (NASA photo)
Even as the private sector and academia have made rapid progress in the field of Artificial Intelligence (AI) and Machine Learning (ML), the Department of Defense (DoD) remains hamstrung by significant technical and policy challenges. Only a fraction of this civilian-driven progress can be applied to the AI and ML models and systems needed by the DoD; the uniquely military operational environments and modes of employment create distinct development challenges for these potentially dominant systems. In order for ML systems to be successful once fielded, these issues must be considered now. The problems of dataset curation, data scarcity, updating models, and trust between humans and machines will challenge engineers in their efforts to create accurate, reliable, and relevant AI/ML systems.
Recent studies recognize these structural challenges. A GAO report found that only 38 percent of private sector research and development projects were aligned with DoD needs, while only 12 percent of projects could be categorized as AI or autonomy research.1 The National Security Commission on Artificial Intelligence’s Final Report also recognizes this gap, recommending more federal R&D funding for areas critical to advance technology, especially those that may not receive private sector investment.2 The sea services face particular challenges in adopting AI/ML technologies to their domains because private sector interest and investment in AI and autonomy at sea has been especially limited. One particular area that needs Navy-specific investment is that of ML systems for passive sonar systems, though the approach certainly has application to other ML systems.
Why Sonar is in Particular Need of Investment
Passive sonar systems are a critical component on many naval platforms today. Passive sonar listens for sounds emitted by ships or submarines and is the preferred tool of anti-submarine warfare, particularly for localizing and tracking targets. In contrast to active sonar, no signal is emitted, making it more covert and the method of choice for submarines to locate other vessels at sea. Passive sonar systems are used across the Navy in submarine, surface, and naval air assets, and in constant use during peace and war to locate and track adversary submarines. Because of this widespread use, any ML model for passive sonar systems would have a significant impact across the fleet and use on both manned and unmanned platforms. These models could easily integrate into traditional manned platforms to ease the cognitive load on human operators. They could also increase the autonomy of unmanned platforms, either surfaced or submerged, by giving these platforms the same abilities that manned platforms have to detect, track, and classify targets in passive sonar data.
Passive sonar, unlike technologies such as radar or LIDAR, lacks the dual use appeal that would spur high levels of private sector investment. While radar systems are used across the military and private sector for ground, naval, air, and space platforms, and active sonar has lucrative applications in the oil and gas industry, passive sonar is used almost exclusively by naval assets. This lack of incentive to invest in ML technologies related to sonar systems epitomizes the gap identified by the NSCAI report. Recently, NORTHCOM has tested AI/ML systems to search through radar data for targets, a project that has received interest and participation from all 11 combatant commands and the DoD as a whole.3 Due to their niche uses, however, passive sonar ML systems cannot match this level of department-wide investment and so demand strong advocacy within the Navy.
Dataset Curation
Artificial Intelligence and Machine Learning are often conflated and used interchangeably. Artificial Intelligence refers to a field of computer science interested in creating machines that can behave with human-like abilities and can make decisions based on input data. In contrast, Machine Learning, a subset of the AI field, refers to computer programs and algorithms that learn from repeated exposure to many examples, often millions, instead of operating based on explicit rules programmed by humans.4 The focus in this article is on topics specific to ML models and systems, which will be included as parts of a larger AI or autonomous system. For example, an ML model could classify ships from passive sonar data; this model would then feed information about those ships into an AI system that operates an Unmanned Underwater Vehicle (UUV). The AI would make decisions about how to steer the UUV based on data from the sonar ML model in addition to information about mission objectives, navigation, and other data.
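The relationship between the ML model and the AI system it feeds can be sketched as two layers. Both functions below are hypothetical stand-ins: a real classifier would be a trained model rather than a hand-written rule, and real UUV logic would weigh far more inputs than one contact report.

```python
def classify_contact(sonar_features):
    """ML layer (stand-in): map passive sonar features to a
    (label, confidence) pair. The blade-rate rule here is an
    invented placeholder for a trained classifier."""
    if sonar_features["blade_rate_hz"] > 8:
        return ("merchant", 0.8)
    return ("submarine", 0.6)

def uuv_decision(contact, mission="surveil"):
    """AI layer: fuse the classifier's output with mission objectives
    to choose the vehicle's next behavior."""
    label, confidence = contact
    if label == "submarine" and confidence > 0.5 and mission == "surveil":
        return "trail_quietly"
    return "continue_patrol"

action = uuv_decision(classify_contact({"blade_rate_hz": 4.0}))
```

The separation matters for engineering: the ML layer can be retrained and revalidated on new sonar data without touching the mission logic, and vice versa.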
Machine learning models must train on large volumes of data to produce accurate predictions. This data must be collected, labeled, and prepared for processing by the model. Data curation is a labor- and time-intensive task that is often viewed as an extra cost on ML projects, since it must occur before any model can be trained, but it should be seen as integral to ML model success. Researchers recently found that one of the most commonly used datasets in computer vision research, ImageNet, has approximately 6 percent of its images mislabeled.5 Another dataset, QuickDraw, had 10 percent of its images mislabeled. Once the errors were corrected, model performance on the ImageNet dataset improved by 6 percent over a model trained on the original, uncorrected dataset.5
For academic researchers, where the stakes of a model error are relatively low, this could be called a nuisance. However, ML models deployed on warships face greater consequences than those in research labs: a similar 6 percent error rate in an ML model that classifies warships would be far more consequential. The time and labor costs needed to correctly label data for ML model training need to be factored into ML projects early. To make the creation of these datasets cost effective, automatic methods will be required to label data, and methods of expert human verification must ensure quality. Once a large enough dataset has been built up, costs will decrease. However, new data will still have to be continuously added to training datasets to ensure up-to-date examples are present when models are trained.
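A first pass at finding label errors like those in ImageNet can be automated: flag samples where a trained model confidently disagrees with the assigned label, then route only those samples to a human expert for verification. The sketch below illustrates the idea with invented toy probabilities; production approaches built on confident learning are more sophisticated.

```python
import numpy as np

def flag_suspect_labels(pred_probs, labels, threshold=0.9):
    """Flag samples the model confidently assigns to a class other than
    the given label -- a simple first pass at finding mislabeled data."""
    pred_probs = np.asarray(pred_probs)
    preds = pred_probs.argmax(axis=1)            # model's predicted class
    confident = pred_probs.max(axis=1) >= threshold
    return [i for i in range(len(labels)) if confident[i] and preds[i] != labels[i]]

# Toy predictions for four samples over three classes; labels are class indices.
probs = [[0.95, 0.03, 0.02],   # confidently class 0, label agrees
         [0.10, 0.85, 0.05],   # class 1, but below the confidence threshold
         [0.02, 0.97, 0.01],   # confidently class 1, label says 0: suspect
         [0.30, 0.30, 0.40]]   # model unsure, leave alone
labels = [0, 1, 0, 2]
suspects = flag_suspect_labels(probs, labels)
```

Only the flagged indices need expert review, which is what makes a labeling pipeline like this cost effective at scale.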
A passive acoustic dataset is much more than an audio recording: where and when the data is collected, along with many other discrete factors, is also important and should be integrated into the dataset. Sonar data collected in one part of the ocean, or during a particular time of year, could differ markedly from data collected elsewhere, or from the same point in the ocean at a different time of year. Both the types of vessels encountered and the ocean environment will vary. Researchers at Brigham Young University demonstrated how variations in sound speed profiles can affect machine learning systems that operate on underwater acoustic data. When they attempted to classify seabed bottom type from a moving sound source under changing environmental conditions, their ML model's classification accuracy varied by up to 20 percent.6 Collecting data from all possible operating environments, at various times of the year, and labeling it appropriately will be critical to building robust datasets from which accurate ML models can be trained. Metadata, in the form of environmental conditions, sensor performance, sound propagation, and more, must be incorporated during the data collection process. Engineers and researchers will then be able to analyze this metadata to understand where the data came from and what sensor or environmental conditions are underrepresented or missing entirely.
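As a rough illustration of what such metadata might look like in practice, the Python sketch below bundles a recording with hypothetical location, time, and environment fields (none of these names come from an actual Navy schema) and tallies seasonal coverage to expose gaps of the kind described above:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class SonarSample:
    """One recording plus the metadata needed to audit dataset coverage.
    All field names are hypothetical, for illustration only."""
    audio_path: str                    # path to the raw acoustic recording
    label: str                         # e.g. ship class, or "unknown"
    collected_at: datetime             # when the data was recorded
    latitude: float                    # where it was recorded
    longitude: float
    sensor_id: str                     # which hydrophone or array produced it
    water_temp_c: Optional[float] = None   # environmental conditions, if known
    sea_state: Optional[int] = None

    def season(self) -> str:
        """Rough northern-hemisphere season, useful for spotting seasonal gaps."""
        return {12: "winter", 1: "winter", 2: "winter",
                3: "spring", 4: "spring", 5: "spring",
                6: "summer", 7: "summer", 8: "summer",
                9: "autumn", 10: "autumn", 11: "autumn"}[self.collected_at.month]

# Tallying seasons across a dataset reveals underrepresented conditions.
samples = [
    SonarSample("rec_001.wav", "merchant",
                datetime(2021, 1, 5, tzinfo=timezone.utc), 36.0, -75.0, "array_a"),
    SonarSample("rec_002.wav", "merchant",
                datetime(2021, 7, 9, tzinfo=timezone.utc), 36.2, -75.1, "array_a"),
]
coverage = {}
for s in samples:
    coverage[s.season()] = coverage.get(s.season(), 0) + 1
```

Here the tally would immediately show that spring and autumn recordings are missing, the kind of gap analysis the paragraph above calls for.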
These challenges must be overcome in a cost-effective way to build datasets representative of real world operating environments and conditions.
Data Scarcity
Another challenge in the field of ML that has salience for sonar data is the problem of very small but important datasets. For an academic researcher, data scarcity may come about due to the prohibitive cost of experiments or the rarity of events to collect data on, such as astronomical observations. The DoN will face these same challenges plus some unique to it: unlike academia or the private sector, stringent restrictions on classified data will limit who can use that data to train and develop models. How will an ML model be trained to recognize an adversary's newest ship when only a few minutes of acoustic recording exist? Since machine learning models require large quantities of data, traditional training methods will either not work at all or will produce less effective models.
Data augmentation, replicating and modifying original data, may be one answer to this problem. In computer vision research, data is augmented by rotating, flipping, or changing the color balance of an image. Since a car is still a car even if the image of it is rotated or inverted, a model learns to recognize a car from many angles and in many environments. In acoustics research, data is augmented by adding in other sounds or changing the time scale or pitch of the original audio. From a few initial examples, a much larger training dataset can be created. However, these methods have not been extensively researched on passive sonar data. It is still unknown which methods of data augmentation will produce the best results for sonar models, and which could produce worse ones. Further research into the best methods of data augmentation for underwater acoustics is required.
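A minimal sketch of two common audio augmentations, noise injection at a controlled signal-to-noise ratio and a crude time stretch, using only NumPy. The sample rate and the 60 Hz test tone are illustrative stand-ins for real machinery noise, and dedicated audio libraries implement these transforms far more carefully:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(signal: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix in white noise at a given signal-to-noise ratio (in dB)."""
    sig_power = np.mean(signal ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

def time_stretch(signal: np.ndarray, rate: float) -> np.ndarray:
    """Crude time stretch by linear-interpolation resampling.
    Note this also shifts pitch; it is a sketch, not a production transform."""
    n_out = int(len(signal) / rate)
    old_idx = np.linspace(0, len(signal) - 1, n_out)
    return np.interp(old_idx, np.arange(len(signal)), signal)

# One short recording becomes several training examples.
sr = 1000                                # sample rate (Hz), illustrative
t = np.arange(sr) / sr                   # one second of audio
original = np.sin(2 * np.pi * 60 * t)    # a 60 Hz tonal as a toy signal
augmented = [add_noise(original, snr_db=10),
             time_stretch(original, rate=0.9),   # slower, longer clip
             time_stretch(original, rate=1.1)]   # faster, shorter clip
```

Which of these transforms preserve the features a sonar classifier actually relies on is exactly the open research question noted above.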
Another method of generating training data is the use of models to create synthetic data. This method is used to create datasets for training voice recognition models: using physical models, audio recordings can be simulated in rooms of various dimensions and materials, instead of trying to make recordings in every possible room configuration. Generating synthetic underwater audio is not as simple and will require more complex models and more compute power than the models used for voice recognition. Researchers have experimented with generating synthetic underwater sounds using the ORCA sound propagation model.6 However, this research only simulated five discrete frequencies used in seabed classification work. A ship model for passive sonar data will require more frequencies, both discrete and broadband, to be simulated in order to produce synthetic acoustic data with enough fidelity for use in model training. The generation of realistic synthetic data will give system designers the ability to add targets with very few real examples to a dataset.
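To make the idea concrete, the toy sketch below synthesizes a ship-like signature as a sum of discrete tonals plus broadband noise. The tonal frequencies and amplitudes are invented, and a real generator would shape the received spectrum with a propagation model such as ORCA rather than skipping the ocean channel entirely, as this sketch does:

```python
import numpy as np

rng = np.random.default_rng(1)

def synth_ship_signature(tonals, duration_s=2.0, sr=4000, broadband_level=0.1):
    """Toy synthetic signature: discrete tonals plus broadband noise.
    `tonals` is a list of (frequency_hz, amplitude) pairs. A realistic
    generator would also model the ocean channel between source and receiver."""
    t = np.arange(int(duration_s * sr)) / sr
    signal = np.zeros_like(t)
    for freq, amp in tonals:
        phase = rng.uniform(0, 2 * np.pi)          # random starting phase
        signal += amp * np.sin(2 * np.pi * freq * t + phase)
    signal += broadband_level * rng.standard_normal(len(t))  # broadband component
    return signal

# Hypothetical tonal set: a shaft-line fundamental and two harmonics.
sig = synth_ship_signature([(60, 1.0), (120, 0.5), (180, 0.25)])
```

Because the tonal set is a parameter, a designer could in principle add a rarely recorded target to a training set by specifying its known narrowband lines, the scenario the paragraph above describes.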
The ability to augment existing data and create new data from synthetic models will create larger and more diverse datasets, leading to more robust and accurate ML models.
Building Trust between Humans and Machines
Machine learning models are good at telling a human what they know, which comes from the data they were trained on. They are not good at telling humans that they do not recognize an input or have never seen anything like it in training. This will be an issue if human operators are to develop trust in the ML models they use. A model that can tell an operator that it does not know, or report the degree of confidence it has in its answer, will be vital to building reliable human-machine teams. One method of building models that can tell human operators a sample is unknown is the use of Bayesian Neural Networks. Bayesian models can tell an operator how confident they are in a classification, and even when they do not know the answer. This falls under the field of explainable AI: AI systems that can tell a human how the system arrived at the classification or decision it produced. To build trust between human operators and ML systems, a human will need some insight into how and why an ML system arrived at its output.
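One simple way to approximate this behavior is to average class probabilities across several stochastic forward passes, as in Monte Carlo dropout, a practical approximation to a Bayesian neural network, and abstain with "unknown" whenever confidence falls below a threshold. The sketch below uses made-up logits in place of a real network and an invented class list:

```python
import numpy as np

def softmax(x):
    """Convert logits to probabilities, numerically stable."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def classify_with_abstention(passes_logits, classes, threshold=0.7):
    """Average probabilities over stochastic forward passes and abstain
    ("unknown") when the best class falls below the confidence threshold."""
    probs = softmax(np.asarray(passes_logits)).mean(axis=0)  # average over passes
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return "unknown", float(probs[best])
    return classes[best], float(probs[best])

classes = ["merchant", "warship", "fishing"]
# Hypothetical logits from three stochastic passes of the same model.
confident_logits = [[4.0, 0.1, 0.2], [3.8, 0.3, 0.1], [4.2, 0.0, 0.2]]  # passes agree
uncertain_logits = [[1.0, 0.9, 1.1], [0.8, 1.2, 1.0], [1.1, 1.0, 0.9]]  # passes disagree

label1, conf1 = classify_with_abstention(confident_logits, classes)
label2, conf2 = classify_with_abstention(uncertain_logits, classes)
```

Surfacing both the label and the confidence to the operator, rather than the label alone, is the interface decision that makes this kind of abstention useful for human-machine teaming.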
Ships at sea will encounter new vessels, including ships that were not part of a model's original training dataset. This will be a problem early in the use of these models, as datasets will initially be small and only grow with the collection of more data. These models cannot simply fail when confronted with something new; they must be able to distinguish between what is known and what is unknown. The DoN must consider how human operators will interact with these ML models at sea, not just model performance.
Model Updates
To build a great ML system, the models will have to be updated. New data will be collected and added to the training dataset, and the model re-trained so that it stays relevant. In these updates, only certain model parameters change, not the design or structure of the model. The updates, like any other digital file, can be measured in bytes. An important question for system designers is how these updates will be distributed to fleet units, and how often. One established model for this is the Acoustic-Rapid COTS Insertion (ARCI) program used in the US Navy's Submarine Force. In the ARCI program, new hardware and software for sonar and fire control is built, tested, and deployed on a regular, two-year cycle.7 But two years may be too infrequent for ML systems that are capable of incorporating new data and models rapidly. The software industry employs continuous deployment, in which engineers can push the latest model updates to their cloud-based systems instantly. This may work for fleet units that have the network bandwidth to support over-the-air updates or that can return to base for physical transfer. Recognizing this gap, the Navy is currently seeking a system that can simultaneously refuel a USV and transfer up to 2 terabytes of data from it.8 This research proposal highlights the large volume of data that will need to be moved, both on and off unmanned vessels. Other units, particularly submarines and UUVs, have far less communications bandwidth. If over-the-air updates to submarines or UUVs are desired, then tighter restrictions will be placed on model sizes to accommodate limited bandwidth. If models cannot be made small enough, updates will have to be brought to a unit in port and installed from a hard drive or other physical device.
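A back-of-the-envelope sketch, with every figure illustrative rather than drawn from any real program, shows how model size and link bandwidth interact to constrain over-the-air updates:

```python
# All numbers below are hypothetical, chosen only to illustrate the tradeoff.

def transfer_time_s(payload_bytes: int, link_bps: float) -> float:
    """Seconds to move a payload over a link of the given bits per second."""
    return payload_bytes * 8 / link_bps

n_params = 25_000_000              # a mid-sized network (hypothetical)
full_update = n_params * 4         # 32-bit floats: ~100 MB for every parameter
sparse_update = full_update // 20  # if only ~5% of parameters actually change

satcom_bps = 256_000               # a constrained over-the-air link (illustrative)
full_hours = transfer_time_s(full_update, satcom_bps) / 3600
sparse_hours = transfer_time_s(sparse_update, satcom_bps) / 3600
```

Even at these toy numbers, a full-parameter push takes hours of continuous connectivity while a sparse delta takes minutes, which is why update size limits belong in the system requirements from the start.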
Creating a good system for when and how to update these models will drive other system requirements. Engineers will need these requirements, such as size limitations on the model, ingestible data types, and the frequency of updates needed by the fleet, as well as a plan for how new data will be incorporated into model training, before they start designing ML systems.
Conclusion
As recommended in the NSC AI report, the DoN must be ready to invest in technologies that are critical to future AI systems but currently lack private sector interest. ML models for passive sonar, lacking both dual-use appeal and broad application across the DoD, clearly fit this need. Specific investment is required to address several problems facing sonar ML systems, including dataset curation, data scarcity, model updates, and building trust between operators and systems. These challenges will require a combination of technical and policy solutions, and they must be solved to create successful ML systems. Addressing them now, while projects are in a nascent stage, will lead to the development of more robust systems. These sonar ML systems will be a critical tool across a manned and unmanned fleet in anti-submarine warfare and the hunt for near-peer adversary submarines.
Lieutenant Andrew Pfau, USN, is a submariner serving as an instructor at the U.S. Naval Academy. He is a graduate of the Naval Postgraduate School and a recipient of the Rear Admiral Grace Murray Hopper Computer Science Award. The views and opinions expressed here are his own.
Endnotes
1. DiNapoli, T. J. (2020). Opportunities to Better Integrate Industry Independent Research and Development into DOD Planning. (GAO-20-578). Government Accountability Office.
2. National Security Commission on Artificial Intelligence (2021), Final Report.
3. Hitchens, T. (2021, July 15) NORTHCOM Head To Press DoD Leaders For AI Tools, Breaking Defense, https://breakingdefense.com/2021/07/exclusive-northcom-head-to-press-dod-leaders-for-ai-tools/
4. Denning, P., Lewis, T. Intelligence May Not be Computable. American Scientist. Nov-Dec 2019. http://denninginstitute.com/pjd/PUBS/amsci-2019-ai-hierachy.pdf
5. Hao, K. (2021, April 1) Error-riddled data sets are warping our sense of how good AI really is. MIT Technology Review. https://www.technologyreview.com/2021/04/01/1021619/ai-data-errors-warp-machine-learning-progress/
6. Neilsen et al (2021). Learning location and seabed type from a moving mid-frequency source. Journal of the Acoustical Society of America. (149). 692-705. https://doi.org/10.1121/10.0003361
7. DeLuca, P., Predd, J. B., Nixon, M., Blickstein, I., Button, R. W., Kallimani J. G., and Tierney, S. (2013) Lessons Learned from ARCI and SSDS in Assessing Aegis Program Transition to an Open-Architecture Model, (pp 79-84) RAND Corporation, https://www.jstor.org/stable/pdf/10.7249/j.ctt5hhsmj.15.pdf
8. Office of Naval Research, Automated Offboard Refueling and Data Transfer for Unmanned Surface Vehicles, BAA Announcement # N00014-16-S-BA09, https://www.globalsecurity.org/military/systems/ship/systems/oradts.htm
Featured Image: Sonar Technician (Surface) Seaman Daniel Kline performs passive acoustic analysis in the sonar control room aboard the guided-missile destroyer USS Ramage (DDG 61). (U.S. Navy photo by Mass Communication Specialist 2nd Class Jacob D. Moore/Released)