
A Roadmap to Successful Sonar AI

Emerging Technologies Topic Week

By LT Andrew Pfau

Even as the private sector and academia have made rapid progress in the field of Artificial Intelligence (AI) and Machine Learning (ML), the Department of Defense (DoD) remains hamstrung by significant technical and policy challenges. Only a fraction of this civilian-driven progress can be applied to the AI and ML models and systems the DoD needs; uniquely military operational environments and modes of employment create development challenges all their own for these potentially dominant systems. For ML systems to succeed once fielded, these issues must be considered now. The problems of dataset curation, data scarcity, model updates, and trust between humans and machines will challenge engineers in their efforts to create accurate, reliable, and relevant AI/ML systems.

Recent studies recognize these structural challenges. A GAO report found that only 38 percent of private sector research and development projects were aligned with DoD needs, and only 12 percent could be categorized as AI or autonomy research.1 The National Security Commission on Artificial Intelligence's Final Report recognizes the same gap, recommending more federal R&D funding for areas critical to advancing the technology, especially those unlikely to receive private sector investment.2 The sea services face particular challenges in adapting AI/ML technologies to their domains because private sector interest and investment in AI and autonomy at sea has been especially limited. One area in particular need of Navy-specific investment is ML for passive sonar systems, though the approach outlined here certainly applies to other ML systems.

Why Sonar is in Particular Need of Investment

Passive sonar systems are a critical component on many naval platforms today. Passive sonar listens for sounds emitted by ships or submarines and is the preferred tool of anti-submarine warfare, particularly for localizing and tracking targets. In contrast to active sonar, no signal is emitted, making it more covert and the method of choice for submarines to locate other vessels at sea. Passive sonar systems are used across the Navy on submarine, surface, and naval air assets, and are in constant use during peace and war to locate and track adversary submarines. Because of this widespread use, any ML model for passive sonar would have a significant impact across the fleet and see use on both manned and unmanned platforms. These models could easily integrate into traditional manned platforms to ease the cognitive load on human operators. They could also increase the autonomy of unmanned platforms, surfaced or submerged, by giving them the same abilities that manned platforms have to detect, track, and classify targets in passive sonar data.

Passive sonar, unlike technologies such as radar or LIDAR, lacks the dual-use appeal that would spur high levels of private sector investment. While radar systems are used across the military and private sector for ground, naval, air, and space platforms, and active sonar has lucrative applications in the oil and gas industry, passive sonar is used almost exclusively by naval assets. This lack of incentive to invest in ML technologies related to sonar systems epitomizes the gap identified in the NSC AI report. Recently, NORTHCOM has tested AI/ML systems that search through radar data for targets, a project that has received interest and participation from all 11 combatant commands and the DoD as a whole.3 Due to their niche uses, however, passive sonar ML systems cannot match this level of department-wide investment and so demand strong advocacy within the Navy.

Dataset Curation

Artificial Intelligence and Machine Learning are often conflated and used interchangeably. Artificial Intelligence refers to a field of computer science interested in creating machines that can behave with human-like abilities and make decisions based on input data. In contrast, Machine Learning, a subset of the AI field, refers to computer programs and algorithms that learn from repeated exposure to many examples, often millions, instead of operating on explicit rules programmed by humans.4 The focus of this article is on topics specific to ML models and systems, which will serve as components of larger AI or autonomous systems. For example, an ML model could classify ships from passive sonar data; this model would then feed information about those ships to an AI system that operates an Unmanned Underwater Vehicle (UUV). The AI would decide how to steer the UUV based on data from the sonar ML model along with mission objectives, navigation, and other information.
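As a minimal illustration of this division of labor, the sketch below (in Python, with hypothetical names for the classifier output and planner object that do not correspond to any real program) shows an ML classifier feeding contacts into a higher-level autonomy loop.

```python
# Illustrative only: the sonar model and planner objects are hypothetical,
# not part of any fielded system.
from dataclasses import dataclass

@dataclass
class Contact:
    bearing_deg: float      # bearing from own ship
    label: str              # e.g. "merchant", "warship", "unknown"
    confidence: float       # classifier confidence, 0.0 to 1.0

def autonomy_step(sonar_ml_model, planner, raw_acoustic_frame, mission_state):
    """One decision cycle: the ML model interprets sonar, the AI planner decides."""
    contacts = sonar_ml_model.classify(raw_acoustic_frame)   # -> list[Contact]
    # The planner fuses classified contacts with navigation and mission data
    # and returns a steering command for the UUV.
    return planner.decide(contacts, mission_state)
```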

Machine learning models must train on large volumes of data to produce accurate predictions. This data must be collected, labeled, and prepared for processing by the model. Data curation is a labor- and time-intensive task that is often viewed as an extra cost on ML projects because it must occur before any model can be trained, but this process should be seen as integral to ML model success. Researchers recently found that one of the most commonly used datasets in computer vision research, ImageNet, has approximately 6 percent of its images mislabeled.5 Another dataset, QuickDraw, had 10 percent of its images mislabeled. Once the errors were corrected, model performance on the ImageNet dataset improved by 6 percent over a model trained on the original, uncorrected dataset.5
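One common way to surface likely label errors, sketched below under the assumption that the scikit-learn library is available and that labels are integers 0 through K-1, is to compare out-of-fold model predictions against the recorded labels and flag confident disagreements for human review.

```python
# Sketch: flag likely mislabeled training examples by cross-validated disagreement.
# Assumes features X (n_samples x n_features) and integer labels y (0..K-1) exist.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

def flag_suspect_labels(X, y, threshold=0.9):
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    # Out-of-fold probabilities: each sample is scored by a model that never saw it.
    proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")
    predicted = proba.argmax(axis=1)
    confidence = proba.max(axis=1)
    # Confidently predicted as a different class than the recorded label.
    suspect = np.where((predicted != y) & (confidence >= threshold))[0]
    return suspect  # indices to route to a human analyst for relabeling
```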

For academic researchers, where the stakes of a model error are relatively low, this could be called a nuisance. ML models deployed on warships, however, face greater consequences than those in research labs; a similar 6 percent error in a model classifying warships would be far more consequential. The time and labor costs needed to correctly label data for ML model training need to be factored into projects early. To make the creation of these datasets cost effective, automatic labeling methods will be required, paired with expert human verification to ensure quality. Once a large enough dataset has been built up, costs will decrease. However, new data will still have to be added continuously so that up-to-date examples are present when models are trained.

A passive acoustic dataset is much more than an audio recording: where and when the data is collected, along with many other discrete factors, are also important and should be integrated into the dataset. Sonar data collected in one part of the ocean, or during a particular time of year, could be very different from data collected in other parts of the ocean or at the same location at a different time of year. Both the types of vessels encountered and the ocean environment will vary. Researchers at Brigham Young University demonstrated how variations in sound speed profiles can affect machine learning systems that operate on underwater acoustic data. When they attempted to classify seabed bottom type from a moving sound source under changing environmental conditions, the accuracy of their ML model's classifications varied by up to 20 percent.6 Collecting data from all possible operating environments, at various times of the year, and labeling it appropriately will be critical to building robust datasets from which accurate ML models can be trained. Metadata, in the form of environmental conditions, sensor performance, sound propagation, and more, must be incorporated during the data collection process. Engineers and researchers can then analyze that metadata to understand where the data came from and which sensor or environmental conditions are underrepresented or missing entirely.
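A simple way to keep such metadata bound to each recording is a structured record schema. The sketch below is illustrative only; its field names are assumptions rather than any Navy data standard.

```python
# Sketch of a dataset record that keeps collection metadata with the audio.
# Field names are illustrative, not a Navy data standard.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class SonarRecord:
    audio_path: str                 # pointer to the raw acoustic file
    label: str                      # e.g. vessel class, or "unlabeled"
    collected_at: datetime          # time of year matters for ocean conditions
    latitude: float
    longitude: float
    sensor_id: str                  # which array/hydrophone collected it
    water_depth_m: Optional[float] = None
    sound_speed_profile_id: Optional[str] = None   # link to environmental data
    notes: dict = field(default_factory=dict)      # anything else worth keeping

# With metadata in place, coverage questions such as "how many winter recordings
# exist from shallow water on sensor type X?" become straightforward queries.
```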

These challenges must be overcome in a cost-effective way to build datasets representative of real world operating environments and conditions.

Data Scarcity

Another ML challenge with salience for sonar data is that of very small but important datasets. For an academic researcher, data scarcity may arise from the prohibitive cost of experiments or the rarity of the events to be observed, such as certain astronomical phenomena. The Department of the Navy (DoN) will face these same challenges in addition to its own: unlike academia or the private sector, stringent restrictions on classified data will limit who can use that data to train and develop models. How will an ML model be trained to recognize an adversary's newest ship when only a few minutes of acoustic recording exist? Since machine learning models require large quantities of data, traditional training methods will either not work or will produce less effective models.

Data augmentation, the replication and modification of original data, may be one answer to this problem. In computer vision research, data is augmented by rotating, flipping, or changing the color balance of an image. Since a car is still a car even if the image is rotated or inverted, a model learns to recognize a car from many angles and in many environments. In acoustics research, data is augmented by mixing in other sounds or changing the time scale or pitch of the original audio. From a few initial examples, a much larger training dataset can be created. However, these methods have not been extensively researched on passive sonar data. It is still unknown which methods of data augmentation will produce the best sonar models, and which could produce worse ones. Further research into the best augmentation methods for underwater acoustics is required.
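The sketch below shows, assuming the librosa audio library is available, the kinds of augmentation described above applied to a mono waveform; whether any of these transforms actually helps or hurts a passive sonar model is precisely the open research question.

```python
# Sketch: common audio augmentations applied to a mono waveform y at sample rate sr.
# Whether these help or hurt a passive sonar model is an open research question.
import numpy as np
import librosa

def augment(y, sr, rng=np.random.default_rng()):
    out = []
    # 1. Add low-level noise (a crude stand-in for ambient ocean noise).
    noise = rng.normal(0.0, 0.005 * np.abs(y).max(), size=y.shape)
    out.append(y + noise)
    # 2. Stretch time slightly without changing pitch.
    out.append(librosa.effects.time_stretch(y, rate=1.05))
    # 3. Shift pitch by one semitone without changing duration.
    out.append(librosa.effects.pitch_shift(y, sr=sr, n_steps=1))
    return out  # several new training examples from one original recording
```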

Another method of generating training data is the use of models to create synthetic data. This method is used to create datasets for training voice recognition models: using physical models, audio recordings can be simulated in rooms of various dimensions and materials instead of making recordings in every possible room configuration. Generating synthetic underwater audio is not as simple and will require more complex models and more compute power than the models used for voice recognition. Researchers have experimented with generating synthetic underwater sounds using the ORCA sound propagation model.6 However, this research simulated only five discrete frequencies used in seabed classification work. A ship model for passive sonar data would require many more frequencies, both discrete and broadband, to be simulated in order to produce synthetic acoustic data with enough fidelity for model training. The generation of realistic synthetic data will give system designers the ability to add targets with very few real examples to a dataset.
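As a toy illustration only, the snippet below synthesizes a crude signature from a handful of assumed tonal frequencies plus broadband noise; it stands in for, and is far simpler than, the physics-based propagation modeling described above.

```python
# Toy sketch only: synthesize a crude ship-like signature as a sum of narrowband
# tonals plus broadband noise. A usable training signal would also need a real
# propagation model (e.g., ORCA) to account for the ocean environment.
import numpy as np

def synthetic_signature(tonal_hz, duration_s=10.0, sr=8000, rng=np.random.default_rng()):
    t = np.arange(int(duration_s * sr)) / sr
    signal = np.zeros_like(t)
    for f in tonal_hz:                           # discrete lines, e.g. machinery tonals
        signal += rng.uniform(0.2, 1.0) * np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
    signal += 0.3 * rng.normal(size=t.shape)     # crude stand-in for broadband noise
    return signal / np.abs(signal).max()

example = synthetic_signature([60.0, 120.0, 180.0])   # hypothetical tonal set
```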

The ability to augment existing data and create new data from synthetic models will create larger and more diverse datasets, leading to more robust and accurate ML models.

Building Trust between Humans and Machines

Machine learning models are good at telling a human what they know, which comes from the data they were trained on. They are not good at telling humans that they do not recognize an input or have never seen anything like it in training. This will be an issue if human operators are to develop trust in the ML models they use. A model's ability to tell an operator that it does not know, or how confident it is in its answer, will be vital to building reliable human-machine teams. One method for building models that can flag a sample as unknown is the use of Bayesian Neural Networks, which can tell an operator how confident they are in a classification and even when they do not know the answer. This falls under the field of explainable AI: AI systems that can tell a human how they arrived at a classification or decision. To build trust between human operators and ML systems, a human will need some insight into how and why an ML system arrived at its output.
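One widely used, inexpensive approximation to a Bayesian neural network is Monte Carlo dropout, sketched below under the assumption of a PyTorch model that already contains dropout layers; the pass count and threshold are illustrative values, not settings from any fielded system.

```python
# Sketch: Monte Carlo dropout as a cheap approximation to a Bayesian neural network.
# Assumes a PyTorch classifier that contains dropout layers; not a fielded sonar system.
import torch

def predict_with_uncertainty(model, x, passes=30, unknown_threshold=0.6):
    model.train()                      # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(passes)])
    mean_probs = probs.mean(dim=0)     # average over stochastic forward passes
    confidence, label = mean_probs.max(dim=-1)
    # Low-confidence samples are reported to the operator as "unknown" (-1)
    # rather than forced into the nearest known class.
    label[confidence < unknown_threshold] = -1
    return label, confidence
```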

Ships at sea will encounter new vessels, including ships that were not part of a model's original training dataset. This will be a problem early in the use of these models, when datasets are still small and growing with the collection of more data. These models cannot simply fail in such cases; they must be able to distinguish between what is known and what is unknown. The DoN must consider how human operators will interact with these ML models at sea, not just model performance.

Model Updates

To remain effective, ML systems will have to be updated. New data will be collected and added to the training dataset to re-train the model so that it stays relevant. In these updates, only certain model parameters change, not the design or structure of the model. The updates, like any other digital file, can be measured in bytes. An important question for system designers is how, and how often, these updates will be distributed to fleet units. One established model is the Acoustic Rapid COTS Insertion (ARCI) program used by the U.S. Navy's Submarine Force, in which new sonar and fire control hardware and software is built, tested, and deployed on a regular two-year cycle.7 But two years may be too infrequent for ML systems capable of rapidly incorporating new data and models. The software industry employs continuous deployment, in which engineers push the latest model updates to their cloud-based systems instantly. This may work for fleet units that have the network bandwidth to support over-the-air updates or that can return to base for physical transfer. Recognizing this gap, the Navy is currently seeking a system that can simultaneously refuel a USV and transfer up to 2 terabytes of data.8 This research proposal highlights the large volume of data that will need to be moved both on and off unmanned vessels. Other units, particularly submarines and UUVs, have far less communications bandwidth. If over-the-air updates to submarines or UUVs are desired, model sizes will have to be restricted to accommodate the limited bandwidth. If models cannot be made small enough, updates will have to be brought to a unit in port and installed from a hard drive or other physical device.
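A back-of-the-envelope sketch of the bandwidth constraint follows; the numbers are illustrative assumptions, not real link budgets or model sizes.

```python
# Back-of-the-envelope sketch: can a model update fit in a unit's communications window?
# The update size and link rate below are illustrative assumptions only.

def transfer_time_hours(update_size_mb, link_kbps):
    bits = update_size_mb * 8 * 1_000_000          # megabytes to bits
    return bits / (link_kbps * 1000) / 3600        # seconds to hours

# A hypothetical 50 MB parameter update over a 256 kbps link:
print(f"{transfer_time_hours(50, 256):.1f} hours")   # roughly 0.4 hours
```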

Deciding when and how to update these models will drive other system requirements. Engineers will need these requirements, such as size limits on the model, the data types it must ingest, how frequently the fleet needs updates, and how new data will be incorporated into model training, before they begin designing ML systems.

Conclusion

As recommended in the NSC AI report, the DoN must be ready to invest in technologies that are critical to future AI systems but currently lack private sector interest. ML models for passive sonar, lacking both dual-use appeal and broad application across the DoD, clearly fit this need. Specific investment is required to address several problems facing sonar ML systems, including dataset curation, data scarcity, model updates, and building trust between operators and systems. These challenges will require a combination of technical and policy solutions, and they must be solved to create successful ML systems. Addressing them now, while projects are in a nascent stage, will lead to the development of more robust systems. These sonar ML systems will be a critical tool across a manned and unmanned fleet in anti-submarine warfare and the hunt for near-peer adversary submarines.

Lieutenant Andrew Pfau, USN, is a submariner serving as an instructor at the U.S. Naval Academy. He is a graduate of the Naval Postgraduate School and a recipient of the Rear Admiral Grace Murray Hopper Computer Science Award. The views and opinions expressed here are his own.

Endnotes

1. DiNapoli, T. J. (2020). Opportunities to Better Integrate Industry Independent Research and Development into DOD Planning. (GAO-20-578). Government Accountability Office.

2. National Security Commission on Artificial Intelligence (2021), Final Report.

3. Hitchens, T. (2021, July 15) NORTHCOM Head To Press DoD Leaders For AI Tools, Breaking Defense, https://breakingdefense.com/2021/07/exclusive-northcom-head-to-press-dod-leaders-for-ai-tools/

4. Denning, P., Lewis, T. Intelligence May Not be Computable. American Scientist. Nov-Dec 2019. http://denninginstitute.com/pjd/PUBS/amsci-2019-ai-hierachy.pdf

5. Hao, K. (2021, April 1) Error-riddled data sets are warping our sense of how good AI really is. MIT Technology Review. https://www.technologyreview.com/2021/04/01/1021619/ai-data-errors-warp-machine-learning-progress/

6. Neilsen et al (2021). Learning location and seabed type from a moving mid-frequency source. Journal of the Acoustical Society of America. (149). 692-705. https://doi.org/10.1121/10.0003361

7. DeLuca, P., Predd, J. B., Nixon, M., Blickstein, I., Button, R. W., Kallimani J. G., and Tierney, S. (2013) Lessons Learned from ARCI and SSDS in Assessing Aegis Program Transition to an Open-Architecture Model, (pp 79-84) RAND Corporation, https://www.jstor.org/stable/pdf/10.7249/j.ctt5hhsmj.15.pdf

8. Office of Naval Research, Automated Offboard Refueling and Data Transfer for Unmanned Surface Vehicles, BAA Announcement # N00014-16-S-BA09, https://www.globalsecurity.org/military/systems/ship/systems/oradts.htm

Featured Image: Sonar Technician (Surface) Seaman Daniel Kline performs passive acoustic analysis in the sonar control room aboard the guided-missile destroyer USS Ramage (DDG 61). (U.S. Navy photo by Mass Communication Specialist 2nd Class Jacob D. Moore/Released)

Hyper-Converged Networks and Artificial Intelligence: Fighting at Machine Speed

By Travis Howard

Lieutenant Stacey Alto sits in the Joint Intelligence Center aboard the Wasp-class amphibious assault ship USS ESSEX (LHD 2). As the Force Intelligence Watch Officer (FIWO), her job is to absorb relevant information related to current and future operations of the Essex Amphibious Ready Group (ARG), as well as general intelligence within the operating theater. The zero-client, virtual desktop environment (VDE) 6-panel display at her watch station gives her a single-pane-of-glass view into Unclassified, Secret, Top Secret, and Coalition enclaves through the Consolidated Afloat Networking and Enterprise Services (CANES) network.

One of her watch standers, an Intelligence Specialist Second Class, approaches her desk with new information from the Joint Operations Center (JOC), the nerve center of ARG operations, announcing new orders from the fleet commander to enter the Gulf of Oman, which represents a shift in operating theater from their current position in the Arabian Sea.

Stacey goes to work immediately, enlisting the help of two Intelligence Specialists and one of the Information Systems Technicians standing watch in the Ship’s Signal Exploitation Space (SSES). She queries the onboard widget carousel on her CANES SECRET terminal. Using a combination of mouse, keyboard, and touchscreen, she pulls together several ready-made widgets and snaps them into place, each taking advantage of a pool of “big data” information stored on the ship’s carry-on Distributed Common Ground System-Navy (DCGS-N) and off-ship sources from the intelligence cloud. Her development work gets passed to the next watch team, as they set the application’s variables for data parsing, consolidating inputs, and terrain mapping to put together a relevant, real-time intelligence picture.

By the time Stacey returns to her watch station almost 24 hours later, the IT personnel in SSES have put the new application through the automated cybersecurity testing process and have released it to the onboard “app store,” which Stacey can now install on her virtualized, thin-client desktop within seconds. She calls the JOC, the Marine Landing Force Operations Center (LFOC), and the ship’s Combat Information Center (CIC) announcing the system’s readiness with separate logins at the appropriate classification level for each watch station. By the time ESSEX enters the Gulf of Oman, the application has mapped adversarial positions and capabilities, pulled from several disparate databases afloat and ashore, all at varying levels of classification necessary for operational planning throughout the ship.

Building a More Maneuverable Network Afloat

The above scenario is almost a reality. It rests on several emergent advances in network technology and application portability (the "mobility" factor) that the Navy will soon capitalize on, chief among them a hardware and network-layer software architecture known as hyper-converged infrastructure (HCI). The performance and cost efficiencies realized by this architecture will pave the way for disruptive changes in how we maneuver the network across the entire spectrum of operations: as a business system, as a decision support system, and as a warfighting platform.

Hyper-convergence is the integration of several hardware devices through a hypervisor, which acts as an intermediary and resource broker between software and hardware. Independent IT components are no longer siloed but combined, simplifying the entire infrastructure and improving the speed and agility of the virtual network.1 The advantages of HCI seem obvious, but the real disruptive effect is what we can build upon it. The opening scenario describes on-demand application development at the tactical edge. This is achievable through HCI efficiency and another emerging network process known as Agile Core Services (ACS), a joint software development initiative being built into several programs throughout the Navy and Air Force, and one that CANES, as the afloat and maritime operations center network provider, is leveraging.
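As a toy illustration of the resource-brokering idea, the sketch below pools capacity from individual nodes and allocates it to workloads; it is purely conceptual and does not represent any hypervisor's actual API.

```python
# Toy illustration of the resource-broker idea behind hyper-convergence:
# individual nodes contribute capacity to one pool, and workloads draw from the
# pool rather than from a dedicated, siloed box. Purely conceptual.

class ResourcePool:
    def __init__(self):
        self.cpu_cores = 0
        self.ram_gb = 0

    def add_node(self, cpu_cores, ram_gb):
        self.cpu_cores += cpu_cores
        self.ram_gb += ram_gb

    def allocate(self, cpu_cores, ram_gb):
        if cpu_cores > self.cpu_cores or ram_gb > self.ram_gb:
            raise RuntimeError("insufficient pooled capacity")
        self.cpu_cores -= cpu_cores
        self.ram_gb -= ram_gb

pool = ResourcePool()
for _ in range(4):                        # four converged appliances
    pool.add_node(cpu_cores=32, ram_gb=256)
pool.allocate(cpu_cores=8, ram_gb=64)     # one virtual desktop enclave
```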

Hyper-convergence in network hardware combines storage and processing power into a single appliance, offering simplified management, faster deployment, and potentially lower acquisition costs. (Helixstorm.com)

ACS allows applications to use a common mix of services at the platform level, reducing the cost and time of development while forcing all applications to "speak the same language." All that is needed to make on-demand tactical application delivery a reality is a plug-in framework that takes advantage of the big data we already have aboard ships and available at both the operational and tactical levels of war.
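A minimal sketch of the plug-in pattern implied by the widget carousel follows, with hypothetical names that do not correspond to the actual Agile Core Services API.

```python
# Sketch of a plug-in pattern in the spirit of the widget framework described above.
# Names are hypothetical; this is not the actual Agile Core Services API.

WIDGET_REGISTRY = {}

def widget(name):
    """Decorator that registers a data-handling widget under a common interface."""
    def register(cls):
        WIDGET_REGISTRY[name] = cls
        return cls
    return register

@widget("terrain_map")
class TerrainMapWidget:
    def render(self, data_source):
        # In a real system this would pull from shared data services.
        return f"terrain map built from {data_source}"

# A watch team could then compose an application from registered widgets:
app = [WIDGET_REGISTRY[name]() for name in ("terrain_map",)]
```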

Previous articles in the United States Naval Institute’s magazine Proceedings have argued for thin-client solutions aboard warships,2 leveraging the CANES network program to ultimately achieve network efficiency that can remove “fat clients” (standard computer desktops) from the architecture to be replaced by thin or zero-clients (user workstation nodes with virtualized desktops and no onboard storage or input devices beyond keyboard and mouse). Removing clients from the equation eases the burden on shipboard technicians, consolidates the information security posture, and overall presents a more efficient network management picture through smart automation that makes better use of available manpower. HCI is the architecture solution that will eventually enable a full-scale, afloat, thin-client solution.

Hyperconverged.org, a website dedicated to making the case for HCI,3 lists ten compelling advantages that HCI brings to any IT infrastructure, including:

  • Focus on software-defined data centers to allow faster software modernization and more agile vulnerability patching
  • Use of commercial off the shelf (COTS) commodity hardware that provides failure avoidance without the additional costs
  • Centralized systems and management
  • Enhanced agility in network management, automation, virtualization of operating systems, and shared resources across a common resource manager (such as hypervisor)
  • Improved scalability and efficiency
  • Potentially lower costs (caveat: in the commercial sector this may be truer than in the government sector, but smart contract competitions and vendor choices can drive down costs for the government as well)
  • Consolidated data protection through improved backup and recovery options, more efficient resource utilization, and faster network management tools

The advantages of HCI are numerous, and represent the true next step in IT architecture that will enable future software capabilities. How can we, as warfighters, take advantage of this emerging technology? It cannot be overstated that our current processes for procuring and delivering software-based services and capabilities must be revamped to keep pace with industry and take advantage of the speed and agility that HCI brings.

Faster, More Efficient Application Development is the Next Step

In our current hardware development methodology, programs of record within the Department of Defense (DoD) have little difficulty determining a clear modernization path that fits within the cost, schedule, and performance constraints outlined by the DoD acquisition framework. However, software development is an entirely different story, and is no longer agile enough to suit our needs. If we can iterate hardware infrastructure at near the speed of industry, then software and application development becomes the pacing function that we must address before we can realize the opening scenario of this essay.

The key term when discussing the speed of system development is agility, defined by the Massachusetts Institute of Technology (MIT) as “the speed of operations within an organization and speed in responding to customers…or reduced cycle times.”4 The federal government, DoD in particular, has been struggling with acquisition reform for some time, and with the signing of the National Defense Authorization Act in fiscal year 2010, Congress placed renewed emphasis on the need to transform the acquisition process for information technology. Several programmatic changes to acquisition helped (such as the approval of the “IT Box” programmatic framework in the joint requirements process), but the agility of software development and modernization remains challenged. Ensuring proper testing and evaluation (T&E) methodology, bureaucratic approval processes to ensure affordability, joint interoperability testing, and lengthy proof-in testing are just some of the processes facing software applications prior to gaining approval for full-rate production and fielding to the warfighter.

Matthew Kennedy and Lieutenant Colonel Dan Ward (U.S. Air Force), in a 2012 article for the Defense Acquisition University, argued for agility in system development by discussing flaws in the current "agile software development" model.5 Developed in the early 2000s, this model is not as agile as the name would imply: it still defines requirements in advance, which leaves little room for innovation or rapid, iterative change at the speed of industry. Exciting initiatives are being fielded in the commercial sector, such as cloud-based development and learning models and mobility technology, that many of the services could use to great effect. Innovative prototyping of disruptive technology at the service or component level of DoD, such as the now-disbanded Chief of Naval Operations' Rapid Innovation Cell (CRIC), proved that there are operational advantages to emerging technologies such as wearable mobile devices, if only we could "turn a tighter circle" within our acquisition framework and work with agility to field newer and better versions to the force.

Thankfully, we don’t have to reinvent the wheel when implementing a more agile software development framework; we must take lessons from industry and apply them to the unique needs of each of the DoD components. This may be easier said than done, but Kennedy and Ward, and indeed likely many other acquisition professionals and scholars, would agree that it is entirely possible if leadership demanded it, and the policies, procedures, and resourcing followed suit to support it. Kennedy and Ward offered a common set of software and business aspect practices to support agile practices that would allow a predictable, faster software refresh cycle (not just patches, but cumulative updates) to ensure software remains agile and relevant to the warfighter. Using small teams for incremental development, lean initiatives to shorten timelines, and continuous user involvement with co-located teams are just some of the practices offered.6

Improving our software development and modernization framework to be even more agile than it is now is necessary considering the recent industry shift to software-as-a-service and cloud-based business models. No longer will software versions be deliberate releases, but rather iterative updates such as Microsoft’s “current branch for business” (CBB) model. With this model, Microsoft envisions that Windows 10 could be the last “version” of Windows to be released, which will then be built upon in future “service pack-like” updates every 12-18 months. Organizations that do not update their operating systems to the latest CBB will be left behind with unsupported versions. Not only does such a change demand a rapid speed-to-force update solution for DoD, but it represents a disruptive process change that will ultimately allow us to reach the opening scenario’s on-demand tactical application process, leveraging big data in a way that units at the tactical edge have never done before – and in a way that may never have been imagined by the system’s original developers.

Hyper-convergence infrastructure, together with agility-based application development and modernization, represents a near-term possibility that will enable true innovation at the tactical level of war and put the power of information superiority into the hands of the warfighter. While re-developing the acquisition framework to achieve this may be difficult, it is entirely possible and, many would say, necessary if DoD is to keep pace with emerging threats, take advantage of emerging technology and innovation, and ultimately retain its status as the best equipped and trained force the world has ever known.

Artificial Intelligence: The Next AEGIS Combat System

Now let’s imagine another scenario. USS LYNDON B. JOHNSON (DDG 1002), last of the Zumwalt-class destroyer line and used primarily to test emergent technology prototypes in real-world scenarios, slips silently through the South China Sea in the dead of night. She is the first ship in the U.S. Navy to possess Nelson, a recursively-improving artificial intelligence (RIAI). Utilizing an HCI supercomputer core, Nelson acts as an integrator for the various shipboard combat systems in a similar concept to today’s AEGIS Combat System, except much faster and with machine-speed environmental adaption.

American relations with China have broken down, resulting in a shooting war in the South China Sea that threatens to spill into the Pacific proper, and eventually reach Hawaii. In an effort to change the dynamic, DDG-1002 forward deploys in stealth to collect intelligence on enemy force disposition and, if the opportunity presents itself, offer a first-strike capability to the U.S. Pacific Command. JOHNSON is spotted by a surface action group of three Chinese destroyers, who take immediate action by firing a salvo of anti-ship cruise missiles followed by surface gunnery fire once in range.

At the voice command of the Tactical Action Officer, Nelson goes to work, taking control of the ship’s self-defense system and prioritizing targets in a similar fashion to Aegis, only much faster, while constantly providing voice feedback on system readiness, target status, and battle damage assessments through the internal battle circuit, essentially acting as a member of the CIC team. Nelson’s adaptability as an AI allows it to evolve its tactical recommendations based on the environment and the sensory input from the ship’s 3D and 2D radars, intelligence feeds, and even the voice reports over the battle circuit. Compiling the tactical picture on a large display in CIC, Nelson simultaneously responds to threats against the ship while providing a fused battle management display to the Captain and Tactical Action Officer. The RIAI does much to lift the fog of war, and automates enough of the ship’s defensive and information-gathering functions to allow the humans to focus on tactically employing the ship to stop the threat rather than reacting to it.

While hyper-convergence, coupled with agile and rapidly-developed software innovation, is the emerging technology, recursively-improving artificial intelligence is the ultimate disruptive technology in the near to medium-term and represents the giant leap forward that many research and development efforts are striving towards. AI has often been relegated to the work of science fiction, and while many futurists see it as the inevitable “singularity” to happen as soon as the mid-21st century, it has not quite gained acceptance in the mainstream technical community. What must be focused on from a warfighter’s perspective is the near-term (within the next 30-50 years) prospects of advances in quantum computing, neural networks, robotics, nanotechnology, and hyper-convergence. These advances could put us on a path towards artificial intelligence within the lifetime of generations currently serving or about to serve in the armed forces.

The debate over whether recursively self-improving artificial intelligence is possible continues,7 with some theorists stating that such an AI cannot be achieved because intelligence could be "upper bounded" in a way that transcends processor speed, available memory, and sensor resolution improvements. Others suggest that intelligence "is the ability to find patterns in data"7 and that, regardless of the more fringe theories surrounding AI, transhumanism, and the ontological discussions of the singularity, "a sub-human level system capable of self-improvement can't be excluded."8 It is the sub-human AI, capable of adapting to changing data patterns, that makes a combat system AI an exciting near-future prospect.

Conclusion

This article presented two hypothetical scenarios. In the near term, a Navy watchstander takes advantage of a hyper-converged network environment aboard a U.S. Navy warship to rapidly develop a tactical application that draws on disparate databases and cloud data resources, ultimately producing a battle management aid for the ship's next mission. This scenario relied on two emerging technological concepts: hyper-convergence in hardware infrastructure, a reality that major defense acquisition programs such as the Navy's CANES have already resourced and are on track to field in the coming years, and agile software development in defense acquisition, a conceptual framework that must be developed to ensure more rapid and innovative software capabilities are delivered to the force.

The funding for these technological advances must remain stable to deliver HCI to our operating forces as a hardware baseline for future development, and policy makers must continue to find efficiencies in IT acquisition that lead to agile software development if we are to fully exploit the efficiencies HCI brings. Additionally, DoD IT leaders must think critically and dynamically about how future software updates will be tested and fielded rapidly; our current lengthy testing and evaluation cycle is no longer compatible with the speed of industry's vulnerability patching, a fluid content upgrade schedule, or the pace of adversarial threats.

The second scenario describes the near-future incorporation of recursively-improving artificial intelligence within a combat system, building upon hyper-converged hardware and recursively improving software to deliver a warfighting platform that can defend itself more rapidly and learn from its tactical situation. The simple fact is that technology is changing at a pace no one dared dream of as recently as 20 years ago, and if we don't build it, our adversaries will. A 2016 Reuters article, reported in other media outlets as well, showcases the People's Republic of China's (PRC) desire to build AI-integrated weapons,9 quoting Wang Changqing of China Aerospace and Industry Corp as saying "our future cruise missiles will have a very high level of artificial intelligence and automation." DoD must adapt its processes to keep pace and remain the world's leader in incorporating emerging and disruptive technology into its warfighting systems.

Travis Howard is an active duty U.S. Naval Officer assigned to the staff of the Chief of Naval Operations in Washington D.C. He holds advanced degrees and certifications in cybersecurity policy and business administration, and has over 16 years of enlisted and commissioned experience in surface warfare and Navy information systems. The views expressed here are solely those of the author and do not necessarily reflect those of the Department of the Navy, Department of Defense, or the United States Government.

References

1. Scott Morris. “Putting The ‘Hyper’ Into Convergence.” NetworkWorld Asia 12.2 (2015): 44. 28 Jan 2017.

2. Travis Howard, LT, USN. “’The Next Generation’ of Afloat Networking.” Proceedings Magazine, Mar 2015, Vol. 141/3/1,345

3. Hyperconverged.org. “Ten Things Hyperconverged Can Do For You: Leveraging the Benefits of Hyperconverged Infrastructure.” Retrieved Feb 2 2017, http://www.hyperconverged.org/10-things-hyperconvergence-can-do/

4. Matthew Kennedy & Lt Col Dan Ward. “Inserting Agility In System Development.” Defense Acquisition Research Journal: A Publication Of The Defense Acquisition University 19.3 (2012): 249-264. 4 Feb 2017.

5. Ibid

6. Ibid

7. Roman Yampolskiy. “From Seed AI to Technological Singularity via Recursively Self-Improving Software.” Cornell University Library. arXiv:1502.06512 [cs.AI]. 23 Feb 2015.

8. Ibid

9. Ben Blanchard. “China eyes artificial intelligence for new cruise missiles.” Reuters, World News. 19 Aug 2016, http://www.reuters.com/article/us-china-defence-missiles-idUSKCN10U0EM

Featured Image: Electronic Warfare Specialist 2nd Class Sarah Lanoo from South Bend, Ind., operates a Naval Tactical Data System (NTDS) console in the Combat Direction Center (CDC) aboard the USS Abraham Lincoln as it conducts combat operations in support of Operation Southern Watch. (U.S. Navy photo by Photographer’s Mate 3rd Class Patricia Totemeier)