Tag Archives: autonomy

Will Artificial Intelligence Be Disruptive to Our Way of War?

By Marjorie Greene


At a recent Berkshire Hathaway shareholder meeting, Warren Buffett said that Artificial Intelligence – the collection of technologies that enable machines to learn on their own – could be “enormously disruptive” to our human society. More recently, Stephen Hawking, the renowned physicist, predicted that humanity may survive on planet Earth for only another hundred years. He believes that, because of the development of Artificial Intelligence, machines may no longer simply augment human activities but could replace and eliminate humans altogether in the command and control of cognitive tasks.

In my recent presentation to the annual Human Systems conference in Springfield, Virginia, I suggested that there is a risk that human decision-making may no longer be involved in the use of lethal force as we capitalize on the military applications of Artificial Intelligence to enhance war-fighting capabilities. Humans should never relinquish control of decisions regarding the employment of lethal force. How do we keep humans in the loop? This is an area of human systems research that will be important to undertake in the future.       


Norbert Wiener, in his book Cybernetics, was perhaps the first person to discuss the notion of “machine learning.” Building on behavioral models of animal collectives such as ant colonies and flocks of birds, he describes a process called “self-organization” by which humans – and, by analogy, machines – learn by adapting to their environment. Self-organization refers to the emergence of higher-level properties of the whole that are not possessed by any of the individual parts making up the whole. The parts act locally on local information, and global order emerges without any need for external control. The expression “swarm intelligence” is often used to describe the collective behavior of self-organized systems that allows the emergence of “intelligent” global behavior unknown to the individual systems.
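The mechanics of self-organization are easy to demonstrate in a few lines of code. In the toy Python sketch below (all numbers invented for illustration), each agent adjusts its heading toward the average of its nearby neighbors only; no agent sees the whole group, yet the spread of headings shrinks and global alignment emerges:

```python
import random

def step(headings, radius=2):
    """One update: each agent relaxes toward the average heading of
    its local neighborhood only -- there is no global controller."""
    new = []
    for i, h in enumerate(headings):
        lo, hi = max(0, i - radius), min(len(headings), i + radius + 1)
        local_avg = sum(headings[lo:hi]) / (hi - lo)
        new.append(h + 0.5 * (local_avg - h))  # move halfway toward neighbors
    return new

random.seed(1)
headings = [random.uniform(0.0, 360.0) for _ in range(30)]
initial_spread = max(headings) - min(headings)

for _ in range(50):
    headings = step(headings)

final_spread = max(headings) - min(headings)
print(initial_spread > final_spread)  # True: order emerges from local rules
```

Flocking models such as Reynolds’ “boids” work the same way in two or three dimensions, with separation and cohesion rules added alongside alignment.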

Swarm Warfare

Military researchers are especially concerned about recent breakthroughs in swarm intelligence that could enable “swarm warfare” – asymmetric assaults against major U.S. weapons platforms, such as aircraft carriers. The accelerating speed of computer processing, along with rapid improvements in autonomy-increasing algorithms, also suggests that the military may soon be able to perform a wider range of functions without needing every individual task controlled by humans.

Drones like the Predator and Reaper are still piloted vehicles, with humans controlling what the camera looks at, where the drone flies, and what targets to hit with the drone’s missiles. But CNA studies have shown that drone strikes in Afghanistan caused 10 times the number of civilian casualties compared to strikes by manned aircraft. And a recent book published jointly with the Marine Corps University Press builds on CNA studies in national security, legitimacy, and civilian casualties to conclude that it will be important to consider International Humanitarian Law (IHL) in rethinking the drone war as Artificial Intelligence continues to flourish.

The Chinese Approach

Meanwhile, many Chinese strategists recognize the trend towards unmanned and autonomous warfare and intend to capitalize upon it. The PLA has incorporated a range of unmanned aerial vehicles into its force structure throughout all of its services. The PLA Air Force and PLA Navy have also started to introduce more advanced multi-mission unmanned aerial vehicles. It is clear that China is intensifying the military applications of Artificial Intelligence and, as we heard at a recent hearing by the Senate’s U.S. – China Economic and Security Review Commission (where CNA’s China Studies Division also testified), the Chinese defense industry has made significant progress in its research and development of a range of cutting-edge unmanned systems, including those with swarming capabilities. China is also viewing outer space as a new domain that it must fight for and seize if it is to win future wars.

Armed with artificial intelligence capabilities, China has moved beyond just technology developments to laying the groundwork for operational and command and control concepts to govern their use. These developments have important consequences for the U.S. military and suggest that Artificial Intelligence plays a prominent role in China’s overall efforts to establish an effective military capable of winning wars through an asymmetric strategy directed at critical military platforms.

Human-Machine Teaming

Human-machine teaming is gaining importance in national security affairs, as evidenced by a recent defense unmanned systems summit conducted internally by DoD and DHS in which many of the speakers explicitly referred to efforts to develop greater unmanned capabilities that intermix with manned capabilities and future systems.

Examples include: Michael Novak, Acting Director of the Unmanned Systems Directorate, N99, who spoke of optimizing human-machine teaming to multiply capabilities and reinforce trust (incidentally, the decision was made to phase out N99 because unmanned capabilities are being “mainstreamed” across the force); Bindu Nair, the Deputy Director, Human Systems, Training & Biosystems Directorate, OASD, who emphasized efforts to develop greater unmanned capabilities that intermix with manned capabilities and future systems; and Kris Kearns, representing the Air Force Research Lab, who discussed current efforts to mature and update autonomous technologies and manned-unmanned teaming.


Finally, it should be noted that the Defense Advanced Research Projects Agency (DARPA) has recently issued a relevant Broad Agency Announcement (BAA) for its OFFensive Swarm-Enabled Tactics (OFFSET) program. Notably, it includes a section asking for the development of tactics that address collaboration between human systems and the swarm, especially in urban environments. This should reassure the human systems community that future researchers will not forget them, even as swarm intelligence makes it possible to achieve global order without any need for external control.


As we capitalize on the military applications of Artificial Intelligence, there is a risk that human decision-making may no longer be involved in the use of lethal force. In general, Artificial Intelligence could indeed be disruptive to our human society by replacing the need for human control, but machines do not have to replace humans in the command and control of cognitive tasks, particularly in military contexts. We need to figure out how to keep humans in the loop. This area of research would be a fruitful one for the human systems community to undertake in the future.  

Marjorie Greene is a Research Analyst with the Center for Naval Analyses. She has more than 25 years’ management experience in both government and commercial organizations and has recently specialized in finding S&T solutions for the U.S. Marine Corps. She earned a B.S. in mathematics from Creighton University, an M.A. in mathematics from the University of Nebraska, and completed her Ph.D. coursework in Operations Research at The Johns Hopkins University. The views expressed here are her own.

Featured Image: Electronic Warfare Specialist 2nd Class Sarah Lanoo from South Bend, Ind., operates a Naval Tactical Data System (NTDS) console in the Combat Direction Center (CDC) aboard USS Abraham Lincoln. (U.S. Navy photo by Photographer’s Mate 3rd Class Patricia Totemeier)

Distributed Lethality: Old Opportunities for New Operations

Distributed Lethality Topic Week

By Matthew Hipple

The BISMARCK, a single ship capable of striking fear into the heart of an entire nation – a threat sufficient to muster an entire fleet for the hunt.

The essence of naval warfare has always been opportunism – from the vague area of gravity generated by an in-port “fleet in being,” to the fleet-rallying threat generated by even a BISMARCK or RANGER alone. The opportunity is generated by forces more mobile and self-contained than any army, more persistent than an air force, and empowered to act with no connection to higher authority in a domain that leaves no trace. It is that ability of a small number of independent ships, or even a single vessel, to provide opportunity and create “battlespace complexity” that is distributed lethality’s core. Distributed lethality is not naval warfighting by new principles; it is a return to principles.


The best defense is not an overwhelming obsession with defense.

Unfortunately, the virtuous autonomy of the past was, in part, only protected by the limited technology of the day. As technology allowed, decentralized execution was replaced by the luxury and false confidence of constant connection to higher authority through an electronic umbilical. It is the kind of devolution that turned into Secretary Gates’ nightmare: “I was touring a [Joint Special Operations Command] in Kabul and discovered a direct line to somebody on the NSC, and I had them tear it out while I was standing there.” In parallel, America began the ahistorical project of investing all offensive opportunity not even in a single class of ship, but in a single ship surrounded by a fleet obsessed with its defense. As early as 1993, President Clinton stated that when a crisis broke out, his first question would be, “where is the nearest carrier.” Sorry, other ships! For the Navy to sensibly rebalance, distributed lethality must succeed. For distributed lethality to succeed, we must decentralize and de-tether mission command, weapons release authority, and weapons support systems.

Decentralized and disconnected methods of command must be embraced, because centralization is only an imagined luxury. Modern centralization rests on the assumption that we will have the connectivity appropriate for it. That assumption is no longer tenable in a world of increasingly advanced peers and Hyundai-ized lesser adversaries. Anti-Access/Area-Denial (A2/AD) depends on opponents making themselves visible, and electronic emission is a critical part of that visibility. A2/AD forces will also inevitably seek to disrupt our C2 connections.

“Permission? We don’t need no stinkin’ permissions.” “The Battle for Fox Green Beach,” watercolor by Dwight Shepler, showing the Gleaves-class destroyer USS Emmons (DD 457) in the foreground and her sister ship, USS Doyle, to the right, within a few hundred yards of the landing beach, mixing it up with German shore batteries on D-Day.

The current major-node CWC concept will need to be broken down into a more compact, internal model designed around the Hunter-Killer Surface Action Group. Rules of Engagement must be flexible to the point that American commanders need not look over their shoulders to a higher OPCON. The destroyer COs at Normandy did not wait for direction or request approval before shifting from small-boat screening to shore bombardment from the shoals. They recognized the opportunity – the necessity – and acted on their own initiative.

In contrast, today it might be a regular occurrence to double- and triple-check our actions with American OPCON while operating under NATO TACON off Somalia. American COs could use the freedom to make pragmatic, on-the-spot decisions, not only for immediate concerns of mission effectiveness but as representatives of their higher military command and, potentially, the state. Coalition commanders would then have greater trust in the spot decisions of their American counterparts, rather than worry that those decisions sit precariously atop a shifting, several-step staffing process.

Though encouraging equivalent RoE flexibility for coalition partners may be challenging, our autonomy may encourage partners to interpret their home-nation guidance with a flexibility equivalent to their trust in the US commander they fight beside. That lack of hesitancy will be critical during a conflict, and in that sudden moment in the South China Sea or Mediterranean when a small SAG of coalition partners finds itself in the midst of a conflict that has just begun. Imposing the peacetime discipline necessary to trust the COs we have trained, prepared, and empowered to do their jobs is the only thing that will jump-start a shift in a mind-set now dominated by subordination.

In the execution of more flexible orders, ships must be re-invested with control of their own weapon systems. COs oversee non-nuclear weapon systems that they do not control – systems that are solely the purview of off-ship authorities. In particular, as weapon systems like Tomahawk become deployable for ASuW, the off-ship authorities’ iron grip on their control must break. This decentralization also matters outside the stand-up fight at sea. The organic ability to program and deploy Tomahawk missiles for land strike allows surface ships to execute attacks of opportunity on land infrastructure, or to execute and support opportunistic maritime raids as groups of marines harass adversaries or turn isolated islands into temporary logistics or aviation operations bases. Whether winning the sudden and deadly fight in the littoral environment or integrating with opportunistic amphibious operations, the surface fleet could find some inspiration in USS BARB, the only submarine in WWII to “sink” a train, courtesy of its crew-turned-amateur commandos. From Somalia to the South China Sea, naval commanders should be told what to do, not how – and be allowed to do it. The less reliant the force is on these ephemeral links, and the less these links are unnecessarily exercised in peacetime, the greater a force’s instinct to operate independently and with confidence in an imposed or needed silence.

CAPT Ramius, relieved to discover he is not dealing with “some buckaroo.”

There may be a level of discomfort with decentralization and disconnection. If leaders fear the impact of a “strategic corporal,” then a “buckaroo,” as CAPT Ramius would call him, must be truly horrifying. That fear, however, would reflect a failure of the system to produce leaders, not any flaw in the importance and operational effectiveness of independence. There is a reason the US once considered the Department of the Navy separate from, and peer to, the Department of War – recognizing the institution and its individual commanders as unique peacetime and wartime tools for strategic security and diplomacy. Compare today’s autonomy and trust with that invested in Commodore Perry during his mission to Japan, or in Commodore Preble’s mission to seek partnership with Naples during the First Barbary War. Reliance on call-backs and outside authority will gut a naval force’s ability to operate in a distributed manner when those connections disappear. Encouraging independence by default will ensure the muscle memory is there when needed.

Finally, distributed lethality requires the hardware to allow surface combatants to operate as effective offensive surface units in small groups. On the kinetic end of the spectrum, upgraded legacy weapons and the introduction of new weapon systems have been extensively discussed since the 2015 Surface Navy Association National Symposium, when VADM Rowden and RADM Fanta rolled out distributed lethality in force. However, weapon systems are only as good as the available detection systems. Current naval operations rely heavily on shore-based assets, assets from the carrier, and joint assets for reconnaissance. In the previous Distributed Lethality topic week, LT Glynn argued for a suite of surveillance assets, some organic to individual ships, but most deploying from the shore or from carriers. Presuming a denied environment, and commanders empowered to seek and exploit opportunities within their space, the better argument is for greater emphasis on ship-organic assets. They may not provide the best capabilities, but capabilities are worthless if assets cannot find, reach, or communicate with a Hunter-Killer SAG operating in silence, whether self-imposed or imposed by the enemy. Organic assets also keep an HKSAG from being completely at the mercy of the limitations of a Navy or joint asset coordinator – while simultaneously freeing those theater assets for higher-level operations and opportunity exploitation.

Ultimately, distributed lethality is the historical default mode of independent naval operations given a new name, owing to the strength of the current carrier-based operational construct. Admiral Halsey ordered CAPT Arleigh Burke to intercept a Japanese convoy at Bougainville: “GET ATHWART THE BUKA-RABAUL EVACUATION LINE ABOUT 35 MILES WEST OF… IF ENEMY CONTACTED YOU KNOW WHAT TO DO.” The surface fleet must embrace a culture that assumes our commanders “KNOW WHAT TO DO.” We must build an operational construct in which acting on that instinct is practiced and exercised in peacetime, for wartime. The operational and diplomatic autonomy, as well as the OLD IRONSIDES-style firepower, of single surface combatants is necessary to rebalance a force gutted of its many natural operational advantages. Distributed lethality must return the surface force to its cultural and operational roots of distributed autonomy, returning to the ideas that will maximize opportunity to threaten, undermine, engage with, and destroy the adversary.

Matthew Hipple is the President of CIMSEC and an active duty surface warfare officer. He also leads our Sea Control and Real Time Strategy podcasts, available on iTunes.


Sea Control 92 – Autonomy

Weapon autonomy is a broad term around which swirls an incredible amount of debate. Paul Scharre, Michael Horowitz, and Adam Elkus join Sea Control to discuss the nature of autonomy, how to imagine its use in an operational environment, and how to think about the debate surrounding it.

DOWNLOAD: Sea Control 92 – Autonomy

Music: Sam LaGrone


Lethal Autonomy in Autonomous Unmanned Vehicles

Guest post written for UUV Week by Sean Welsh.

Should robots sink ships with people on them in time of war? Will it be normatively acceptable and technically possible for robotic submarines to replace crewed submarines?

These debates are well worn in the UAV space. Ron Arkin’s classic work Governing Lethal Behavior in Autonomous Robots has generated considerable attention since it was published in 2009. The centre of his work is the “ethical governor” that would give normative approval to lethal decisions to engage enemy targets. He claims that International Humanitarian Law (IHL) and Rules of Engagement can be programmed into robots in machine-readable language. He illustrates his work with a prototype that he runs through several test cases. In one, the drone does not bomb the Taliban because they are in a cemetery and targeting “cultural property” is forbidden. In another, the drone selects an “alternative release point” (i.e., it waits for the tank to move a certain distance) and only then fires a Hellfire missile, because the target (a T-80 tank) was initially too close to civilian objects.
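Arkin’s actual architecture is considerably richer, but its flavor can be conveyed with a deliberately simplified, hypothetical sketch in Python. Every name and threshold below is invented for illustration; the point is only that IHL-style constraints can be expressed as explicit, machine-checkable rules that gate weapon release:

```python
# Toy sketch of a rule-based "ethical governor" (names and numbers
# are hypothetical, not Arkin's actual implementation).

PROTECTED_SITES = {"cemetery", "hospital", "school", "place_of_worship"}
MIN_STANDOFF_M = 200  # assumed minimum distance from civilian objects

def release_approved(target_type, location_type, dist_to_civilian_m):
    """Approve weapon release only if every IHL-style check passes."""
    if target_type != "military_objective":
        return False  # discrimination: only military objectives
    if location_type in PROTECTED_SITES:
        return False  # protected/cultural property is off-limits
    if dist_to_civilian_m < MIN_STANDOFF_M:
        return False  # too close: wait for an alternative release point
    return True

# The cemetery case: engagement denied despite a valid target.
print(release_approved("military_objective", "cemetery", 500))     # False
# The T-80 case: denied while close to civilian objects...
print(release_approved("military_objective", "open_ground", 150))  # False
# ...approved once it has moved a sufficient distance away.
print(release_approved("military_objective", "open_ground", 500))  # True
```

The hard part, of course, is not the rule table but reliably classifying targets, locations, and distances from sensor data in the first place.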

Could such an “ethical governor” be adapted to submarine conditions? One would think that the lethal targeting decisions a Predator UAV would have to make above the clutter of land would be far more difficult than the targeting decisions a UUV would have to make. The sea has far fewer civilian objects in it. Ships and submarines are relatively scarce compared to cars, houses, apartment blocks, schools, hospitals and indeed cemeteries. According to the IMO there are only about 100,000 merchant ships in the world. The number of warships is much smaller, a few thousand.

Diagram of the ‘ethical governor’

There seems to be less scope for major targeting errors with UUVs. Technology to recognize shipping targets is already installed in naval mines. At its simplest, developing a hunter-killer UUV would be a matter of putting the smarts of a mine programmed to react to distinctive acoustic signatures into a torpedo – which has already been done. If a UUV were to operate at periscope depth, it is plausible that object-recognition technology (Treiber, 2010) could be used, as warships are large and distinctive objects. Discriminating between a prawn trawler and a patrol boat is far easier than discriminating human targets in counter-insurgency and counter-terrorism operations. There are no visual cues to distinguish regular shepherds in Waziristan – who have beards, wear robes, carry AK-47s, face Mecca to pray, and so on – from Taliban combatants who look exactly the same; targeting has to be based on protracted observations of behaviour. Operations against a regular navy in a conventional war on the high seas would not pose such extreme discrimination challenges.

A key difference between the UUV and the UAV is the viability of telepiloting. Existing communications with submarines are restricted to VLF and ELF frequencies because of the properties of radio waves in salt water. These frequencies require very large antennas and offer very low transmission rates, so they cannot be used to transmit complex data such as video. VLF can support a few hundred bits per second; ELF is restricted to a few bits per minute (Baker, 2013). Thus, at present, remote operation of submarines is limited to the length of a cable. UAVs, by contrast, can be telepiloted via satellite links: drones flying over Afghanistan can be piloted from Nevada.
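Those data rates make the impracticality of telepiloting an untethered submarine easy to quantify. As a back-of-the-envelope illustration (assuming, hypothetically, 300 bit/s for VLF and 3 bit/min for ELF, consistent with “a few hundred bits per second” and “a few bits per minute”):

```python
# Time to transmit a single modest 100 kB image at assumed
# VLF (300 bit/s) and ELF (3 bit/min) data rates.
image_bits = 100 * 1024 * 8  # 819,200 bits

vlf_seconds = image_bits / 300   # roughly three-quarters of an hour
elf_minutes = image_bits / 3     # roughly half a year

print(f"VLF: {vlf_seconds / 60:.0f} minutes per image")
print(f"ELF: {elf_minutes / (60 * 24):.0f} days per image")
```

Even low-quality video needs on the order of 100 kbit/s, more than two orders of magnitude beyond the assumed VLF channel, which is why remote piloting with live imagery is off the table without a tether.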

For practical purposes this means the “in the loop” and “on the loop” variants of autonomy would only be viable for tethered UUVs. Untethered UUVs would have to run in “off the loop” mode. Were such systems to be tasked with functions such as selecting and engaging targets, they would need something like Arkin’s ethical governor to provide normative control.
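The three variants can be captured in a small sketch. The rule of thumb below is a hypothetical simplification of the paragraph above: human-supervised modes require a live data link, which only a tether can currently provide under water:

```python
from enum import Enum

class Autonomy(Enum):
    IN_THE_LOOP = "human must approve each engagement"
    ON_THE_LOOP = "human supervises and can veto in real time"
    OFF_THE_LOOP = "system selects and engages without human input"

def viable_modes(has_live_link: bool):
    """Human oversight presumes connectivity; without a tether,
    an underwater vehicle can only run fully autonomously."""
    if has_live_link:
        return list(Autonomy)  # all three variants are possible
    return [Autonomy.OFF_THE_LOOP]

# An untethered UUV is restricted to the fully autonomous mode.
print([m.name for m in viable_modes(has_live_link=False)])  # ['OFF_THE_LOOP']
```

The enum values here are shorthand for the distinctions drawn in the wider autonomy debate, not official DoD definitions.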

DoD policy directive 3000.09 (Department of Defense, 2012) would apply to the development of any such system by the US Navy. It may be that a Protocol VI of the Convention on Certain Conventional Weapons (CCW) emerges that may regulate or ban “off the loop” lethal autonomy in weapons systems. There are thus regulatory risks involved with projects to develop UUVs capable of offensive military actions.

Even so, in a world in which a small naval power such as Ecuador can assemble a working USV from commodity components for anti-piracy operations (Naval-technology.com, 2013), the main obstacle is not technical; it lies in persuading military decision-makers to trust the autonomous options. Trust of autonomous technology is a key issue. As the Defense Science Board (2012) puts it:

A key challenge facing unmanned system developers is the move from a hardware-oriented, vehicle-centric development and acquisition process to one that addresses the primacy of software in creating autonomy. For commanders and operators in particular, these challenges can collectively be characterized as a lack of trust that the autonomous functions of a given system will operate as intended in all situations.

There is evidence that military commanders have been slow to embrace unmanned systems. Many will mutter sotto voce: to err is human, but to really foul things up requires a computer. The US Air Force dragged its feet on drones, and yet the fundamental advantages of unmanned aircraft over manned aircraft have turned out to be compelling in many applications. It is frequently said that the F-35 will be the last manned fighter the US builds. The USAF has published a roadmap detailing a path to “full autonomy” by 2047 (United States Air Force, 2009).

Similar advantages of unmanned systems apply to ships. Just as a UAV can be smaller than a regular plane, so a UUV can be smaller than a regular ship. This reduces requirements for engine size and for the elements of the craft that support human life at altitude or depth. UAVs do not need toilets, galleys, pressurized cabins and so on; in UUVs, there would be no need to generate oxygen for a crew and no need for sleeping quarters. Such savings would reduce operating costs and risks to the lives of crew. In war, as the Spanish captains said, victory goes to him who has the last escudo. Stress on reducing costs is endemic in military thinking, and political leaders are highly averse to casualties coming home in flag-draped coffins. If UUVs can deliver more military bang for fewer bucks and no risk to human crews, then they will be adopted in preference to crewed alternatives as the capabilities of vehicles controlled entirely by software are proven.

Such a trajectory is arguably as inevitable as that of Garry Kasparov vs. Deep Blue. In the shorter term, however, it is not likely that navies will give up on human crews. Rather, UUVs will be employed as “force multipliers” to increase the capability of human crews and to reduce risks to humans. UUVs will have uncontroversial applications in mine countermeasures and in intelligence and surveillance operations. They are more likely to be deployed as relatively short-range vehicles performing tasks that are non-lethal. Submarine-launched USVs attached to their “mother” subs by tethers could provide video communications of the surface without the sub having to come to periscope depth. Such USVs could in turn launch small UAVs to enable the submarine to engage in reconnaissance from the air. The Raytheon SOTHOC (Submarine Over the Horizon Organic Capabilities) launches a one-shot UAV from a launch platform ejected from the sub’s waste disposal lock. Indeed, small UAVs such as the Switchblade (Navaldrones.com, 2015) could be weaponized with modest payloads and used to attack the bridges or rudders of enemy surface ships, as well as to extend the reach of the periscope beyond the horizon. Future aircraft carriers may well be submarines.

AeroVironment Switchblade UAV

In such cases, the UUV, USV, and UAV “accessories” to the human-crewed submarine would increase capability and decrease risk. As humans would pilot such devices, there is no requirement for an “ethical governor,” though such technology might be installed anyway to advise human operators and to take over in case the network link failed.

However, a top priority in naval warfare is the destruction or capture of the enemy. Many say that it is inevitable that robots will be tasked with this mission and that robots will be at the front line in future wars. The key factors will be cost, risk, reliability and capability. If military capability can be robotized and deliver the same functionality at similar or better reliability and at less cost and less risk than human alternatives, then in the absence of a policy prohibition, sooner or later it will be.

Sean Welsh is a Doctoral Candidate in Robot Ethics at the University of Canterbury. His professional experience includes 17 years working in software engineering for organizations such as British Telecom, Telstra Australia, Fitch Ratings, James Cook University, and Lumata. The working title of Sean’s doctoral dissertation is “Moral Code: Programming the Ethical Robot.”


Arkin, R. C. (2009). Governing Lethal Behavior in Autonomous Robots. Boca Raton: CRC Press.

Baker, B. (2013). Deep secret – secure submarine communication on a quantum level.   Retrieved 13th May, 2015, from http://www.naval-technology.com/features/featuredeep-secret-secure-submarine-communication-on-a-quantum-level/

Defense Science Board. (2012). The Role of Autonomy in DoD Systems. Retrieved from http://fas.org/irp/agency/dod/dsb/autonomy.pdf

Department of Defense. (2012). Directive 3000.09: Autonomy in Weapons Systems.   Retrieved 12th Feb, 2015, from http://www.dtic.mil/whs/directives/corres/pdf/300009p.pdf

Navaldrones.com. (2015). Switchblade UAS.   Retrieved 28th May, 2015, from http://www.navaldrones.com/switchblade.html

Naval-technology.com. (2013). No hands on deck – arming unmanned surface vessels.   Retrieved 13th May, 2015, from http://www.naval-technology.com/features/featurehands-on-deck-armed-unmanned-surface-vessels/

Treiber, M. (2010). An Introduction to Object Recognition: Selected Algorithms for a Wide Variety of Applications. London: Springer.

United States Air Force. (2009). Unmanned Aircraft Systems Flight Plan 2009-2047.   Retrieved 13th May, 2015, from http://fas.org/irp/program/collect/uas_2009.pdf