Tag Archives: autonomy

Sea Control 92 – Autonomy

Weapon autonomy is a broad term around which swirls an incredible amount of debate. Paul Scharre, Michael Horowitz, and Adam Elkus join Sea Control to discuss the nature of autonomy, how to imagine its use in an operational environment, and how to think about the debate surrounding it.

DOWNLOAD: Sea Control 92 – Autonomy

Music: Sam LaGrone

CIMSEC content is and always will be free; consider a voluntary monthly donation to offset our operational costs. As always, it is your support and patronage that have allowed us to build this community – and we are incredibly grateful.


Lethal Autonomy in Autonomous Unmanned Vehicles

Guest post written for UUV Week by Sean Welsh.

Should robots sink ships with people on them in time of war? Will it be normatively acceptable and technically possible for robotic submarines to replace crewed submarines?

These debates are well-worn in the UAV space. Ron Arkin’s classic work Governing Lethal Behavior in Autonomous Robots has generated considerable attention since it was published in 2009. The centerpiece of his work is the “ethical governor” that would give normative approval to lethal decisions to engage enemy targets. He claims that International Humanitarian Law (IHL) and Rules of Engagement can be programmed into robots in machine-readable language. He illustrates his work with a prototype that runs through several test cases. In one, the drone does not bomb the Taliban because they are in a cemetery and targeting “cultural property” is forbidden. In another, the drone selects an “alternative release point” (i.e., it waits until the tank has moved a certain distance) and only then fires a Hellfire missile, because the target (a T-80 tank) was initially too close to civilian objects.
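At its core, such a governor is a veto layer that checks a proposed engagement against machine-readable constraints before weapon release. A minimal sketch of the idea might look like the following; the rule names and thresholds are invented for illustration and this is emphatically not Arkin’s actual implementation:

```python
# Hypothetical sketch of a rule-based "ethical governor": a veto layer
# that checks a proposed engagement against encoded constraints before
# weapon release. Zones and thresholds are invented for illustration.

FORBIDDEN_ZONES = {"cemetery", "hospital", "school"}   # "cultural property" etc.
MIN_CIVILIAN_STANDOFF_M = 200                          # illustrative, not a real RoE value

def governor_approves(target_type, zone, nearest_civilian_object_m):
    """Return True only if every encoded constraint is satisfied."""
    if zone in FORBIDDEN_ZONES:
        return False                  # protected sites may never be targeted
    if nearest_civilian_object_m < MIN_CIVILIAN_STANDOFF_M:
        return False                  # wait for an "alternative release point"
    return target_type == "military"  # only military objects are lawful targets

# Taliban in a cemetery: engagement vetoed.
print(governor_approves("military", "cemetery", 500))      # False
# T-80 after moving clear of civilian objects: engagement approved.
print(governor_approves("military", "open_terrain", 500))  # True
```

The hard part, of course, is not the veto logic but reliably producing the inputs – target classification and proximity to civilian objects – from sensor data.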

Could such an “ethical governor” be adapted to submarine conditions? One would think that the lethal targeting decisions a Predator UAV would have to make above the clutter of land would be far more difficult than the targeting decisions a UUV would have to make. The sea has far fewer civilian objects in it. Ships and submarines are relatively scarce compared to cars, houses, apartment blocks, schools, hospitals and indeed cemeteries. According to the IMO there are only about 100,000 merchant ships in the world. The number of warships is much smaller, a few thousand.

Diagram of the ‘ethical governor’

There seems to be less scope for major targeting errors with UUVs. Technology to recognize shipping targets is already installed in naval mines. At its simplest, developing a hunter-killer UUV would be a matter of putting the smarts of a mine programmed to react to distinctive acoustic signatures into a torpedo – which has already been done. If UUVs were to operate at periscope depth, it is plausible that object recognition technology (Treiber, 2010) could be used, as warships are large and distinctive objects. Discriminating between a prawn trawler and a patrol boat is far easier than discriminating human targets in counter-insurgency and counter-terrorism operations. There are no visual cues to distinguish regular shepherds in Waziristan – who have beards, wear robes, carry AK-47s, and face Mecca to pray – from Taliban combatants who look exactly the same; targeting has to be based on protracted observation of behaviour. Operations against a regular navy in a conventional war on the high seas would not pose such extreme discrimination challenges.
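The acoustic-signature discrimination that influence mines already perform can be sketched in miniature: compare an observed tonal spectrum against stored class signatures and fire only on a confident match. The band energies, class names, and threshold below are all invented for illustration; real mine logic is classified and far more sophisticated:

```python
# Toy sketch of acoustic-signature matching: compare an observed spectrum
# (here, four hypothetical band energies) against stored class signatures
# by cosine similarity. All numbers are invented for illustration.
import math

SIGNATURES = {
    "warship": [0.9, 0.1, 0.7, 0.3],   # hypothetical band-energy profile
    "trawler": [0.2, 0.8, 0.1, 0.6],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def classify(observed, threshold=0.95):
    """Return the best-matching class, or None if nothing matches confidently."""
    best = max(SIGNATURES, key=lambda k: cosine(observed, SIGNATURES[k]))
    return best if cosine(observed, SIGNATURES[best]) >= threshold else None

print(classify([0.88, 0.12, 0.72, 0.28]))  # "warship"
print(classify([1.0, 1.0, 1.0, 1.0]))      # None: ambiguous, no engagement
```

Note the conservative default: an ambiguous contact produces no engagement, which is the behaviour any normative control layer would have to enforce.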

A key difference between the UUV and the UAV is the viability of telepiloting. Existing communications with submarines are restricted to VLF and ELF frequencies because of the properties of radio waves in salt water. These frequencies require large antennas and offer very low transmission rates, so they cannot be used to transmit complex data such as video. VLF can support a few hundred bits per second; ELF is restricted to a few bits per minute (Baker, 2013). Thus, at the present time, remote operation of submarines is limited to the length of a cable. UAVs, by contrast, can be telepiloted via satellite links: drones flying over Afghanistan can be piloted from Nevada.
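A back-of-the-envelope calculation using the figures above shows just how far these links fall short of video telepiloting (the 10 kB frame size is an illustrative assumption for a single heavily compressed frame):

```python
# Back-of-the-envelope arithmetic: how long would one heavily compressed
# 10 kB video frame take over submarine-capable radio links? The frame
# size is an illustrative assumption; rates are rough figures from the text.

frame_bits = 10 * 1024 * 8   # one 10 kB compressed frame = 81,920 bits

vlf_bps = 300                # "a few hundred bits per second"
elf_bpm = 3                  # "a few bits per minute"

vlf_seconds = frame_bits / vlf_bps
elf_minutes = frame_bits / elf_bpm

print(f"VLF: {vlf_seconds / 60:.1f} minutes per frame")    # ~4.6 minutes
print(f"ELF: {elf_minutes / 60 / 24:.1f} days per frame")  # ~19 days
```

Minutes per frame over VLF and weeks per frame over ELF: orders of magnitude too slow for any real-time control loop, let alone video.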

For practical purposes this means the “in the loop” and “on the loop” variants of autonomy would only be viable for tethered UUVs. Untethered UUVs would have to run in “off the loop” mode. Were such systems to be tasked with functions such as selecting and engaging targets, they would need something like Arkin’s ethical governor to provide normative control.

DoD Directive 3000.09 (Department of Defense, 2012) would apply to the development of any such system by the US Navy. A Protocol VI to the Convention on Certain Conventional Weapons (CCW) may yet emerge that regulates or bans “off the loop” lethal autonomy in weapons systems. There are thus regulatory risks involved in projects to develop UUVs capable of offensive military action.

Even so, in a world in which a small naval power such as Ecuador can knock up a working USV from commodity components for anti-piracy operations (Naval-technology.com, 2013), the main obstacle is not technical: it lies in persuading military decision makers to trust the autonomous options. Trust in autonomous technology is a key issue. As the Defense Science Board (2012) puts it:

A key challenge facing unmanned system developers is the move from a hardware-oriented, vehicle-centric development and acquisition process to one that addresses the primacy of software in creating autonomy. For commanders and operators in particular, these challenges can collectively be characterized as a lack of trust that the autonomous functions of a given system will operate as intended in all situations.

There is evidence that military commanders have been slow to embrace unmanned systems. Many will mutter sotto voce: to err is human, but to really foul things up requires a computer. The US Air Force dragged its feet on drones, and yet the fundamental advantages of unmanned aircraft over manned aircraft have turned out to be compelling in many applications. It is frequently said that the F-35 will be the last manned fighter the US builds. The USAF has published a roadmap detailing a path to “full autonomy” by 2047 (United States Air Force, 2009).

Similar advantages of unmanned systems apply to ships. Just as a UAV can be smaller than a regular plane, so a UUV can be smaller than a regular ship. This reduces requirements for engine size and for the elements of the vehicle that support human life at altitude or depth. UAVs do not need toilets, galleys, pressurized cabins and so on; in UUVs, there would be no need to generate oxygen for a crew and no need for sleeping quarters. Such savings would reduce operating costs and remove risks to the lives of crew. In war, as the Spanish captains said, victory goes to him who has the last escudo. Stress on reducing costs is endemic in military thinking, and political leaders are highly averse to casualties coming home in flag-draped coffins. If UUVs can deliver more military bang for fewer bucks and no risk to human crews, then they will be adopted in preference to crewed alternatives as the capabilities of vehicles controlled entirely by software are proven.

Such a trajectory is arguably as inevitable as the outcome of Garry Kasparov vs Deep Blue. In the shorter term, however, navies are not likely to give up on human crews. Rather, UUVs will be employed as “force multipliers” to increase the capability of human crews and to reduce risks to humans. UUVs will have uncontroversial applications in mine countermeasures and in intelligence and surveillance operations. They are more likely to be deployed as relatively short-range vehicles performing tasks that are non-lethal. Submarine-launched USVs attached to their “mother” subs by tethers could provide video communications of the surface without the sub having to come to periscope depth. Such USVs could in turn launch small UAVs to enable the submarine to engage in reconnaissance from the air. The Raytheon SOTHOC (Submarine Over the Horizon Organic Capabilities) launches a one-shot UAV from a launch platform ejected from the sub’s waste disposal lock. Indeed, small UAVs such

AeroVironment Switchblade UAV

as Switchblade (Navaldrones.com, 2015) could be weaponized with modest payloads and used to attack the bridges or rudders of enemy surface ships, as well as to extend the range of the periscope beyond the horizon. Future aircraft carriers may well be submarines.

In such cases, the UUV, USV and UAV “accessories” to the human-crewed submarine would increase capability and decrease risk. As humans would pilot such devices, there is no requirement for an “ethical governor”, though such technology might be installed anyway to advise human operators and to take over should the network link fail.

However, a top priority in naval warfare is the destruction or capture of the enemy. Many say that it is inevitable that robots will be tasked with this mission and that robots will be at the front line in future wars. The key factors will be cost, risk, reliability and capability. If military capability can be robotized and deliver the same functionality at similar or better reliability and at less cost and less risk than human alternatives, then in the absence of a policy prohibition, sooner or later it will be.

Sean Welsh is a Doctoral Candidate in Robot Ethics at the University of Canterbury. His professional experience includes 17 years working in software engineering for organizations such as British Telecom, Telstra Australia, Fitch Ratings, James Cook University and Lumata. The working title of Sean’s doctoral dissertation is “Moral Code: Programming the Ethical Robot.”


Arkin, R. C. (2009). Governing Lethal Behavior in Autonomous Robots. Boca Raton: CRC Press.

Baker, B. (2013). Deep secret – secure submarine communication on a quantum level.   Retrieved 13th May, 2015, from http://www.naval-technology.com/features/featuredeep-secret-secure-submarine-communication-on-a-quantum-level/

Defense Science Board. (2012). The Role of Autonomy in DoD Systems. from http://fas.org/irp/agency/dod/dsb/autonomy.pdf

Department of Defense. (2012). Directive 3000.09: Autonomy in Weapons Systems.   Retrieved 12th Feb, 2015, from http://www.dtic.mil/whs/directives/corres/pdf/300009p.pdf

Navaldrones.com. (2015). Switchblade UAS.   Retrieved 28th May, 2015, from http://www.navaldrones.com/switchblade.html

Naval-technology.com. (2013). No hands on deck – arming unmanned surface vessels.   Retrieved 13th May, 2015, from http://www.naval-technology.com/features/featurehands-on-deck-armed-unmanned-surface-vessels/

Treiber, M. (2010). An Introduction to Object Recognition: Selected Algorithms for a Wide Variety of Applications. London: Springer.

United States Air Force. (2009). Unmanned Aircraft Systems Flight Plan 2009-2047.   Retrieved 13th May, 2015, from http://fas.org/irp/program/collect/uas_2009.pdf

Grail War 2050, Last Stand at Battle Site One

This piece by Dave Shunk is part of our Future Military Fiction Week for the New Year. The week topic was chosen as a prize by one of our Kickstarter supporters.

The nation state had decided not to invest in robotic armies; autonomous killing machines were beyond its ethics. The enemy, however, had no problem building them.

The enemy robotic land assault caught the nation state by surprise. The enemy especially sought to destroy the nation state’s treasure, nicknamed “The Grail Project.” The enemy’s battle plan was to overcome the human defenders at the various Grail Project sites with overwhelming swarms.

The tactical fight went badly for the solely human forces defending the outlying Grail Project sites. The horde of enemy robots on land, sea and air executed a perfect attrition strategy: soulless killers, mass-produced, networked together and built cheaply with advanced 3D printers in secret production facilities.

The nation state had not pursued robotic armies but had gone a different route. HAL and Major Wittmann were the first experimental AI/human team, at training site “One” adjacent to one of the remaining Grail Project sites. They were a prototype weapon: human and AI bonded together as a weapon system team within the tank through a shared neural network. This tank, however, was unlike early 21st-century tanks. It carried advanced weapon systems: a tank on technological steroids.

HAL (Human Armor Liaison) is the artificial intelligence (AI) that controls the tank, the weapon systems and communications. HAL is incorporated and encased in the advanced nanotechnology shell of the tank, with self-repairing armor and neural circuits woven into its structure. HAL also monitors the physical and mental health of Major Wittmann via the neural connection, using nanobot sensors throughout his body and bloodstream.

Major Wittmann has twelve years of service. He is a combat veteran, tank commander and human crew of one.  With genetic, physical and mental screening beginning in preschool, Major Wittmann began his military training early. He had the mental and intellectual capability for the nation state’s Human Performance Enhancement program. During his initial military training he received the neural implant for direct communication with advanced computer AIs. He also received nanotechnology enhancements in the form of nanobots in his blood stream to enhance and accelerate his cognitive and physical attributes.

HAL and Major Wittmann had trained as a team for two weeks. Due to the neural implant and nanobots, the bonding program progressed much quicker than human to human bonding. Days of training became the equivalent of months or years of purely human to human bonding. As the first AI/Human armored team they would chart the course for the fight against purely robotic forces. The speed of warfare had overtaken purely human skills due to AI and robotic technology.  At the same time science and technology opened new doors such as AI/human teaming, enhancing both warriors.

Orders came down to protect the Grail Project adjacent to HAL and Major Wittmann’s position at all costs. HAL monitored the battle flow over the network, and Major Wittmann correctly anticipated the enemy’s tactical attack plan. Within 0.01 seconds HAL detected the inbound swarm of enemy hypersonic missiles meant for the Grail Project. HAL countered within 0.001 seconds by launching a counterstrike of steel flechettes which intercepted, detonated or deflected the inbound missiles. Inside the tank, observing from his 360-degree visual hologram of the battle, Major Wittmann thanked HAL via the neural network for his quick and decisive action to protect the Grail Project and themselves.

HAL and Major Wittmann knew that if the enemy held to its doctrine, robotic tanks would be next on the scene to attempt to destroy the sole AI/human tank. Twenty enemy robotic tanks announced their arrival by firing their laser cannon main weapons. Within 0.002 seconds of their firing, HAL modified the external nanotechnology armor to disperse the energy along the entire hull and recharge the backup energy grid.

Before the last laser impacted the hull, HAL counter-targeted the enemy robotic tanks. HAL fired the multiple-barrel railgun and destroyed or severely damaged the robotic force. Fifteen burning hulks sat stationary and would move no more. Five other damaged tanks attempted to retreat; in 0.003 seconds HAL targeted the five with miniature hypersonic anti-tank missiles, turning them into molten scrap. The enemy robotic scout force had been destroyed.

HAL knew they would need reinforcements to defeat the upcoming main robotic assault force. Major Wittmann came up with the “Improvise, Adapt, Overcome” solution. On the training grounds, in an underground warehouse, were ten more experimental tanks with AIs on board but no human team member. Due to neural limits Major Wittmann could not directly control another ten AIs – but HAL could.


Major Wittmann used his emergency command authority to override HAL’s protocol and programming limits. These limits stated that HAL could not control other AI tanks – a restriction set by the nation state in peacetime. But this was war, and the Grail Project had to survive.

HAL reached out to the ten tanks in the warehouse over the AI battle network. Within 0.001 seconds the AIs had received the mission, the situation, the enemy order of battle and the threats. Drawing on the AIs’ knowledge of military history, one of the other AIs suggested that they form a laager around the Grail Project.

The Boers, like American wagon trains in the 19th century, formed mobile defensive laagers: vehicles arranged into a defensive perimeter in whatever shape was needed. The eleven AI tanks and one human formed a formidable interlinked mobile defensive perimeter around the Grail Project.

The battle ended quickly. The massed mobile firepower of the tanks overwhelmed the robotic attack force, but at a high cost. Tanks 1, 3 and 5 suffered catastrophic laser burn-through of their armor plating, destroying the AIs. Tanks 2, 4 and 8 suffered massive missile hits which destroyed various armaments, reducing their offensive effectiveness to near zero. The burning remains of the robotic army showed it had fallen short of destroying the Grail Project at Site One. In the classic struggle of overwhelming force against determined defense, AI/human teaming had turned the tide.


HAL watched the unfolding scene with curiosity as Major Wittmann exited the tank. The Grail Project at Site One had survived without loss. As the doors of the Grail Project opened, Major Wittmann, age 22, reached down and picked up his four year old son and gave a silent prayer of thanks as he held him once more.


His son had just been admitted with other select four year olds to the AI/Enhanced Human Performance Military Academy (The Grail Project). Eighteen years ago Major Wittmann had been in the first class of the Grail Project in 2032.


Article motivation for Grail War 2050, Last Stand at Battle Site One

The paper is meant as a wake-up call that technology is changing warfare in a unique way. The era of human-on-human war is almost over. With artificial intelligence (AI) and robotics, the speed of warfare will increase beyond human ability to react or intervene. The paper presents one possible solution.


This idea of human warfare nearing an end was presented in:

Future Warfare and the Decline of Human Decisionmaking by Thomas K. Adams


This article was first published in the Winter 2001-02 issue of Parameters.


“Warfare has begun to leave “human space.” … In short, the military systems (including weapons) now on the horizon will be too fast, too small, too numerous, and will create an environment too complex for humans to direct. Furthermore, the proliferation of information-based systems will produce a data overload that will make it difficult or impossible for humans to directly intervene in decisionmaking. This is not a consideration for the remote science-fiction future.”


Other ideas in the paper:

  • AI/Human teaming and bonding
  • Robotic armies used with attrition strategy against human armies
  • AI controlling other AI vehicles with human oversight
  • Nanotechnology adaptable armor with embedded AI neural links
  • Human neural implants for AI link
  • Human nanobot implants
  • Multi-barrel Rail Gun for armor vehicles
  • Laser weapons for armor vehicles
  • Flechette weapon as a counter-missile weapon
  • Hypersonic anti-tank missiles
  • Early military screening for youth (Ender’s Game influence)
  • Early military training for youth (Ender’s Game influence)


The second intent of the paper is a tribute to the military science fiction of Keith Laumer and his creation of the Bolos: tanks with AI, teamed with military officers. His writings in the 1960s and 1970s were not really just about Bolos but about duty, honor and a tribute to the warriors. I read Last Command in the late sixties and devoured all the Bolo stories.


Last Command can be found here: (with preface by David Drake, Vietnam Vet and Author of many military science fiction books)



Dave Shunk is a retired USAF Colonel, B-52G pilot, and Desert Storm combat veteran whose last military assignment was as the B-2 Vice Wing Commander of the 509th Bomb Wing, Whiteman AFB, MO. Currently, he is a researcher/writer and DA civilian working in the Army Capabilities Integration Center (ARCIC), Future Warfare Division, Fort Eustis, Virginia.

Unmanned Autonomous Systems Countermeasures NATO Workshop

It is common sense that unmanned autonomous systems can be used against NATO populations and forces – but beyond the public perception of this threat, there is a need for an expert-level debate that produces a thorough understanding of the challenges and initiates the development of efficient response options. This is what NATO Allied Command Transformation is addressing with this online and in-person workshop.



The workshop is gathering experts from all backgrounds – government, military, NATO, academia, industry … – to address all aspects (operational, autonomy-related, legal, legitimacy, financial and ethical) of the proliferation of unmanned and autonomous systems in the ground, air, sea and telecommunications domains. The aim is to identify the best options for countering unmanned and autonomous systems and to provide senior policy makers and industry with guidance on the related future implications and requirements.


(sign up online first to gain access to online event registration)

Join the brainstorming now in the online forum at http://InnovationHub-act.org

The WORKSHOP: 9-11 Dec 14

The forum findings will be taken up during the workshop, which anyone can join live online or onsite at the Innovation Hub:


Innovation Research Park @ ODU
4111 Monarch Way # 4211, Norfolk, Virginia 23508

Email registration request to: information@innovationhub-act.org
with the following information:

First Name :
Last Name :
City of residence :
Affiliation (name of employer) :
Area of Expertise :
Email address :
When do you plan to join the workshop? :
When do you plan to leave the workshop? :
Will you attend the ice-breaker on 8th Dec 1900-2000? :
Do you need a parking pass? :
Information and registration at http://InnovationHub-act.org


Serge Da Deppo is an information officer at the NATO ACT Innovation Hub.