Tag Archives: artificial intelligence

Will Artificial Intelligence Be Disruptive to Our Way of War?

By Marjorie Greene


At a recent Berkshire Hathaway shareholder meeting, Warren Buffett said that Artificial Intelligence – the collection of technologies that enable machines to learn on their own – could be “enormously disruptive” to our human society. More recently, the renowned physicist Stephen Hawking predicted that humanity may have only another one hundred years on planet Earth. He believes that because of the development of Artificial Intelligence, machines may no longer simply augment human activities but will replace and eliminate humans altogether in the command and control of cognitive tasks.

In my recent presentation to the annual Human Systems conference in Springfield, Virginia, I suggested that as we capitalize on the military applications of Artificial Intelligence to enhance war-fighting capabilities, there is a risk that human decision-making may no longer be involved in the use of lethal force. Humans should never relinquish control of decisions regarding the employment of lethal force. How do we keep humans in the loop? This is an area of human systems research that will be important to undertake in the future.


Norbert Wiener, in his book Cybernetics, was perhaps the first person to discuss the notion of “machine learning.” Building on behavioral models of animal collectives such as ant colonies and flocking birds, he describes a process called “self-organization” by which humans – and, by analogy, machines – learn by adapting to their environment. Self-organization refers to the emergence of higher-level properties of the whole that are not possessed by any of the individual parts making up the whole. The parts act locally on local information, and global order emerges without any need for external control. The expression “swarm intelligence” is often used to describe the collective behavior of self-organized systems that allows the emergence of “intelligent” global behavior unknown to the individual systems.
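To make the idea concrete, here is a minimal sketch of self-organization – a simplified Vicsek-style alignment model, purely illustrative and not any fielded system. Each agent follows one local rule (steer toward the average heading of nearby neighbors), yet a globally aligned “swarm” emerges with no external controller:

```python
import math
import random

def simulate_alignment(n=50, steps=200, radius=0.3, speed=0.01, seed=0):
    """Agents on a unit torus each steer toward the average heading of
    neighbors within `radius`. Returns an order parameter in [0, 1]:
    near 1.0 means the whole swarm shares one heading (global order)."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    hdg = [rng.uniform(-math.pi, math.pi) for _ in range(n)]
    for _ in range(steps):
        new_hdg = []
        for xi, yi in pos:
            sx = sy = 0.0
            for (xj, yj), h in zip(pos, hdg):
                # wrap-around (toroidal) distance on the unit square
                dx = min(abs(xi - xj), 1 - abs(xi - xj))
                dy = min(abs(yi - yj), 1 - abs(yi - yj))
                if dx * dx + dy * dy < radius * radius:
                    sx += math.cos(h)
                    sy += math.sin(h)
            new_hdg.append(math.atan2(sy, sx))  # local rule only
        hdg = new_hdg
        pos = [((x + speed * math.cos(h)) % 1, (y + speed * math.sin(h)) % 1)
               for (x, y), h in zip(pos, hdg)]
    ox = sum(math.cos(h) for h in hdg) / n
    oy = sum(math.sin(h) for h in hdg) / n
    return math.hypot(ox, oy)
```

Starting from random headings, the order parameter climbs toward 1.0 – exactly the “global order without external control” Wiener described.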

Swarm Warfare

Military researchers are especially concerned about recent breakthroughs in swarm intelligence that could enable “swarm warfare” – asymmetric assaults against major U.S. weapons platforms, such as aircraft carriers. The accelerating speed of computer processing, along with rapid improvements in autonomy-increasing algorithms, also suggests that the military may soon be able to perform a wider range of functions without needing every individual task controlled by humans.

Drones like the Predator and Reaper are still piloted vehicles, with humans controlling what the camera looks at, where the drone flies, and what targets to hit with the drone’s missiles. But CNA studies have shown that drone strikes in Afghanistan caused 10 times the number of civilian casualties compared to strikes by manned aircraft. And a recent book published jointly with the Marine Corps University Press builds on CNA studies in national security, legitimacy, and civilian casualties to conclude that it will be important to consider International Humanitarian Law (IHL) in rethinking the drone war as Artificial Intelligence continues to flourish.

The Chinese Approach

Meanwhile, many Chinese strategists recognize the trend towards unmanned and autonomous warfare and intend to capitalize upon it. The PLA has incorporated a range of unmanned aerial vehicles into the force structure of all of its services. The PLA Air Force and PLA Navy have also started to introduce more advanced multi-mission unmanned aerial vehicles. It is clear that China is intensifying the military applications of Artificial Intelligence and, as we heard at a recent hearing of the U.S.-China Economic and Security Review Commission (where CNA’s China Studies Division also testified), the Chinese defense industry has made significant progress in its research and development of a range of cutting-edge unmanned systems, including those with swarming capabilities. China also views outer space as a new domain that it must fight for and seize if it is to win future wars.

Armed with artificial intelligence capabilities, China has moved beyond just technology developments to laying the groundwork for operational and command and control concepts to govern their use. These developments have important consequences for the U.S. military and suggest that Artificial Intelligence plays a prominent role in China’s overall efforts to establish an effective military capable of winning wars through an asymmetric strategy directed at critical military platforms.

Human-Machine Teaming

Human-machine teaming is gaining importance in national security affairs, as evidenced by a recent defense unmanned systems summit conducted internally by DoD and DHS in which many of the speakers explicitly referred to efforts to develop greater unmanned capabilities that intermix with manned capabilities and future systems.

Examples include: Michael Novak, Acting Director of the Unmanned Systems Directorate, N99, who spoke of optimizing human-machine teaming to multiply capabilities and reinforce trust (incidentally, the decision was made to phase out N99 because unmanned capabilities are being “mainstreamed” across the force); Bindu Nair, the Deputy Director, Human Systems, Training & Biosystems Directorate, OASD, who emphasized efforts to develop greater unmanned capabilities that intermix with manned capabilities and future systems; and Kris Kearns, representing the Air Force Research Lab, who discussed current efforts to mature and update autonomous technologies and manned-unmanned teaming.


Finally, it should be noted that the Defense Advanced Research Projects Agency (DARPA) has recently issued a relevant Broad Agency Announcement (BAA) for its “OFFensive Swarm-Enabled Tactics” (OFFSET) program. Notably, it includes a section asking for the development of tactics for collaboration between human systems and the swarm, especially in urban environments. This should reassure the human systems community that future researchers will not forget them, even as swarm intelligence makes it possible to achieve global order without any need for external control.


As we capitalize on the military applications of Artificial Intelligence, there is a risk that human decision-making may no longer be involved in the use of lethal force. In general, Artificial Intelligence could indeed be disruptive to our human society by replacing the need for human control, but machines do not have to replace humans in the command and control of cognitive tasks, particularly in military contexts. We need to figure out how to keep humans in the loop. This area of research would be a fruitful one for the human systems community to undertake in the future.  

Marjorie Greene is a Research Analyst with the Center for Naval Analyses. She has more than 25 years’ management experience in both government and commercial organizations and has recently specialized in finding S&T solutions for the U.S. Marine Corps. She earned a B.S. in mathematics from Creighton University, an M.A. in mathematics from the University of Nebraska, and completed her Ph.D. course work in Operations Research at The Johns Hopkins University. The views expressed here are her own.

Featured Image: Electronic Warfare Specialist 2nd Class Sarah Lanoo from South Bend, Ind., operates a Naval Tactical Data System (NTDS) console in the Combat Direction Center (CDC) aboard USS Abraham Lincoln. (U.S. Navy photo by Photographer’s Mate 3rd Class Patricia Totemeier)

Sea Control 92 – Autonomy

Weapon autonomy is a broad term around which swirls an incredible amount of debate. Paul Scharre, Michael Horowitz, and Adam Elkus join Sea Control to discuss the nature of autonomy, how to imagine its use in an operational environment, and how to think about the debate surrounding it.

DOWNLOAD: Sea Control 92 – Autonomy

Music: Sam LaGrone

CIMSEC content is and always will be free; consider a voluntary monthly donation to offset our operational costs. As always, it is your support and patronage that have allowed us to build this community – and we are incredibly grateful.

Grail War 2050, Last Stand at Battle Site One

This piece by Dave Shunk is part of our Future Military Fiction Week for the New Year. The week topic was chosen as a prize by one of our Kickstarter supporters.

The nation state had decided not to invest in robotic armies. Autonomous killing machines were beyond their ethics. However, the enemy had no problem building autonomous robotic killing machines.

The enemy robotic land assault caught the nation state by surprise. The enemy forces especially sought to destroy the nation state’s treasure nicknamed “The Grail Project.”  The enemy’s battle plan sought to overcome the human defenders at the various Grail Project sites by overwhelming swarms.

The tactical fight went badly for the solely human forces defending the outlying Grail Project sites. The horde of enemy robotics on land, sea, and air executed a perfect attrition strategy. Soulless killers – mass produced, networked together, and built cheaply with advanced 3D printers in secret production facilities – were deadly.

The nation state had not pursued robotic armies but went a different route. HAL and Major Wittmann were the first experimental AI/human team at training site “One,” adjacent to one of the remaining Grail Project sites. They were a prototype weapon – human and AI bonded together as a weapon system team within the tank with a shared neural network. However, this tank was unlike early 21st-century tanks. It had advanced weapon systems – a tank on technology steroids.

HAL (Human Armor Liaison) is the artificial intelligence (AI) that controls the tank, the weapon systems, and communications. HAL is incorporated and encased into the advanced nanotechnology shell of the tank. HAL has self-repairing armor and neural circuits woven into the structure of the tank. HAL also monitors the physical and mental health of Major Wittmann via the neural connection, with nanobot sensors throughout his body and bloodstream.

Major Wittmann has twelve years of service. He is a combat veteran, tank commander, and human crew of one. With genetic, physical, and mental screening beginning in preschool, Major Wittmann began his military training early. He had the mental and intellectual capability for the nation state’s Human Performance Enhancement program. During his initial military training he received the neural implant for direct communication with advanced computer AIs. He also received nanotechnology enhancements in the form of nanobots in his bloodstream to enhance and accelerate his cognitive and physical attributes.

HAL and Major Wittmann had trained as a team for two weeks. Due to the neural implant and nanobots, the bonding program progressed much more quickly than human-to-human bonding; days of training became the equivalent of months or years. As the first AI/human armored team, they would chart the course for the fight against purely robotic forces. The speed of warfare had overtaken purely human skills due to AI and robotic technology. At the same time, science and technology opened new doors such as AI/human teaming, enhancing both warriors.

Orders came down to protect the Grail Project adjacent to HAL and Major Wittmann’s position at all costs. HAL monitored the battle flow from the network, and Major Wittmann correctly anticipated the enemy’s tactical attack plan. Within .01 seconds HAL detected the inbound swarm of enemy hypersonic missiles meant for the Grail Project. HAL countered within .001 seconds by launching a counterstrike of steel flechettes which intercepted, detonated, or deflected the inbound hypersonic missiles. Inside the tank, observing from his 360-degree visual hologram of the battle, Major Wittmann thanked HAL via the neural network for his quick and decisive action to protect the Grail Project and them.

HAL and Major Wittmann knew that if the enemy held to its doctrine, the robotic tanks would be next on the scene and would attempt to destroy the sole AI/human tank. The twenty enemy robotic tanks announced their arrival by firing their laser cannon main weapons. Within .002 seconds of their firing, HAL modified the external nanotechnology armor to disperse the energy along the entire hull and recharge the backup energy grid.

Before the last laser impacted the hull, HAL counter targeted the enemy robotic tanks. HAL fired the multiple barrel railgun and destroyed or severely damaged the robotic force. Fifteen burning hulks remained stationary and would move no more. Five other damaged tanks attempted to retreat. In .003 seconds HAL targeted the five with miniature hypersonic anti-tank missiles turning them into molten scrap. The enemy robotic scout force had been destroyed.

HAL knew they would need reinforcements to defeat the upcoming main robotic assault force. Major Wittmann came up with the “Improvise, Adapt, Overcome” solution. On the training grounds, in an underground warehouse, were ten more experimental tanks – with AIs on board but no human team members. Due to neural limits, Major Wittmann could not directly control another ten AIs – but HAL could.


Major Wittmann used his command emergency authority to override HAL’s protocol and programming limits. These limits stated that HAL could not control other AI tanks – a limit set by the nation state in peacetime. But this was war, and the Grail Project must survive.

HAL reached out to the ten tanks in the warehouse via their AI battle network. Within .001 seconds the AIs received the mission, the situation, the enemy order of battle, and the threats. Drawing on its knowledge of military history, one of the AIs suggested that they form a laager around the Grail Project.

The Boers, like American wagon trains in the 19th century, formed mobile defensive laagers – vehicles forming a defensive perimeter in whatever shape was needed. The eleven AI tanks and one human formed a formidable, interlinked mobile defensive perimeter around the Grail Project.

The battle ended quickly. The massed mobile firepower of the tanks overwhelmed the robotic attack force, but at a high cost. Tanks 1, 3, and 5 suffered catastrophic laser burn-through of their armor plating, destroying the AIs. Tanks 2, 4, and 8 suffered massive missile hits which destroyed various armaments, reducing their offensive effectiveness to near zero. The burning remains of the robotic army demonstrated that it had fallen short of destroying the Grail Project at Site One. In the classic struggle of overwhelming force against determined defense, the combined AI/human teaming had turned the tide.


HAL watched the unfolding scene with curiosity as Major Wittmann exited the tank. The Grail Project at Site One had survived without loss. As the doors of the Grail Project opened, Major Wittmann, age 22, reached down and picked up his four year old son and gave a silent prayer of thanks as he held him once more.


His son had just been admitted with other select four year olds to the AI/Enhanced Human Performance Military Academy (The Grail Project). Eighteen years ago Major Wittmann had been in the first class of the Grail Project in 2032.


Article motivation for Grail War 2050, Last Stand at Battle Site One

The paper is meant as a wake-up call that technology is changing warfare in a unique way. The era of human-on-human war is almost over. With artificial intelligence (AI) and robotics, the speed of warfare will increase beyond human ability to react or intervene. The paper presents one possible solution.


This idea of human warfare nearing an end was presented in:

Future Warfare and the Decline of Human Decisionmaking by Thomas K. Adams


This article was first published in the Winter 2001-02 issue of Parameters.


“Warfare has begun to leave “human space.” … In short, the military systems (including weapons) now on the horizon will be too fast, too small, too numerous, and will create an environment too complex for humans to direct. Furthermore, the proliferation of information-based systems will produce a data overload that will make it difficult or impossible for humans to directly intervene in decisionmaking. This is not a consideration for the remote science-fiction future.”


Other ideas in the paper:

  • AI/Human teaming and bonding
  • Robotic armies used with attrition strategy against human armies
  • AI controlling other AI vehicles with human oversight
  • Nanotechnology adaptable armor with embedded AI neural links
  • Human neural implants for AI link
  • Human nanobot implants
  • Multi-barrel Rail Gun for armor vehicles
  • Laser weapons for armor vehicles
  • Flechette weapon as a counter-missile weapon
  • Hypersonic anti-tank missiles
  • Early military screening for youth (Ender’s Game influence)
  • Early military training for youth (Ender’s Game influence)


The second intent of the paper is a tribute to the military science fiction of Keith Laumer and his creation of Bolos – tanks with AI and teamed with military officers. His writings in the 1960s and 1970s were not really about just Bolos but about duty, honor and a tribute to the warriors. I read Last Command in the late sixties and devoured all the Bolo stories.


Last Command can be found here: (with preface by David Drake, Vietnam Vet and Author of many military science fiction books)



Dave Shunk is a retired USAF Colonel, B-52G pilot, and Desert Storm combat veteran whose last military assignment was as Vice Wing Commander of the 509th Bomb Wing (B-2), Whiteman AFB, MO. Currently, he is a researcher/writer and DA civilian working in the Army Capabilities Integration Center (ARCIC), Future Warfare Division, Fort Eustis, Virginia.

Leading the Blind: Teaching UCAV to See

In “A Scandal in Bohemia,” Sherlock Holmes laments, “You [Watson] see, but you do not observe. The distinction is clear.” Such is the current lament of America’s fleet of UCAVs, UGVs, and other assorted U_Vs: they have neither concept nor recognition of the world around them. To pass from remote drones living on the edges of combat to automated systems at the front, drones must cross the Rubicon of recognition.

To See

Still can't see a thing.

The UCAV is the best place to start, as the skies are the cleanest canvas upon which drones could cast their prying eyes. As with any surveillance system, the best ones are multi-faceted. Humans use their five senses and a good portion of deduction. Touch is a bit too close for a UCAV, smell and hearing would be both useless and uncomfortable at high speed, and taste would be awkward. Without that creative deductive spark, drones will need a bit more than a Mk 1 Eyeball. Beyond a basic radar picture, good examples of how a drone might literally “see” include the layered optics of the ENVG (Enhanced Night Vision Goggle) or the RLS (Artillery Rocket Launch Spotter).

Operators of typical optical systems switch between different modes to understand a picture. A USN Mk 38 Mod 2 25mm Bushmaster has a camera system with an Electro-Optical System (EOS), Forward-Looking Infrared (FLIR), and a laser range-finder. While a Mod 2 operator switches between the EOS and FLIR, in the ENVG both modes are combined to create an NVG that is difficult to blind. For a drone, digital combination isn’t even necessary: all inputs can be perceived by a computer at one time. Optical systems can also be placed at multiple locations on the UCAV to aid in creating a 3D composite of the contact being viewed. Using an array of both EOS and FLIR systems simultaneously could allow drones to “see” targets in more varied and specific aspects than the human eye.
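The “perceive all inputs at once” idea can be sketched in a few lines. This is a naive illustrative fusion rule, not how any fielded EOS/FLIR system actually combines imagery: keep the stronger response at each pixel, so a target washed out in one band still registers in the composite.

```python
def fuse_channels(eos_frame, flir_frame):
    """Combine an electro-optical frame and an infrared frame
    pixel-by-pixel, keeping the stronger normalized response (0..1).
    Frames are lists of rows of equal length."""
    return [[max(e, f) for e, f in zip(eos_row, flir_row)]
            for eos_row, flir_row in zip(eos_frame, flir_frame)]
```

A target invisible to the EOS at night (intensity ~0) but hot in the FLIR survives into the fused frame, which is the point of combining the modes rather than switching between them.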

For the deployment of these sensors, the RLS is a good example of how sensors can “pass” targets to one another. In the RLS, after target data is collected by audio and IR sensors, flagged threats are passed to the higher-grade FLIR for further designation and a potential fire control solution. A UCAV outfitted with multiple camera systems could, in coordination with radar, pass detected targets within certain parameters “up” to better sensors. Targets seen only in wide-angle optical scans (such as stealth aircraft) can be passed “down” to radar for further scrutiny based on bearing. A UCAV must be given a suite of sensors that serves not merely a remote human operator, but the specific utility of the UCAV itself, taking advantage of the broad access of computer capabilities.
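The cueing logic above can be sketched as a simple dispatcher. The thresholds, sensor names, and handoff rules here are hypothetical placeholders for illustration, not any real RLS or UCAV interface:

```python
from dataclasses import dataclass

@dataclass
class Contact:
    bearing: float    # degrees relative to the platform
    strength: float   # normalized detection confidence, 0..1
    source: str       # sensor that produced the detection

def cue_sensors(wide_angle_hits, radar_hits, handoff_threshold=0.4):
    """Weak wide-angle optical detections are passed 'down' to radar for
    scrutiny on that bearing; confident optical hits and all radar hits
    are passed 'up' to the narrow-field FLIR for identification."""
    tasks = []
    for c in wide_angle_hits:
        if c.strength < handoff_threshold:
            tasks.append(("radar_scan", c.bearing))  # cue radar on bearing
        else:
            tasks.append(("flir_track", c.bearing))  # strong enough for FLIR
    for c in radar_hits:
        tasks.append(("flir_track", c.bearing))      # radar hits go up to FLIR
    return tasks
```

The key property is that no human is in the dispatch path: each sensor's output becomes another sensor's tasking.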

And Observe

In-game models for real-life comparison.

However, this vast suite of ISR equipment still leaves a UCAV high and dry when it comes to target identification. Another officer suggested to me that, “for a computer to identify an air target, it has to have an infinite number of pictures of every angle and possibility.” With 3D-rendered models of desired aircraft, a UCAV could have that infinite supply of pictures with varying sets of weapons and angles of light. If a UCAV can identify an aircraft’s course and speed, it could narrow the range of comparison to other aircraft or missiles by orienting the contact’s shape, and all comparative models, along that true-motion axis. Whereas programs like facial recognition software build models from front-on pictures, we have the specifications of most if not all global aircraft. Just as typing “Leading” into a search bar eliminates all returns without the word when hunting for this article, a UCAV could eliminate all fighter aircraft when looking at a Boeing 747. 3D-modeled comparisons, sharpened by target-angle perspective, could identify an airborne contact from any angle.
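A toy version of that pruning idea: use kinematics to eliminate candidates first, then compare shape only against what remains. The database entries and dimensions below are rough illustrative figures, not a real recognition library:

```python
import math

# Hypothetical reference database: class -> (max_speed_kts, wingspan_m, length_m)
AIRCRAFT_DB = {
    "airliner": (520, 64.4, 70.7),
    "fighter":  (1190, 13.6, 19.4),
    "uav":      (240, 20.1, 11.0),
}

def identify(observed_speed, observed_wingspan, observed_length, db=AIRCRAFT_DB):
    """Kinematics prune the candidate set (a contact cannot exceed a
    class's max speed) before any shape comparison, shrinking the
    'infinite pictures' problem the officer described."""
    candidates = {k: v for k, v in db.items() if observed_speed <= v[0]}
    if not candidates:
        return None
    def shape_error(dims):
        _, wingspan, length = dims
        return math.hypot(wingspan - observed_wingspan,
                          length - observed_length)
    return min(candidates, key=lambda k: shape_error(candidates[k]))
```

A contact doing 1,000 knots never gets compared against the airliner model at all – the search-bar elimination the article describes.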

A UCAV also need not positively identify every single airborne target. A UCAV could be loaded with a set of parameters as well as a database limited to those aircraft of concern in the operating area. AEGIS flags threats by speed, trajectory, and other factors; so too could a UCAV gauge its interest level in a contact based on target angle and speed in relation to the Carrier Strike Group (CSG). Further, loading every conceivable aircraft into an onboard database is about as sensible as training a pilot to recognize the make and model of every commercial aircraft on the planet. A scope of parameters for “non-military” aircraft could be loaded into a UCAV along with specific models of regional aircraft-of-interest. The end-around of strapping external weapons to commercial aircraft, or using those aircraft as weapons, could be defeated by the previously noted course/speed parameters, as well as a database of weapons models.
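The parameter-gating idea reduces to a small scoring rule. This sketch is loosely inspired by the AEGIS-style flagging described above; the weights, thresholds, and field names are invented for illustration and bear no relation to actual doctrine:

```python
def threat_score(speed_kts, closing, identified_as, watchlist):
    """Gauge interest in a contact from cheap kinematic cues plus an
    optional identification against a small regional watchlist, before
    spending effort on full model comparison."""
    score = 0
    if closing:              # constant bearing, decreasing range to the CSG
        score += 2
    if speed_kts > 600:      # fast movers get extra scrutiny
        score += 2
    if identified_as in watchlist:
        score += 3           # matches a regional aircraft-of-interest
    return score
```

A slow, opening, unidentified contact scores zero and is ignored; a fast, closing contact matching the watchlist rises to the top of the queue without the UCAV ever needing a database of every aircraft on the planet.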

Breaking Open the Black Box

The musings of an intrigued amateur will not solve these problems; our purpose here is to break open the black box of drone operations and start thinking about our next step. We take for granted the remote connections that allow our unmanned operations abroad, but leave a hideously soft underbelly for our drones to be compromised, destroyed, or surveilled at the slightest resistance. Success isn’t as simple as building the airframe and programming it to fly. For a truly successful UCAV, autonomy must be a central goal. A whole bevy of internal processes must be mastered, in particular the ability of the UCAV to conceive and understand the world around it. The more we parse out the problem, the more ideas we may provide to those who can execute them. I’m often told that, “if they could do this, they would have done it”… but there’s always the first time.

Matt Hipple is a surface warfare officer in the U.S. Navy.  The opinions and views expressed in this post are his alone and are presented in his personal capacity.  They do not necessarily represent the views of U.S. Department of Defense or the U.S. Navy.