Tag Archives: autonomy

Distributed Lethality: Old Opportunities for New Operations

Distributed Lethality Topic Week

By Matthew Hipple

The BISMARCK, a single ship capable of striking fear into the heart of an entire nation.
The BISMARCK, a single ship whose threat was sufficient to muster an entire fleet for the hunt.

The essence of naval warfare has always been opportunism – from the vague center of gravity generated by an in-port "fleet in being" to the fleet-rallying threat generated by even a BISMARCK or RANGER alone. That opportunity is generated by forces more mobile and self-contained than any army, more persistent than any air force, and empowered to act with no connection to higher authority in a domain that leaves no trace. It is that ability of a small number of independent ships, or even a single vessel, to provide opportunity and create "battlespace complexity" that is distributed lethality's core. Distributed lethality is not naval warfighting by new principles; it is a return to principles.


The best defense is not an overwhelming obsession with defense.

Unfortunately, the virtuous autonomy of the past was, in part, only protected by the limited technology of the day. As technology allowed, decentralized execution was replaced by the luxury and false confidence of constant connection to higher authority through an electronic umbilical. It is the kind of devolution that produced Secretary Gates' nightmare: "I was touring a [Joint Special Operations Command] in Kabul and discovered a direct line to somebody on the NSC, and I had them tear it out while I was standing there." In parallel, America began the ahistorical project of investing all offensive opportunity not even in a single class of ship, but in a single ship surrounded by a fleet obsessed with its defense. As early as 1993, President Clinton stated that when a crisis broke out, his first question would be, "where is the nearest carrier." Sorry, other ships! For the Navy to rebalance sensibly, distributed lethality must succeed. For distributed lethality to succeed, we must decentralize and de-tether mission command, weapons release authority, and weapons support systems.

Decentralized and disconnected methods of command must be embraced, because centralization is only an imagined luxury. Modern centralization rests on the assumption that we will have the connectivity it requires. That assumption is no longer tenable in a world of increasingly advanced peers and "hyundaized" lesser adversaries. Anti-Access/Area-Denial (A2/AD) depends on opponents making themselves visible, and electronic emissions are a critical part of that visibility; A2/AD will also inevitably seek to disrupt our C2 connections.

"Permission? We don't need no stinkin' permissions." "The Battle for Fox Green Beach," watercolor by Dwight Shepler, showing the Gleaves-class destroyer USS Emmons (DD 457) in the foreground and her sister ship, USS Doyle, to the right, within a few hundred yards of the landing beach, mixing it up with German shore batteries on D-Day.

The current major-node CWC concept will need to be broken down into a more compact, internal model designed around the Hunter-Killer Surface Action Group. Rules of Engagement must be flexible to the point that American commanders need not look over their shoulders to a higher OPCON. The destroyer COs at Normandy did not wait for direction or request approval before shifting from small-boat screening to shore bombardment from the shoals. They recognized the opportunity – the necessity – and acted on their own initiative.

In contrast, today it might be a regular occurrence to double- and triple-check our actions with American OPCON while operating under NATO TACON off Somalia. American COs could use the freedom to make pragmatic, on-the-spot decisions, not only for immediate concerns of mission effectiveness but as representatives of their higher military command and, potentially, the state. Coalition commanders would have greater trust in the spot decisions of their American counterparts, rather than worrying that those decisions sit precariously atop a shifting, several-step staffing process.

Though encouraging equivalent RoE flexibility for coalition partners may be challenging, our autonomy may encourage our partners to interpret their home-nation guidance with a flexibility equivalent to their trust in the US commander they fight beside. That lack of hesitancy will be critical during a conflict, and in that sudden moment in the South China Sea or Mediterranean when a small SAG of coalition partners finds itself in the midst of a conflict that has just begun. Imposing the peacetime discipline necessary to trust the COs we have trained, prepared, and empowered to do their jobs is the only thing that will jump-start a shift in a mindset now dominated by subordination.

In the execution of more flexible orders, ships must be re-invested with control of their own weapon systems. COs oversee non-nuclear weapon systems that they do not control – systems that are solely the purview of off-ship authorities. In particular, as weapons like Tomahawk become deployable for ASuW, off-ship authorities' iron grip on their control must break. This decentralization also matters outside the stand-up fight at sea. The organic ability to program and deploy Tomahawk missiles for land strike allows surface ships to execute attacks of opportunity against land infrastructure, to execute and support opportunistic maritime raids as groups of marines harass adversaries, or to turn isolated islands into temporary logistics or aviation operating bases. For winning the sudden-and-deadly fight in the littoral while also integrating with opportunistic amphibious operations, the surface fleet could find some inspiration in USS BARB, the only submarine in WWII to "sink" a train with its crew-cum-amateur-commandos. From Somalia to the South China Sea, naval commanders should be told what to do, not how – and be allowed to do it. The less reliant the force is on these ephemeral links, and the less these links are unnecessarily exercised in peacetime, the greater a force's instinct to operate independently and with confidence in an imposed or needed silence.

CAPT Ramius, relieved to discover he is not dealing with "some buckaroo."

There may be a level of discomfort with decentralization and disconnection. If leaders fear the impact of a "strategic corporal," then a "buckaroo," as CAPT Ramius would call him, must be truly horrifying. That fear would reflect a failure of the system to produce leaders, not a flaw in the importance and operational effectiveness of independence. There is a reason the US once considered the Department of the Navy separate from, and peer to, the Department of War – recognizing the institution and its individual commanders as unique peacetime and wartime tools of strategic security and diplomacy. Compare today's autonomy and trust with that invested in Commodore Perry during his mission to Japan, or in Commodore Preble's mission to seek partnership with Naples during the First Barbary War. Reliance on call-backs and outside authority will gut a naval force's ability to operate in a distributed manner when those connections disappear. Encouraging independence by default will ensure the muscle memory is there when it is needed.

Finally, distributed lethality requires the hardware to allow surface combatants to operate as effective offensive units in small groups. On the kinetic end of the spectrum, upgraded legacy weapons and the introduction of new weapon systems have been discussed extensively since the 2015 Surface Navy Association National Symposium, when VADM Rowden and RADM Fanta rolled out distributed lethality in force. However, weapon systems are only as good as the available detection systems. Current naval operations rely heavily on shore-based assets, assets from the carrier, and joint assets for reconnaissance. In the previous Distributed Lethality topic week, LT Glynn argued for a suite of surveillance assets, some organic to individual ships but most deploying from the shore or from carriers. Presuming a denied environment, and commanders empowered to seek and exploit opportunities within their battlespace, the best argument would be for greater emphasis on ship-organic assets. They may not provide the best capabilities, but capabilities are worthless if assets cannot find, reach, or communicate with a Hunter-Killer SAG operating in silence imposed by self or by the enemy. Organic assets also keep an HKSAG from being completely at the mercy of the limitations of a Navy or joint asset coordinator – while simultaneously freeing those theater assets for higher-level operations and opportunity exploitation.

Ultimately, distributed lethality is the historical default mode of independent naval operations given a new name, owing to the strength of the current carrier-based operational construct. Admiral Halsey ordered CAPT Arleigh Burke to intercept a Japanese convoy at Bougainville: "GET ATHWART THE BUKA-RABAUL EVACUATION LINE ABOUT 35 MILES WEST OF… IF ENEMY CONTACTED YOU KNOW WHAT TO DO." The surface fleet must embrace a culture that assumes our commanders "KNOW WHAT TO DO." We must build an operational construct in which acting on that instinct is practiced and exercised in peacetime, for wartime. The operational and diplomatic autonomy, as well as the OLD IRONSIDES-style firepower, of single surface combatants is necessary to rebalance a force gutted of its many natural operational advantages. Distributed lethality must return the surface force to its cultural and operational roots of distributed autonomy, returning to the ideas that will maximize its opportunities to threaten, undermine, engage with, and destroy the adversary.

Matthew Hipple is the President of CIMSEC and an active duty surface warfare officer. He also leads our Sea Control and Real Time Strategy podcasts, available on iTunes.


Sea Control 92 – Autonomy

Weapon autonomy is a broad term around which swirls an incredible amount of debate. Paul Scharre, Michael Horowitz, and Adam Elkus join Sea Control to discuss the nature of autonomy, how to imagine its use in an operational environment, and how to think of the debate surrounding it.

DOWNLOAD: Sea Control 92 – Autonomy

Music: Sam LaGrone


Lethal Autonomy in Autonomous Unmanned Vehicles

Guest post written for UUV Week by Sean Welsh.

Should robots sink ships with people on them in time of war? Will it be normatively acceptable and technically possible for robotic submarines to replace crewed submarines?

These debates are well-worn in the UAV space. Ron Arkin's classic work Governing Lethal Behaviour in Autonomous Robots has generated considerable attention since it was published in 2009. The centre of his work is the "ethical governor" that would give normative approval to lethal decisions to engage enemy targets. He claims that International Humanitarian Law (IHL) and Rules of Engagement can be programmed into robots in machine-readable language. He illustrates his work with a prototype that engages in several test cases. The drone does not bomb the Taliban because they are in a cemetery and targeting "cultural property" is forbidden. In another case, because the target (a T-80 tank) is initially too close to civilian objects, the drone selects an "alternative release point" (i.e., it waits for the tank to move a certain distance) and only then fires a Hellfire missile.
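
To make the idea concrete, a minimal sketch of that kind of constraint check might look like the following. The data structure, rule names, and standoff threshold are invented for illustration; this is not drawn from Arkin's implementation.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    kind: str            # e.g. "tank", "cemetery", "dwelling"
    is_cultural: bool    # protected cultural property under IHL
    is_civilian: bool    # civilian object
    distance_m: float    # distance from the proposed aim point

# Illustrative threshold only; a real governor would derive standoff
# from weapon-effects data and the applicable Rules of Engagement.
MIN_STANDOFF_M = 200.0

def governor_approves(target: Contact, nearby: list[Contact]) -> bool:
    """Approve engagement only if no programmed prohibition is violated."""
    if target.is_cultural or target.is_civilian:
        return False                      # protected or civilian: never engage
    for obj in nearby:
        if (obj.is_cultural or obj.is_civilian) and obj.distance_m < MIN_STANDOFF_M:
            return False                  # too close: wait for an alternative release point
    return True

# A T-80 parked near a dwelling is refused until the range opens up.
t80 = Contact("tank", is_cultural=False, is_civilian=False, distance_m=0.0)
print(governor_approves(t80, [Contact("dwelling", False, True, 120.0)]))   # False
print(governor_approves(t80, [Contact("dwelling", False, True, 450.0)]))   # True
```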

Could such an “ethical governor” be adapted to submarine conditions? One would think that the lethal targeting decisions a Predator UAV would have to make above the clutter of land would be far more difficult than the targeting decisions a UUV would have to make. The sea has far fewer civilian objects in it. Ships and submarines are relatively scarce compared to cars, houses, apartment blocks, schools, hospitals and indeed cemeteries. According to the IMO there are only about 100,000 merchant ships in the world. The number of warships is much smaller, a few thousand.

Diagram of the 'ethical governor'

There seems to be less scope for major targeting errors with UUVs. Technology to recognize shipping targets is already installed in naval mines. At its simplest, developing a hunter-killer UUV would be a matter of putting the smarts of a mine programmed to react to distinctive acoustic signatures into a torpedo – which has already been done. If a UUV were to operate at periscope depth, it is plausible that object recognition technology (Treiber, 2010) could be used, as warships are large and distinctive objects. Discriminating between a prawn trawler and a patrol boat is far easier than discriminating human targets in counter-insurgency and counter-terrorism operations. There are no visual cues to distinguish regular shepherds in Waziristan, who have beards, wear robes, carry AK-47s, face Mecca to pray and so on, from Taliban combatants who look exactly the same; targeting has to be based on protracted observation of behaviour. Operations against a regular navy in a conventional war on the high seas would not pose such extreme discrimination challenges.
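
Put concretely, the discrimination problem for a hunter-killer UUV might reduce to matching a contact against a small library of acoustic classes and engaging only on a confident warship match. The sketch below is purely illustrative; the class library, threshold, and function are assumptions, not a real sonar or combat-system interface.

```python
# Hypothetical library of acoustic classes and whether each is a lawful
# military objective; the names and values are illustrative only.
ACOUSTIC_LIBRARY = {
    "frigate":     True,
    "patrol_boat": True,
    "merchant":    False,
    "trawler":     False,
}

CONFIDENCE_THRESHOLD = 0.95   # engage only on a near-certain classification

def may_engage(classified_as: str, confidence: float) -> bool:
    """Discrimination gate: a high-confidence match on a military class."""
    return (confidence >= CONFIDENCE_THRESHOLD
            and ACOUSTIC_LIBRARY.get(classified_as, False))

print(may_engage("patrol_boat", 0.97))   # True
print(may_engage("trawler", 0.99))       # False: civilian class
print(may_engage("frigate", 0.60))       # False: contact too ambiguous
```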

A key difference between the UUV and the UAV is the viability of telepiloting. Existing communications with submarines are restricted to VLF and ELF frequencies because of the properties of radio waves in salt water. These frequencies require very large antennas and offer very low transmission rates, so they cannot be used to transmit complex data such as video. VLF can support a few hundred bits per second; ELF is restricted to a few bits per minute (Baker, 2013). Thus, at present, remote operation of submarines is limited to the length of a cable. UAVs, by contrast, can be telepiloted via satellite links: drones flying over Afghanistan can be piloted from Nevada.
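
A back-of-the-envelope calculation shows why such links rule out telepiloting. The frame size and link rates below are assumed, illustrative values chosen to sit within the ranges just cited.

```python
# One heavily compressed video frame of ~20 kB, sent over the two links.
FRAME_BITS = 20_000 * 8        # 160,000 bits

VLF_BPS = 300                  # "a few hundred bits per second"
ELF_BPM = 5                    # "a few bits per minute"

vlf_minutes = FRAME_BITS / VLF_BPS / 60    # ~9 minutes per frame
elf_days    = FRAME_BITS / ELF_BPM / 1440  # ~22 days per frame

print(f"VLF: about {vlf_minutes:.0f} minutes to send one frame")
print(f"ELF: about {elf_days:.0f} days to send one frame")
```

Even before latency is considered, one frame every several minutes is not a video feed, which is why remote operation beyond the length of a cable is effectively off the table.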

For practical purposes this means the “in the loop” and “on the loop” variants of autonomy would only be viable for tethered UUVs. Untethered UUVs would have to run in “off the loop” mode. Were such systems to be tasked with functions such as selecting and engaging targets, they would need something like Arkin’s ethical governor to provide normative control.
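
One hypothetical way to express the distinction is in terms of who holds the final say on weapon release; the mode names below follow common usage, but everything else is invented for illustration.

```python
from enum import Enum, auto
from typing import Optional

class SupervisionMode(Enum):
    IN_THE_LOOP = auto()    # a human must positively approve each engagement
    ON_THE_LOOP = auto()    # the system acts unless a human vetoes in time
    OFF_THE_LOOP = auto()   # no live link; onboard normative control decides

def release_authorized(mode: SupervisionMode,
                       human_decision: Optional[bool],   # None if the link is down
                       governor_approves: bool) -> bool:
    """Final authority over weapon release under each supervision mode."""
    if mode is SupervisionMode.IN_THE_LOOP:
        return human_decision is True                    # requires the link
    if mode is SupervisionMode.ON_THE_LOOP:
        return governor_approves and human_decision is not False
    return governor_approves                             # OFF_THE_LOOP fallback
```

The first two branches presuppose a live, reasonably fast link; the moment the tether is cut, only the third remains.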

DoD policy directive 3000.09 (Department of Defense, 2012) would apply to the development of any such system by the US Navy. A Protocol VI to the Convention on Certain Conventional Weapons (CCW) may also emerge that regulates or bans "off the loop" lethal autonomy in weapon systems. There are thus regulatory risks involved in projects to develop UUVs capable of offensive military action.

Even so, in a world in which a small naval power such as Ecuador can knock up a working USV from commodity components for anti-piracy operations (Naval-technology.com, 2013), the main obstacle is not technical but lies in persuading military decision makers to trust the autonomous options. Trust of autonomous technology is a key issue. As the Defense Science Board (2012) puts it:

A key challenge facing unmanned system developers is the move from a hardware-oriented, vehicle-centric development and acquisition process to one that addresses the primacy of software in creating autonomy. For commanders and operators in particular, these challenges can collectively be characterized as a lack of trust that the autonomous functions of a given system will operate as intended in all situations.

There is evidence that military commanders have been slow to embrace unmanned systems. Many will mutter sotto voce: to err is human, but to really foul things up requires a computer. The US Air Force dragged its feet on drones, and yet the fundamental advantages of unmanned aircraft over manned aircraft have turned out to be compelling in many applications. It is frequently said that the F-35 will be the last manned fighter the US builds. The USAF has published a roadmap detailing a path to "full autonomy" by 2047 (United States Air Force, 2009).

Similar advantages of unmanned systems apply to ships. Just as a UAV can be smaller than a regular plane, so a UUV can be smaller than a regular ship. This reduces requirements for engine size and for the elements of the vehicle that support human life at altitude or depth. UAVs do not need toilets, galleys, pressurized cabins and so on. In UUVs, there would be no need to generate oxygen for a crew and no need for sleeping quarters. Such savings would reduce operating costs and risks to the lives of crew. In war, as the Spanish captains said, victory goes to him who has the last escudo. Stress on reducing costs is endemic in military thinking, and political leaders are highly averse to casualties coming home in flag-draped coffins. If UUVs can effectively deliver more military bang for fewer bucks and no risk to human crews, then they will be adopted in preference to crewed alternatives as the capabilities of vehicles controlled entirely by software are proven.

Such a trajectory is arguably as inevitable as that of Garry Kasparov vs Deep Blue. However, in the shorter term it is not likely that navies will give up on human crews. Rather, UUVs will be employed as "force multipliers" to increase the capability of human crews and to reduce risks to humans. UUVs will have uncontroversial applications in mine countermeasures and in intelligence and surveillance operations. They are more likely to be deployed at relatively short ranges, performing tasks that are non-lethal. Submarine-launched USVs attached to their "mother" subs by tethers could provide video of the surface without the sub having to come to periscope depth. Such USVs could in turn launch small UAVs to enable the submarine to engage in reconnaissance from the air. The Raytheon SOTHOC (Submarine Over the Horizon Organic Capabilities) launches a one-shot UAV from a launch platform ejected from the sub's waste-disposal lock. Indeed, small UAVs such

AeroVironment Switchblade UAV

as Switchblade (Navaldrones.com, 2015) could be weaponized with modest payloads and used to attack the bridges or rudders of enemy surface ships, as well as to increase the range of the periscope beyond the horizon. Future aircraft carriers may well be submarines.

In such cases, the UUV, USV and UAV “accessories” to the human crewed submarine would increase capability and decrease risks. As humans would pilot such devices, there are no requirements for an “ethical governor” though such technology might be installed anyway to advise human operators and to take over in case the network link failed.

However, a top priority in naval warfare is the destruction or capture of the enemy. Many say that it is inevitable that robots will be tasked with this mission and that robots will be at the front line in future wars. The key factors will be cost, risk, reliability and capability. If military capability can be robotized and deliver the same functionality at similar or better reliability and at less cost and less risk than human alternatives, then in the absence of a policy prohibition, sooner or later it will be.

Sean Welsh is a Doctoral Candidate in Robot Ethics at the University of Canterbury. His professional experience includes 17 years working in software engineering for organizations such as British Telecom, Telstra Australia, Fitch Ratings, James Cook University and Lumata. The working title of Sean's doctoral dissertation is "Moral Code: Programming the Ethical Robot."

References

Arkin, R. C. (2009). Governing Lethal Behaviour in Autonomous Robots. Boca Raton: CRC Press.

Baker, B. (2013). Deep secret – secure submarine communication on a quantum level.   Retrieved 13th May, 2015, from http://www.naval-technology.com/features/featuredeep-secret-secure-submarine-communication-on-a-quantum-level/

Defense Science Board. (2012). The Role of Autonomy in DoD Systems. Retrieved from http://fas.org/irp/agency/dod/dsb/autonomy.pdf

Department of Defense. (2012). Directive 3000.09: Autonomy in Weapons Systems.   Retrieved 12th Feb, 2015, from http://www.dtic.mil/whs/directives/corres/pdf/300009p.pdf

Navaldrones.com. (2015). Switchblade UAS.   Retrieved 28th May, 2015, from http://www.navaldrones.com/switchblade.html

Naval-technology.com. (2013). No hands on deck – arming unmanned surface vessels.   Retrieved 13th May, 2015, from http://www.naval-technology.com/features/featurehands-on-deck-armed-unmanned-surface-vessels/

Treiber, M. (2010). An Introduction to Object Recognition: Selected Algorithms for a Wide Variety of Applications. London: Springer.

United States Air Force. (2009). Unmanned Aircraft Systems Flight Plan 2009-2047.   Retrieved 13th May, 2015, from http://fas.org/irp/program/collect/uas_2009.pdf

Grail War 2050, Last Stand at Battle Site One

This piece by Dave Shunk is part of our Future Military Fiction Week for the New Year. The week topic was chosen as a prize by one of our Kickstarter supporters.

The nation state had decided not to invest in robotic armies; autonomous killing machines were beyond its ethics. The enemy, however, had no problem building them.

The enemy robotic land assault caught the nation state by surprise. The enemy forces especially sought to destroy the nation state's treasure, nicknamed "The Grail Project." The enemy's battle plan sought to overcome the human defenders at the various Grail Project sites with overwhelming swarms.

The tactical fight went badly for the solely human forces defending the outlying Grail Project sites. The horde of enemy robots on land, sea, and air was the perfect instrument of attrition. Soulless killers, mass produced, networked together, and built cheaply with advanced 3D printers in secret production facilities, were deadly.

The nation state had not pursued robotic armies but went a different route. HAL and Major Wittmann were the first experimental AI/human team, at training site "One" adjacent to one of the remaining Grail Project sites. They were a prototype weapon: human and AI bonded together as a weapon system team within the tank, sharing a neural network. This tank, however, was unlike early 21st-century tanks. It had advanced weapon systems – a tank on technology steroids.

HAL (Human Armor Liaison) is the artificial intelligence (AI) that controls the tank, its weapon systems, and communications. HAL is incorporated and encased in the advanced nanotechnology shell of the tank, with self-repairing armor and neural circuits woven into its structure. HAL also monitors the physical and mental health of Major Wittmann via the neural connection, with nanobot sensors throughout his body and bloodstream.

Major Wittmann has twelve years of service. He is a combat veteran, tank commander and human crew of one.  With genetic, physical and mental screening beginning in preschool, Major Wittmann began his military training early. He had the mental and intellectual capability for the nation state’s Human Performance Enhancement program. During his initial military training he received the neural implant for direct communication with advanced computer AIs. He also received nanotechnology enhancements in the form of nanobots in his blood stream to enhance and accelerate his cognitive and physical attributes.

HAL and Major Wittmann had trained as a team for two weeks. Due to the neural implant and nanobots, the bonding program progressed much quicker than human to human bonding. Days of training became the equivalent of months or years of purely human to human bonding. As the first AI/Human armored team they would chart the course for the fight against purely robotic forces. The speed of warfare had overtaken purely human skills due to AI and robotic technology.  At the same time science and technology opened new doors such as AI/human teaming, enhancing both warriors.

Orders came down to protect the Grail Project adjacent to HAL's and Major Wittmann's position at all costs. HAL monitored the battle flow from the network, and Major Wittmann correctly anticipated the enemy's tactical attack plan. Within 0.01 seconds HAL detected the inbound swarm of enemy hypersonic missiles meant for the Grail Project. HAL countered within 0.001 seconds by launching a counterstrike of steel flechettes that intercepted, detonated, or deflected the inbound missiles. Inside the tank, observing from his 360-degree holographic view of the battle, Major Wittmann thanked HAL via the neural network for his quick and decisive action to protect the Grail Project and them.

HAL and Major Wittmann knew that if the enemy held to his doctrine, the robotic tanks would be next on the scene and would attempt to destroy the sole AI/human tank. The twenty enemy robotic tanks announced their arrival by firing their laser cannon main weapons. Within 0.002 seconds of their firing, HAL modified the external nanotechnology armor to disperse the energy along the entire hull and recharge the backup energy grid.

Before the last laser impacted the hull, HAL counter-targeted the enemy robotic tanks. HAL fired the multiple-barrel railgun and destroyed or severely damaged the robotic force. Fifteen burning hulks remained stationary and would move no more. Five other damaged tanks attempted to retreat. In 0.003 seconds HAL targeted the five with miniature hypersonic anti-tank missiles, turning them into molten scrap. The enemy robotic scout force had been destroyed.

HAL knew they would need reinforcements to defeat the upcoming main robotic assault force. Major Wittmann came up with the "Improvise, Adapt, Overcome" solution. On the training grounds, in an underground warehouse, were ten more experimental tanks – with AIs on board but no human team member. Due to neural limits, Major Wittmann could not directly control another ten AIs – but HAL could.

 

Major Wittmann used his command emergency authority to override HAL's protocol and programming limits. These limits stated that HAL could not control other AI tanks – a limit set by the nation state in peacetime. But this was war, and the Grail Project had to survive.

HAL reached out to the ten tanks in the warehouse over their AI battle network. Within 0.001 seconds the AIs received the mission, the situation, the enemy order of battle, and the threats. Drawing on the AIs' knowledge of military history, one of them suggested that they form a laager around the Grail Project.

The Boers, like American wagon trains in the 19th century, formed mobile defensive laagers: vehicles forming a defensive perimeter in whatever shape was needed. The eleven AI tanks and one human formed a formidable, interlinked mobile defensive perimeter around the Grail Project.

The battle ended quickly. The massed mobile firepower of the tanks overwhelmed the robotic attack force, but at a high cost. Tanks 1, 3 and 5 suffered catastrophic laser burn-through of their armor plating, destroying their AIs. Tanks 2, 4 and 8 suffered massive missile hits that destroyed various armaments, reducing their offensive effectiveness to near zero. The burning remains of the robotic army showed that it had fallen short of destroying the Grail Project at Site One. In the classic struggle of overwhelming force against determined defense, the combined AI/human team had turned the tide.

 

HAL watched the unfolding scene with curiosity as Major Wittmann exited the tank. The Grail Project at Site One had survived without loss. As the doors of the Grail Project opened, Major Wittmann, age 22, reached down and picked up his four year old son and gave a silent prayer of thanks as he held him once more.

 

His son had just been admitted with other select four year olds to the AI/Enhanced Human Performance Military Academy (The Grail Project). Eighteen years ago Major Wittmann had been in the first class of the Grail Project in 2032.

 

Article motivation for Grail War 2050, Last Stand at Battle Site One

The paper is meant as a wakeup call that technology is changing warfare in a unique way. The era of human-on-human war is almost over. With artificial intelligence (AI) and robotics, the speed of warfare will increase beyond the human ability to react or intervene. The paper presents one possible solution.

 

This idea of human warfare nearing an end was presented in:

Future Warfare and the Decline of Human Decisionmaking by Thomas K. Adams

http://strategicstudiesinstitute.army.mil/pubs/parameters/articles/01winter/adams.pdf

This article was first published in the Winter 2001-02 issue of Parameters.

 

“Warfare has begun to leave “human space.” … In short, the military systems (including weapons) now on the horizon will be too fast, too small, too numerous, and will create an environment too complex for humans to direct. Furthermore, the proliferation of information-based systems will produce a data overload that will make it difficult or impossible for humans to directly intervene in decisionmaking. This is not a consideration for the remote science-fiction future.”

 

Other ideas in the paper:

  • AI/Human teaming and bonding
  • Robotic armies used with attrition strategy against human armies
  • AI controlling other AI vehicles with human oversight
  • Nanotechnology adaptable armor with embedded AI neural links
  • Human neural implants for AI link
  • Human nanobot implants
  • Multi-barrel Rail Gun for armor vehicles
  • Laser weapons for armor vehicles
  • Flechette weapon as counter-missile weapon
  • Hypersonic anti-tank missiles
  • Early military screening for youth (Ender’s Game influence)
  • Early military training for youth (Ender’s Game influence)

 

The second intent of the paper is to pay tribute to the military science fiction of Keith Laumer and his creation of the Bolos – tanks with AI, teamed with military officers. His writings in the 1960s and 1970s were not really just about Bolos but about duty, honor, and a tribute to the warriors. I read Last Command in the late sixties and devoured all the Bolo stories.

 

Last Command can be found here (with a preface by David Drake, Vietnam veteran and author of many military science fiction books):

http://hell.pl/szymon/Baen/The%20best%20of%20Jim%20Baens%20Universe/The%20World%20Turned%20Upside%20Down/0743498747__14.htm

 

Dave Shunk is a retired USAF Colonel, B-52G pilot, and Desert Storm combat veteran whose last military assignment was as the B-2 Vice Wing Commander of the 509th Bomb Wing, Whiteman AFB, MO. Currently, he is a researcher/writer and DA civilian working in the Army Capabilities Integration Center (ARCIC), Future Warfare Division, Fort Eustis, Virginia.