
Call for Articles: Unmanned Systems Program Office Launches CIMSEC Topic Week

Submissions Due: April 30, 2019
Week Dates: May 6–May 10, 2019

Article Length: 1,000–3,500 words
Submit to: Nextwar@cimsec.org

By CAPT Pete Small, Program Manager, Unmanned Maritime Systems

The U.S. Navy is committed to the expedited development, procurement, and operational fielding of “families” of unmanned undersea vehicles (UUVs) and unmanned surface vessels (USVs). CNO Adm. John Richardson’s Design for Maintaining Maritime Superiority (Version 2.0) explicitly calls for the delivery of new types of USVs and UUVs as rapidly as possible.

My office now manages more than a dozen separate efforts across the UUV and USV domains, and that number continues to increase. The Navy’s commitment to unmanned systems is strongly reinforced in the service’s FY2020 budget with the launch of a new high-priority program and key component of the Future Surface Combatant Force — the Large Unmanned Surface Vessel (LUSV) — along with the funding required to ensure the program moves as rapidly as possible through the acquisition process. This effort is closely aligned with the Medium Unmanned Surface Vessel (MUSV) rapid prototyping program started in FY19. Mine Countermeasures USV (MCM USV) efforts reach several key milestones in FY19, including Milestone C and low-rate initial production of the minesweeping variant and the start of minehunting integration efforts.

U.S. Navy’s unmanned surface vessel systems vision. (NAVSEA Image)

On the UUV side, the Orca Extra Large UUV (XLUUV) program has commenced fabrication of five systems that are expected to begin testing in late 2020. The Snakehead submarine-launched Large Displacement UUV (LDUUV) is wrapping up detailed design, and an operational prototype will be ready for Fleet experimentation by 2021. Several medium UUV programs continue in development, production, and deployment, including Mark 18, Razorback, and Knifefish. These new and varied systems are coming online quickly.

Supporting the established families of UUVs and USVs are a number of Core Technology standardization efforts in the areas of battery technology, autonomy architecture, command and control, and machinery control. While these architecture frameworks have stabilized and schedules have been established, there are still a host of logistical and sustainability issues that the Navy must work through. Most of these unmanned platforms do not immediately align with long-established support frameworks for surface ships and submarines. These are critical issues and will impact the operational viability of both UUVs and USVs if they are not fully evaluated and thought through before these systems join the Fleet.

Here are some of the questions we are seeking to more fully understand for the long-term sustainment and support of UUVs and USVs:

  • Where should the future “fleets” of UUVs and USVs be based or distributed?
  • What infrastructure is required?
  • How or where will these systems be forward deployed?
  • What sort of transportation infrastructure is required?
  • What is the manning scheme required to support unmanned systems?
  • How and where will these unique systems be tested and evaluated?
  • How do we test endurance, autonomy, and reliability?
  • What new policies or changes to existing policies are required?
  • How will these systems be supported?
  • What new training infrastructure is required?

To help jumpstart new thinking and address these questions and many others we have yet to consider, my office is partnering with the Center for International Maritime Security (CIMSEC) to launch a Special Topic Week series to solicit ideas and solutions. We are looking for bold suggestions and innovative approaches. Unmanned systems are clearly a growing part of the future Navy. We need to think now about the changes these systems will bring and ensure their introduction allows their capabilities to be exploited to the fullest.

CAPT Pete Small was commissioned in 1995 from the NROTC at the University of Virginia where he earned a Bachelor of Science Degree in Mechanical Engineering. He earned a Master of Science Degree in Operations Research in 2002 from Columbia University, and a Master of Science Degree in Mechanical Engineering and a Naval Engineer Degree in 2005 from the Massachusetts Institute of Technology. He is currently serving as Program Manager PMS 406, Unmanned Maritime Systems. 

Featured Image: Common Unmanned Surface Vessel (CUSV) intended to eventually serve as the U.S. Navy’s Unmanned Influence Sweep System (UISS) unmanned patrol boat. (Textron photo)

Unmanned Mission Command, Pt. 2

By Tim McGeehan

The following two-part series discusses the command and control of future autonomous systems. Part 1 describes how we have arrived at the current tendency towards detailed control. Part 2 proposes how to refocus on mission command.

Adjusting Course

Today’s commanders are accustomed to operating in permissive environments and have grown addicted to the connectivity that makes detailed control possible. This is emerging as a major vulnerability. For example, while the surface Navy’s concept of “distributed lethality” will increase the complexity of the detection and targeting problems presented to adversaries, it will also increase the complexity of its own command and control. Even in a relatively uncontested environment, tightly coordinating widely dispersed forces will not be a trivial undertaking. This will tend toward lengthening decision cycles, at a time when the emphasis is on shortening them.1 How will the Navy execute operations in a future Anti-Access/Area-Denial (A2/AD) scenario, where every domain is contested (including the EM spectrum and cyberspace) and every fraction of a second counts? 

The Navy must “rediscover” and fully embrace mission command now, to both address current vulnerabilities as well as unleash the future potential of autonomous systems. These systems offer increased precision, faster reaction times, longer endurance, and greater range, but these advantages may not be realized if the approach to command and control remains unchanged. For starters, to prepare for future environments where data links cannot be taken for granted, commanders must be prepared to give all subordinates, human and machine, wide latitude to operate, which is only afforded by mission command. Many systems will progress from a man “in” the loop (with the person integral to the functioning), to a man “on” the loop (where the person oversees the system and executes command by negation), and then to complete autonomy. In the future, fully autonomous systems may collaborate with one another across a given echelon and solve problems based on the parameters communicated to them as commander’s intent (swarms would fall into this category). However, it may go even further. Mission command calls for adaptable leaders at every level; what if at some level the leaders are no longer people but machines? It is not hard to imagine a forward deployed autonomous system tasking its own subordinates (fellow machines), particularly in scenarios where there is no available bandwidth to allow backhaul communications or enable detailed control from afar. In these cases, mission command will not just be the preferred option, it will be the only option. This reliance on mission command may be seen as a cultural shift, but in reality, it is a return to the Navy’s cultural roots.
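The progression from a man “in” the loop to a man “on” the loop can be made concrete with a small sketch. The toy Python loop below is purely illustrative (the `UnmannedAsset` class, the veto window, and the actions are invented for this example, not any real system): the asset proposes actions derived from commander’s intent and executes them unless a supervisor negates within a time window.

```python
# Toy sketch of "man on the loop" command by negation.
# All names and timings here are illustrative, not a real C2 API.
import queue

VETO_WINDOW_S = 0.5  # notional time a supervisor has to negate an action

class UnmannedAsset:
    def __init__(self, intent):
        self.intent = intent  # high-level goal, e.g. "patrol sector A"

    def propose_action(self):
        # In a real system this would come from onboard autonomy;
        # here we simply derive a placeholder action from the intent.
        return f"execute next leg of: {self.intent}"

def command_by_negation(asset, veto_queue):
    """Execute the asset's proposed action unless a veto arrives in time."""
    action = asset.propose_action()
    print(f"PROPOSED: {action}")
    try:
        veto = veto_queue.get(timeout=VETO_WINDOW_S)
        print(f"NEGATED by supervisor: {veto}")
        return False
    except queue.Empty:
        print(f"EXECUTING: {action}")  # silence from above is consent
        return True

vetoes = queue.Queue()
asset = UnmannedAsset("patrol sector A")
command_by_negation(asset, vetoes)          # no veto arrives -> executes
vetoes.put("hold position, contact ahead")  # supervisor objects in time
command_by_negation(asset, vetoes)          # veto consumed -> negated
```

The key design point is that the default is action, not inaction: the subordinate proceeds on intent unless expressly overruled, which is exactly what degraded or denied communications force upon the commander.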

Back to Basics

Culturally, the Navy should be well-suited to embrace the mission command model to employ autonomous systems. Traditionally, once a ship passed over the horizon there was little if any communication for extended periods of time due to technological limitations. This led to a culture of mission command: captains were given basic orders and an overall intent; the rest was up to them. Indeed, captains might act as ambassadors and conduct diplomacy and other business on behalf of the government in remote areas with little direct guidance.2 John Paul Jones himself stated that “it often happens that sudden emergencies in foreign waters make him [the Naval Officer] the diplomatic as well as the military representative of his country, and in such cases he may have to act without opportunity of consulting his civic or ministerial superiors at home, and such action may easily involve the portentous issue of peace or war between great powers.”3 This is not to suggest that autonomous systems will participate in diplomatic functions, but it does illustrate the longstanding Navy precedent for the autonomy of subordinate units.

Another factor in support of the Navy favoring mission command is that the physics of the operating environment may demand it. For example, the physical properties of the undersea domain prohibit direct, routine, high-bandwidth communication with submerged platforms. This is the case with submarines and is being applied to UUVs by extension. This has led to extensive development of autonomous underwater vehicles (AUVs) vice remotely operated ones; AUVs clearly favor mission command.

Finally, the Navy’s culture of decentralized command is the backbone of the Composite Warfare Commander (CWC) construct. CWC is essentially an expression of mission command. Just as technology (the telegraph cable, wireless, and global satellite communication) has afforded the means of detailed control and micromanagement, it has also increased the speed of warfighting, necessitating decentralized execution. Command by negation is the foundation of CWC, and has been ingrained in the Navy’s officer corps for decades. Extending this mindset to autonomous systems will be key to realizing their full capabilities.

Training Commanders

This raises the question: how does one train senior commanders who rose through the ranks during an age of continuous connectivity to thrive in a world of autonomous systems where detailed control is not an option? For a start, they could adopt the mindset of General Norman Schwarzkopf, who described how hard it was to resist interfering with his subordinates:

“I desperately wanted to do something, anything, other than wait, yet the best thing I could do was stay out of the way. If I pestered my generals I’d distract them:  I knew as well as anyone that commanders on the battlefield have more important things to worry about than keeping higher headquarters informed…”4

That said, even while restraining himself, at the height of OPERATION DESERT STORM his U.S. Central Command handled more than 700,000 telephone calls and 152,000 radio messages per day to coordinate the actions of subordinate forces. In contrast, during the Battle of Trafalgar in 1805, Nelson used only three general tactical flag-hoist signals to maneuver the entire British fleet.5

Commanders must learn to be satisfied with the ambiguity inherent in mission command. They must become comfortable clearly communicating their intent and mission requirements, whether tasking people or autonomous systems. Again, there isn’t a choice; the Navy’s adversaries are investing in A2/AD capabilities that explicitly target the means that make detailed control possible. Furthermore, the ambiguity and complexity of today’s operating environments prohibit “a priori” composition of complete and perfect instructions.

Placing commanders into increasingly complex and ambiguous situations during training will push them toward mission command, where they will have to trust subordinates closer to the edge to execute based on commander’s intent and their own initiative. General Dempsey, former Chairman of the Joint Chiefs of Staff, stressed training that presented commanders with fleeting opportunities and rewarded those who seized them, in order to encourage commanders to act in the face of uncertainty.

Familiarization training with autonomous systems could take place in large part via simulation, where commanders interact with the actual algorithms and rehearse at a fraction of the cost of executing a real-world exercise. In this setting, commanders could practice giving mission-type orders and translating them for machine understanding. They could employ their systems to failure, analyze where they went wrong, and learn to adjust their level of supervision over multiple iterations. This training wouldn’t be just a one-way evolution; the algorithms would also learn their commander’s preferences and thought process by finding patterns in his actions and thresholds in his decisions. Through this process, the autonomous system would understand even more about commander’s intent should it need to act alone in the future. If the autonomous system is to task its own robotic subordinates, that tasking algorithm (which will have incorporated what it has learned about how its commander commands) should also be demonstrated, so the commander understands how the system may act.
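One way such a simulator might infer a commander’s decision patterns is by estimating thresholds from past calls. The sketch below is a deliberately minimal illustration (the rehearsal data, the 0–1 “confidence” scale, and the `learn_threshold` helper are all hypothetical): it splits the difference between the most permissive abort and the most cautious approval observed so far.

```python
# Illustrative only: a trivial way a training simulator might infer a
# commander's engagement threshold from past approve/abort decisions.
# The scenario data and 0-1 confidence scale are invented for the example.

def learn_threshold(history):
    """history: list of (target_confidence, approved) pairs from rehearsals.
    Returns an estimated confidence above which this commander engages,
    or None if the data shows no contrast to learn from."""
    approved = [c for c, ok in history if ok]
    rejected = [c for c, ok in history if not ok]
    if not approved or not rejected:
        return None
    # Split the difference between the most permissive "no" and the
    # most cautious "yes" observed so far.
    return (max(rejected) + min(approved)) / 2

# Rehearsal iterations: this commander aborts low-confidence engagements.
rehearsals = [(0.35, False), (0.55, False), (0.70, True), (0.90, True)]
threshold = learn_threshold(rehearsals)
print(f"Estimated engagement threshold: {threshold:.3f}")  # 0.625
```

A fielded system would of course use far richer models than a single scalar threshold, but the principle is the same: repeated iterations turn the commander’s observed behavior into parameters the machine can fall back on when acting alone.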

With this in mind, while it may seem premature, consideration must be given to the fact that future autonomous systems may carry a detailed algorithmic model of their commander’s thought process, “understand” his intent, and “know” at least a piece of “the big picture.” As such, these future systems cannot simply be considered disposable assets performing the dumb, dirty, and dangerous work that keeps a human out of harm’s way. They will require significant anti-tamper capabilities to prevent an adversary from extracting or downloading this valuable information should they be captured or recovered by the enemy. Perhaps they could even be armed with algorithms to “resist” exploitation or give misleading information.

The Way Ahead

Above all, commanders will need to establish the same trust and confidence in autonomous systems that they have in manned systems and human operators.6 Commanders trust manned systems, even though they are far from infallible. This came to international attention with the 2015 airstrike on the Médecins Sans Frontières hospital in Kunduz, Afghanistan. As this event illustrated, commanders must acknowledge the potential for human error, put mitigation measures in place where they can, and then accept a certain amount of risk. In the future, advances in machine learning and artificial intelligence will yield algorithms that far exceed human processing capabilities. Autonomous systems will be able to sense, process, coordinate, and act faster than their human counterparts. However, trust in these systems will only come from time and experience, and the way to secure that is to mainstream autonomous systems into exercises. Initially these opportunities should be carefully planned and executed, not added as an afterthought. For example, including autonomous systems in a particular Fleet Battle Experiment solely to check a box that they were used raises the potential for negative training, where the observers see the technology fail due to ill-conceived employment. As there may be limited opportunities to “win over” the officer corps, this must be avoided. Successfully demonstrating the capabilities (and the legitimate limitations) of autonomous systems is critical. Increased use over time will ensure maximum exposure to future commanders, and will be key to widespread adoption and full utilization.

The Navy must return to its roots and rediscover mission command in order to fully leverage the potential of autonomous systems. While it may make commanders uncomfortable, it has deep roots in historic practice and is a logical extension of existing doctrine. General Dempsey wrote that mission command “must pervade the force and drive leader development, organizational design and inform material acquisitions.”7 Taking this to heart and applying it across the board will have profound and lasting impacts as the Navy sails into the era of autonomous systems.

Tim McGeehan is a U.S. Navy Officer currently serving in Washington. 

The ideas presented are those of the author alone and do not reflect the views of the Department of the Navy or Department of Defense.

References

[1] Dmitry Filipoff, Distributed Lethality and Concepts of Future War, CIMSEC, January 4, 2016, http://cimsec.org/distributed-lethality-and-concepts-of-future-war/20831

[2] Naval Doctrine Publication 6: Naval Command and Control, 1995, http://www.dtic.mil/dtic/tr/fulltext/u2/a304321.pdf, p. 9      

[3] Connell, Royal W. and William P. Mack, Naval Customs, Ceremonies, and Traditions, 1980, p. 355.

[4] Schwarzkopf, Norman, It Doesn’t Take a Hero: The Autobiography of General Norman Schwarzkopf, 1992, p. 523.

[5] Ibid 2, p. 4

[6] Greg Smith, Trusting Autonomous Systems: It’s More Than Technology, CIMSEC, September 18, 2015, http://cimsec.org/trusting-autonomous-systems-its-more-than-technology/18908     

[7] Martin Dempsey, Mission Command White Paper, April 3, 2012, http://www.dtic.mil/doctrine/concepts/white_papers/cjcs_wp_missioncommand.pdf

Featured Image: SOUTH CHINA SEA (April 30, 2017) Sailors assigned to Helicopter Sea Combat Squadron 23 run tests on the MQ-8B Fire Scout, an unmanned aerial vehicle, aboard littoral combat ship USS Coronado (LCS 4). (U.S. Navy photo by Mass Communication Specialist 3rd Class Deven Leigh Ellis/Released)

Chinese UAV Development and Implications for Joint Operations

By Brandon Hughes

Drone Diplomacy

On December 15, 2016, a United States Navy (USN) unmanned underwater vehicle (UUV) was seized by the Chinese People’s Liberation Army Navy (PLAN) about 80 miles from Subic Bay, Philippines (Global Times, December 17, 2016). The seizure prompted quick negotiations and an agreement to return the $150,000 research drone following complaints to Beijing. President-elect Donald Trump condemned the seizure on his Twitter feed, responding, “Keep it!”, further escalating the situation and casting uncertainty over the future of the U.S.-China relationship (Reuters, December 18, 2016). Almost immediately, the seemingly mundane deployment of UUVs and unmanned aerial vehicles (UAVs) in the South China Sea became a potential flashpoint in the ever-contentious territorial disputes.

Countering U.S. activities in the South China Sea, Beijing has proposed legislation requiring all foreign submersibles transiting China’s claimed territorial waters to travel on the surface or be subject to confiscation (China News Service, February 15, 2017). The proposed change to the 1984 China Maritime Traffic Safety Law echoes China’s East China Sea Air Defense Identification Zone (ADIZ), established in 2013. Codifying domestic maritime law adds a layer of claimed legitimacy in the event a UAV or UUV is captured while patrolling a disputed area. Assuming a more severe U.S. response is unlikely, Beijing may use the law as justification to reduce unmanned foreign Intelligence, Surveillance, and Reconnaissance (ISR) assets on its periphery, regardless of international opinion.

While demonizing foreign ISR activities, China continues to bolster its own ISR efforts for deployment in maritime disputes, foreign surveillance, and warfighting capacity. Advances in armed/unarmed and stealth UAVs will further integrate UAVs into the Chinese People’s Liberation Army (PLA) joint forces array. Advances such as satellite data-link systems not only extend the range of these assets, but also allow for a more seamless integration of command and control (C2). This further enhances relatively low cost and low risk surveillance mechanisms.

UAVs are already an emerging capability within the PLA, law enforcement, and civil agencies and are playing a more prominent role in operations. Real-world testing will refine the PLA doctrinal use of these systems. Control, direction of development, and interoperability in joint operations are all questions yet to be answered. Developing an understanding of how these systems are incorporated into the PLA force structure may give insight into developing doctrine and political considerations. A clear understanding of both may support a potential framework for de-escalating unmanned vehicle incidents between nations where China has interests.

Deployment

On January 20, 2017, the Chinese North Sea Fleet (NSF) received a distress call, relayed by the rescue center in Jiangsu Province, requesting help in the search and rescue of 13 crew members aboard a Chinese fishing boat that had sunk around 6 a.m. that morning. The PLAN NSF dispatched two warships, the ‘Suzhou’ and ‘Ji’an’, to the East China Sea to search for the crew of the lost fishing vessel, the Liaoda Zhongyu 15126. What made this search-and-rescue effort unique was the announcement that a surveillance UAV (type unknown) aided in the search.

The deployment of a UAV with two naval vessels, in coordination with a maritime rescue center, demonstrates the multi-functionality and capability of China’s UAVs. Additionally, it is likely the UAV was deployed from a non-naval platform due to the size of the helicopter deck and lack of a hangar on the ‘Suzhou’ and ‘Ji’an’, both Type 056/056A corvettes (Janes, November 3, 2016; Navy Recognition, March 18, 2013). This proof of concept highlights the interoperability of air, land, and sea assets coordinating for a common purpose. What remains unknown is where the UAV was launched, who controlled it, and whether it was using a line-of-sight (LOS) or extended control system.

China’s 40th Jiangdao-class (Type 056/056A) corvette shortly before being launched on 28 October at the Huangpu shipyard in Guangzhou. (fyjs.cn)

Peacetime operations such as this validate control and communication hand-offs and will help integrate intelligence platforms, such as the PLAN’s newest electronic surveillance ship, the Kaiyangxing (开阳星), vastly expanding the reach of Chinese ISR. Additionally, integration of satellite-linked communication packages utilizing China’s domestic satellite navigation constellation, known as Beidou or Compass, will continue to improve UAV navigation and targeting systems. These improved navigation and satellite aids will be integrated into existing UAV datalink systems and developed with future ISR systems in mind.

Command Guidance

The use of UAVs for military and ISR purposes can have unintended political and military consequences. The PLA command structure has always centered on centralization to retain political power over the military. It is fair to assume that the guidelines for deploying UAVs on strategic intelligence missions are developed at a high level. On November 26, 2015, President Xi Jinping rolled out one of many updates to the Soviet-style military system as part of a broader effort to make the PLA more efficient. According to Yue Gang, a retired Colonel in the PLA’s General Staff Department, placing all branches of the military under a “Joint Military Command” was the “biggest military overhaul since the 1950s.” On February 1, 2016, a few months after Yue Gang’s comments, China’s Defense Ministry spokesman Yang Yujun stated that the PLA was consolidating seven military regions into five theater commands, a move likely to streamline C2 (China Military Online, February 2, 2016). The theater commands are presided over by the Central Military Commission for overall military administration (see China Brief, February 4, 2016 and February 23, 2016).

Centralizing and reducing the number of commands will allow each individual military component to focus on its own training objectives (China Military Online, February 2, 2016). This concept promotes component independence to enhance capability, but does not address efforts to enhance the integration of forces in joint military exercises. The logistical and financial burden of large-scale exercises naturally limits how many each region can conduct per year. What is not clear, yet important to understand for a high-end conflict, is how joint operations between military regions will be executed. Chinese Defense Ministry spokesman Yang Yujun added that the new structure gives the commands more decision-making power in responding to threats and requesting CMC support (China Military Online, February 2, 2016).

Utilizing UAVs in regional operations to patrol disputed regions suggests that tactical control could be exercised at the highest level by staff at a joint command center, but is more likely delegated to a lower-echelon headquarters element closer to the front lines. These lower-tiered units are likely bound by strict left and right limits on where they patrol. Advances in simultaneous satellite data-link systems will allow for a more seamless handoff of ISR/strike assets between commands in a robust communications environment. The fielding of enhanced and interoperable satellite communications is likely to bolster the deployment of UAVs and further integrate them into PLA doctrine by supporting the “offshore waters defense” and “open seas protection” missions outlined in the PLA’s 2015 White Paper on Military Strategy (China Military Online, May 26, 2015).

Direct operational control of the PLA’s UAVs is generally given to the commander of the next higher echelon or to a commander on the ground. UAV technicians depicted on Chinese military websites tend to hold ranks ranging from E-5/OR-5 (Sergeant) to O-2/OF-1 (First Lieutenant). This is similar to certain units of the United States Army, where platforms are directly controlled by enlisted personnel and warrant officers. However, just as in the U.S., guidance and direction is usually “tasked down” by a higher echelon, and UAVs with a strike package will likely be controlled or employed by officers under orders from above.

UAV units in the PLA are likely attached to a reconnaissance or communications company. Likewise, the PLA Air Force (PLAAF) and PLA Navy (PLAN) will likely have UAV-specific units. Advancements in communication will enable various command levels (i.e., company, battalion, brigade) to simultaneously pull UAV feeds and give guidance to the operator. Training exercises of various sizes indicate that UAV control is delegated down to the lowest levels of command, but under extremely strict guidance. Additionally, the authority to deploy or strike is likely held at the regional command level or higher. Specific rules of engagement are unknown, but those authorities would likely be refined through trial and error in a high-intensity conflict.

Interoperability

Communications infrastructure improvements are evident in the development of over-the-horizon satellite datalink programs and communication relays. The CH-5 “Rainbow” (Cai Hong) drone, for example, resembles the U.S. General Atomics MQ-9 “Reaper” and is designed to function with data systems capable of integrating with previous CH-4 and CH-3 models (Global Times, November 3, 2016). The newest model is capable of a 250 km line-of-sight datalink range, extending to 2,000 km when linked through a secure satellite (Janes, November 7, 2016).

It is likely that improvements in interoperability will be shared among service branches. Recent developments in Ku-Band UAV data-link systems, highlighted during the 11th China International Aviation and Aerospace Exposition in November 2016, will further synchronize intelligence sharing and over-the-horizon control of armed and unarmed UAVs (Taihainet.com, November 2, 2016).

PLA Signal Units already train on implementing UAV communication relays (China Military Online, April 8, 2016). Exercises like these indicate a desire to increase the interoperability in a joint environment. UAVs with relay packages will improve functionality beyond ISR & strike platforms. Units traversing austere environments or maritime domains could utilize UAV coverage to extend the range of VHF or HF radios to direct artillery or missile strikes from greater distances. If keyed to the same encrypted channels, these transmissions could be tracked at multiple command levels.
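The reach such an airborne relay adds can be ballparked with the standard 4/3-earth radio-horizon rule of thumb, d ≈ 4.12·(√h₁ + √h₂) km with antenna heights in meters. The sketch below applies that formula; the antenna heights and UAV altitude are illustrative assumptions, not actual PLA figures.

```python
# Rough radio line-of-sight estimate showing why a UAV relay extends VHF
# reach. Uses the common 4/3-earth approximation d = 4.12*(sqrt(h1)+sqrt(h2))
# km, with antenna heights in meters. Altitudes below are illustrative.
import math

def radio_horizon_km(h1_m, h2_m=0.0):
    """Approximate radio line-of-sight distance between two antennas."""
    return 4.12 * (math.sqrt(h1_m) + math.sqrt(h2_m))

ship_mast = 20.0    # meters; notional mast-mounted VHF antenna
uav_relay = 5000.0  # meters; notional medium-altitude UAV

direct = radio_horizon_km(ship_mast, ship_mast)       # ship to ship
relayed = 2 * radio_horizon_km(uav_relay, ship_mast)  # ship -> UAV -> ship

print(f"Direct ship-to-ship horizon: {direct:.0f} km")
print(f"Via UAV relay at 5,000 m:    {relayed:.0f} km")
```

Under these assumptions the relay multiplies usable VHF range more than tenfold, which is the practical payoff of putting a relay package on a loitering UAV.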

Joined with a UAV satellite datalink, ground or air communications could be relayed from thousands of kilometers away. At the same time, a Tactical Operations Center (TOC) could directly receive transmissions before passing UAV control to a ground force commander. In a South China Sea or East China Sea contingency, UAVs could link unofficial maritime militias (dubbed “Little Blue Men”) via VHF to Chinese Coast Guard vessels or Navy ships. These messages could also be relayed to PLA Rocket Force units in the event of an anti-access/area denial (A2/AD) campaign.

Capping off China’s already enormous communication infrastructure is the implementation of dedicated fiber-optic cables, most likely linking garrisoned units and alternate sites to leadership nodes. Emerging technologies such as quantum encryption, for both fiber-optic and satellite-based communication platforms, could enable secure, uninterrupted communication during a military scenario (The Telegraph, November 7, 2014; Xinhua, August 16, 2016).

Functionality 

Based on the use of Chinese UAVs overseas and in recent exercises, UAVs will continue to be utilized on military deployments in the South China Sea for patrol and ISR support. In the event of a contingency operation or the implementation of an A2/AD strategy, UAVs would likely be used for targeting efforts, battle damage assessments, and small-scale engagements. Against a low-tech opponent, the UAV offers an asymmetric advantage. However, the use of UAVs for anything other than ISR would be heavily contested by more modern powers: UAVs are generally slow, loud, and observable by modern radar. Many larger UAVs can carry EW packages, although there is little information on how their datalink systems handle EW interference. Ventures in stealth technology, such as the “Anjian” (暗剑, Dark Sword) and “Lijian” (利剑, Sharp Sword) projects, would increase the survivability and first-strike capability of Beijing’s UAVs if deployed in a contingency operation (Mil.Sohu.com, November 24, 2013). However, a large-scale deployment of stealth UAV assets is not likely in the near future due to cost and material constraints.

To reduce the risk of high-intensity engagements, China may expand its reliance on UAVs to harass U.S., Taiwanese, Japanese, Philippine, and Vietnamese vessels. Additionally, UAVs may be utilized abroad in the prosecution of transnational threats. So far, China has adhered to a no-strike policy against individuals, although a strike was reportedly considered against a drug kingpin hiding out in northeast Myanmar (Global Times, February 19, 2016). The “Rainbow/Cai Hong” variants and the “Yilong/Pterodactyl,” the latter made by the Chengdu Aircraft Design & Research Institute (CADI), represent some of the better-known commercial ventures used by the PLAAF and sold on the global market. These variants are often used for ISR in counter-insurgency and counterterrorism operations (The Diplomat, October 6, 2016; Airforce-technology.com, no date).

Strike capability, aided by satellite datalink systems, is another growing facet of China’s UAV programs (Popular Science, June 8, 2016). In late 2015, the Iraqi army released images from a UAV strike against an insurgent element using the Chinese-made export variant “Rainbow 4” (彩虹 4), operated from a ground station running Windows XP (Sohu.com, January 2, 2016; Popular Science, December 15, 2015). PLA UAVs already patrol border regions, conduct maritime patrols, and assist in geological surveys and disaster relief.

The arrival of off-the-shelf UAVs contributes to the growing integration of dual-use platforms. Technology and imagination are the only limits to the growing UAV industry. Additionally, exports of high-end military UAVs will only continue to grow, as they are cheaper than U.S. models and increasing in capability. The profits from these sales will certainly aid research and development efforts to create a near-peer equivalent of U.S. systems. For an African nation battling rebels (e.g., Nigeria) or an established U.S. ally in the Middle East (e.g., Jordan), the purchase of UAVs at a relatively low price builds goodwill and provides an operational environment in which to refine each platform’s capabilities (The Diplomat, October 6, 2016; The Daily Caller, December 2, 2016).

Conclusion

UAVs for military operations are not new; however, improvements in lethal payloads, targeting, and ISR capabilities will change the roles in which UAVs are utilized. Considering China's own drone diplomacy, the deployment of UAVs is as much a political statement as a tactical capability. State-run media has highlighted the successes of the drone program but has not been clear about who holds operational control of UAVs, or at what command level that control is granted. Given Beijing's standing policy against lethal targeting, release authority is most likely reserved to the Central Military Commission, or even President Xi himself.

The extent to which doctrine has been developed for a high- or low-intensity conflict is still unclear. The advent of satellite datalinks and communication relays means the tactical control of UAVs may be seamlessly transferred between commanders. Rapidly developing UAVs will continue to be integrated into the joint force, but this must be done as part of an overall doctrine and C4ISR infrastructure. Failure to exercise its UAVs in a joint environment will affect combined arms operations and reduce the PLA's ability to synchronize modern technology with centralized command decisions and rigid doctrine.

Brandon Hughes is the founder of FAO Global, a specialized research firm, and the Senior Regional Analyst-Asia for Planet Risk. He has previously worked with the U.S. Army, the Carnegie-Tsinghua Center for Global Policy, and Asia Society. He is a combat veteran and has conducted research on a wide variety of regional conflicts and foreign affairs. Brandon holds a Master of Laws in International Relations from Tsinghua University in Beijing and has extensive overseas experience focused on international security and U.S.-China relations. He can be reached via email at DC@FAOGLOBAL.com.

Featured Image: CASC’s CH-5 strike-capable UAV made its inaugural public appearance at Airshow China 2016 (IHS/Kelvin Wong)

Fast Followers, Learning Machines, and the Third Offset Strategy

The following article originally featured in National Defense University’s Joint Force Quarterly and is republished with permission. Read it in its original form here.

By Brent Sadler

It is change, continuing change, inevitable change, that is the dominant factor in society today. No sensible decision can be made any longer without taking into account not only the world as it is, but the world as it will be. . . . This, in turn, means that our statesmen, our businessmen, our everyman must take on a science fictional way of thinking.

—Isaac Asimov

Today, the Department of Defense (DOD) is coming to terms with trends forcing a rethinking of how it fights wars. One trend is proliferation of and parity by competitors in precision munitions. Most notable are China’s antiship ballistic missiles and the proliferation of cruise missiles, such as those the Islamic State of Iraq and the Levant claimed to use to attack an Egyptian ship off the Sinai in 2014. Another trend is the rapid technological advances in artificial intelligence (AI) and robotics that are enabling the creation of learning machines.

Failure to adapt and lead in this new reality puts at risk the U.S. ability to respond effectively and control the future battlefield. However, budget realities make it unlikely that today's DOD could spend its way ahead of these challenges or field new systems fast enough. Consider that F-35 fighter development is 7 years behind schedule and, at $1.3 trillion, is $163 billion over budget.1 By contrast, China produced and test-flew its first fifth-generation fighter (J-20) within 2 years. These pressures create urgency to find a cost-effective response through emergent and disruptive technologies that could ensure the U.S. conventional deterrent advantage—in other words, the so-called Third Offset Strategy.

Unmanned Combat Air System X-47B demonstrator flies near aircraft carrier USS George H.W. Bush, first aircraft carrier to successfully catapult launch unmanned aircraft from its flight deck, May 14, 2013 (U.S. Navy/Erik Hildebrandt)

Narrowing Conventional Deterrence

In 1993, Andrew Marshall, Director of Net Assessment, stated, “I project a day when our adversaries will have guided munitions parity with us and it will change the game.”2 On December 14, 2015, Deputy Secretary of Defense Robert Work announced that day’s arrival when arguing for a Third Offset during comments at the Center for a New American Security.3

An offset seeks to leverage emerging and disruptive technologies in innovative ways in order to prevail in Great Power competition. A Great Power is understood to be a rational state seeking survival through regional hegemony with global offensive capabilities.4 The First Offset Strategy in the 1950s relied on tactical nuclear superiority to counter Soviet numerical conventional superiority. As the Soviets gained nuclear parity in the 1960s, a Second Offset in the 1970s centered on precision-guided munitions and stealth technologies to sustain technical overmatch, conventional deterrence, and containment for another quarter century. The Third Offset, like previous ones, seeks to deliberately change an unattractive Great Power competition, this time with China and Russia, to one more advantageous. This requires addressing the following challenges.

Fast Followers. Russia and China have been able to rapidly gain and sustain near-parity by stealing and copying others’ technologies for their own long-range precision capabilities, while largely pocketing developmental costs. Lateral thinking5 is required to confound these Fast Followers, as Apple used with Microsoft when it regained tech-sector leadership in the early 2000s.6

Hybrid Warfare. Russia’s actions in Crimea and ongoing activities in Eastern Ukraine indicate both that Russia is undeterred and that it was successful in coordinating asymmetric and unconventional tactics across multiple domains.

Narrowing Conventional Advantage. The loss of the precision-munitions advantage increases cost for U.S. intervention, thus reducing deterrence and inviting adventurism. Recent examples include Russian interventions (Georgia, Ukraine, Syria) and increasingly coercive Chinese activities in the East and South China seas, especially massive island-building in the South China Sea since 2014.

Persistent Global Risks from Violent Extremists. While not an existential threat, left unchecked, violent extremism is inimical to U.S. interests as it corrodes inclusive, open economies and societies. As a long-term ideological competition, a global presence able to monitor, attack, and attrite violent extremist networks is required.

In response to these challenges, two 2015 studies are informing DOD leadership on the need for a new offset: the Defense Science Board summer study on autonomy and the Long-Range Research and Development Planning Program. From these studies, Deputy Secretary Work has articulated five building blocks of a new offset:

  • autonomous deep-learning systems
  • human-machine collaboration
  • assisted human operations
  • advanced human-machine combat teaming
  • network-enabled semi-autonomous weapons.

Central to all are learning machines that, when teamed with a person, provide a potential prompt jump in capability. Technological advantages alone, however, could prove chimerical as Russia and China are also investing in autonomous weapons, making any U.S. advantage gained a temporary one. In fact, Russia’s Chief of the General Staff, General Valery Gerasimov, predicts a future battlefield populated with learning machines.7

A Third Offset Strategy could achieve a qualitative edge and ensure conventional deterrence relative to Fast Followers in four ways: One, it could provide U.S. leaders more options along the escalation ladder. Two, a Third Offset could flip the cost advantage to defenders in a ballistic and cruise missile exchange; in East Asia this would make continuation of China’s decades-long investment in these weapons cost prohibitive. Three, it could have a multiplicative effect on presence, sensing, and combat effectiveness of each manned platform. Four, such a strategy could nullify the advantages afforded by geographic proximity and being the first to attack.

Robot Renaissance

In 1997, IBM’s Deep Blue beat chess champion Garry Kasparov, marking an inflection point in the development of learning machines. Since then, development of learning machines has accelerated, as illustrated by Giraffe, which taught itself how to play chess at a master’s level in 72 hours.8 Driving this rapid development have been accelerating computer-processing speeds and miniaturization. In 2011, at the size of 10 refrigerators, the super-computer Watson beat two champions of the game show Jeopardy. Within 3 years, Watson was shrunk to the size of three stacked pizza boxes—a 90-percent reduction in size along with a 2,700-percent improvement in processing speed.9 Within a decade, computers likely will match the massive parallel processing capacity of the human brain, and these machines will increasingly augment and expand human memory and thinking much like cloud computing for computers today, leading to accelerating returns in anything that can be digitized.10 This teaming of man and machine will set the stage for a new renaissance of human consciousness as augmented by learning machines—a Robot Renaissance.11 But man is not destined for extinction and will remain part of the equation; as “freestyle chess” demonstrates, man paired with computers utilizing superior processes can prevail over any competitor.12

Augmenting human consciousness with learning machines will usher in an explosion in creativity, engineering innovation, and societal change. This will in turn greatly impact the way we conceptualize and conduct warfare, just as the Renaissance spurred mathematical solutions to ballistic trajectories, metallurgy, and engineering for mobile cannons. Such a future is already being embraced. For example, Bank of America and Merrill Lynch recently concluded that robotics and AI—learning machines—will define the next industrial revolution and that the adoption of this technology is a foregone conclusion. Their report concludes that by 2025 learning machines will be performing 45 percent of all manufacturing versus 10 percent today.13 It would be a future of profound change and peril and was the focus of the 2016 Davos Summit whose founder, Klaus Schwab, calls the period the Fourth Industrial Revolution.14 As the Industrial Revolution demonstrated, the advantage will be to the early adopter, leaving the United States little choice but to pursue an offset strategy that leverages learning machines.

Garry Kasparov, chess grandmaster and former world champion, speaking at Turing centennial conference at Manchester, June 25, 2012 (Courtesy David Monniaux)

Advantages of Man-Machine Teaming

Learning machines teamed with manned platforms enabled by concepts of operations will be a key element of the Third Offset Strategy. Advantages of this approach include:

  • Speed Faster than Adversaries. Staying inside an adversary’s OODA (observe, orient, decide, act) loop necessitates learning machines that are able to engage targets at increasing speed, which diminishes direct human control.15
  • Greater Combat Effect per Person. As extensions of manned platforms, teaming increases the combat effect per person through swarm tactics as well as big data management. Moreover, augmenting the manned force with autonomous systems could mitigate deployment costs, which have increased 31 percent since 2000 and are likely unsustainable under current constructs.16
  • Less Human Risk. Reduced risk to manned platforms provides more options along the escalation ladder to commanders and allows a more forward and pervasive presence. Moreover, autonomous systems deployed in large numbers will have the long-term effect of mitigating relative troop strengths.
  • High-Precision, Emotionless Warfare. Learning machines provide an opportunity for battlefield civility by lessening death and destruction with improved precision and accuracy. Moreover, being non-ethical and unemotional, they are not susceptible to revenge killings and atrocities.
  • Hard to Target. Learning machines enable disaggregated combat networks to be both more difficult to target and more fluid in attack. Some capabilities (for example, cyber) could reside during all phases of a conflict well within a competitor’s physical borders, collecting intelligence while also ready to act like a “zero-day bomb.”17
  • Faster Acquisition and Improvement. Incorporation of learning machines in design, production, and instantaneous sharing of learning across machines would have a multiplicative effect. However, achieving such benefits requires overcoming proprietary constraints such as those encountered with the Scan Eagle unmanned vehicle if better intra-DOD innovation and interoperability are to be achieved.

Realizing these potential benefits requires institutional change in acquisition and a dedicated cadre of roboticists. However, pursuing a Third Offset Strategy is not without risks.

Third Offset Risks

Fielding learning machines presents several risks, as well as technical and institutional barriers. The risks include the following challenges.

Cyber Intrusion and Programming Brittleness. DOD relies on commercial industry to develop and provide it with critical capabilities. This situation provides some cost savings while presenting an Achilles' heel for cyber exploitation during fabrication and in the field. One avenue for attack is through the complexity of programming, which leads to programming brittleness, or seams and back doors causing system vulnerabilities.18 Another is through the communications vital to proper human control. Additionally, swarm tactics involving teams of machines networking independently of human control on a near-continuous basis could further expose them to attack and manipulation.19 Mitigating such threats while staying inside an adversary's accelerating OODA loop would drive increasing autonomy and decreasing reliance on communications.20

Proliferation and Intellectual Insecurity. The risk of proliferation and Fast Followers to close technological advantage makes protecting the most sensitive elements of learning machines an imperative. Doing so requires addressing industrial espionage and cyber vulnerabilities in the commercial defense industry, which will require concerted congressional and DOD action.

Unlawful Use. As competitors develop learning machines, they may be less constrained and ethical in their employment. Nonetheless, the international Law of Armed Conflict applies, and does not preclude employing learning machines on the battlefield in accordance with jus in bello—the legal conduct of war. Legally, learning machines would have to pass the same tests as any other weapons; their use must be necessary, discriminate, and proportional against a military objective.21 A key test for learning machines is discrimination; that is, the ability to discern noncombatants from targeted combatants while limiting collateral damage.22

Unethical War. When fielded in significant numbers, learning machines could challenge traditions of jus ad bellum—criteria regarding decisions to engage in war. That is, by significantly reducing the cost in human life to wage war, the decision to wage it becomes less restrictive. Such a future is debatable, but as General Paul J. Selva (Vice Chairman of the Joint Chiefs of Staff) suggested at the Brookings Institution on January 21, 2016, there should be an international debate on the role of autonomous weapons systems and jus ad bellum implications.

A New Fog of War. Lastly, the advent of learning machines will give rise to a new fog of war emerging from uncertainty in a learning machine's AI programming. It is a little unsettling that a branch of AI popular in the late 1980s and early 1990s was called "fuzzy logic"; a machine's ability to alter its own programming represents a potential loss of control and a weakening of accountability.

Seven teams from DARPA’s Virtual Robotics Challenge continue to develop and refine ATLAS robot, developed by Boston Dynamics (DARPA)

Third Offset Barriers

Overcoming the barriers to a Third Offset Strategy requires advancing key foundational technologies, adjustments in acquisition, and training for man–learning machine interaction.

Man-Machine Interaction. Ensuring proper human interface with learning machines, and the proper setting of parameters for a given mission employing them, requires a professional cadre of roboticists. As with human communication, failure to appropriately command and control learning machines could be disastrous. This potential was illustrated in the movie 2001: A Space Odyssey, in which the HAL 9000 computer resolved a dilemma of conflicting orders by killing its human crew. Ensuring an adequately trained cadre is in place as new systems come online requires building the institutional bedrock on which these specialists are trained. Because it will take several years to build such a cadre, it is perhaps the most pressing Third Offset investment.

Trinity of Robotic Capability. Gaining a sustainable and significant conventional advantage through learning machines requires advances in three key areas. This trinity comprises high-density energy sources, sensors, and massive parallel processing capacity. Several promising systems have failed because of weaknesses in one or more of these core capabilities. Fire Scout, a Navy autonomous helicopter, failed largely due to limited endurance. The Army and Marine Corps Big Dog was terminated because its noisy gasoline engine gave troop positions away. Sensor limitations undid Boomerang, a counter-sniper robot with limited ability to discern hostiles in complex urban settings.23

Agile Acquisition Enterprise. As technological challenges are overcome, any advantage earned would be transitory unless acquisition processes adapt in several key ways. One way is to implement continuous testing and evaluation to monitor the evolving programming of learning machines and ensure the rapid dissemination of learning across the machine fleet. A second way is to broaden the number of promising new capabilities tested while more quickly determining which ones move to prototype. A third way is to more rapidly move prototypes into the field. Such changes would be essential to stay ahead of Fast Followers.

While acquisition reforms are being debated in Congress, fielding emerging and disruptive technologies would need to progress regardless.24 However, doing both provides a game-changing technological leap at a pace that can break today’s closely run technological race—a prompt jump in capability.

Chasing a Capability Prompt Jump

Actualizing a nascent Third Offset Strategy in a large organization such as DOD requires unity of effort. One approach would be to establish a central office empowered to ensure coherency in guidance and oversight of resource decisions so that investments remain complementary. Such an office would build on the legacy of the Air Sea Battle Office, Joint Staff’s Joint Concept for Access and Maneuver in the Global Commons, and Strategic Capabilities Office (SCO). Therefore, a central office would need to be resourced and given authority to direct acquisition related to the Third Offset, develop doctrine, standardize training, and conduct exercises to refine concepts of operation. First steps could include:

  • Limit or curtail proprietary use in Third Offset systems while standardizing protocols and systems for maximum cross-Service interoperability.
  • Leverage legacy systems initially by filling existing capacity gaps. SCO work has been notable in pursuing rapid development and integration of advanced low-cost capabilities into legacy systems. This approach extends the lethality of legacy systems while complicating competitors’ countermeasures. Examples include shooting hypersonic rounds from legacy Army artillery and the use of digital cameras to improve the accuracy of small-diameter bombs.25 The Navy could do this by leveraging existing fleet test and evaluation efforts, such as those by Seventh Fleet, and expanding collaboration with SCO. An early effort could be maturing the Unmanned Carrier-Launched Airborne Surveillance and Strike system, currently being developed for aerial refueling, into the full spectrum of operations.26
  • Standardize training and concepts of operations for learning machines and their teaming with manned platforms. Early efforts should include formally establishing a new subspecialty of roboticist and joint exercises dedicated to developing operational concepts of man-machine teaming. Promising work is being done at the Naval Postgraduate School, which in the summer of 2015 demonstrated the ability to swarm up to 50 unmanned systems at its Advanced Robotic Systems Engineering Laboratory and should inform future efforts.
  • Direct expanded investment in the trinity of capabilities—high-density energy sources, sensors, and next-generation processors. The DOD Defense Innovation Initiative is building mechanisms to identify those in industry advancing key technologies, and will need to be sustained as private industry is more deeply engaged.

DOD is already moving ahead on a Third Offset Strategy, and it is not breaking the bank. The budget proposal for fiscal year 2017 seeks a significant but manageable $18 billion toward the Third Offset, with $3 billion devoted to man-machine teaming, over the next 5 years; the $3.6 billion committed in 2017 equates to less than 1 percent of the annual $582.7 billion defense budget.27 As a first step, this funds initial analytical efforts in wargaming and modeling and begins modest investments in promising new technologies.

Conclusion

Because continued U.S. advantage in conventional deterrence is at stake, resources and senior leader involvement must grow to ensure the success of a Third Offset Strategy. It will be critical to develop operational learning machines, associated concepts of operations for their teaming with people, adjustments in the industrial base to allow for more secure and rapid procurement of advanced autonomous systems, and lastly, investment in the trinity of advanced base capabilities—sensors, processors, and energy.

For the Navy and Marine Corps, the foundation for such an endeavor resides in the future design section of A Cooperative Strategy for 21st Century Seapower, supported by the four lines of effort in the current Chief of Naval Operations’ Design for Maintaining Maritime Superiority. A promising development has been the establishment of OpNav N99, the unmanned warfare systems directorate on the Navy staff, and the naming of a Deputy Assistant Secretary of the Navy for Unmanned Systems, both dedicated to developing capabilities key to a Third Offset Strategy. This should be broadened to include similar efforts in all the Services.

However, pursuit of game-changing technologies is only sustainable by breaking out of the accelerating pace of technological competition with Fast Followers. A Third Offset Strategy could do this and could provide the first adopter outsized advantages. Realistically, achieving this requires integrating increasing layers of autonomy into legacy force structure as budgets align to new requirements and personnel adapt to increasing degrees of learning machine teaming. The additive effect of increasing autonomy could fundamentally change warfare and provide significant advantage to whoever successfully teams learning machines with manned systems. This is not a race we are necessarily predestined to win, but it is a race that has already begun, with strategic implications for the United States. JFQ

Captain Brent D. Sadler, USN, is a Special Assistant to the Navy Asia-Pacific Advisory Group.

Notes

1 CBS News, 60 Minutes, “The F-35,” February 16, 2014.

2 Deputy Secretary of Defense Bob Work, speech delivered to a Center for a New American Security Defense Forum, Washington, DC, December 14, 2015, available at <www.defense.gov/News/Speeches/Speech-View/Article/634214/cnas-defense-forum>.

3 Ibid.

4 John J. Mearsheimer, The Tragedy of Great Power Politics (New York: Norton, 2014).

5 Lateral thinking, a term coined by Edward de Bono in 1967, means indirect and creative approaches using reasoning not immediately obvious and involving ideas not obtainable by traditional step-by-step logic.

6 Shane Snow, Smartcuts: How Hackers, Innovators, and Icons Accelerate Success (New York: HarperCollins, 2014), 6, 116.

7 Russia’s Chief of the General Staff, General Valery Gerasimov, stated in a February 27, 2013, article: “Another factor influencing the essence of modern means of armed conflict is the use of modern automated complexes of military equipment and research in the area of artificial intelligence. While today we have flying drones, tomorrow’s battlefields will be filled with walking, crawling, jumping, and flying robots. In the near future it is possible a fully robotized unit will be created, capable of independently conducting military operations.” See Mark Galeotti, “The ‘Gerasimov Doctrine’ and Russian Non-Linear War,” In Moscow’s Shadows blog, available at <https://inmoscowsshadows.wordpress.com/2014/07/06/the-gerasimov-doctrine-and-russian-non-linear-war/>. For Gerasimov’s original article (in Russian), see Military-Industrial Kurier 8, no. 476 (February 27–March 5, 2013), available at <http://vpk-news.ru/sites/default/files/pdf/VPK_08_476.pdf>.

8 “Deep Learning Machine Teaches Itself Chess in 72 Hours, Plays at International Master Level,” MIT Technology Review, September 14, 2015, available at <www.technologyreview.com/view/541276/deep-learning-machine-teaches-itself-chess-in-72-hours-plays-at-international-master/>.

9 “IBM Watson Group Unveils Cloud-Delivered Watson Services to Transform Industrial R&D, Visualize Big Data Insights and Fuel Analytics Exploration,” IBM News, January 9, 2014, available at <http://ibm.mediaroom.com/index.php?s=43&item=1887>.

10 Ray Kurzweil, How to Create a Mind: The Secret of Human Thought Revealed (New York: Penguin Books, 2012), 4, 8, 125, 255, 280–281.

11 A learning machine, per Arthur Samuel’s 1959 definition of machine learning, is a computer with the ability to learn without being explicitly programmed.

12 Erik Brynjolfsson and Andrew McAfee, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (New York: W.W. Norton, 2014), 188.

13 Michael Hartnett et al., Creative Disruption (New York: Bank of America and Merrill Lynch, April 2015), available at <www.bofaml.com/content/dam/boamlimages/documents/articles/D3_006/11511357.pdf>.

14 Klaus Schwab, The Fourth Industrial Revolution (Geneva: World Economic Forum, 2016).

15 Michael N. Schmitt, “War, Technology and the Law of Armed Conflict,” International Law Studies, vol. 82 (2006), 137–182.

16 Growth in DOD’s Budget from 2000 to 2014 (Washington, DC: Congressional Budget Office, November 2014).

17 Richard Clarke, Cyber War: The Next Threat to National Security and What to Do About It (New York: HarperCollins, 2010), 163–166.

18 Ibid., 81–83.

19 Katherine D. Mullens et al., An Automated UAV Mission System (San Diego, CA: SPAWAR Systems Center, September 2003), available at <www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA422026>.

20 Armin Krishnan, Killer Robots: Legality and Ethicality of Autonomous Weapons (Farnham, United Kingdom: Ashgate, 2009).

21 James E. Baker, In the Common Defense: National Security Law for Perilous Times (Cambridge: Cambridge University Press, 2007), 215–216.

22 “Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I), 8 June 1977,” Article 48, 57.4 and 51.4; Yoram Dinstein, The Conduct of Hostilities under the Law of International Armed Conflict, 2nd ed. (New York: Cambridge University Press, 2010), 62–63.

23 Schmitt.

24 House Armed Services Committee, Acquisition Reform: Experimentation and Agility, Hon. Sean J. Stackley, Assistant Secretary of the Navy for Research, Development, and Acquisition, 114th Cong., January 7, 2016, available at <http://docs.house.gov/meetings/AS/AS00/20160107/104314/HHRG-114-AS00-Wstate-StackleyS-20160107.pdf>.

25 Sam LaGrone, “Little Known Pentagon Office Key to U.S. Military Competition with China, Russia,” U.S. Naval Institute News, February 2, 2016.

26 Christopher P. Cavas, “U.S. Navy’s Unmanned Jet Could Be a Tanker,” Defense News, February 1, 2016, available at <www.defensenews.com/story/defense/naval/naval-aviation/2016/01/31/uclass-ucasd-navy-carrier-unmanned-jet-x47-northrop-boeing/79624226/>.

27 Aaron Mehta, “Defense Department Budget: $18B Over FYDP for Third Offset,” Defense News, February 9, 2016, available at <www.defensenews.com/story/defense/policy-budget/budget/2016/02/09/third-offset-fy17-budget-pentagon-budget/80072048/>.

Featured Image: Boston Dynamics’ Atlas robot. (Boston Dynamics)