Category Archives: Cyber War

Threats, risks, and players in the cyber realm.

Why Are Our Ships Crashing? Competence, Overload, and Cyber Considerations

By Chris Demchak, Keith Patton, and Sam J. Tangredi

These are exclusively the personal views of the authors and do not necessarily reflect the views of the U.S. Naval War College or the Department of Defense.

Security researchers do not believe in coincidences. In the past few weeks, a very rare event – a U.S. Navy destroyer colliding fatally with a huge commercial vessel – happened twice in a short period of time. These incidents followed a collision involving a cruiser off Korea and the grounding of a minesweeper off the Philippines, and have now resulted in the relief of a senior Seventh Fleet admiral. Surface warfare officers (SWOs) look to weather, sensors, watchstanders, training requirements, leadership and regulations (COLREGS) as possible contributing factors to the collisions.  

Cybersecurity scholars, in contrast, first look to the underlying complex technologies the crew trusts to determine the proper course of action. With advancements in navigational technology, computer-aided decision making, and digital connectivity, it is human nature that seafarers become more dependent on electronic aids to navigation and more trusting of the data those systems provide. While the U.S. Navy emphasizes verification of this data by visual and traditional navigation means, the reality is that social acceptance of the validity of electronic data is a feature of modern culture. The U.S. Navy, with an average age in the early 20s for sea-going sailors, is not immune from this effect. But what if the data is invalid or, as an extreme possibility, subject to outside manipulation?

In directing a pause for all warship crews (not currently conducting vital missions) during which to conduct assessments and additional training, the Chief of Naval Operations – Admiral John Richardson – was asked whether the Navy was considering cyber intrusion as a possible cause. The CNO responded that concerning cyberattack or intrusion, “the review will consider all possibilities.”

The truth could be that only mundane factors contributed to the accidents, but as an intellectual thought experiment, what follows are explanations that follow the logic of open-source information. The first set of explanations focuses on the human in the loop and argues that the fundamental cause is likely human miscalculation rather than intentional distortion of data. The second focuses on the criticality of accurate data provided to humans or their technologies. The pattern of these incidents – in frequency, location, and effect – lacks the ‘normalness’ expected of the ‘normal accidents’ of complex systems deeply integrated with cyber technologies. In the case of the destroyers, a credible case – based on analysis of land-based systems – could be made for a witting or unwitting insider introduction of malicious software into critical military navigation and steering systems. The conclusion will offer possible motivations for the timing and targets, and some recommendations for the future.

Similarities in the Scenarios      

There are similarities in the recent collisions. Both happened in darkness or semi-darkness. Both happened in shipping lanes through which literally hundreds of major ships pass per day, to say nothing of smaller ships and fishing vessels. Crew manning of both vessels approaches 300 sailors, with approximately one-eighth of the crew on watch – steering, navigating, standing lookout, and operating propulsion machinery – when the ship is at its lowest state of alertness, known as peacetime steaming. It is logical that both ships were peacetime steaming at the time, since they were not conducting military exercises. In contrast, when USS JOHN S. McCAIN conducted a freedom of navigation operation (FONOP) on August 9 in the vicinity of the artificial islands China has created to buttress its territorial claims to the South China Sea, her crew was likely at high alert.

In looking for possible explanations, we have downloaded and examined readily available open-source data concerning the two recent collisions, including identified locations of the incidents, vessel characteristics, crew manning, weather, proximity to land, automatic identification system (AIS) ship tracks, and shipping density data. We have consulted with naval experts on ship handling and on the Sea of Japan and Strait of Malacca.

Collision avoidance on Navy vessels can be roughly cast into four elements, three technical and one human. On the bridge, the watchstanders have (1) the AIS system, which relies on tracking ships that broadcast their identities, (2) the military radar systems linked into the ship’s combat systems, (3) the civilian radar and contact management systems, and (4) the eyes of sailors standing lookout, normally posted port, starboard, and aft on the vessel. All these systems are complementary and overlapping, but they do not deliver exactly the same information.

The AIS system – in which merchant vessels transmit their identities and location data – is an open and voluntary system relying on GPS. In principle, keeping AIS on is required for the 50,000-plus commercial vessels over 500 GRT (gross registered tons). As of 2016, 87 percent of merchant shipping uses satellite navigation, and 90 percent of the world’s trade is carried by sea. Nonetheless, ship captains can turn it off and travel without identifying themselves (at least until detected by other means). U.S. Navy vessels do not routinely transmit AIS, but each bridge monitors the AIS broadcasts of ships around it in addition to the military and civilian radar systems and the eyes of the sailors.
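To illustrate just how open the AIS broadcast is, here is a minimal sketch – not the software any vessel actually runs – of decoding the identity and position fields of an AIS Type 1 position report in Python. It assumes the 6-bit ASCII payload has already been separated from its !AIVDM NMEA wrapper; the bit layout follows the published ITU-R M.1371 standard, which is why anyone with a cheap VHF receiver can recover the same data.

```python
# Minimal sketch: decode the position fields of an AIS Type 1 position report.
# Assumes the 6-bit ASCII payload of an !AIVDM sentence has already been
# extracted (the sixth comma-separated field); layout per ITU-R M.1371.

def six_bit_decode(payload: str) -> str:
    """De-armor the AIS payload into a string of '0'/'1' bits."""
    bits = []
    for ch in payload:
        v = ord(ch) - 48
        if v > 40:
            v -= 8
        bits.append(format(v, "06b"))
    return "".join(bits)

def signed(bits: str) -> int:
    """Interpret a bit field as a two's-complement signed integer."""
    value = int(bits, 2)
    if bits[0] == "1":
        value -= 1 << len(bits)
    return value

def decode_position_report(payload: str) -> dict:
    b = six_bit_decode(payload)
    return {
        "type": int(b[0:6], 2),                 # 1, 2, or 3 for position reports
        "mmsi": int(b[8:38], 2),                # broadcast ship identity
        "sog_knots": int(b[50:60], 2) / 10,     # speed over ground
        "lon_deg": signed(b[61:89]) / 600000,   # 1/10000 minute -> degrees
        "lat_deg": signed(b[89:116]) / 600000,
        "cog_deg": int(b[116:128], 2) / 10,     # course over ground
    }

# Usage: pass the payload field of any received !AIVDM sentence, e.g.
# decode_position_report(received_payload)
```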

In quiet or tense times, the bridge watch and the Combat Information Center (CIC) teams of naval warships must synthesize this information and make sound decisions to avoid putting the ship into extremis. This is a continuous, round-the-clock requirement and a tough task for even the most skilled.

In this photo released by Japan’s 3rd Regional Coast Guard Headquarters, the damage of Philippine-registered container ship ACX Crystal is seen in the waters off Izu Peninsula, southwest of Tokyo, on June 17, 2017 after it had collided with the USS Fitzgerald. (Japan’s 3rd Regional Coast Guard Headquarters/AP)

In contrast, merchant ships such as the Alnic MC, the chemical tanker that hit JOHN S. McCAIN, have tiny crews with great reliance on autopilot. Depending on the circumstances, possibly only three people would be on watch as the ship’s commercial navigation system autonomously follows the route the captain set initially. One indication that the ACX Crystal, the cargo vessel that collided with the USS FITZGERALD, was on autopilot was its behavior after the collision. Having been temporarily bumped off its course by the collision, it corrected and resumed steaming on the original course for about 15 minutes before stopping and turning to return to the collision location. While nothing has yet been published about what was happening on either bridge in the June FITZGERALD collision, one can surmise that it took 15 minutes for the small crew to realize what had happened, wrest back control of the behemoth, and turn it around.

Possible “Normal” Explanations

Flawed human decision-making

U.S. Navy warships maintain teams of watchstanders in order to mitigate the effects of a flawed decision being made by any one individual. Ultimately, one individual makes the final decision on what actions to take in an emergency—the Officer of the Deck (OOD) if the Commanding Officer is not available—but recommendations from the others are assumed to help in identifying flaws in precipitous decisions before they are actually made.

In contrast, in merchant ships with only two or three deck watchstanders, there is less of a possibility that flawed decision-making is identified before incorrect actions are taken. These actions can also be influenced by unrelated disorienting activities. Alcohol is not permitted on U.S. warships, abuse of drugs at any time is not countenanced, and U.S. naval personnel are subjected to random urinalysis as a means of enforcement. On a merchant ship these policies vary from owner to owner, and inebriation or decision-making under-the-influence has contributed to many past collisions.   

Common tragedy from fatigue in an inherently dangerous environment

Collisions at sea happen. U.S. warships have collided with other warships, including aircraft carriers, and with civilian vessels. USS FRANK EVANS was cut in half and sunk in 1969 when it turned the wrong way and crossed the bow of an Australian aircraft carrier. In 2012 the USS PORTER, a destroyer of the same class as the FITZGERALD and McCAIN, was transiting the Strait of Hormuz. The PORTER maneuvered to port (left) to attempt to get around contacts ahead of it, passed the bow of one freighter astern, and was then hit by a supertanker it had not seen because it was screened behind the first freighter. Many previous collisions involved a loss of situational awareness by an at-least-partly fatigued crew. Such conditions are hard to avoid in an inherently dangerous, around-the-clock operating environment.

Mechanical Failure

There has been no report of a problem with the FITZGERALD prior to her collision. The Navy, however, has acknowledged the McCAIN suffered a steering casualty prior to the collision. While backup steering exists – manual controls in aft steering, or using differential propulsion to twist the ship in the absence of rudder control – such methods are not as efficient as the normal controls. Additionally, there would be a brief delay in unexpectedly switching control or transmitting orders to aft steering. In normal conditions this would not be serious. In a busy shipping lane, with even the least hesitation due to shock at the unexpected requirement, that brief delay could be catastrophic.

Quality of training for ship handling by young Surface Warfare Officers (SWOs)

One can look at U.S. Naval Institute Proceedings (the premier independent naval journal) and other literature to see signs these incidents may be symptoms of a larger issue involving the training of watchstanders. In March 2017, LT Brendan Cordial wrote a Proceedings article entitled “Too Many SWOs per Ship” that questioned both the quality and quantity of the ship handling experience that surface warfare officers (SWOs) receive during their first tours. Later in a SWO’s career track, the focus of new department heads (DH) is tactical and technical knowledge of the ship’s weapons systems and combat capabilities, not necessarily basic ship handling. Ship handling skills are assumed. But such skills can atrophy while these officers serve ashore or elsewhere, and individual ships have unique handling characteristics that must be learned anew.

In January 2017, CAPT John Cordle (ret.) wrote an article for Proceedings titled “We Can Prevent Surface Mishaps” that called into question the modern SWO culture. Peacetime accident investigations rarely produce dramatic new lessons. They simply highlight past ones: errors in judgment, lapses in coordination, task saturation, fatigue, a small error cascading into a tragedy. Those who have stood the watch on the bridge or in the CIC read them and frequently think, “There, but for the grace of God, go I.” However, unlike in the aviation community, near misses are not publicly dissected and disseminated to other commands. Officers have always known how easy it is to be relieved for minor mishaps, but they lack the community discussion of all the mishaps that nearly happened, from which they could learn vicariously.

Pace of forward operations – especially for the MCCAIN after the FITZGERALD event

Both destroyers are homeported in Yokosuka, Japan, the headquarters of the U.S. Seventh Fleet. While only the line of duty investigation has been released for the FITZGERALD collision, one can assume that the officers and crew of the McCAIN would have heard some of the inside details from their squadron mate. Logically, the CO of McCAIN would be doubly focused on the safe operation of his ship as he approached the highly congested traffic separation scheme (TSS) in the Strait of Malacca and the approach to Singapore harbor. But the loss of one of only seven similar and critical ships in a highly contested environment would almost certainly have increased the tempo and demands on the McCAIN as it attempted to move into Singapore harbor just before sunrise.

In this case, tempo should have been accommodated adequately. While technology is a key component of U.S. warships, it is only one of many tools. Lookouts scan the horizon and report contacts to the bridge and CIC watch teams. The OOD uses professional skill and a seaman’s eye to judge the situation. If in doubt, the OOD can, and should, call the Captain. Indeed, close contacts are required to be reported to the Captain. The bridge and CIC have redundant feeds to display contacts detected by radar, sonar, or AIS. The computer can perform target motion analysis, but crews are still trained to manually calculate closest points of approach and recommend courses to avoid contacts via maneuvering boards (MOBOARDs), as sketched below. This is done both on the bridge and in the CIC so that even if one watch misses something critical, the other can catch it. When ships enter densely trafficked areas, additional specially qualified watchstanders are called up to augment the standard watch teams. Yet it is possible that – under the theory of “normal” accidents – somewhere in this multiply redundant sensor system, misread or misheard information led to the human equivalent of the “telephone game,” and the wrong choice was dictated to the helm.
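For readers unfamiliar with the arithmetic a maneuvering board solves graphically, the following is a minimal sketch of a closest-point-of-approach (CPA) and time-to-CPA calculation for a constant-course, constant-speed contact. It is illustrative only – the positions, courses, and speeds are invented, and this is not the Navy’s actual contact-management software.

```python
# CPA/TCPA sketch: positions in nautical miles (east, north), courses in
# degrees true, speeds in knots; constant-velocity assumption throughout.
import math

def velocity(course_deg: float, speed_kts: float):
    rad = math.radians(course_deg)
    return (speed_kts * math.sin(rad), speed_kts * math.cos(rad))  # (east, north)

def cpa(own_pos, own_course, own_speed, tgt_pos, tgt_course, tgt_speed):
    """Return (cpa_nm, tcpa_hours) for a constant-course, constant-speed contact."""
    rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]   # relative position
    ovx, ovy = velocity(own_course, own_speed)
    tvx, tvy = velocity(tgt_course, tgt_speed)
    vx, vy = tvx - ovx, tvy - ovy                               # relative velocity
    v2 = vx * vx + vy * vy
    if v2 == 0:                                                 # no relative motion
        return math.hypot(rx, ry), 0.0
    tcpa = -(rx * vx + ry * vy) / v2                            # hours until CPA
    cpa_dist = math.hypot(rx + vx * tcpa, ry + vy * tcpa)
    return cpa_dist, tcpa

# Example: own ship heading north at 20 kts; a contact 8 nm off the starboard
# bow heading west at 12 kts closes to a CPA of well under half a mile in
# roughly 20 minutes - the kind of geometry a watch team must flag early.
print(cpa((0, 0), 0, 20, (4, 7), 270, 12))
```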

But alongside the “normal” explanations, cyber or other intentional distortion of critical data does remain a possibility.

Cyber Misleads and Mis-function

If one argues that neither the Navy nor the commercial crews were inebriated or otherwise neglectful, accepts that the weather and visibility were good for the time of day with crews in less stressful routine sailing postures, finds serendipitous mechanical failure of severe navigational significance on both ships difficult to accept as merely normal accidents, and questions whether tempo distraction alone could explain both events, then – as Sherlock would say – the impossible could be possible. It is worth laying out, using unclassified knowledge, how cyber intrusions could have been used to cause warships to collide. This is not to say the collisions could not have multiple sources. For the purposes of this thought experiment, however, this section will focus on cyber explanations.

Cyber affects outcomes because it is now a near-universal substrate to all key societal and shipboard functions. Either cyber errors mislead the humans, or the digitized operations themselves malfunction in process, action, or effect – or both – while buried inside complex systems. To make this point, one of the two major classes of cyber assault – the distributed denial of service (DDoS) – works by using what the computer wants to do anyway (answer queries) and simply overloading it into paralysis. It has been shown in a number of experiments that large mechanical systems integrated with electronics can be remotely made to overload, overheat, or vibrate erratically into breakdown by hackers or embedded malware. According to several reports, the McCAIN may have suffered failures in both its main steering system (highly digitized) and its backup systems (more mechanical). Less information has been released on the earlier collision between the FITZGERALD and the ACX Crystal cargo ship, so steering issues there cannot be known at this time.
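The saturation principle behind a DDoS can be shown with a few lines of queueing arithmetic – a toy sketch with invented request rates, not a description of any specific attack or target system.

```python
# Toy illustration of DDoS saturation: the server does exactly what it is
# designed to do (answer queries), but requests arrive faster than they can be
# served, so the backlog - and therefore response delay - grows without bound.

def backlog_over_time(arrival_per_sec: float, service_per_sec: float, seconds: int):
    queue = 0.0
    for t in range(1, seconds + 1):
        queue += arrival_per_sec              # legitimate plus flood traffic
        queue -= min(queue, service_per_sec)  # server drains what it can
        yield t, queue

# Normal load: 800 req/s against 1,000 req/s capacity keeps the backlog at zero.
# Flooded: 5,000 req/s against the same capacity grows the backlog ~4,000/s.
for t, q in backlog_over_time(5000, 1000, 5):
    print(f"t={t}s backlog={q:.0f} requests")
```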

However, that the two collisions involved large commercial ships with similar crews and technologies, and that two U.S. Navy vessels were sister ships close in age and technologies suggests commonalities that could be more easily exploited by adversaries using cyber means rather than humans. In particular, commonly shared logistics or non-weapon systems such as navigation are more likely to have vulnerabilities in their life cycles or embedded, routinized processes that are less sought by – or discernible to – the standard security reviews.

In a complex socio-technical-economic system like that involved in both circumstances, the one-off rogue event is likely the normal accident – i.e., the FITZGERALD incident. But too many common elements are present in the McCAIN event for it to be dismissed as simply a second rogue outcome. Hence, it is necessary to explore three possible avenues by which the navigation could have been hacked without it being obvious to the U.S. Navy commander or crew in advance.

First, external signals (GPS, AIS) can be spoofed to feed both navigation systems with erroneous information for any number of reasons including adversary experimentation. Second, the civilian contact management systems on the civilian or military bridge (or both) could be hacked in ways either serendipitously or remotely engineered to feed erroneous data. Third, insider-enabled hacks of one or both of the destroyer’s combat systems could have occurred in the shared home port of Yokosuka to enable distortion of sensors or responses under a range of possible circumstances.

Spoofing GPS inputs to navigation

It does not take much technical expertise to spoof or distort GPS signals because the GPS system itself is sensitive to disruptions. The 2016 removal of one old satellite from service caused a 13.7 microsecond timing error across half of the 30-odd GPS satellites, producing failures and faults around the world in various industries. Anything that can be coded can be corrupted, even inadvertently. Anything so globally critical that lacks enforced, routine, and rigorous external validity tests, defenses, and corrective actions is even more likely to attract hacks from both state and nonstate actors.
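To see why a microsecond-scale timing error matters, recall that GPS receivers convert signal travel time to distance at the speed of light. The following back-of-the-envelope sketch is illustrative only; the actual position error experienced by a receiver depends on satellite geometry and receiver filtering.

```python
# A 13.7-microsecond timing error corresponds to kilometers of pseudorange
# error, because range = timing error x speed of light.

SPEED_OF_LIGHT_M_PER_S = 299_792_458

def timing_error_to_range_error_m(timing_error_s: float) -> float:
    return timing_error_s * SPEED_OF_LIGHT_M_PER_S

print(timing_error_to_range_error_m(13.7e-6))  # ~4,107 meters of pseudorange error
```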

Major national adversaries today have indicated interest in having the capability to arrange GPS distortions. With their already large domestic units of state-sponsored hackers, the Chinese, Russians, and North Koreans have already sought such capabilities as protections against the accuracy of largely U.S. missile guidance systems. Hacking GPS has been reported for some years, and while some efforts to harden the system have been pursued, spoofing mechanisms located on land in tight transit areas or even on other complicit or compromised vessels could mislead the autopilot. The website Maritime Executive reported mass GPS spoofing in June 2017 in the Black Sea, impacting a score of civilian vessels and putatively emanating from Russian sources most likely on land nearby.

However, this kind of meddling with key navigation systems does not require a state decision to go to war, especially if land or many other vessels are nearby. In a cybered conflict world, state-sponsored or freelance hackers would be interested in trying it just to see what happens, simply because they can. It is not quite the perfect murder, because of the external sources of data, but the spoofed or spoiled data would feed misleading locations in real time to autopilot software. Vessels and their bridges would operate normally in their steering functions on bad data. They go aground or collide. So might airplanes. The distorted signals could then stop, allowing normal GPS signals to resume and reveal that something went wrong in navigation choices – but not in time to stop the collision, and without the attribution trace necessary to know by whose hand.
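How a position spoof translates into a real-world track error can be shown with a deliberately simplified kinematic sketch: the autopilot faithfully steers the position it is told toward its waypoint, so a constant offset in the reported fix shifts the true track by the same amount. All numbers here are invented, and this is not any real autopilot’s logic.

```python
# Toy model: the autopilot homes the *reported* position on the waypoint; the
# true position is the reported one minus the spoof offset, so the displayed
# track looks correct while the real track is displaced.
import math

def simulate(true_start, spoof_offset, waypoint, speed_kts, steps=20, step_h=0.05):
    tx, ty = true_start
    ox, oy = spoof_offset                    # constant error injected into the fix
    for _ in range(steps):
        rx, ry = tx + ox, ty + oy            # position the autopilot believes
        dx, dy = waypoint[0] - rx, waypoint[1] - ry
        dist = math.hypot(dx, dy) or 1e-9
        tx += speed_kts * step_h * dx / dist  # true motion follows the commanded course
        ty += speed_kts * step_h * dy / dist
    return (tx, ty)

# With no spoof, the ship closes directly on the waypoint; with a 2 nm offset,
# the display shows the intended track while the true track sits 2 nm away.
print(simulate((0, 0), (0, 0), (0, 20), 15))
print(simulate((0, 0), (2, 0), (0, 20), 15))
```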

In these two cases, the DDG FITZGERALD looks like it failed to give way to the ACX Crystal, which appears from the tracking data to have been on autopilot. If the ACX Crystal’s navigation was operating on false data, and the equivalent civilian system on the U.S. ship was as well, then the watch team of the FITZGERALD would have had at least two other sources conflicting with the spoofed information – the military systems and the eyes of the sailors on watch. Assume for the moment that there was no deliberate hack of the military systems, that its radars were functioning correctly, and that the alert sailors had 20-20 vision; then the watch team of the FITZGERALD clearly miscalculated by believing the civilian system. Or the overlap in relying on GPS is so profound that the military system was also fooled and the human eyes were overruled. In that case, the FITZGERALD watch team trusted the civilian system over other inputs.

AIS data map of course of container ship MV ACX Crystal around the time of collision with USS Fitzgerald near Japan on June 16, 2017. (Wikimedia Commons/marinetraffic.com)

In the McCAIN case, if one assumes all the same conditions, the Navy ship had the right of way and the tanker plowed into it. Presumably the tanker’s autopilot – if it was on, as one could reasonably assume – was coded to stop, divert, warn, and otherwise sound the alarm if it saw another ship in its path. Presumably its code also embeds the right-of-way rules in the autopilot’s decision-making. A convincing GPS spoof could, of course, persuade the autopilot that the ship was not where it actually was, thereby showing more time and space between it and the Navy ship.

Hacking civilian navigation radars shared by all vessels

According to experts, commercial navigation systems are remarkably easy to hack quite apart from GPS spoofing. The cybersecurity of these bridge systems against deliberate manipulation has long been neglected. In the same unenforced vein as the voluntary identification requirement of AIS, the global maritime shipping industry has relied on requirements from maritime insurance companies and specific port regulations to control individual shipping firms’ choices in vessel technologies (and level of compliance). Myriad reports in recent years discuss the increasing sophistication of sea pirates in hacking commercial shipping systems to locate ships, cherry-pick which cargo to acquire, show up, take it, and vanish before anything can be done. That is more efficient than the old brute-force taking of random ships for ransom.

In addition, shipping systems tend to be older and receive less maintenance – including time-critical patches, which are more likely to be scheduled alongside infrequent overall ship maintenance in port. In the recent “WannaCry” ransomware global event, the major shipping company Maersk – profoundly and expensively hit – reported that its key systems ran unpatched Windows XP, no longer supported by Microsoft. Hacking groups are targeting ports and their systems as well.

If systems are compromised, hacks could have opened back doors to external controllers – or at least to external inputs – whenever the commercial ship crossed into locations close enough to land or to adversary-compromised surface or submerged vessels. The misleading inputs could then be more closely controlled to be present when U.S. vessels had been observed traveling nearby or were in a particular position. Navy vessels may not transmit AIS, but they are detectable on radar as ships. A radar contact without an AIS identity could be a trigger for the malware to at least become interested in the unidentified vessel, perhaps sending pre-arranged signals to remote controllers to track it and then wait for instructions or updates. The autopilot would then act on the inputs, unaware of the distortion.
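The trigger condition hypothesized here – a radar return with no matching AIS broadcast – amounts to a simple correlation check. The following is a purely hypothetical sketch of that check; every name, data structure, and threshold is invented for illustration, and it describes no known malware or shipboard system.

```python
# Hypothetical correlation check: flag radar contacts that have no AIS track
# nearby, the signature of a vessel (such as a warship) not broadcasting AIS.
import math

def approx_distance_nm(p1, p2):
    """Flat-earth distance in nautical miles between (lat, lon) pairs; fine at short range."""
    dlat = (p1[0] - p2[0]) * 60.0
    dlon = (p1[1] - p2[1]) * 60.0 * math.cos(math.radians((p1[0] + p2[0]) / 2))
    return math.hypot(dlat, dlon)

def unidentified_contacts(radar_positions, ais_positions, assoc_threshold_nm=0.5):
    """Radar returns with no AIS track within the association threshold."""
    flagged = []
    for r in radar_positions:
        if all(approx_distance_nm(r, a) >= assoc_threshold_nm for a in ais_positions):
            flagged.append(r)
    return flagged

# One radar contact matches an AIS track; the other matches nothing and is flagged.
radar = [(1.240, 103.84), (1.245, 103.86)]
ais = [(1.2401, 103.8401)]
print(unidentified_contacts(radar, ais))  # -> [(1.245, 103.86)]
```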

An interesting aspect of corrupting code is that exchanging data across commercial systems alone can provide a path for corrupted code to attempt to install itself on both ends of the data exchange. Stuxnet traveled through printer connections to systems otherwise not on any internet-enabled networks. If the civilian navigation systems are proprietary – and that is likely the case on commercial ships – then it is likely that the U.S. vessels’ bridges also have ‘hardened’ COTS civilian systems whose internal software and hardware are proprietary. That means a hack successful on the commercial side could open an opportunity to hack a similar or targeted civilian system that happens to be found on a U.S. Navy vessel. Furthermore, it is possible the two systems share vulnerabilities and/or have exchanges that are not visible to external observers.

Navy IT security on vessels might also regard the civilian proprietary systems as less of a threat because they are not connected to internal military systems. They are presumably standalone and considered merely an additional navigation input alongside more trusted and hardened military systems. The commercial systems are (ironically) also less likely to be closely scrutinized internally, because that would mean the U.S. Navy was violating contractual rules regarding proprietary commercial equipment. Outside of war or crisis – when such restrictions would likely be ignored – there is little incentive to violate those proprietary rules.

One can conceive of a Navy bridge hosting a commercial navigation system that at some point along its journey is compromised, with nothing to indicate that compromise or the triggering of the software now interwoven with the legitimate firmware inside the equipment. By happenstance, the Navy vessel comes into the vicinity of an appropriately compromised large commercial vessel. At that point, the adversary hackers might receive a message from the commercial vessel indicating the contact and have the option to distort the navigation inputs to help the commercial vessel’s autopilot plow into the warship.

Of course the adversary is helped if the Navy equipment is also hacked and, perhaps, the vessel loses its digitized steering right before the impact.

Hacking U.S. Navy military navigation systems

Remotely accessing and then changing the triggers and sensors of military systems – if possible – would be very hard given the Navy’s efforts in recent years. That possibility is tough to evaluate because the open source knowledge regarding such systems is likely to be third party information on proprietary subordinate systems at least five or six years old – or much more. Both major U.S. adversaries in Asia – North Korea and China – already show propensities for long-term cyber campaigns to remotely gain access and infiltrate or exfiltrate data over time from all military systems, including shipborne navigation. We deem this less likely simply because this is where the cybersecurity focus of the Navy and DOD already is.

However, the history of poorly coded embedded systems, lightweight or incompetent maintenance, and deep cybersecurity insensitivity among third-party IT capital goods corporations is appalling across myriad industry supply chains, even without the national security implications well known today. While commercial vessels could be hacked remotely, a more likely avenue of entry into Navy systems would be through these corrupted third-party supply chains, shoddily constructed software, or compromised contractors creating or maintaining the ship’s navigation and related systems. Using insiders would be far easier than hacking in remotely while the vessels were in a trusted harbor nestled inside a long-term ally such as Japan. Insiders accessing the systems during routine activities would be less likely to be detected quickly, especially if the effects would not be triggered or felt until particular circumstances occurred far from port and underway.

An especially oblivious contractor using specialized and proprietary software to patch, check, or upgrade equipment could inadvertently use compromised testing or patching tools that compromise the vessel’s equipment. For example, a Russian engineer carrying in a compromised USB stick was reportedly the originating source of the Stuxnet malware in Iran – whether he was witting or unwitting is unknown. The actions would have been the same. Furthermore, Navy systems are of course built by contractors with clearances, but the systems would have deeply buried and often proprietary inner operating code. Corrupted lines of code could rest inactive for some time, or be installed at the last minute, lying dormant during most of the deployment until triggered. None would visibly display any corruption until the programmed conditions or triggers were present.

In hacked systems, triggers are really hard to discern in advance. In part, the skill of the adversary deftly obscures them, but also the objectives of adversaries can vary from the classic “act on command of national superiors,” to “see how far we can get and how,” to pure whimsy. With no real personal costs likely for any of these motives, the game is defined by the skill, patience, and will of the adversary, especially when proprietary commercial code is involved. While it is safer in terms of attribution for hackers to have more automatic triggers such as those used in the Stuxnet software, the action triggers do not have to be automatic. In navigation systems, data is exchanged constantly. Conceivably there can be a call out and return buried in massive flows of data.

Without extensive AI and rather advanced systems management, how massive data flows are monitored can vary widely. While it is more and more common to secure a system’s outgoing as well as incoming communication, a multitude of systems that are not particularly dated have been shown to allow rather subtle communications to go on for some time without any event or external revelation. One can imagine code calling home or acting autonomously when triggered by something as mundane as a sensor noting the presence of a large commercial cargo ship within X nautical miles, moving in Y direction, and responding to encrypted queries from its own navigation system. Highly skilled botnet masters are able to detect anomalies across thousands of infected computers and, in a pinch, de-install huge botnets in minutes. It is not difficult to imagine something buried in these otherwise secured systems, especially if the adversary is willing to wait and see when it would be useful. For North Korea, the latest ratcheting of tensions between the Hermit dictatorship and the U.S. could easily provide a reason.

Hacking seems more of a possibility when considering how both destroyers failed to navigate under circumstances that were, by most accounts, not that challenging. It is possible that the first such event – the FITZGERALD collision – was a rogue event, the kind of complex-system surprise that emerges routinely, if rarely. What is less likely is that a similar ship in broadly similar circumstances shortly thereafter proceeds to have a similar event. Exquisitely suspicious are the reports of the failure of the steering system, and possibly its backups, on the McCAIN, though not on the FITZGERALD. That effect is not spoofed GPS or hacked civilian systems, and it would require much greater reach by the malware to achieve. In keeping with the presumption here that a successful insider hack occurred on both ships and the malware was waiting for a trigger, the lack of steering failure (at least no reports of it) on the FITZGERALD could also mean the malware or its external controller was smart enough to know the collision did not need additional failures to ensure damage. The ship was already in the wrong place, having failed to cede right of way. Holding fire like that would be desirable and would express sophistication. A typical technique in cybered conflict is deception in tools; adversaries do not burn their embedded hacks unless necessary. Once shown, the cyber mis-function becomes unusable against an alert and skilled opponent such as the U.S. Navy.

Furthermore, the AEGIS destroyers – of which both Navy vessels are examples – suffer from a rather massive knowledge asymmetry with a major adversary. At some point in the early to mid-2000s, the Chinese stole the entire design of the AEGIS systems, on which the Navy had spent billions across contractors and subcontractors. While built to roughly the same specifications as a class, each vessel reflects the upgrades and systemic changes of its particular era, with the older 1990s ships like the FITZGERALD and McCAIN having more patches and bolt-ons than the newer versions of the ship. Fundamental ship elements are hardwired into the vessel and hard to upgrade, while more modular and likely proprietary modern systems are plugged in and pulled out as time goes on. The adversary who stole those comprehensive plans would know more about the older AEGIS ships than about the ships completed after the plans were stolen, with newer systems used in the installs. Anyone who has ever faced the daunting prospect of rewiring a large house knows by ugly personal experience that the new wiring is forced to work around the existing layout and limitations. Ships are even more rigid, and quite often, the more critical the system, the less flexibly it can be changed.

Thus, vulnerabilities built into the highly complex earlier AEGIS systems would be known to the thieves after some years of study, and perhaps covert testing on other nations’ AEGIS systems, and would be very hard for the Navy itself to definitively fix, especially if the service is not looking for the vulnerabilities. Unnerving, but not inconceivable, is the failure of the digitized steering system on the McCAIN – if it happened. Exceptionally telling, however, is the presumably near-simultaneous loss of backup systems. If the steering and contact management systems were compromised, steering could be made to fail at just the right time to force a collision. A good insider would be needed to ensure both, but only an adversary with considerable engineering design knowledge could reliably hazard a successful guess about how to disable the more likely mechanical backup systems. The adversary to whom the original AEGIS theft is attributed – China – is known to be very patient before using the material it has acquired.

Both Civilian and Military Systems

Why not put hacks on both kinds of systems? Commercial vessels are easier targets, and implants could be left in place for some time pending use, while in the meantime Trojans are slowly embedded via maintenance in port or third-party access to remove and replace proprietary boxes or software upgrades. Preparation of the cyber battlefield occurs – as does the ‘battle’ – in peacetime, well before anything or anyone is blown up. China and North Korea have thousands of personnel on their offensive and value-extraction cyber payrolls. Careers could easily be made by such coups: installing such software as potential tools and having it still in place, ready to be used, months or years later.

Furthermore, Westerners are routinely afflicted with the rationality disease of believing that all actions – especially if adversaries are suspected – must be intentionally strategic and logically justifiable. Otherwise, why would the adversary bother? There is also a tendency to underestimate the comprehensive approach of most adversaries working against the U.S. Silence does not mean compliance or concession on the part of adversaries, especially not China or North Korea. Installing access points or triggers on all possible systems within one’s grasp is a basic long-term campaign strategy. Even now, when a major hack of a large corporation or agency is found, it has often been in place for years.

Motives for the Collisions

Timing may be serendipitous, but at least one adversary – North Korea – has already sunk a naval vessel of a U.S. ally, South Korea, with no public punishment. Certainly, North Korea has been loudly threatening the U.S. in the region and has cyber assets capable of what has been described above. However, one difficulty in determining culpability is that, while China is an ally of North Korea, neither will readily share information as valuable as the AEGIS design plans, or even what the other may have hacked. One can readily ascribe eagerness to hurt the U.S. physically to North Korea, but attributing the same motivation to China at this point is problematic.

There are other possibilities, however. Both nations – like most nations – are led by individuals with little technical comprehension. In particular and most unfortunately, in a world of ubiquitous cybered conflict where ‘just because one can’ or ‘just to see what could happen’ operates equally well as a motivation, adversary states with a large army of hackers and technically ignorant superiors could easily have their own cyber wizards working in ways their superiors can neither discern nor realistically curtail. In this vein, in the McCAIN case (and possibly the FITZGERALD), these overeager, technically skilled subordinates could have gotten quite lucky.

Why a DDG that happens to be sailing around Japan? Why one near Singapore? Why now? Well, “why not” is as good a reason as any, especially if the U.S. Navy publicly fires the ships’ leadership and declares the incidents over. In that case there are no consequences for adversaries. Perhaps the FITZGERALD was the rogue event, but – following that – the North Korean leaders then asked their wizards to take out another as signaling or retribution for recent U.S. “insults.” That motivation has some persuasive aspects: no publicly apparent risks; a nifty experiment to see what can be done, if needed, on a larger scale; and the public turmoil alone leaves North Korea with a smug secret while the U.S. twists trying to figure it out. Cyber offensive capabilities in the hands of technically incompetent leaders have serious implications for misuse and, critically, for inadvertent outcomes that are strategically more comprehensive and potentially destabilizing than ever intended.

Implications for the Navy

If it is leadership that failed in both cases, the Navy has a long history of responding and clearing out the incompetence. If it is cyber that undercut that leadership and killed sailors, the Navy has an uphill battle to definitively establish all the avenues by which it could have and did occur, including fully recognizing the multiple sources of such deliberately induced failure. The literature on complex large-scale system surprise and resilience offers means of preventing multisource failures in socio-technical systems. However, these means may not be compatible with current Naval thought and organization. The literature recommends parsing larger systems into self-sufficient and varying wholes that are embedded with redundancy in knowledge (not replication or standardization), slack in time (ability to buffer from inputs routinely), and constant trial and error learning. Trial and error learning is particularly hard because it routinely involves violations of current practices.

The current organization of the U.S. military seems incompatible with the concept of easily decomposable units engaging and disengaging as needed in collective sense-making. Neither can it accept constant systems adjustments, pre-coordinated but dynamically flexed rapid mitigation and innovation, and whole systems discovery trial and error learning. The truth is that in the cybered world, nothing can be trusted if it is not reliably verified by multiple, independent, and alternative sources of expertise. USS FITZGERALD did not discern its error and correct fast enough to avoid being in the wrong place at the wrong time. The McCAIN may have trusted its right of way entitlement too long, or made a traffic avoidance maneuver and suffered a steering casualty at the worst possible moment. Or perhaps both ships encountered something unexpected: a commercial ship operating on corrupted code. In the future, we should expect that any merchant ship controlled by digital information technology can be hacked.

This is a new idea for the Navy: that merchant ships can be used as proxies for adversary intentions. With over 50,000 such large vessels sailing around and next to U.S. ships all over the world, the adversary’s tools of coercion would be both effective and effectively obscured from visual or other indicators of malice. The world of cybered conflict is deeply riven with deception in tools and opaqueness in origins, and now it is clearly on the seas as well. Even if the Navy rules that both incidents were simply bad shiphandling, adversaries have already seen the great impact that can be had by making the Navy’s relatively few ships collide with big, dumb commercial vessels. Even if cyber did not play the deciding role in these events, there is every reason to assume it will in the future. Just because they can try, they will.

Dr. Chris C. Demchak is the Rear Admiral Grace Murray Hopper Professor of Cybersecurity and Director of the Center for Cyber Conflict Studies, Strategic and Operational Research Department, Center for Naval Warfare Studies, U.S. Naval War College.

Commander Keith “Powder” Patton, USN, is a naval aviator and the former Deputy Director of the Strategic and Operational Research Department, Center for Naval Warfare Studies, U.S. Naval War College.

Dr. Sam J. Tangredi is professor of national, naval and maritime strategy and director of the Institute for Future Warfare Studies, Strategic and Operational Research Department, Center for Naval Warfare Studies, U.S. Naval War College.

Featured Image: Damage is seen on the guided missile destroyer USS Fitzgerald off Japan’s coast, after it collided with a Philippine-flagged container ship, on June 17, 2017 (AFP)

Beijing’s Views on Norms in Cyberspace and Cyber Warfare Strategy Pt. 2

By LCDR Jake Bebber USN

The following is a two-part series looking at PRC use of cyberspace operations in pursuit of its national strategies and the establishment of the Strategic Support Force. Part 1 considered the centrality of information operations and information war to the PRC’s approach toward its current struggle against the U.S. Part 2 looks at the PRC’s use of international norms and institutions in cyberspace, and possible U.S. responses.

Cyber-Enabled Public Opinion and Political Warfare

Many American planners are carefully considering scenarios such as China making a play to force the integration of Taiwan, seize the Senkaku Islands from Japan, or seize and project power from any and all claimed reefs and islands in the South China Sea. Under these scenarios we can expect preemptive strikes in the space and network domains in an attempt to “blind” or confuse American and allied understanding and establish a fait accompli. This will, in Chinese thinking, force the National Command Authority to consider a long and difficult campaign in order to eject Chinese forces, and the CCP is placing a bet that American decision makers will choose to reach a political accommodation that recognizes the new “facts on the ground” rather than risk a wider military and economic confrontation.

Public opinion warfare may be an integral component of future crisis and conflict in Asia. Well in advance of any potential confrontation, Chinese writing emphasizes the role of “political warfare” and “public opinion warfare” as an offensive deterrence strategy. China will seek to actively shape American, allied, and world opinion to legitimize any military action the CCP deems necessary. We might see cyber-enabled means to “incessantly disseminate false and confused information to the enemy side … through elaborate planning [in peacetime], and [thereby] interfere with and disrupt the enemy side’s perception, thinking, willpower and judgment, so that it will generate erroneous determination and measures.”1 China may try to leverage the large populations of Chinese nationals and those of Chinese heritage living outside China as a way to influence other countries and generate new narratives that promote the PRC’s position. Consider, for example, how Chinese social media campaigns led to boycotts of bananas from the Philippines when China seized Scarborough Reef, or similar campaigns against Japanese-made cars during the ongoing territorial dispute over the Senkaku Islands. Most recently, Lotte Duty Free, a South Korean company, suffered distributed denial-of-service attacks from servers with Chinese IP addresses – almost certainly a response to South Korea’s recent decision to host the THAAD missile defense system.

It is also critical to recognize China’s understanding and leverage of the American political, information, and economic system. Over decades, China has intertwined its interests and money with American universities, research institutes, corporate institutions, media and entertainment, political lobbying, and special interest organizations. This has had the effect of co-opting a number of institutions and elite opinion makers who view any competition or conflict with China as, at best, detrimental to American interests, and at worst, as a hopeless cause, some going so far as to suggest that it is better for the U.S. to recognize Chinese primacy and hegemony, at least in Asia, if not worldwide. Either way, China will maximize attempts to use cyber-enabled means to shape American and world understanding so as to paint China as the “victim” in any scenario, being “forced” into action by American or Western “interference” or “provocation.”

What can the U.S. do to Enhance Network Resilience?

One of the most important ways that network resiliency can be addressed is by fundamentally changing the intellectual and conceptual approach to critical networks. Richard Harknett, the former scholar-in-residence at U.S. Cyber Command, has suggested a better approach. In a recent issue of the Journal of Information Warfare, he points out that cyberspace is not a deterrence space, but an offense-persistent environment. By that he means that it is an inherently active, iterative, and adaptive domain. Norms are not established by seeking to impose an understood order (such as at Bretton Woods) or through a “doctrine of restraint,” but rather through the regular and constant interactions between states and other actors.  Defense and resiliency are possible in this space, but attrition is not. Conflict here cannot be contained to “areas of hostility” or “military exclusion zones.” No steady state can exist here—every defense is a new opportunity for offense, and every offense generates a new defense.2

Second, the policy and legal approach to network resiliency must shift from a law enforcement paradigm to a national security paradigm. This paradigm is important because it affects the framework under which operations are conducted. The emphasis becomes one of active defense, adaptation, identification of vulnerabilities and systemic redundancy and resilience. A national security approach would also be better suited for mobilizing a whole-of-nation response in which the government, industry, and the population are engaged as active participants in network defense and resiliency. Important to this is the development of partnership mechanisms and professional networking that permit rapid sharing of information at the lowest level possible. Major telecommunications firms, which provide the infrastructure backbone of critical networks, require timely, actionable information in order to respond to malicious threats. Engagement with the private sector must be conducted in the same way they engage with each other – by developing personal trust and providing actionable information.

Network hardening must be coupled with the capabilities needed to rapidly reconstitute critical networks and the resiliency to fight through network attack. This includes the development of alternative command, control, and communication capabilities. In this regard, the military and government can look to industries such as online retail, online streaming, and online financial networks (among others) that operate under constant attack on an hourly basis while proving capable of providing on-demand service to customers without interruption. Some lessons might be learned here.    

Third, new operational concepts must emphasize persistent engagement over static defense. The United States must have the capacity to contest and counter the cyber capabilities of its adversaries, and the intelligence capacity to anticipate vulnerabilities, so that it moves away from a reactive approach to cyber incidents and instead positions itself to find security by retaining the initiative across the spectrum of resiliency, active defense, and offensive cyber operations.

Congressional Action and Implementing a Whole-of-Government Approach

There are five “big hammers” that Congress and the federal government have at their disposal to effect large changes – known as the “Rishikof Big 5” after Harvey Rishikof, Chairman of the Standing Committee on Law and National Security of the American Bar Association. These “hammers” are the tax code and budget, the regulatory code, insurance premiums, litigation, and international treaties. A comprehensive, whole-of-nation response to the challenge China represents to the American-led international system will require a mixture of these “big hammers.” No single change in Department of Defense policy toward cyberspace operations will have nearly the impact of these “hammers.”3

The tax code and budget, coupled with regulation, can be structured to incentivize network resiliency and security by default (cyber security built into software and hardware as a priority standard), not only among key critical infrastructure industries, but among the population as a whole to include the telecommunication Internet border gateways, small-to-medium sized Internet service providers, and information technology suppliers. Since the federal government, Defense Department, and Homeland Security rely largely on private industry and third-party suppliers for communications and information technology, this would have the attendant effect of improving the systems used by those supporting national security and homeland defense. The key question then is: how can Congress incentivize network resiliency and security standards, to include protecting the supply chain, most especially for those in industry who provide goods and services to the government?

If the tax code, budget, and regulation might provide some incentive (“carrots”), so too can they provide “sticks.” Litigation and insurance premiums can also provide similar effects, both to incentivize standards and practices and discourage poor cyber hygiene and lax network security practices. Again, Congress must balance the “carrots” and “sticks” within a national security framework.

Congress might also address law and policy which permits adversary states to leverage the American system to our detriment. Today, American universities and research institutions are training China’s future leaders in information technology, artificial intelligence, autonomous systems, computer science, cryptology, directed energy and quantum mechanics. Most of these students will likely return to China to put their services to work for the Chinese government and military, designing systems to defeat us. American companies hire and train Chinese technology engineers, and have established research institutes in China.4 The American taxpayer is helping fund the growth and development of China’s military and strategic cyber forces as well as growth in China’s information technology industry.

Related specifically to the Department of Defense, Congress should work with the Department to identify ways in which the services man, train, and equip cyber mission forces. It will have to provide new tools that the services can leverage to identify and recruit talented men and women, and ensure that the nation can benefit long-term by setting up appropriate incentives to retain and promote the best and brightest. It will have to address an acquisition system structured around platforms and long-term programs of record. The current military is one where highly advanced systems have to be made to work with legacy systems and cobbled together with commercial, off-the-shelf technology. This is less than optimal and creates hidden vulnerabilities in these systems, risking cascading mission failure and putting lives in jeopardy.

Finally, Congress, the Department of Defense, and the broader intelligence and homeland security communities can work together to establish a center of excellence for the information and cyber domain that can provide the detailed system-of-systems analysis, analytic tools, and capability development necessary to operate and defend in this space. Such centers have been established in other domains, such as land (e.g., National Geospatial Intelligence Agency), sea (e.g., Office of Naval Intelligence) and air and space (e.g., National Air and Space Intelligence Center).

Conclusion

It is important to understand that this competition is not limited to “DOD versus PLA.” The U.S. must evaluate how it is postured as a nation and whether it is prepared to fight for and defend its information space, to include critical infrastructure, networks, strategic resources, economic arrangements, and the industries that mold and shape public understanding, attitude, and opinion. It must decide whether defense of the information space and the homeland is a matter of national security or one of law enforcement, because each path is governed by very different approaches to rules, roles, policies, and responses. Policymakers should consider how best to address the need to provide critical indications, warnings, and threat detection, as well as the system-of-systems network intelligence required for the U.S. to develop the capabilities necessary to operate in and through cyberspace. For all other domains in which the U.S. operates, there is a lead intelligence agency devoted to that space (the Office of Naval Intelligence for the maritime domain, the National Air and Space Intelligence Center for the air and space domains, etc.).

It must always be remembered that for China, this is a zero-sum competition – there will be a distinct winner and loser. It intends to be that winner, and it believes that the longer it can mask the true nature of that competition and keep America wedded to its own view of the competition as a positive-sum game, the longer it will enjoy significant leverage within the American-led system and retain strategic advantage. China is pursuing, successfully so far, a very clever strategy of working through the system the U.S. built in order to supplant it – and much of this is happening openly and in full view. This strategy can be countered in many ways, but first the U.S. must recognize China’s approach and decide to act.

LCDR Jake Bebber is a cryptologic warfare officer assigned to the staff of Carrier Strike Group 12. He previously served on the staff of U.S. Cyber Command from 2013 – 2017. LCDR Bebber holds a Ph.D. in public policy. He welcomes your comments at: jbebber@gmail.com. These views are his alone and do not necessarily represent any U.S. government department or agency.

1. Deal 2014.

2. Richard Harknett and Emily Goldman (2016) “The Search for Cyber Fundamentals.” Journal of Information Warfare. Vol. 15 No. 2.

3. Harvey Rishikof (2017) Personal communication, April 21.

4. See: https://www.bloomberg.com/view/articles/2013-03-28/chinese-hacking-is-made-in-the-u-s-a-

Featured Image: Nokia Security Center server room (Photo: Nokia)

Beijing’s Views on Norms in Cyberspace and Cyber Warfare Strategy Pt. 1

By LCDR Jake Bebber USN

The following is a two-part series looking at PRC use of cyberspace operations in pursuit of its national strategies and the establishment of the Strategic Support Force. Part 1 considers the centrality of information operations and information war to the PRC’s approach toward its current struggle against the U.S. Part 2 looks at the PRC’s use of international norms and institutions in cyberspace, and possible U.S. responses.

Introduction

A recent article noted a marked shift in Chinese strategy a few short years ago which is only now being noticed. Newsweek author Jeff Stein wrote a passing reference to a CCP Politburo debate under the presidency of Hu Jintao in 2012 in which “Beijing’s leading economics and financial officials argued that China should avoid further antagonizing the United States, its top trading partner. But Beijing’s intelligence and military officials won the debate with arguments that China had arrived as a superpower and should pursue a more muscular campaign against the U.S.”1

The nature of this competition is slowly taking shape, and it is a much different struggle than the Cold War against the Soviet Union – though with stakes no less important. This is a geoeconomic and geoinformational struggle. Both U.S. and PRC views on cyber warfare strategy, military cyber doctrine, and relevant norms and capabilities remain in the formative, conceptual, and empirical stages of understanding. Each side is still working out what cyberspace operations really are. While using similar language, each has different orientations and perspectives on cyberspace and information warfare, including different limiting structures, which has led to different behaviors. However, the nature of cyberspace – technological advancement and change, market shifts, evolving consumer preferences, and inevitable compromises – means that while windows of opportunity will emerge, no one side should expect to enjoy permanent advantage. Thus the term ‘struggle’ is used to capture the evolving U.S.-PRC competition.

The PRC recognized in the 1990s the centrality of information warfare and network operations to modern conflict. However, it has always understood the information space as blended and interrelated. Information is a strategic resource to be harvested and accumulated while being denied to the adversary. Information warfare supports all elements of comprehensive national power, to include political warfare, legal warfare, diplomatic warfare, media warfare, economic warfare, and military warfare. It is critical to recognize that the PRC leverages the American system and its values legally (probably more so than illegally) to constrain the U.S. response, cloud American understanding, and co-opt key American institutions, allies, and assets. In many ways, the campaign the PRC is waging today is hidden by its ability to work within and through our open liberal economic and political system, supplemented by cyber-enabled covert action (such as the OPM hack).

To support their comprehensive campaign, the PRC is reforming and reorganizing the military wing of the Communist Party, the People’s Liberation Army (PLA), posturing it to fight and win in the information space. Most notably, it recently established the Strategic Support Force (SSF) as an umbrella entity for electronic, information, and cyber warfare. Critical for U.S. policymakers to understand is how the SSF will be integrated into the larger PLA force, how it will be employed in support of national and military objectives, and how it will be commanded and controlled. While much of this remains unanswered, some general observations can be made.

This reform postures the PLA to conduct “local wars under informationized conditions” in support of its historic mission to “secure dominance” in outer space and the electromagnetic domain. Network (or cyberspace) forces are now alongside electromagnetic, space, and psychological operations forces and better organized to conduct integrated operations jointly with air, land, and sea forces.2

This change presents an enormous challenge to the PLA. The establishment of the SSF disrupts traditional roles, relationships, and processes. It also disrupts power relationships within the PLA and between the PLA and the CCP. It challenges long-held organizational concepts, and it is occurring in the midst of other landmark reforms, to include the establishment of new joint theater commands.3 However, if successful, it would improve information flows in support of joint operations and create a command and control organization that can develop standard operating procedures; tactics, techniques, and procedures; advanced doctrine; and associated training, while driving research and development toward advanced capabilities.

While questions remain as to the exact composition of the Strategic Support Force, there seems to be some consensus that space, cyber, electronic warfare, and perhaps psychological operations forces will be centralized into a single “information warfare service.” Recent PLA writings indicate that network warfare forces will be charged with network attack and defense, space forces will focus on ISR and navigation, and electronic warfare forces will engage in jamming and disruption of adversary C4ISR. It seems likely that the PRC’s strategic information and intelligence support forces may fall under the new SSF. The PLA’s information warfare strategy calls for its information warfare forces to form into ad hoc “information operations groups” at the strategic, operational, and tactical levels, and the establishment of the SSF will save time and enable better coordination and integration into joint forces. The SSF will be better postured to conduct intelligence preparation of the battlespace, war readiness and comprehensive planning for “information dominance.”4

The establishment of the SSF creates a form of information “defense in depth,” both for the PLA and Chinese society as a whole. The SSF enables the PLA to provide the CCP with “overlapping measures of electronic, psychological, and political deterrents.” It is reasonable to expect that there will be extensive coordination and cooperation among the PRC’s military, internal security, network security, “commercial” enterprises such as Huawei and ZTE, political party organizations, state controlled media both inside and outside China, and perhaps even mobilization of Chinese populations.

Chinese Information Warfare Concepts and Applications

Recent Chinese military writings have stressed the centrality of information to modern war and modern military operations. Paying close attention to the way the West – principally the U.S. – conducted the First Gulf War and operations in Kosovo and the Balkans in the 1990s, the PRC has been aggressively pursuing a modernization and reform program that has culminated in the force it fields today. Indeed, today’s force structure closely resembles aspirational PLA and PRC writing from the 1990s.

In many ways, the PLA understanding of modern war reflects the American understanding insofar as both refer to the centrality of information and the need to control the “network domain.” “Informatized War” and “Informatized Operations” occur within a multi-dimensional space – land, sea, air, space, and the “network electromagnetic,” or what Americans generally understand as “cyberspace.” The U.S. has long held that control of the network domain provides a significant “first mover advantage,” and the PRC is well on the way toward building the capability to contest control of that domain. Its writings consistently hold that the PLA must degrade and destroy the adversary’s information support infrastructure to lessen its ability to respond or retaliate. This is especially necessary for “the weak to defeat the strong,” because most current writing suggests the PLA still believes itself inferior to American forces, though this perception is rapidly changing. Regardless, the PRC understanding of modern war supposes a strong incentive for aggressive action in the network domain immediately prior to the onset of hostilities.6 These operations are not restricted geographically, and we should expect to see full-scope network operations worldwide in pursuit of Chinese interests, including in the American homeland.7

There are three components to a strategic first strike in the cyber domain. The first is network reconnaissance: gaining an understanding of critical adversary networks, identifying vulnerabilities, and manipulating adversary perception to obtain strategic advantage. The second is posturing network forces so they can conduct “system sabotage” at a time and place of the PRC’s choosing. The third is execution: when the time is right, such as a prelude to a Taiwan invasion or perhaps the establishment of an air defense identification zone over the South China Sea, the PRC will use system sabotage to render adversary information systems impotent, or to illuminate the adversary’s “strategic cyber geography” in order to establish a form of “offensive cyber deterrence.” The PRC could take action to expose its presence in critical government, military, or civilian networks and perhaps conduct some forms of attack in order to send a “warning shot across the bow,” giving national decision-makers reason to pause and incentive not to intervene.8

Indeed, unlike the American perspective, which seeks to use cyberspace operations as a non-kinetic means to dissuade or deter potential adversaries in what Americans like to think of as “Phase 0,” the PLA has increasingly moved toward an operational construct that blends cyberspace operations with kinetic operations, creating a form of “cyber-kinetic strategic interaction.” The goal would be to blind, disrupt, or deceive adversary command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR) systems while almost simultaneously deploying its formidable conventional strike, ballistic missile, and maritime power projection forces. The PLA envisions this operational concept as “integrated network electronic warfare,” described by Michael Raska as the “coordinated use of cyber operations, electronic warfare, space control, and kinetic strikes designed to create ‘blind spots’ in an adversary’s C4ISR systems.”9

The PLA has recently described this as a form of “network swarming attacks” and “multi-directional maneuvering attacks” conducted in all domains – space, cyberspace, ground, air, and sea. The Strategic Support Force has been designed to provide these integrated operations, employing electronic warfare, cyberspace operations, space and counter-space operations, military deception and psychological operations working jointly with long-range precision strike, ballistic missile forces and traditional conventional forces.

Essential to these concepts is China’s ability to achieve dominance over space-based information assets. PRC authors acknowledge this as critical to conducting joint operations and sustaining battlefield initiative. This includes not only the orbiting systems, but also ground stations, tracking and telemetry control, and associated data systems. We can expect full-scope operations targeting all elements of America’s space-based information enterprise.

Important to all of this is the necessity of preparatory operations that take place during “peacetime.” China understands that many of its cyberspace, network, electronic and space warfare capabilities will not be available unless it has gained access to and conducted extensive reconnaissance of key systems and pre-placed capabilities to achieve desired effects. We should expect that the PRC is actively attempting to penetrate and exploit key systems now in order to be able to deliver effects at a later date.

Chinese Understandings of Deterrence and International Law in Cyber Warfare

China recently released the “International Strategy of Cooperation on Cyberspace.”10 Graham Webster at the Yale Law School made some recent observations. First, it emphasizes “internet sovereignty,” which is unsurprising, since the CCP has a vested interest in strictly controlling the information space within China, and between China and the rest of the world. This concept of “internet sovereignty” is best understood as the primacy of Chinese interests: China would consider information sources outside its political borders that it deems threatening to be legitimate targets for cyber exploitation and attack. In the minds of the CCP, the governance of cyberspace should recognize the sovereignty of states, so long as the Chinese state’s sovereignty is paramount over the rest of the world’s.

Second, the strategy suggests that “[t]he tendency of militarization and deterrence buildup in cyberspace is not conducive to international security and strategic mutual trust.” This appears to be aimed squarely at the U.S., most likely as a result of Edward Snowden’s disclosures. The U.S. also seems to be the target when the strategy refers to “interference in other countries’ internal affairs by abusing ICT and massive cyber surveillance activities,” and asserts that “no country should pursue cyber hegemony.” Of course, the PRC has been shown to be one of the biggest sources of cyber-enabled intellectual property theft and exploitation, and China’s cyber surveillance and control regimes are legendary in scope. Immediately after decrying the “militarization” of cyberspace, the strategy calls for China to “expedite the development of a cyber force and enhance capabilities … to prevent major crisis, safeguard cyberspace security, and maintain national security and social stability.” These broad, sweeping terms would permit China to later claim that activities appearing to violate the strategy’s own stated principles are in fact legitimate.

The strategy seeks to encourage a move away from multi-stakeholder governance of the Internet to multilateral decision-making among governments, preferably under the United Nations. This would certainly be in China’s interests, as China continues to hold great sway in the U.N., especially among the developing world. After all, China is rapidly expanding its geoeconomic and geoinformational programs, leveraging its state-owned enterprises to provide funding, resources, and informational infrastructure throughout Africa, Asia, Europe, and the Americas. As more countries become dependent on Chinese financing, development, and infrastructure, they will find it harder to oppose or object to governance regimes that favor Chinese interests.

Naturally, the strategy emphasizes domestic initiatives and a commitment to a strong, domestic high-tech industry. This would include the “Made in China 2025” plan, which has received a great deal of attention. The plan seeks to comprehensively upgrade and reform Chinese industry, with an emphasis on information technology.11

When considering deterrence in the Chinese understanding, it is important to remember that China approaches it from a different context than the United States. Jacqueline Deal noted that China’s basic outlook proceeds from the premise that the “natural state of the world is one of conflict and competition, and the goal of strategy is to impose order through hierarchy.”12 While Americans understand deterrence as a rational calculation, the Chinese approach emphasizes the conscious manipulation of perceptions.

Indeed, the Chinese term weishe, which translates as “deterrence,” also embodies the idea of “coercion.” Examples of this understanding can be seen in China’s historic use of “teaching a lesson” to lesser powers. In the 20th century, Chinese offensives against India and Vietnam – thought by many in the West to be examples of tragic misunderstanding and failed signaling of core interests – might be better understood as attempts by China to secure its “rightful” place atop the regional hierarchy. It is a form of “lesson teaching” intended to have long-term deterrent effects.

We can expect, therefore, that cyberspace will become one means among many that China uses in support of its “Three Warfares” (public opinion, media, legal) concept as part of its larger deterrence or compellence strategies. These efforts will likely be much broader than the use of PLA SSF forces, and could include cyber-enabled economic strategies, financial leverage, and resource withholding.

LCDR Jake Bebber is a cryptologic warfare officer assigned to the staff of Carrier Strike Group 12. He previously served on the staff of U.S. Cyber Command from 2013 – 2017. LCDR Bebber holds a Ph.D. in public policy. He welcomes your comments at: jbebber@gmail.com. These views are his alone and do not necessarily represent any U.S. government department or agency.

1. Available at: http://www.newsweek.com/cia-chinese-moles-beijing-spies-577442

2. Dean Cheng (2017). Cyber Dragon: Inside China’s Information Warfare and Cyber Operations. Praeger Security International.

3. Cheng 2017.

4. John Costello and Peter Mattis (2016). “Electronic Warfare and the Renaissance of Chinese Information Operations.” in China’s Evolving Military Strategy (Joe McReynolds, editor). The Jamestown Foundation.

6. Joe McReynolds, et al. (2015) “TERMINE ELECTRON: Chinese Military Computer Network Warfare Theory and Practice.” Center for Intelligence Research and Analysis.

7. Barry D. Watts (2014) “Countering Enemy Informationized Operations in Peace and War.” Center for Strategic and Budgetary Assessments.

8. Timothy L. Thomas (2013) “China’s Cyber Incursions.” Foreign Military Studies Office

9. See: http://www.atimes.com/article/chinas-evolving-cyber-warfare-strategies/

10. See: http://news.xinhuanet.com/english/china/2017-03/01/c_136094371.htm

11. See: https://www.csis.org/analysis/made-china-2025

12. Jacqueline N. Deal (2014). “Chinese Concepts of Deterrence and their Practical Implications for the United States.” Long Term Strategy Group.

Featured Image: The Center for Nanoscale Materials at the Advanced Photon Source. (Photo: Argonne National Laboratory)

Standing Up the NIWDC with CAPT John Watkins

By Sally DeBoer

CIMSEC was recently joined by Captain John Watkins, the first commanding officer of the Naval Information Warfighting Development Center (NIWDC). Read on to learn about this new command’s role in shaping the U.S. Navy’s information warfighting skills and capabilities.

SD: We are joined by CAPT John Watkins, the first commanding officer of the newly opened Naval Information Warfighting Development Center. It is truly an honor to have you here. Before we begin, can you share a bit about yourself and your background?

JW: Thanks first and foremost for having me – it’s an honor for me as well. I came into the Navy in 1992 as a Surface Warfare Officer and completed various tours in engineering. I did that for roughly five years and really enjoyed it, but subsequent to those tours I attended the Naval Postgraduate School in Monterey, California, where I earned a Master’s degree in IT Management; during that time I laterally transferred into the space and electronic warfare community. A few years later, that community was subsumed into the information professional community that we know of today, which comes with the 1820 designator.

Since being an IP, I’ve had multiple operational and staff tours, to include XO of USS Coronado, serving as N6 and Information Warfare Commander on Expeditionary and Carrier Strike Group staffs, and as the N6 on a Numbered Fleet staff. Staff tours have included time on the OPNAV and SURFACE FORCES staffs. I’ve been very fortunate and blessed to have had multiple command tours including NAVCOMTELSTA San Diego, Navy Information Operations Command Texas, and now just recently, my assignment here at the Naval Information Warfighting Development Center.

SD: Let’s kick off by introducing our readers to your new command. Initial operating capability for the NIWDC was declared on 27 March 2017. Could you please explain the role of this warfighting development center, and specifically the mission of the NIWDC within the information domain?

JW: Like the other warfighting development centers (WDC), we are all focused on four primary lines of operation. First, we’re concerned with enhancing advanced level training. As you can imagine, in terms of NIWDC, that entails all of our information-related capabilities. The advanced level training for our units and forces in the fleet occurs at the latter stages of the optimized fleet response plan (OFRP). We’re heavily invested in that along with our fellow WDCs.

The second line of operation is the development of doctrine that allows us to achieve that advanced level of proficiency – doctrine including tactics, techniques, and procedures (TTPs), standard operating procedures (SOPs), higher level Concepts of Operation (CONOPS), or as necessary, revisions to Naval Warfare publications.

The third line of operation is to cultivate and develop subject matter experts known throughout all the WDCs as ‘warfare tactics instructors,’ or WTIs. Other WDCs have WTIs in place today; for example, the model that has been around longest is the Naval Aviation WDC, “Top Gun,” associated with advanced tactics for jet fighting, air-to-air combat, etc. What we want to do here at NIWDC is to build out our own WTI pipeline, which I think of as the “Information Warfare Jedi Knights” of the future; we’ll have quite a few WTI pipelines, as we have a broad spectrum of capabilities.

Last but not least, we’ll have an organic assessments capability built into the command which allows us, in an OODA-loop fashion, to assess our advanced level training capabilities, our TTPs and SOPs, and our doctrine as we bake it into our training pipeline and processes, ensuring it is delivering optimal IW warfighting effects. Those are the four lines of operation that were promulgated to the WDCs, as directed by the Chief of Naval Operations, in 2014.

SD: The traditional warfare Type Commanders (Air, Surface, Undersea) have established their own warfare development centers, as you mentioned. Given that IW is a critical enabler of other warfare areas, how do you envision the NIWDC interacting with the other warfare development centers? What key IW concepts and understandings should be incorporated by other communities?

JW: That’s a fantastic question. NIWDC just achieved IOC designation in late March, and the good news is that while we are the last WDC to be stood up, we already have IW community professionals, both enlisted and officer, arrayed across the other WDCs today, totaling about 150 people, who are working Information Warfare expertise into Naval warfighting. Even as we’re building up to this capability, our folks embedded throughout the other WDCs have done a remarkable job laying the groundwork and foundation for us to come to fruition as the NIWDC. This is significant because the information-related capabilities we bring to bear are so ingrained in all the other mission warfare areas of the Navy that we have to be interlinked with the other WDCs, and vice versa.

As we build up our capabilities here, we’d like to see reciprocal detailing back and forth – where ideally we’ll have Surface Warfare Officers, Submariners, Aviators, etc., embedded and billeted to the NIWDC. That’s the future, and it’s absolutely imperative that we get to that point – to have that common back-and-forth day in and day out as we’re contemplating modern-day warfare. It’s essential for us to understand the other warfare areas, their requirements, how our systems are interdependent, and how we have to operate in real time to optimize our overarching warfare capabilities.

SD: You recently stated, “a key objective of the NIWDC is to provide hard-hitting, fleet-relevant information warfighting effects…” Can you outline what some of those effects might be and what specific mission areas within Information Warfare (IW) they support? 

JW: I think the best way I can answer that question is to describe how we’re building out the command here today. We’ve established a headquarters staff that will manage seven core Mission Area Directorates, or what we refer to as “MADs.”

Those Mission Area Directorates include an Assured Command-and-Control and Cyberspace Operations MAD, a Space Operations MAD, a Meteorology MAD, an Intelligence MAD, a Cryptology MAD, an Electronic Warfare MAD, and an Information Operations MAD. Laying that all out, we can generate information warfare effects from any of those mission areas – but when they are combined, the effect is far greater. It’s the classic ‘greater than the sum of its parts’ principle.

As we develop our organization here, another big effort we’re putting into play in the larger Navy is the Information Warfare Commander construct, which is an organization led by a fully board-screened senior Information Warfare Community Captain (O-6). I’ll describe the construct at the tactical level for now because I think that is the best way to articulate where we’re headed in employing our model. On a Carrier Strike Group (CSG) staff, for example, we have the Information Warfare Commander (IWC) – again, that board-screened IW Community Captain – providing leadership and oversight of the core IW mission areas run by the N2 Intelligence Officer, the N39 Cryptologic Officer, the N6 Communications Officer, and, to the extent we can get it into play, the Meteorological Officer, all of whom, at the end of the day, work for this O-6 IWC. The entire IWC organization works for the Carrier Strike Group Commander, similar to a Destroyer Squadron or Carrier Air Group Commander.

Where the synergistic effect really comes in is information operations planning. If you think across typical phased wartime planning scenarios, the folks sitting down at the table in the IWC organization bring their skills and attributes to the team while enabling holistic planning across all phases of warfare, achieving tremendous synergy and full awareness of the interdependencies and linkages across their mission areas. This powerful effect cannot be overemphasized. Planning in individual stovepipes, i.e., within traditional N-Head silos like the N2, the N39, the N6, or Meteorology, is counterproductive in today’s modern warfare continuum. It’s essential that planning along these lines factors in and accounts for the coordination and integration needs and requirements of our fellow Composite Warfare Commanders. When done correctly, we give our collective Navy team every advantage possible to win when we need to. Suffice it to say, I’m very excited about where we’re headed and how we’re going to make our phenomenal Naval warfighting prowess even better!

SD: There seems to be growing agreement that in future conflict, naval forces will not enjoy undisputed access to the electromagnetic spectrum. How will naval information warfare capabilities enable distributed operations when the spectrum required for C4ISR is being denied, degraded, disrupted, and subjected to deception operations?

JW: That’s another great question that we are constantly focused on. We all acknowledge the fact that in modern warfare scenarios, the likelihood that we will have denied or degraded communications is a given. Frankly, it’s almost no longer an assumption—it’s reality. Simply put, we need to be able to retain organic capabilities as much as possible wherever we are, so that if we lose the link back to the beach, we can still function and fight.

To that end, we’ve got to be able to train, operate, and be proficient in fighting in those types of scenarios. We’re all about getting at that advanced level of necessary training here at the NIWDC.

SD: How do you propose addressing the acquisition and fielding of new information technology (cyber/EW/IW) and developing TTPs under the current DOD acquisition system?

JW: Acquisition is an evolving process, and I think acquisition reform surfaces quite frequently anytime we talk about the dynamics of advancing IT. The rate of advancement in technology is astounding, and the acquisition process needs to be agile enough to keep pace. To that end, we’ve looked for creative and innovative ways within our acquisition process to accelerate and expedite systems that facilitate IW warfighting effects and we need to continue doing so. NIWDC participates in many experimentation and innovation venues that help facilitate that speed-to-fleet dynamic and we’re excited to be a partner in those efforts.    

To your question about the TTPs and SOPs – when we introduce new technology to the fleet, it is important that TTPs and SOPs are built in from day one. We’ve got to be able to deliver a product that comes with robust training behind it so that when it’s delivered to the fleet, our sailors can put it into immediate effect. The TTPs and SOPs that accompany that capability need to be solid enough out of the gate that we achieve success from the first day of fielding.

On top of that, what I want to achieve at the NIWDC is the ability to refine and tweak TTPs and SOPs at a high rate – what I call the “wash, rinse, repeat” approach. There’s no reason we can’t take those TTPs and SOPs, have sailors put them into effect, provide their feedback to us if they’re not quite right and suggest course corrections, then update those on a continuous, OODA-loop basis until we have delivered optimal doctrine.

SD: Our adversaries approach the information space (IW/EW/cyber) holistically, blending electronic and information warfare with cyberspace operations, psychological operations, and deception – and they conduct these operations across all elements of national power (diplomatic, economic, legal, military, information). What steps are you taking to ensure the Navy is developing information warfare strategies, operational concepts, and TTPs that cut across all elements of national power?

JW: I’ll give you an example – that’s the best way I can answer this question. It’s a great question, but one you could spend an hour answering. Earlier in our discussion, we talked about the IWC construct. I’m a firm believer that if we get that instituted correctly and make it a robust organization with the goal of delivering those optimal IW effects, it will serve as the bedrock going forward across the Navy enterprise. We’ll look to institute that construct, as applicable, by using that optimized model at the tactical level and building out from there to implement it at the operational and strategic levels.

Back to the point about our adversaries – when they’re exploiting all this goodness and delivering their effects, they are planning across the DOTMLPF (doctrine, organization, training, materiel, leadership and education, personnel, and facilities) spectrum. We must do the same thing with our IWC construct. At the NIWDC, in partnership with IFOR, this is one of our tasks – to perform the DOTMLPF analysis that will codify the IWC construct. We’ve been tasked by Fleet Forces Command and PACFLT to do just that – this will be one of our top objectives in the first years here at the NIWDC – ensuring we’re setting ourselves up for success for decades to come.

SD: Last but not least – if our listeners are new to information warfare, can you suggest any resources or reading materials that could help the less tech-inclined among us become more familiar with the domain and more ready to address its unique challenges?

JW: There are so many great reference materials, but perhaps the quickest way to answer that is to recommend your readers and listeners go to our command website and InfoDOMAIN, or our Navy News Web page or Facebook page. We have a lot of good products posted there – that would be a great start. We have some items posted there that are specific to the NIWDC, so if your readers want more information or a summary, they can find it there as well.

SD: Thank you so much for your time today, CAPT Watkins. It’s truly been an honor speaking with you, and we thank you for taking time out of your busy schedule to help educate us on your new command and the role of IW in the Navy and DoD going forward. We hope you’ll join us again sometime. 

Captain John Watkins is a native of California. He graduated from the NROTC program at the University of San Diego, obtaining his commission in 1991. He joined the Naval Information Warfighting Development Center as its commanding officer in March of 2017.

Sally DeBoer is an Associate Editor with CIMSEC, and previously served as CIMSEC’s president from 2016-2017. 

Featured Image: Chief Fire Controlman Daniel Glatz, from Green Bay, Wisconsin, stands watch in the combat information center aboard the Arleigh Burke-class guided-missile destroyer USS John S. McCain (DDG 56). (Alonzo M. Archer/U.S. Navy)