Category Archives: Cyber War

Threats, risks, and players in the cyber realm.

21st Century Maritime Operations Under Cyber-Electromagnetic Opposition Part Two

The following article is part of our cross-posting partnership with Information Dissemination’s Jon Solomon.  It is republished here with the author’s permission.  You can read it in its original form here.

Read part one of this series here.

By Jon Solomon

Candidate Principle #2: A Network’s Combat Viability is more than the Sum of its Nodes

Force networking generates an unavoidable trade-off between maximizing collective combat capabilities and minimizing network-induced vulnerability risks. The challenge is finding an acceptable balance between the two in both design and operation; networking provides no ‘free lunch.’

This trade-off was commonly discounted during the network-centric era's early years. For instance, Metcalfe's Law (the idea that a network's potential increases as the square of the number of networked nodes) was often applied in ways suggesting a force would become increasingly capable as more sensors, weapons, and data processing elements were tied together to collect, interpret, and act upon battle space information.[i] Such assertions, though, were made without reference to the network's architecture. The sheer number (or types) of nodes matters little if the disruption of certain critical nodes (relay satellites, for example) or the exploitation of any given node to access the network's internals erodes the network's data confidentiality, integrity, or availability. This makes node-counting on its own a meaningless, and perhaps even dangerously misleading, measure of a network's potential. The same is also true if individual systems and platforms have design limitations that prevent them from fighting effectively if force-level networks are undermined.
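As a rough, hypothetical illustration of the gap between node counts and architecture (the node count, topology, and numbers below are invented for the sketch, not drawn from the article), the following Python snippet contrasts the pairwise-link count behind Metcalfe's Law with what remains usable once a single critical relay node in a hub-and-spoke architecture is disrupted:

```python
# Hypothetical sketch: Metcalfe-style potential vs. what a hub-and-spoke
# architecture actually delivers once its single relay node is lost.

def metcalfe_pairs(n):
    """Potential pairwise links among n nodes (the quantity Metcalfe's Law squares)."""
    return n * (n - 1) // 2

def usable_pairs_via_single_relay(n, relay_up=True):
    """Node pairs that can still exchange data when every path runs through one relay."""
    return metcalfe_pairs(n) if relay_up else 0  # losing the relay isolates every node

nodes = 50
print(metcalfe_pairs(nodes))                        # 1225 potential links
print(usable_pairs_via_single_relay(nodes, True))   # 1225 usable pairs while the relay survives
print(usable_pairs_via_single_relay(nodes, False))  # 0 once the critical relay is disrupted
```

The raw node count is identical in the last two cases; only the architecture and the fate of one critical node differ.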

Consequently, there is a gigantic difference between a network-enhanced warfare system and a network-dependent warfare system. While the former’s performance expands greatly when connected to other force elements via a network, it nevertheless is designed to have a minimum performance that is ‘good enough’ to independently achieve certain critical tasks if network connectivity is unavailable or compromised.[ii] A practical example of this is the U.S. Navy’s Cooperative Engagement Capability (CEC), which extends an individual warship’s air warfare reach beyond its own sensors’ line-of-sight out to its interceptor missiles’ maximum ranges courtesy of other CEC-participating platforms’ sensor data. Loss of the local CEC network may significantly reduce a battle force’s air warfare effectiveness, but the participating warships’ combat systems would still retain formidable self and local-area air defense capabilities.

Conversely, a network-dependent warfare system fails outright when its supporting network is corrupted or denied. For instance, whereas in theory Soviet anti-ship missile-armed bombers of the late 1950s through early 1990s could strike U.S. aircraft carrier battle groups over a thousand miles from the Soviet coast, their ability to do so was predicated upon time-sensitive cueing by the Soviet Ocean Surveillance System (SOSS). SOSS's network was built around a highly centralized situational picture-development and combat decision-making apparatus, which relied heavily upon remote sensors and long-range radio frequency communications pathways that were ripe for EW exploitation. This meant U.S. efforts to slow down, saturate, block, or manipulate sensor data inputs to SOSS, let alone to do the same to the SOSS picture outputs Soviet bomber forces relied upon in order to know their targets' general locations, had the potential to cut any number of critical links in the bombers' 'kill chain.' If bombers were passed a SOSS cue at all, their crews would have had no idea whether they would find a carrier battle group or a decoy asset (and maybe an accompanying aerial ambush) at the terminus of their sortie route. Furthermore, bomber crews firing from standoff range could only be confident they had aimed their missiles at actual high-priority ships and not decoys or lower-priority ships if they received precise visual identifications of targets from scouts that had penetrated to the battle group's center. If these scouts failed in this role—a high probability once U.S. rules of engagement were relaxed following a war's outbreak—the missile salvo would be seriously handicapped and perhaps wasted, if it could be launched at all. Little is different today with respect to China's nascent Anti-Ship Ballistic Missile capability: undermine the underlying surveillance-reconnaissance network and the weapon loses its combat utility.[iii] This is the risk that comes with network dependency.

Candidate Principle #3: Contact Detection is Easy, Contact Classification and Identification are Not

The above SOSS analogy leads to a major observation regarding remote sensing: detecting something is not the same as knowing with confidence what it is. It cannot be overstated that no sensor can infallibly classify and identify its contacts: countermeasures exist against every sensor type.

As an example, for decades we have heard the argument that 'large signature' platforms such as aircraft carriers are especially vulnerable because they cannot readily hide from wide-area surveillance radars and the like. If the only method of carrier concealment were broadband Radar Cross Section suppression, and if the only prerequisite for firing an anti-carrier weapon were a large surface contact's detection, the assertions of excessive vulnerability would be true. A large surface contact held by remote radar, however, can just as easily be a merchant vessel, a naval auxiliary ship, a deceptive, low-campaign-value combatant employing signature-enhancement measures, or an artificial decoy. Whereas advanced radars' synthetic or inverse synthetic aperture modes can be used to discriminate a contact's basic shape as a classification tool, a variety of EW tactics and techniques can prevent those modes' effective use or render their findings suspect. Faced with those kinds of obstacles, active sensor designers might turn to Low Probability of Intercept (LPI) transmission techniques to buy time for their systems to evade detection and also delay the opponent's development of effective EW countermeasures. Nevertheless, an intelligent opponent's signals intelligence collection and analysis efforts will eventually discover and correctly classify an active sensor's LPI emissions. It might take multiple combat engagements over several months for them to do this, or it might take them only a single combat engagement and then a few hours of analysis. This means new LPI techniques must be continually developed, stockpiled, and then situationally employed only on a risk-versus-benefit basis if the sensor's performance is to be preserved throughout a conflict's duration.

Passive direction-finding sensors are confronted by an even steeper obstacle: a non-cooperative vessel can strictly inhibit its telltale emissions or can radiate deceptive emissions. Nor can electro-optical and infrared sensors overcome the remote sensing problem, as their spectral bands render them highly inefficient for wide-area searches, drastically limit their effective range, and leave them susceptible to natural as well as man-made obscurants.[iv]

If a prospective attacker possesses enough ordnance or is not cowed by the political-diplomatic risks of misidentification, he might not care to confidently classify a contact before striking it. On the other hand, if the prospective attacker is constrained by the need to ensure his precious advanced weapons inventories (and their launching platforms) are not prematurely depleted, or if he is constrained by a desire to avoid inadvertent escalation, remote sensing alone will not suffice for weapons-targeting.[v] Just as was the case with Soviet maritime bombers, a relatively risk-intolerant prospective attacker would be compelled to rely upon close-in (and likely visual) classification of targets following their remote detection. This dependency expands a defender’s space for layering its anti-scouting defenses, and suggests that standoff-range attacks cued by sensor-to-shooter networks will depend heavily upon penetrating (if not persistent) scouts that are either highly survivable (e.g., submarines and low-observable aircraft) or relatively expendable (e.g., unmanned system ‘swarms’ or sacrificial manned assets).

On the expendable scout side, an advanced weapon (whether a traditional missile or an unmanned vehicle swarm) could conceivably provide reconnaissance support for other weapons within a raid, such as by exposing itself to early detection and neutralization by the defender in order to provide its compatriots with an actionable targeting picture via a data link. An advanced weapon might alternatively be connected by data link to a human controller who views the weapon’s onboard sensor data to designate targets for it or other weapons in the raid, or who otherwise determines whether the target selected by the weapon is valid. While these approaches can help improve a weapon’s ability to correctly discriminate valid targets, they will nevertheless still lead to ordnance waste if the salvo is directed against a decoy group containing no targets of value. Likewise, as all sensor types can be blinded or deceived, a defender’s ability to thoroughly inflict either outcome upon a scout weapon’s sensor package—or a human controller—could leave an attacker little better off than if its weapons lacked data link capabilities in the first place.

We should additionally bear in mind that the advanced multi-band sensors and external communications capabilities necessary for a weapon to serve as a scout would be neither cheap nor quickly producible. As a result, an attacker would likely possess a finite inventory of these weapons that would need to be carefully managed throughout a conflict’s duration. Incorporation of highly-directional all-weather communications capabilities in a weapon to minimize its data link vulnerabilities would increase the weapon’s relative expense (with further impact to its inventory size). It might also affect the weapon’s physical size and power requirements on the margins depending upon the distance data link transmissions had to cover. An alternative reliance upon omni-directional LPI data link communications would run the same risk of eventual detection and exploitation over time we previously noted for active sensors. All told, the attacker’s opportunity costs for expending advanced weapons with one or more of the aforementioned capabilities at a given time would never be zero.[vi] A scout weapon therefore could conceivably be less expendable than an unarmed unmanned scout vehicle depending upon the relative costs and inventory sizes of both.

The use of networked wide-area sensing to directly support employment of long-range weapons could be quite successful in the absence of vigorous cyber-electromagnetic (and kinetic) opposition performed by thoroughly trained and conditioned personnel. The wicked, exploitable problems of contact classification and identification are not minor, though, and it is extraordinarily unlikely any sensor-to-shooter concept will perform as advertised if it inadequately confronts them. After all, the cyclical struggle between sensors and countermeasures is as old as war itself. Any advances in one are eventually balanced by advances in the other; the key questions are which one holds the upper hand at any given time, and how long that advantage can endure against a sophisticated and adaptive opponent.

In part three of the series, we will consider how a force network’s operational geometry impacts its defensibility. We will also explore the implications of a network’s capabilities for graceful degradation. Read Part Three here.

Jon Solomon is a Senior Systems and Technology Analyst at Systems Planning and Analysis, Inc. in Alexandria, VA. He can be reached at jfsolo107@gmail.com. The views expressed herein are solely those of the author and are presented in his personal capacity on his own initiative. They do not reflect the official positions of Systems Planning and Analysis, Inc. and to the author’s knowledge do not reflect the policies or positions of the U.S. Department of Defense, any U.S. armed service, or any other U.S. Government agency. These views have not been coordinated with, and are not offered in the interest of, Systems Planning and Analysis, Inc. or any of its customers.

[i] David S. Alberts, John J. Garstka, and Frederick P. Stein. Network Centric Warfare: Developing and Leveraging Information Superiority, 2nd Ed. (Washington, D.C.: Department of Defense C4ISR Cooperative Research Program, August 1999), 32-34, 103-105, 250-265.

[ii] For some observations on the idea of network-enhanced systems, see Owen R. Cote, Jr. “The Future of Naval Aviation.” (Cambridge, MA: Massachusetts Institute of Technology Security Studies Program, 2006), 28, 59.

[iii] Solomon, "Defending the Fleet," 39-78. For more details on Soviet anti-ship raiders' dependencies upon visual-range (sacrificial) scouts, see Maksim Y. Tokarev. "Kamikazes: The Soviet Legacy." Naval War College Review 67, No. 1 (Winter 2013): 71, 73-74, 77, 79-80.

[iv] See 1. Jonathan F. Solomon. “Maritime Deception and Concealment: Concepts for Defeating Wide-Area Oceanic Surveillance-Reconnaissance-Strike Networks.” Naval War College Review 66, No. 4 (Autumn 2013): 88-94; 2. Norman Friedman. Seapower and Space: From the Dawn of the Missile Age to Net-Centric Warfare. (Annapolis, MD: Naval Institute Press, 2000), 365-366.

[v] Solomon, “Defending the Fleet,” 94-96.

[vi] Solomon, “Maritime Deception and Concealment,” 95.

Apple believes it is protecting freedom. It’s wrong. Here’s why.

Ed. note: This is an expanded version of a previous article, “We Don’t Need Backdoors.”

By Dave Schroeder

Let me open by saying I’m not for backdoors in encryption. It’s a bad idea, and people who call for backdoors don’t understand how encryption fundamentally works.

Apple has been ordered by a court to assist the FBI in accessing data on an iPhone 5c belonging to the employer of one of the San Bernardino shooters, who planned and perpetrated an international terrorist attack against the United States. Apple has invested a lot in OS security and encryption, but Apple may be able to comply with this order in this very specific set of circumstances.

Apple CEO Tim Cook penned a thoughtful open letter justifying Apple's position that it shouldn't have to comply with this order. However, what the letter essentially says is that any technical cooperation beyond superficially claiming there is "nothing that can be done" is tantamount to creating a "backdoor," irrevocably weakening encryption, and faith in encryption, for everyone.

That is wrong on its face, and we don’t need “backdoors.”

What we do need is this:

A clear acknowledgment that what increasingly exists amounts to virtual fortresses impenetrable by the legal and judicial mechanisms of free society; that many of those systems are developed and employed by US companies, within the US; and that US adversaries use those systems (sometimes specifically and deliberately because they are in the US) against the US and our allies. The discussion needs to start from that point.

The US has a clear and compelling interest in strong encryption, and especially in protecting US encryption systems used by our government, our citizens, and people around the world, from defeat. But the assumption that the only alternatives are either universal strong encryption, or wholesale and deliberate weakening of encryption systems and/or “backdoors,” is a false dichotomy.

How is that so?

Encrypted communication has to be decrypted somewhere in order for it to be utilized by the recipient. That fact can be exploited in various ways. It is done now. It's done by governments and cyber criminals and glorified script kiddies. US vendors like Apple can be at least a partial aid in that process on a device-by-device, situation-by-situation basis, within clear and specific legal authorities, without doing things we don't want, like key escrow, wholesale weakening of encryption, creating "backdoors," or anything similar, with regard to software or devices themselves.
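A minimal sketch of that point, assuming the third-party Python cryptography package is installed (the message text and key handling below are invented for illustration): the ciphertext protects the channel, but the recipient's device must reproduce the plaintext to use it, and that endpoint is where assistance or exploitation can occur.

```python
# Minimal sketch (assumes the third-party "cryptography" package): however
# strong the cipher, the recipient's endpoint must hold the plaintext again
# after decryption, so the endpoint, not the math, is the practical seam.
from cryptography.fernet import Fernet

key = Fernet.generate_key()                  # symmetric key shared by both endpoints
ciphertext = Fernet(key).encrypt(b"meet at the usual place at 0900")

print(ciphertext[:20], b"...")               # an observer of the channel sees only this

# The recipient's app must decrypt to display the message, so the plaintext
# exists again in memory (and often on disk) on the device itself.
plaintext_on_device = Fernet(key).decrypt(ciphertext)
print(plaintext_on_device)
```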

When Admiral Michael Rogers, Director of the National Security Agency and Commander, US Cyber Command, says:

“My position is — hey look, I think that we’re lying that this isn’t technically feasible. Now, it needs to be done within a framework. I’m the first to acknowledge that. You don’t want the FBI and you don’t want the NSA unilaterally deciding, so, what are we going to access and what are we not going to access? That shouldn’t be for us. I just believe that this is achievable. We’ll have to work our way through it. And I’m the first to acknowledge there are international implications. I think we can work our way through this.”

…some believe that is code for, “We need backdoors.” No. He means precisely what he says.

When US adversaries use systems and services physically located in the US, designed and operated by US companies, existing under US law, there are many things — entirely compatible with both the letter and spirit of our law and Constitution — that could be explored, depending on the precise system, service, software, device, and circumstances. Pretending that there is absolutely nothing that can be done, and that it must be either unbreakable, universal encryption for all, or nothing, is a false choice.

To further pretend that it’s some kind of “people’s victory” when a technical system renders itself effectively impenetrable to the legitimate legal, judicial, and intelligence processes of democratic governments operating under the rule of law in free civil society is curious indeed. Would we say the same about a hypothetical physical structure that cannot be entered by law enforcement with a court order?

Many ask why terrorists wouldn’t just switch to something else.

That’s a really easy answer — terrorists use these simple, turnkey platforms for the same reason normal people do: because they’re easy to use. A lot of our techniques, capabilities, sources, and methods have unfortunately been laid bare, but people use things like WhatsApp, iMessage, and Telegram because they’re easy. It’s the same reason that ordinary people — and terrorists — don’t use Ello instead of Facebook, or ProtonMail instead of Gmail. And when people switch to more complicated, non-turnkey encryption solutions — no matter how “simple” the more tech-savvy may think them — they make mistakes that can render their communications security measures vulnerable to defeat.

And as long as the US and its fundamental freedoms engender the culture of innovation which allows companies like Apple to grow and thrive, we will always have the advantage.

Vendors and cloud providers may not always be able to provide assistance; but sometimes they can, given a particular target (person, device, platform, situation, etc.), and they can do so in a way that comports with the rule of law in free society, doesn’t require creating backdoors in encryption, doesn’t require “weakening” their products, does not constitute an undue burden, and doesn’t violate the legal and Constitutional rights of Americans, or the privacy of free peoples anywhere in the world.

Some privacy advocates look at this as a black-and-white, either-or situation, without consideration for national interests, borders, or policy, legal, and political realities. They look at the "law" of the US or UK as fundamentally on the same footing as the "law" of China, Russia, Iran, or North Korea: they're all "laws," and people are subject to them. They warn that if Apple provides assistance, even just this once, then someone "bad" — by their own, arbitrary standards, whether in our own government or in a repressive regime — will abuse it.

The problem is that this simplistic line of reasoning ignores other key factors in the debate. The US is not China. Democracy is not the same as Communism. Free states are not repressive states. We don't stand for, defend, or espouse the same principles. Apple is not a Chinese company. If Apple really believes that complying with a lawful US court order will set a precedent for nations like China, it should perform a little self-examination and ask why it would seek to operate in China, and thus be subject to such law.

The other argument seems to be that if Apple does this once, it would constitute a “backdoor” for “all” iPhones, and thus the abrogation of the rights of all. That is also categorically false. There are a number of factors here: The iPhone belongs to the deceased individual’s employer. The FBI may have a companion laptop that this specific iPhone considers a “trusted device”, and is thus potentially able to deploy an OS update without a passcode. The specific device and/or OS version may have other vulnerabilities or shortcomings that can be exploited with physical access.

This argument seems to be equivalent to saying that if government has any power or capability, it will be abused, and thus should be denied; and that encryption, or anything related to it, should somehow be considered sacrosanct. It's like saying that if we grant the government the lawful authority to enter a door, it could enter any door — even yours. Some might be quick to say this is not the same. Oh, but it is. This is not an encryption backdoor, and does not apply to all iPhones, or even all iPhone 5c models, or even most. It applies to this specific set of circumstances — legally and technically.

It is puzzling indeed to assert that the government can try to break this device, or its crypto, on its own, but if the creator of the cryptosystem helps in any way, that is somehow “weakening” the crypto or creating a “backdoor.” It is puzzling, because it is false.

Specific sets of conditions happen to exist that allow Apple to unlock certain older devices. These conditions exist less and less, and in fewer forms, as devices and iOS versions get newer. Unlocking iOS 7 only works, for example, because Apple has the key. The methodology would only work in this case because it's specifically a pre-iPhone 6 model with a 4-digit passcode and there is a paired laptop in the government's possession. All of this is moot on iPhone 6 and newer.
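As a back-of-the-envelope illustration only (the per-guess delay below is an assumed figure, and this is not a description of Apple's actual unlock mechanism), the arithmetic shows why a 4-digit passcode space is trivial once a device's retry delays and erase-after-failures guards are out of the way:

```python
# Back-of-the-envelope sketch only; the per-guess time is an assumption and
# this does not describe Apple's actual unlock path. The point: a 4-digit
# passcode space is tiny, so the real protection is the device's escalating
# retry delays and erase-after-N-failures policy, not the size of the key space.
candidates = 10 ** 4                      # passcodes 0000 through 9999
seconds_per_guess = 0.08                  # assumed hardware-enforced key-derivation time

worst_case_minutes = candidates * seconds_per_guess / 60
print(candidates, "possible passcodes")
print(round(worst_case_minutes, 1), "minutes to exhaust them if the guards are removed")
```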

Apple is welcome to use every legal mechanism possible to fight this court order — that is their absolute right. But to start and grow their company in the United States, to exist here because of the fundamental environment we create for freedom and innovation, and then to act as if Apple is somehow divorced from the US and owes it nothing, even when ordered by a court to do so, is a puzzling and worrisome position.  They can’t have it both ways.

If Apple wishes to argue against the application of the All Writs Act — which, while old, is precisely on-point — it needs to make the case that performing the technical steps necessary to comply with this court order creates an “undue burden.” It may be able to make just that argument.


We exist not in an idealized world where the differences of people, groups, and nation-states are erased by the promise of the Internet and the perceived panacea of unbreakable encryption.

We exist in a messy and complicated reality. People seek to do us harm. They use our own laws, creations, and technologies against us. People attack the US and the West, and they use iPhones.

Apple says that breaking this device, even just this once, assuming it is even technically possible in this instance, sets a dangerous precedent.

Refusing to comply with a legitimate court order levied by a democratic society, because of a devotion to some perceived higher ideal of rendering data off-limits under all circumstances to the valid legal processes of that society, is the dangerous precedent.

The national security implications of this case cannot be overstated. By effectively thumbing its nose at the court’s order, Apple is not protecting freedom; it is subverting the protection of it for the sake of a misguided belief in an ideal that does not exist, and is not supported by reality.

Dave Schroeder serves as an Information Warfare Officer in the US Navy. He is also a tech geek at the University of Wisconsin—Madison. He holds a master's degree in Information Warfare, is a graduate of the Naval Postgraduate School, and is currently in the Cybersecurity Policy graduate program at the University of Maryland University College. He also manages the Navy IWC Self Synchronization effort. Follow @daveschroeder and @IDCsync.

The views expressed in this article do not represent the views of the US Navy or the University of Wisconsin—Madison.

21st Century Maritime Operations Under Cyber-Electromagnetic Opposition Part One

The following article is part of our cross-posting partnership with Information Dissemination’s Jon Solomon.  It is republished here with the author’s permission.  You can read it in its original form here.

By Jon Solomon

Future high-end maritime warfare tends to be described as the use of distributed, networked maritime sensors that 'seamlessly' cue the tactical actions of dispersed forces armed with standoff-range guided weapons. Most commentary regarding these 'sensor-to-shooter' networks has been based on their hypothesized performance under 'perfect' conditions: sensors that see all within their predicted fields of view, processors that unfailingly discriminate and classify targets correctly, communications pathways that reliably and securely transmit data between network nodes, and situational pictures that assuredly portray ground truth to combat decision-makers. While it is not unreasonable to start with such an idealized view in order to grasp these networks' potential, it is misguided to end analysis there. Regrettably, it is not unusual to come across predictions implying that these networks will provide their operators with an unshakable and nearly-omniscient degree of situational awareness, or that the more tightly-networked a force becomes, the more likely it is that the geographic area it covers will become a graveyard for the enemy.

Although we implicitly understand networked maritime warfare relies upon the electromagnetic spectrum and cyberspace, for some reason we tend to overlook the fact that these partially-overlapping domains will be fiercely contested in any major conflict. It follows that we tend not to consider the effects of an adversary's cyber warfare and Electronic Warfare (EW) when assessing proposed operating concepts and force networking architectures. Part of this stems from the fact that U.S. Navy forces engaged in actual combat over the past seventy years seldom faced severe EW opposition, and have never faced equivalent cyber attacks. Even so, as recently as the 1980s, the Navy's forward deployed forces routinely operated within intensive EW environments. Though certain specific skill sets and capabilities were highly compartmentalized due to classification considerations, Cold War-era regular Navy units and battle groups were trained not only to fight through an adversary's electronic attacks but also to wield intricate EW methods of their own for deception and concealment.[i] The Navy's EW (and now cyber warfare) prowess lives on within its nascent Information Dominance Corps, but this is not the same as having a broad majority of the overall force equipped and conditioned to operate in heavily contested cyber-electromagnetic warfare environments.

Any theory of how force networking should influence naval procurement, force structure, or doctrine is dangerously incomplete if it inadequately addresses the challenges posed by cyber-electromagnetic opposition. Accordingly, we need to understand whether cyber-electromagnetic warfare principles exist that can guide our debates about future maritime operating concepts. 

This week I’ll be proposing several candidate principles that seem logical based on modern naval warfare systems’ and networks’ general characteristics. The resulting list should hardly be considered comprehensive, and is solely intended to stimulate debate. Needless to say, these candidates (and any others) will need to be subjected to rigorous testing within war games, campaign analyses, fleet exercises, and real world operations if they are to be validated as principles.

Candidate Principle #1: All Systems and Networks are Inherently Exploitable

It is a fact of nature, not to mention engineering, that notwithstanding their security features all complex systems (and especially the ‘systems of systems’ that constitute networks) inherently possess exploitable design vulnerabilities.[ii] Many vulnerabilities are relatively easy to identify and exploit, which conversely increases the chances a defender will uncover and then effectively mitigate them before an attacker can make best use of them. Others are buried deep within a system, which therefore makes them difficult for an adversary to discover let alone directly access. Still others, though perhaps more readily discernable, are only exploitable under very narrow circumstances or if significant resources are committed. It is entirely possible that notwithstanding its inherent vulnerabilities, a given system might survive an entire protracted conflict without being seriously exploited by an adversary. To confidently assume this ideal outcome would in fact occur, though, amounts to a high-stakes gamble at best and technologically unjustified hubris at worst. Instead, system architects and operators must assume that with enough time, an adversary will not only uncover a usable vulnerability but also develop a viable means of exploiting it if the anticipated spoils merit the requisite investments.

A handful of subtle design shortcomings may be enough to enable the blinding, distraction, or deception of a sensor system; disruption or penetration of network infrastructure systems; or manipulation of a Command and Control (C2) system’s situational picture. Systems can also be sabotaged, with ‘insider threats’ such as components received from compromised supply chains—not to mention actions by malevolent personnel—arguably being just as effective as remotely-launched attacks. For example, a successful inside-the-lifelines attack against the industrial controls of a shipboard auxiliary system might have the indirect effect of crippling any warfare systems that rely upon the former’s services. Cyber-electromagnetic indiscipline within one’s own forces might even be viewed as a particularly damaging, though not deliberately malicious, form of insider threat in which the inadequate ‘hygiene’ or ill-considered tactics of a single operator or maintainer can eviscerate an entire system’s or network’s security architecture.[iii]

Moreover, networking can allow an adversary to use their exploitation of a single, easily-overlooked system as a gateway for directly attacking important systems elsewhere, thereby negating the latter's robust outward-facing cyber-electromagnetic defenses. Any proposed network connection into a system must be cynically viewed as a potential doorway for attack, even if its exploitation would seem to be incredibly difficult or costly to achieve.[iv]

This hardly means system developers must build a ‘brick wall’ behind every known vulnerability, if that were even feasible. Instead, a continuous process of searching for and examining potential vulnerabilities and exploits is necessary so that risks can be recognized and mitigation measures prioritized.[v] Operators, however, cannot take solace if told that the risks associated with every ‘critical’ vulnerability known at a given moment have been satisfactorily mitigated. There is simply no way to guarantee that undiscovered critical vulnerabilities do not exist, that all known ‘non-critical’ vulnerabilities’ characteristics are fully understood, that the mitigations are indeed sufficient, or that the remedies themselves do not spawn new vulnerabilities.
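One hedged sketch of what that continuous prioritization could look like in miniature (the vulnerability names, likelihoods, and impact scores below are invented for illustration, not drawn from the article): rank known weaknesses by estimated likelihood of exploitation multiplied by operational impact, and revisit the ranking as new discoveries arrive.

```python
# Illustrative sketch only; the entries and weights are invented. One simple
# way to prioritize mitigation is to rank known vulnerabilities by estimated
# likelihood of exploitation multiplied by operational impact, then revisit
# the ranking continuously as new vulnerabilities and exploits are found.
vulnerabilities = [
    {"name": "unpatched relay firmware",      "likelihood": 0.6, "impact": 9},
    {"name": "weak data-link authentication", "likelihood": 0.3, "impact": 8},
    {"name": "removable-media policy gap",    "likelihood": 0.8, "impact": 5},
]

for v in sorted(vulnerabilities, key=lambda v: v["likelihood"] * v["impact"], reverse=True):
    print(f'{v["name"]}: risk score {v["likelihood"] * v["impact"]:.1f}')
```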

The next post in the series will investigate the fallacy of judging a force network’s combat viability by merely counting its number of nodes. We will also examine the challenges in classifying and identifying potential targets, and what that means for the employment of standoff-range weapons. Read Part Two here.

Jon Solomon is a Senior Systems and Technology Analyst at Systems Planning and Analysis, Inc. in Alexandria, VA. He can be reached at jfsolo107@gmail.com. The views expressed herein are solely those of the author and are presented in his personal capacity on his own initiative. They do not reflect the official positions of Systems Planning and Analysis, Inc. and to the author’s knowledge do not reflect the policies or positions of the U.S. Department of Defense, any U.S. armed service, or any other U.S. Government agency. These views have not been coordinated with, and are not offered in the interest of, Systems Planning and Analysis, Inc. or any of its customers.

[i] Jonathan F. Solomon. “Defending the Fleet from China’s Anti-Ship Ballistic Missile: Naval Deception’s Roles in Sea-Based Missile Defense.” (master’s thesis, Georgetown University, 2011), 58-62.

[ii] Bruce Schneier. Secrets and Lies: Digital Security in a Networked World. (Indianapolis, IN: Wiley Publishing, 2004), 5-8.

[iii] For elaboration on the currently observed breadth and impacts of insufficient cyber discipline and hygiene, see 1. “FY12 Annual Report: Information Assurance (IA) and Interoperability (IOP).” (Washington, D.C.: Office of the Director, Operational Test and Evaluation (DOT&E), December 2012), 307-309; 2. “FY13 Annual Report: Information Assurance (IA) and Interoperability (IOP).” (Washington, D.C.: Office of the Director, Operational Test and Evaluation (DOT&E), January 2014), 330, 332-334.

[iv] For an excellent discussion of this and other vulnerability-related considerations from U.S. Navy senior leaders’ perspective, see Sydney J. Freedberg Jr. “Navy Battles Cyber Threats: Thumb Drives, Wireless Hacking, & China.” Breaking Defense, 04 April 2013, accessed 1/7/14, http://breakingdefense.com/2013/04/navy-cyber-threats-thumb-drives-wireless-hacking-china/

[v] Schneier, 288-303.

Towards A National Cyber Force “Department of the Air Force – US Cyber Corps”

By Don Donegan

The US needs a Cyber Corps as a new Service to successfully meet challenges in the cyber domain, but almost as importantly, to harvest military talent in an innovative manner. And we have a blueprint in front of us.


The emergence and evolution of “cyberspace” as a warfare domain on par with the air, land, maritime, and space domains presents one of today’s fundamental military challenges – although cyberspace is somewhat awkwardly qualified as being “within the information environment.”[1] A new “front” in the cyberspace operations discussion continues to emerge as defense experts call for a separate cyber force, an idea raised notably by retired Admiral James Stavridis as one of his “heretical propositions on US defense policy[2]” and in recent Congressional testimony. With its own domain, acknowledged adversaries, and a continually increasing impact on warfighting, cyberspace should be the principal operating domain for a separate branch of the US Armed Forces, the US Cyber Corps (USCC).

To maximize the effectiveness of cyberspace operations (to include cyberspace attack and cyberspace counter-attack)[3], a service branch dedicated to and centered upon offensive cyberspace operations would lay the foundation to ensure warfighting success. The obvious historical analogy for the establishment of USCC is the evolution of the US Air Force (USAF), from its beginnings within the US Army to its designation as a service within its own department, including sharing responsibilities in the air domain with the other services. Post-World War II US military operations are difficult to re-imagine without the contributions of a military service primarily focused on the air domain – even if a separate air service seemed incomprehensible to military officers a century ago. However, USCC could have another historical precedent:  the Navy-Marine Corps relationship as two services within a single Department. Considering the evolution and broad nature of the cyberspace domain, the Department of the Air Force makes sense as the logical “umbrella” for both the USAF and USCC.

Based on USAF responsibilities in three domains (air, space, and cyberspace) and its core mission of global strike, creating the USCC under the auspice of the Department of the Air Force is a bold and innovative yet natural evolution for the Department. Separating the cyberspace mission from the air and space missions creates an opportunity to fully focus on the unique challenges in cyberspace operations. Placing USCC within the Department of the Air Force capitalizes on USCC-USAF linkages and allows them to share key resources. The Navy-Marine Corps dynamic within the Department of the Navy provides an initial blueprint for the expanded Department of the Air Force.

The principal advantages of establishing USCC as a Service within the Department of the Air Force include:

  • Fully dedicating a Service’s resources to the cyberspace domain, with a particular emphasis on cyberspace operations as a global strike capability.
  • Leveraging existing support and relationships with its sister Service in order to maintain existing USAF capabilities and control costs. In addition, the Departments of the Army and Navy would cede some cyberspace responsibilities and associated funding to USCC, offsetting some costs.
  • Providing a principal Defense Department entity for cyberspace operations to execute and coordinate at the same level as the other Services, particularly with regard to POTUS/SECDEF tasking as well as Defense Support to Civil Authorities (DSCA).
  • Developing the roles, responsibilities, and authorities required for cyberspace operations, particularly offensive cyberspace operations, in the manner today’s Services do for the other domains.
  • Creating a new paradigm for accessing, training, educating, retaining, and advancing the talent pool for cyberspace operations.

The new paradigm in personnel management presents perhaps the strongest argument for establishing USCC: providing this new service the latitude to recruit personnel using non-traditional methods and criteria, and then to develop them professionally to be, first and foremost, “cyber operators.” Specific opportunities include:

  • Capturing talent across the age spectrum by attracting and inducting experienced personnel, not just the 18-25 year old cohort, into the service.
  • Opening the aperture to include professionals who do not match the typical profile for recruits or officer candidates, including those who may not be world-wide deployable – since USCC would not deploy as other Services do.
  • Allowing US Air Force Academy graduates to select USAF or USCC as a service assignment and incorporating cyberspace in the Air University curriculum.
  • Inducting cyberspace/information professionals who have specialized and excelled in those areas within their own Service (inter-service transfers).
  • Growing true cyberspace professionals who compete for advancement, and thus leadership positions, on a level playing field with peers whose main focus is also the cyberspace domain.

As an alternative to establishing the US Cyber Corps, US Cyber Command (USCYBERCOM) could become more like US Special Operations Command (USSOCOM), employing SOCOM’s unique model of Title X responsibilities and authorities mixed with service-supported personnel and acquisition systems.[4] Like SOCOM, CYBERCOM would exercise worldwide responsibilities, plan and execute its special mission sets in coordination with geographic Combatant Commands, and maintain strong roots in each of the Services. However, this enabling option would miss the key opportunity presented in the US Cyber Corps proposal; namely, recruiting, educating, training, and retaining skilled personnel outside the traditional military accession and promotion systems.

PENSACOLA, Fla. (Jan. 25, 2014) Information Systems Technician 1st Class Kyle Gosser, an instructor at the Center for Information Dominance Unit Corry Station, mentors a local high school student participating in the inaugural Cyberthon competition at the National Flight Academy at Naval Air Station Pensacola during the weekend of Jan. 23-25. The Cyberthon competition tests student teams on their abilities to use the computer skills they learned in their classrooms to defend and defeat cyber attacks on websites. (U.S. Navy photo by Ed Barker/Released)

A principal argument against US Cyber Corps is that today's fiscal environment cannot support additional costs in terms of "bureaucracy." However, some savings and efficiencies can certainly be realized by other services divesting some cyberspace responsibilities. Additionally, USCC would need far fewer bases, much less equipment and logistics support, and fewer personnel than its sister services. Training, education, personnel support, and infrastructure can be shared with other services, with much of the support coming from within the Department of the Air Force.

Returning to the historical analogy, the political and fiscal circumstances following World War II also presented a less than ideal time to create a new Armed Service. However, with opportunities and threats in the air domain, the National Security Act of 1947 created the US Air Force – a controversial step at the time that seems inevitable in retrospect. Today’s fiscal circumstances and operational threats echo those post-World War II concerns. Perhaps in 50 years the choice to dedicate a service to the cyberspace domain will also appear to have been self-evident.  

In conclusion, despite the importance of cyberspace operations as an operational enabler within and across the other domains, each service correctly focuses its acquisition and professional development efforts on winning the fight in its principal domain. A critical first step towards fully exploiting the potential of cyberspace operations is creating the foundation for a Service to “own” cyberspace as a warfighting domain. The formation of USCC would provide a unique approach, especially with respect to developing a professional cyberwarfare community, to enable the global, continuous reach of cyberspace operations.

Captain Donegan is a career surface warfare officer. A native of Hagerstown, MD, he graduated with merit from the United States Naval Academy in 1992 with a Bachelor of Science in History. He is also a graduate of both the American Military University with a Master of Arts in Military Studies (Naval Warfare) and the Naval War College. The views above are the author’s and do not represent those of the US Navy or the US Department of Defense.


[1] JP 1-02, page 64.

[2] “Incoming: A Handful of Heretical Thoughts,” Adm. James Stavridis, USN (Ret.), Signal Magazine, 01 Dec 2015.

[3] Delineation of offensive and defensive cyberspace operations is a fuller topic. This article focuses on the need to establish the foundations for offensive cyberspace operations by creating USCC. Each Service retains responsibilities for cyberspace defense of its systems and platforms (analogous to force protection requirements).

[4] USCYBERCOM is a sub-unified command subordinate to U.S. Strategic Command (USSTRATCOM). Service elements include: Army Cyber Command (ARCYBER); Air Forces Cyber (AFCYBER); Fleet Cyber Command (FLTCYBERCOM); and Marine Forces Cyber Command (MARFORCYBER). Source: US Cyber Command Fact Sheet (Aug 2013).