
Sea Control 110 – Small Arms Control and the South Pacific

This week Natalie Sambhi interviews fellow Aussie Laura Spano, Arms Control Manager with the Centre for Armed Violence Reduction and Pacific Regional Coordinator for Control Arms. They discuss small arms flows globally before focussing on the impacts of illicit arms flows, as a result of weak maritime security, into the South Pacific Islands—a region of great strategic importance to Australia. Laura explains the Arms Trade Treaty and how UN regimes on arms control are essential for development in Australia’s closest region.

DOWNLOAD Sea Control 110

You can follow Laura on Twitter @lspano27

For a quick glance at the Arms Trade Treaty, check out this fact sheet.

Image courtesy of Flickr user Teknorat.


21st Century Maritime Operations Under Cyber-Electromagnetic Opposition Part Two

The following article is part of our cross-posting partnership with Information Dissemination’s Jon Solomon.  It is republished here with the author’s permission.  You can read it in its original form here.

Read part one of this series here.

By Jon Solomon

Candidate Principle #2: A Network’s Combat Viability is more than the Sum of its Nodes

Force networking generates an unavoidable trade-off between maximizing collective combat capabilities and minimizing network-induced vulnerability risks. The challenge is finding an acceptable balance between the two in both design and operation; networking provides no ‘free lunch.’

This trade-off was commonly discounted during the network-centric era’s early years. For instance, Metcalfe’s Law—the idea that a network’s potential increases as the square of the number of networked nodes—was often applied in ways suggesting a force would become increasingly capable as more sensors, weapons, and data processing elements were tied together to collect, interpret, and act upon battle space information.[i] Such assertions, though, were made without reference to the network’s architecture. The sheer number (or types) of nodes matters little if the disruption of certain critical nodes (relay satellites, for example) or the exploitation of any given node to access the network’s internals erodes the network’s data confidentiality, integrity, or availability. This renders node-counting on its own a meaningless, perhaps even dangerously misleading, measure of a network’s potential. The same is also true if individual systems and platforms have design limitations that prevent them from fighting effectively if force-level networks are undermined.
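The architecture point can be made concrete with a toy calculation (an illustrative sketch of my own, not drawn from the article, with hypothetical node names): Metcalfe-style node-counting credits a ten-node hub-and-spoke network with 45 potential links, yet the loss of its single relay node drops the number of node pairs that can actually communicate to zero, while node-counting alone would still report a healthy 36.

```python
from collections import Counter

def potential_links(n):
    """Metcalfe-style count of potential pairwise links among n nodes."""
    return n * (n - 1) // 2

def reachable_pairs(nodes, edges):
    """Pairs of nodes that can still communicate, via union-find components."""
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    sizes = Counter(find(v) for v in nodes)
    return sum(s * (s - 1) // 2 for s in sizes.values())

# Hypothetical force network: nine shooters linked only through one relay.
nodes = ["relay"] + [f"ship{i}" for i in range(1, 10)]
edges = [("relay", v) for v in nodes[1:]]

print(potential_links(len(nodes)))                  # 45 potential links
print(reachable_pairs(nodes, edges))                # 45: all pairs connected

# Disrupt the single critical relay node; every edge touched it.
survivors, surviving_edges = nodes[1:], []
print(potential_links(len(survivors)))              # 36 by node-counting alone
print(reachable_pairs(survivors, surviving_edges))  # 0: connectivity collapses
```

The gap between 36 and 0 is the difference between counting nodes and assessing architecture; a mesh topology among the same ten nodes would degrade far more gracefully.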

Consequently, there is a gigantic difference between a network-enhanced warfare system and a network-dependent warfare system. While the former’s performance expands greatly when connected to other force elements via a network, it nevertheless is designed to have a minimum performance that is ‘good enough’ to independently achieve certain critical tasks if network connectivity is unavailable or compromised.[ii] A practical example of this is the U.S. Navy’s Cooperative Engagement Capability (CEC), which extends an individual warship’s air warfare reach beyond its own sensors’ line-of-sight out to its interceptor missiles’ maximum ranges courtesy of other CEC-participating platforms’ sensor data. Loss of the local CEC network may significantly reduce a battle force’s air warfare effectiveness, but the participating warships’ combat systems would still retain formidable self and local-area air defense capabilities.

Conversely, a network-dependent warfare system fails outright when its supporting network is corrupted or denied. For instance, whereas in theory Soviet anti-ship missile-armed bombers of the late 1950s through early 1990s could strike U.S. aircraft carrier battle groups over a thousand miles from the Soviet coast, their ability to do so was predicated upon time-sensitive cueing by the Soviet Ocean Surveillance System (SOSS). SOSS’s network was built around a highly centralized situational picture-development and combat decision-making apparatus, which relied heavily upon remote sensors and long-range radio frequency communications pathways that were ripe for EW exploitation. This meant U.S. efforts to slow down, saturate, block, or manipulate sensor data inputs to SOSS, let alone to do the same to the SOSS picture outputs Soviet bomber forces relied upon in order to know their targets’ general locations, had the potential of cutting any number of critical links in the bombers’ ‘kill chain.’ If bombers were passed a SOSS cue at all, their crews would have had no idea whether they would find a carrier battle group or a decoy asset (and maybe an accompanying aerial ambush) at the terminus of their sortie route. Furthermore, bomber crews firing from standoff-range could only be confident they had aimed their missiles at actual high-priority ships and not decoys or lower-priority ships if they received precise visual identifications of targets from scouts that had penetrated to the battle group’s center. If these scouts failed in this role—a high probability once U.S. rules of engagement were relaxed following a war’s outbreak—the missile salvo would be seriously handicapped and perhaps wasted, if it could be launched at all. 
Little is different today with respect to China’s nascent Anti-Ship Ballistic Missile capability: undermine the underlying surveillance-reconnaissance network and the weapon loses its combat utility.[iii] This is the risk systems take with network-dependency.

Candidate Principle #3: Contact Detection is Easy, Contact Classification and Identification are Not

The above SOSS analogy leads to a major observation regarding remote sensing: detecting something is not the same as knowing with confidence what it is. It cannot be overstated that no sensor can infallibly classify and identify its contacts: countermeasures exist against every sensor type.

As an example, for decades we have heard the argument ‘large signature’ platforms such as aircraft carriers are especially vulnerable because they cannot readily hide from wide-area surveillance radars and the like. If the only method of carrier concealment was broadband Radar Cross Section suppression, and if the only prerequisite for firing an anti-carrier weapon was a large surface contact’s detection, the assertions of excessive vulnerability would be true. A large surface contact held by remote radar, however, can just as easily be a merchant vessel, a naval auxiliary ship, a deceptive low campaign-value combatant employing signature-enhancement measures, or an artificial decoy. Whereas advanced radars’ synthetic or inverse synthetic aperture modes can be used to discriminate a contact’s basic shape as a classification tool, a variety of EW tactics and techniques can prevent those modes’ effective use or render their findings suspect. Faced with those kinds of obstacles, active sensor designers might turn to Low Probability of Intercept (LPI) transmission techniques to buy time for their systems to evade detection and also delay the opponent’s development of effective EW countermeasures. Nevertheless, an intelligent opponent’s signals intelligence collection and analysis efforts will eventually discover and correctly classify an active sensor’s LPI emissions. It might take multiple combat engagements over several months for them to do this, or it might take them only a single combat engagement and then a few hours of analysis. This means new LPI techniques must be continually developed, stockpiled, and then situationally employed only on a risk-versus-benefit basis if the sensor’s performance is to be preserved throughout a conflict’s duration.

Passive direction-finding sensors are confronted by an even steeper obstacle: a non-cooperative vessel can strictly inhibit its telltale emissions or can radiate deceptive emissions. Nor can electro-optical and infrared sensors overcome the remote sensing problem, as their spectral bands render them highly inefficient for wide-area searches, drastically limit their effective range, and leave them susceptible to natural as well as man-made obscurants.[iv]

If a prospective attacker possesses enough ordnance or is not cowed by the political-diplomatic risks of misidentification, he might not care to confidently classify a contact before striking it. On the other hand, if the prospective attacker is constrained by the need to ensure his precious advanced weapons inventories (and their launching platforms) are not prematurely depleted, or if he is constrained by a desire to avoid inadvertent escalation, remote sensing alone will not suffice for weapons-targeting.[v] Just as was the case with Soviet maritime bombers, a relatively risk-intolerant prospective attacker would be compelled to rely upon close-in (and likely visual) classification of targets following their remote detection. This dependency expands a defender’s space for layering its anti-scouting defenses, and suggests that standoff-range attacks cued by sensor-to-shooter networks will depend heavily upon penetrating (if not persistent) scouts that are either highly survivable (e.g., submarines and low-observable aircraft) or relatively expendable (e.g., unmanned system ‘swarms’ or sacrificial manned assets).

On the expendable scout side, an advanced weapon (whether a traditional missile or an unmanned vehicle swarm) could conceivably provide reconnaissance support for other weapons within a raid, such as by exposing itself to early detection and neutralization by the defender in order to provide its compatriots with an actionable targeting picture via a data link. An advanced weapon might alternatively be connected by data link to a human controller who views the weapon’s onboard sensor data to designate targets for it or other weapons in the raid, or who otherwise determines whether the target selected by the weapon is valid. While these approaches can help improve a weapon’s ability to correctly discriminate valid targets, they will nevertheless still lead to ordnance waste if the salvo is directed against a decoy group containing no targets of value. Likewise, as all sensor types can be blinded or deceived, a defender’s ability to thoroughly inflict either outcome upon a scout weapon’s sensor package—or a human controller—could leave an attacker little better off than if its weapons lacked data link capabilities in the first place.

We should additionally bear in mind that the advanced multi-band sensors and external communications capabilities necessary for a weapon to serve as a scout would be neither cheap nor quickly producible. As a result, an attacker would likely possess a finite inventory of these weapons that would need to be carefully managed throughout a conflict’s duration. Incorporation of highly-directional all-weather communications capabilities in a weapon to minimize its data link vulnerabilities would increase the weapon’s relative expense (with further impact to its inventory size). It might also affect the weapon’s physical size and power requirements on the margins depending upon the distance data link transmissions had to cover. An alternative reliance upon omni-directional LPI data link communications would run the same risk of eventual detection and exploitation over time we previously noted for active sensors. All told, the attacker’s opportunity costs for expending advanced weapons with one or more of the aforementioned capabilities at a given time would never be zero.[vi] A scout weapon therefore could conceivably be less expendable than an unarmed unmanned scout vehicle depending upon the relative costs and inventory sizes of both.

The use of networked wide-area sensing to directly support employment of long-range weapons could be quite successful in the absence of vigorous cyber-electromagnetic (and kinetic) opposition performed by thoroughly trained and conditioned personnel. The wicked, exploitable problems of contact classification and identification are not minor, though, and it is extraordinarily unlikely any sensor-to-shooter concept will perform as advertised if it inadequately confronts them. After all, the cyclical struggle between sensors and countermeasures is as old as war itself. Any advances in one are eventually balanced by advances in the other; the key questions are which one holds the upper hand at any given time, and how long that advantage can endure against a sophisticated and adaptive opponent.

In part three of the series, we will consider how a force network’s operational geometry impacts its defensibility. We will also explore the implications of a network’s capabilities for graceful degradation. Read Part Three here.

Jon Solomon is a Senior Systems and Technology Analyst at Systems Planning and Analysis, Inc. in Alexandria, VA. He can be reached at [email protected]. The views expressed herein are solely those of the author and are presented in his personal capacity on his own initiative. They do not reflect the official positions of Systems Planning and Analysis, Inc. and to the author’s knowledge do not reflect the policies or positions of the U.S. Department of Defense, any U.S. armed service, or any other U.S. Government agency. These views have not been coordinated with, and are not offered in the interest of, Systems Planning and Analysis, Inc. or any of its customers.

[i] David S. Alberts, John J. Garstka, and Frederick P. Stein. Network Centric Warfare: Developing and Leveraging Information Superiority, 2nd Ed. (Washington, D.C.: Department of Defense C4ISR Cooperative Research Program, August 1999), 32-34, 103-105, 250-265.

[ii] For some observations on the idea of network-enhanced systems, see Owen R. Cote, Jr. “The Future of Naval Aviation.” (Cambridge, MA: Massachusetts Institute of Technology Security Studies Program, 2006), 28, 59.

[iii] Solomon, “Defending the Fleet,” 39-78. For more details on Soviet anti-ship raiders’ dependencies upon visual-range (sacrificial) scouts, see Maksim Y. Tokarev. “Kamikazes: The Soviet Legacy.” Naval War College Review 67, No. 1 (Winter 2013): 71, 73-74, 77, 79-80.

[iv] See 1. Jonathan F. Solomon. “Maritime Deception and Concealment: Concepts for Defeating Wide-Area Oceanic Surveillance-Reconnaissance-Strike Networks.” Naval War College Review 66, No. 4 (Autumn 2013): 88-94; 2. Norman Friedman. Seapower and Space: From the Dawn of the Missile Age to Net-Centric Warfare. (Annapolis, MD: Naval Institute Press, 2000), 365-366.

[v] Solomon, “Defending the Fleet,” 94-96.

[vi] Solomon, “Maritime Deception and Concealment,” 95.

Apple believes it is protecting freedom. It’s wrong. Here’s why.

Ed. note: This is an expanded version of a previous article, “We Don’t Need Backdoors.”

By Dave Schroeder

Let me open by saying I’m not for backdoors in encryption. It’s a bad idea, and people who call for backdoors don’t understand how encryption fundamentally works.

Apple has been ordered by a court to assist the FBI in accessing data on an iPhone 5c belonging to the employer of one of the San Bernardino shooters, who planned and perpetrated an international terrorist attack against the United States. Apple has invested a lot in OS security and encryption, but Apple may be able to comply with this order in this very specific set of circumstances.

Apple CEO Tim Cook penned a thoughtful open letter justifying Apple’s position that it shouldn’t have to comply with this order. However, what the letter essentially says is that any technical cooperation beyond a superficial claim that there is “nothing that can be done” is tantamount to creating a “backdoor,” irrevocably weakening encryption, and faith in encryption, for everyone.

That is wrong on its face, and we don’t need “backdoors.”

What we do need is this:

A clear acknowledgment that what increasingly exists essentially amounts to virtual fortresses impenetrable by the legal and judicial mechanisms of free society, that many of those systems are developed and employed by US companies, within the US, and that US adversaries use those systems — sometimes specifically and deliberately because they are in the US — against the US and our allies, and for the discussion to start from that point.

The US has a clear and compelling interest in strong encryption, and especially in protecting US encryption systems used by our government, our citizens, and people around the world, from defeat. But the assumption that the only alternatives are either universal strong encryption, or wholesale and deliberate weakening of encryption systems and/or “backdoors,” is a false dichotomy.

How is that so?

Encrypted communication has to be decrypted somewhere, in order for it to be utilized by the recipient. That fact can be exploited in various ways. It is done now. It’s done by governments and cyber criminals and glorified script kiddies. US vendors, like Apple, can be at least a partial aid in that process on a device-by-device, situation-by-situation basis, within clear and specific legal authorities, without doing things we don’t want, like key escrow, wholesale weakening of encryption, creating “backdoors,” or anything similar, with regard to software or devices themselves.

When Admiral Michael Rogers, Director of the National Security Agency and Commander, US Cyber Command, says:

“My position is — hey look, I think that we’re lying that this isn’t technically feasible. Now, it needs to be done within a framework. I’m the first to acknowledge that. You don’t want the FBI and you don’t want the NSA unilaterally deciding, so, what are we going to access and what are we not going to access? That shouldn’t be for us. I just believe that this is achievable. We’ll have to work our way through it. And I’m the first to acknowledge there are international implications. I think we can work our way through this.”

…some believe that is code for, “We need backdoors.” No. He means precisely what he says.

When US adversaries use systems and services physically located in the US, designed and operated by US companies, existing under US law, there are many things — entirely compatible with both the letter and spirit of our law and Constitution — that could be explored, depending on the precise system, service, software, device, and circumstances. Pretending that there is absolutely nothing that can be done, and that it must be either unbreakable, universal encryption for all, or nothing, is a false choice.

To further pretend that it’s some kind of “people’s victory” when a technical system renders itself effectively impenetrable to the legitimate legal, judicial, and intelligence processes of democratic governments operating under the rule of law in free civil society is curious indeed. Would we say the same about a hypothetical physical structure that cannot be entered by law enforcement with a court order?

Many ask why terrorists wouldn’t just switch to something else.

That’s a really easy answer — terrorists use these simple, turnkey platforms for the same reason normal people do: because they’re easy to use. A lot of our techniques, capabilities, sources, and methods have unfortunately been laid bare, but people use things like WhatsApp, iMessage, and Telegram because they’re easy. It’s the same reason that ordinary people — and terrorists — don’t use Ello instead of Facebook, or ProtonMail instead of Gmail. And when people switch to more complicated, non-turnkey encryption solutions — no matter how “simple” the more tech-savvy may think them — they make mistakes that can render their communications security measures vulnerable to defeat.

And as long as the US and its fundamental freedoms engender the culture of innovation which allows companies like Apple to grow and thrive, we will always have the advantage.

Vendors and cloud providers may not always be able to provide assistance; but sometimes they can, given a particular target (person, device, platform, situation, etc.), and they can do so in a way that comports with the rule of law in free society, doesn’t require creating backdoors in encryption, doesn’t require “weakening” their products, does not constitute an undue burden, and doesn’t violate the legal and Constitutional rights of Americans, or the privacy of free peoples anywhere in the world.

Some privacy advocates look at this as a black-and-white, either-or situation, without consideration for national interests, borders, or policy, legal, and political realities. They look at the “law” of the US or UK as fundamentally on the same footing as the “law” of China, Russia, Iran, or North Korea: they’re all “laws”, and people are subject to them. They warn that if Apple provides assistance, even just this once, then someone “bad” — by their own, arbitrary standards, whether in our own government or in a repressive regime — will abuse it.

The problem is that this simplistic line of reasoning ignores other key factors in the debate. The US is not China. Democracy is not the same as Communism. Free states are not repressive states. We don’t stand for, defend, or espouse the same principles. Apple is not a Chinese company. If Apple really believes it will set a precedent for nations like China by complying with a lawful US court order, it really should perform a little self-examination and ask why it would seek to operate in China, and thus be subject to such law.

The other argument seems to be that if Apple does this once, it would constitute a “backdoor” for “all” iPhones, and thus the abrogation of the rights of all. That is also categorically false. There are a number of factors here: The iPhone belongs to the deceased individual’s employer. The FBI may have a companion laptop that this specific iPhone considers a “trusted device”, and is thus potentially able to deploy an OS update without a passcode. The specific device and/or OS version may have other vulnerabilities or shortcomings that can be exploited with physical access.

This argument seems to be equivalent to saying that if government has any power or capability, it will be abused, and thus should be denied; and that encryption, or anything related to it, should somehow be considered sacrosanct. It’s like saying, if we grant the government the lawful authority to enter a door, they could enter any door — even yours. Some might be quick to say this is not the same. Oh, but it is. This is not an encryption backdoor, and does not apply to all iPhones, or even all iPhone 5c models, or even most. It applies to this specific set of circumstances — legally and technically.

It is puzzling indeed to assert that the government can try to break this device, or its crypto, on its own, but if the creator of the cryptosystem helps in any way, that is somehow “weakening” the crypto or creating a “backdoor.” It is puzzling, because it is false.

Specific sets of conditions happen to exist that allow Apple to unlock certain older devices. These conditions exist less and less, and in fewer forms, as devices and iOS versions get newer. Unlocking iOS 7 only works, for example, because Apple has the key. The methodology would only work in this case because it’s specifically a pre-iPhone 6 model with a 4-digit passcode and there is a paired laptop in the government’s possession. All of this is moot on iPhone 6 and newer.
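The arithmetic behind the 4-digit weakness is worth sketching (a back-of-envelope illustration; the per-guess cost below is my assumption, not an Apple specification): once software retry delays and the auto-erase limit are out of the way, 10,000 candidate passcodes can be exhausted quickly.

```python
# Back-of-envelope sketch: search space of a numeric passcode once
# escalating retry delays and the auto-erase limit are removed.
# ATTEMPT_SECONDS is an assumed hardware key-derivation cost per guess,
# not a figure published by Apple.

DIGITS = 10
ATTEMPT_SECONDS = 0.08

def worst_case_minutes(length, per_guess=ATTEMPT_SECONDS):
    """Time to try every passcode of the given length, in minutes."""
    return (DIGITS ** length) * per_guess / 60

search_space = DIGITS ** 4
print(search_space)                     # 10000 candidate passcodes
print(round(worst_case_minutes(4), 1))  # 13.3 minutes, worst case
print(round(worst_case_minutes(6), 1))  # 1333.3 minutes, roughly 22 hours
```

The hundredfold jump from a 4-digit to a 6-digit passcode, and the hardware attempt-limiting on newer devices, are why this particular methodology does not generalize.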

Apple is welcome to use every legal mechanism possible to fight this court order — that is their absolute right. But to start and grow their company in the United States, to exist here because of the fundamental environment we create for freedom and innovation, and then to act as if Apple is somehow divorced from the US and owes it nothing, even when ordered by a court to do so, is a puzzling and worrisome position.  They can’t have it both ways.

If Apple wishes to argue against the application of the All Writs Act — which, while old, is precisely on-point — it needs to make the case that performing the technical steps necessary to comply with this court order creates an “undue burden.” It may be able to make just that argument.


We exist not in an idealized world where the differences of people, groups, and nation-states are erased by the promise of the Internet and the perceived panacea of unbreakable encryption.

We exist in a messy and complicated reality. People seek to do us harm. They use our own laws, creations, and technologies against us. People attack the US and the West, and they use iPhones.

Apple says that breaking this device, even just this once, assuming it is even technically possible in this instance, sets a dangerous precedent.

Refusing to comply with a legitimate court order levied by a democratic society, because of a devotion to some perceived higher ideal of rendering data off-limits under all circumstances to the valid legal processes of that society, is the dangerous precedent.

The national security implications of this case cannot be overstated. By effectively thumbing its nose at the court’s order, Apple is not protecting freedom; it is subverting the protection of it for the sake of a misguided belief in an ideal that does not exist, and is not supported by reality.

Dave Schroeder serves as an Information Warfare Officer in the US Navy. He is also a tech geek at the University of Wisconsin—Madison. He holds a master’s degree in Information Warfare, is a graduate of the Naval Postgraduate School, and is currently in the Cybersecurity Policy graduate program at the University of Maryland University College. He also manages the Navy IWC Self Synchronization effort. Follow @daveschroeder and @IDCsync.

The views expressed in this article do not represent the views of the US Navy or the University of Wisconsin—Madison.

Publication Release: Chinese Military Strategy Week

Released: February 2016

The CIMSEC Chinese Military Strategy topic week ran from August 3-7, 2015, shortly after a new Chinese military strategy white paper was released in May 2015 and a new U.S. National Military Strategy was released in July 2015. Authors sought to identify key takeaways from the new Chinese white paper and establish historical context, and several compared the new Chinese document to the American strategy.

Authors:
Paul Pryce
Sherman Xiaogang Lai
Chad M. Pillai 
Jack McKechnie
Jan Stockbruegger
Chang Ching
Eric Gomez
Debalina Ghoshal
Amanda Conklin
Justin Chock
Xunchao Zhang

Editors:
Eric Murphy
Dmitry Filipoff
Matt Hipple
Matt Merighi
John Stryker

Download Here

Articles:
The Influence of Han Feizi on China’s Defence Policy By Paul Pryce
From Expediency to the Strategic Chinese Dream? By Sherman Xiaogang Lai
Where You Stand Depends on Where You Sit: U.S. & Chinese Strategic Views By Daniel Hartnett
Bear, Dragon & Eagle: Russian, Chinese & U.S. Military Strategies By Chad M. Pillai
Avoiding Conditions for an Asia-Pacific Cold War By Jack McKechnie
Beyond the Security Dilemma? De-Escalating Tension in the South China Sea By Jan Stockbruegger
A Grain of Contextual Salt in the Chinese Military Strategy By Chang Ching
Deep Accommodation: The Best Option for Preventing War in the Taiwan Strait By Eric Gomez
Assessing China’s Nuclear Ambitions By Debalina Ghoshal
The Unnamed Protagonist in China’s Maritime Objectives By Amanda Conklin
A Pacific Rebalance with Chinese Characteristics By Justin Chock
Becoming a Maritime Power? The First Chinese base in the Indian Ocean? By Xunchao Zhang

Be sure to browse other compendiums in the publications tab, and feel free send compendium ideas to [email protected].
