Category Archives: Future Tech

What is coming down the pipe in naval and maritime technology?

Enabling Distributed Lethality: The Role of Naval Cryptology

Distributed Lethality Topic Week

By LCDR Chuck Hall and LCDR David T. Spalding

The U.S. Navy’s Surface Force is undergoing a cultural shift.  Known as “Distributed Lethality,” this strategy calls for our naval combatants to seize the initiative, operate in dispersed formations known as “hunter-killer” surface action groups (SAG), and employ naval combat power in a more offensive manner. After years of enjoying maritime dominance and focusing on power projection ashore, the U.S. Navy is now planning to face a peer competitor in an Anti-Access/Area Denial (A2AD) environment. Long overdue, Distributed Lethality shifts the focus to one priority – warfighting.  Far from a surface warfare problem alone, achieving victory against a peer enemy in an A2AD environment will require leveraging all aspects of naval warfare, including naval cryptology.


Naval Cryptology has a long, proud history of supporting and enabling the Fleet. From the Battle of Midway in 1942 to leading the Navy’s current efforts in cyberspace, the community’s expertise in SIGINT, Cyber Operations, and Electronic Warfare is increasingly relevant in an A2AD environment. Led by Commander, U.S. Fleet Cyber Command/U.S. TENTH Fleet, the community comprises officers and enlisted personnel serving afloat and ashore who are well integrated with the Fleet, the intelligence community, and U.S. Cyber Command. Given its history and current mission sets, naval cryptology is poised to enable distributed lethality by providing battlespace awareness, targeting support, and effects in and through the electromagnetic spectrum and cyberspace.

Battlespace Awareness

Battlespace Awareness, as defined in the Information Dominance Roadmap, 2013-2028, is “the ability to understand the disposition and intentions of potential adversaries as well as the characteristics and conditions of the operational environment.”  It also includes the “capacity, capability, and status” of friendly and neutral forces and is most typically displayed as a Common Operating Picture (COP).  To be effective, however, battlespace awareness must seek to provide much more than just a COP. It must also include a penetrating knowledge and understanding of the enemy and environment — the end-user of which is the operational commander. The operational commander must be able to rely on predictive analysis of enemy action in the operational domain to successfully employ naval combat power in an A2AD environment.  

Naval Cryptology has historically provided battlespace awareness through the execution of Signals Intelligence (SIGINT) operations. During World War II, Station HYPO, located at Pearl Harbor and headed by Commander Joseph Rochefort, intercepted and decrypted Japanese naval communications encoded in the fleet code known as JN-25. Station HYPO’s exploitation of Japanese naval communications was sufficient to provide daily intelligence reports and assessments of Japanese force dispositions and intentions. These reports were provided to naval operational commanders, including Admiral Chester W. Nimitz, Commander in Chief, U.S. Pacific Fleet and Commander in Chief, Pacific Ocean Areas. On May 13, 1942, Navy operators intercepted a Japanese message directing a logistics ship to load cargo and join an operation headed to “Affirm Fox,” or “AF.” Station HYPO’s linguists had equated “AF” with Midway in March, after the Japanese seaplane attack on Hawaii (Carlson, 308), and the station was thus able to confirm Midway as the objective of the upcoming Japanese naval operation. Station HYPO was also able to give Nimitz the time and location of the Japanese attack point: 315 degrees, 50 nm from Midway, commencing at 7:00 AM (Carlson, 352). This allowed Nimitz to position his forces at the right place, designated Point Luck, northeast of Midway, placing the U.S. fleet on the flank of the Japanese (Carlson, 354). Had Station HYPO’s efforts failed to provide this battlespace awareness, Admiral Nimitz would not have had enough time to thwart what might have been a surprise Japanese attack.

Photo shows work being done on the Japanese naval code JN-25 by Station HYPO in Hawaii. The Japanese order to prepare for war was sent in JN-25 prior to the attack on Pearl Harbor, but decoders had been ordered to suspend work on the naval code and focus efforts on the diplomatic code. Later, enough of JN-25 was broken to provide advance warning of the Japanese attack on Midway. NSA photo.

Victory at Midway was founded on the operational commander’s knowledge of the enemy’s force construct and disposition. Battlespace awareness is currently the product of both active and passive, organic and non-organic sensors; achieving it in an A2AD environment will require more emphasis on passive and non-organic sensors, and increased national-tactical integration, in order to prevent detection and maintain the initiative. The “hunter-killer” SAGs will be entirely dependent upon an accurate and timely COP – not just of enemy forces, but of dispersed friendly forces as well. Just as battlespace awareness enabled triumph against the Imperial Japanese Navy, so too will it be the very foundation upon which the success of distributed lethality rests. Without it, the operational commander cannot effectively, and lethally, disperse his forces over time and space.

Targeting Support

Another key enabler of the Surface Navy’s shift to the offensive will be accurate and timely targeting support. Though support to targeting can come in many forms, as used here it refers to the triangulation and precision geolocation of adversary targets via communications intelligence and radio direction finding (RDF). In an environment in which “fixing” the enemy via radar or other active means introduces more risk than gain, RDF presents a more viable option. Indeed, the passive nature of direction finding and precision geolocation makes it particularly well suited for stealthy, offensive operations in an A2AD environment. Leveraging both organic and non-organic sensors in a fully integrated manner, RDF will provide “hunter-killer” SAG commanders with passive, real-time targeting data.

Perhaps one of the best historical examples of Naval Cryptology’s support to targeting can be seen in the Battle of the Atlantic. The Third Reich threatened the very lifeline of the war in Europe as Admiral Dönitz’s U-boats wreaked havoc on Allied merchant vessels throughout the conflict. Though America had begun intercepting and mapping German naval communications and networks as early as 1938, the effort did not become critical until the nation’s entry into the war. By that time, the U.S. Navy’s SIGINT and cryptanalysis group, OP-20-G, boasted nearly 100 percent coverage of German naval circuits. Many of these circuits were used for high frequency (HF), long-range shore-ship, ship-shore, and ship-ship communications. The ability both to intercept these communications and to locate their source would be necessary to counter the U-boat offensive. That ability was realized in an ever-growing high frequency direction finding (HFDF) network.

The HFDF network originally consisted of only a handful of shore stations along the Atlantic periphery. Over the course of the war it grew into a rather robust network composed of U.S., British, and Canadian shore-based and shipborne systems. The first station to intercept a German naval transmission would alert all other stations simultaneously via an established “tip-off” system. Each station would then generate a line of bearing, the aggregate of which formed an ellipse around the location of the target. This rudimentary geolocation of German U-boats helped to vector offensive patrols and enable attack by Allied forces — thus taking the offensive in what had previously been a strictly defensive game. The hunter had become the hunted.
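The geometry behind such a fix is simple enough to sketch. In the toy example below, each station’s line of bearing is treated as a straight line on a flat plane and the transmitter’s position is estimated as the least-squares intersection of those lines. The station positions and bearings are invented for illustration; a real HFDF plot would account for the Earth’s curvature, bearing error, and the sense of each bearing.

```python
# Illustrative sketch of bearings-only triangulation, the geometry behind an
# HFDF fix: each shore station contributes a line of bearing, and the
# least-squares intersection approximates the transmitter's position.
# Station positions and bearings below are invented for the example.

import math
import numpy as np

def hfdf_fix(stations, bearings_deg):
    """Least-squares intersection of lines of bearing (flat-earth approximation).

    stations     -- list of (x, y) station positions in nautical miles
    bearings_deg -- bearing from each station to the target, degrees true
    """
    rows, rhs = [], []
    for (sx, sy), brg in zip(stations, bearings_deg):
        theta = math.radians(brg)
        # Cross-track distance from a point (x, y) to the bearing line is
        # (x - sx) * cos(theta) - (y - sy) * sin(theta); drive it toward zero.
        rows.append([math.cos(theta), -math.sin(theta)])
        rhs.append(sx * math.cos(theta) - sy * math.sin(theta))
    solution, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return tuple(solution)  # estimated (x, y) of the transmitter

# Three notional Atlantic shore stations and the bearing each one reports.
stations = [(0.0, 0.0), (300.0, 0.0), (150.0, 400.0)]
bearings = [45.0, 315.0, 180.0]   # degrees true, measured clockwise from north
print("Estimated transmitter position (nm):", hfdf_fix(stations, bearings))
```

With only two stations the lines cross at a single point; adding stations over-determines the fix, and the scatter introduced by each station’s bearing error is what produces the ellipse of probable position described above.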

German U-boats threatened the very lifeline of the war in Europe by wreaking havoc on Allied merchant vessels throughout the war.

Making increased offensive firepower effective will require more than battlespace awareness and indications and warning. Going forward, naval cryptologists must be agile in the support they provide — quickly shifting from exploiting and analyzing the enemy at the operational level to finding and fixing the enemy at the tactical level. Completing the “find” and “fix” steps in the targeting process will enable the “hunter-killer” SAGs to accomplish the “finish.”

Cyber Effects

Finally, cyber. The original distributed lethality article in Proceedings Magazine gives the cyber realm just a single mention, referring to it as “the newest and, in many ways most dynamic and daunting, levels of the battlespace—one that the Surface Navy, not to mention the U.S. military at large—must get out in front of, as our potential adversaries are most certainly trying to do.” Indeed, the incredible connectivity that ships at sea enjoy today introduces a potentially lucrative vulnerability, for both friendly forces and the adversary. As with battlespace awareness and targeting, Naval Cryptology has history, albeit limited, in cyberspace. Cryptologic Technicians have long been involved in Computer Network Exploitation (CNE), and the Navy was the first service to designate an enlisted specialty (CTN) in the cyber field. According to the FCC/C10F strategy, not only do they “operate and defend the Navy’s networks,” but they also “plan and direct operations for a subset of USCYBERCOM’s Cyber Mission Forces.” This history and experience in cyberspace, coupled with FCC/C10F’s designation as the Navy’s lead cyber element, clearly places the onus on naval cryptology. As the Navy seeks to protect its own cyber vulnerabilities and exploit those of the adversary, the execution of effective cyber operations by the cryptologic community will be critical in enabling distributed lethality.

Going Forward

Today, through a wide array of networked, passive, non-organic sensors, and through integration with national intelligence agencies and U.S. Cyber Command, naval cryptology is well positioned to enable distributed lethality by providing battlespace awareness, targeting support, and effects in and through the electromagnetic spectrum and cyberspace. Yet, as with the surface force, a cultural shift in the cryptologic community will be required. First, we must optimize national-tactical integration and better leverage and integrate off-board sensors. The uniqueness of the A2AD environment demands the integration and optimization of passive, organic and non-organic sensors in order to prevent counter-targeting. Second, we must prioritize the employment of direction finding and geolocation systems, ensuring they are accurate and sufficiently integrated to provide timely targeting data for weapons systems. This will require a shift in mindset as well, from simple exploitation to a focus on “find, fix.” Third, we must continue to lead in cyberspace, ensuring cyber defense in depth for our ships at sea while developing effects that exploit adversary cyber vulnerabilities. Finally, naval cryptology’s role in distributed lethality cannot occur in a vacuum — increased integration with the Fleet will be an absolute necessity.

Distributed lethality is the future of Naval Surface Warfare — a future in which the cryptologic community has a significant role. In order to ensure the Surface Force can seize the initiative, operate in dispersed formations known as “hunter-killer” SAGs, and employ naval combat power in a more offensive manner in an A2AD environment, Naval Cryptology must stand ready to provide battlespace awareness, targeting support, and effects, in and through the electromagnetic spectrum and cyberspace.

LCDR Chuck Hall is an active duty 1810 with more than 27 years of enlisted and commissioned service.  The opinions expressed here are his own.

LCDR David T. Spalding is a  former Cryptologic Technician Interpretive.  He was commissioned in 2004 as a Special Duty Officer Cryptology (Information Warfare/1810).  The opinions expressed here are his own.


Works cited:

Ballard, Robert. Return to Midway. Washington, D.C.: National Geographic, 1999.

Parshall, Jonathan, and Anthony Tully. Shattered Sword: The Untold Story of the Battle of Midway. Dulles, VA: Potomac Books, 2007.

Carlson, Elliot. Joe Rochefort’s War: The Odyssey of the Codebreaker Who Outwitted Yamamoto at Midway. Annapolis, MD: Naval Institute Press, 2011.

21st Century Maritime Operations Under Cyber-Electromagnetic Opposition Part Two

The following article is part of our cross-posting partnership with Information Dissemination’s Jon Solomon.  It is republished here with the author’s permission.  You can read it in its original form here.

Read part one of this series here.

By Jon Solomon

Candidate Principle #2: A Network’s Combat Viability is more than the Sum of its Nodes

Force networking generates an unavoidable trade-off between maximizing collective combat capabilities and minimizing network-induced vulnerability risks. The challenge is finding an acceptable balance between the two in both design and operation; networking provides no ‘free lunch.’

This trade-off was commonly discounted during the network-centric era’s early years. For instance, Metcalfe’s Law—the idea that a network’s potential increases as the square of the number of networked nodes—was often applied in ways suggesting a force would become increasingly capable as more sensors, weapons, and data processing elements were tied together to collect, interpret, and act upon battlespace information.[i] Such assertions, though, were made without reference to the network’s architecture. The sheer number (or types) of nodes matters little if the disruption of certain critical nodes (relay satellites, for example) or the exploitation of any given node to access the network’s internals erodes the network’s data confidentiality, integrity, or availability. This renders node-counting on its own a meaningless, and perhaps even dangerously misleading, measure of a network’s potential. The same is true if individual systems and platforms have design limitations that prevent them from fighting effectively when force-level networks are undermined.
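A toy comparison illustrates the point. The sketch below, using hypothetical topologies rather than any real architecture, builds two ten-node networks with identical node counts and then removes a single critical relay node from each; the hub-and-spoke variant collapses entirely while the partially meshed variant degrades gracefully.

```python
# Illustrative sketch only: node count alone says nothing about resilience.
# Two hypothetical 10-node force networks with identical node counts are
# compared before and after the loss of a single critical relay node.

from itertools import combinations

def reachable_pairs(nodes, edges):
    """Count node pairs that can still exchange data (simple graph search)."""
    adjacency = {n: set() for n in nodes}
    for a, b in edges:
        if a in adjacency and b in adjacency:
            adjacency[a].add(b)
            adjacency[b].add(a)
    def connected(src, dst):
        seen, frontier = {src}, [src]
        while frontier:
            current = frontier.pop()
            if current == dst:
                return True
            for nxt in adjacency[current] - seen:
                seen.add(nxt)
                frontier.append(nxt)
        return False
    return sum(1 for a, b in combinations(nodes, 2) if connected(a, b))

nodes = [f"unit{i}" for i in range(9)] + ["relay_sat"]

# Topology A: all traffic transits a single relay satellite (hub and spoke).
hub_edges = [(f"unit{i}", "relay_sat") for i in range(9)]

# Topology B: units also hold line-of-sight links to their neighbors (partial mesh).
mesh_edges = hub_edges + [(f"unit{i}", f"unit{(i + 1) % 9}") for i in range(9)]

for name, edges in [("hub-and-spoke", hub_edges), ("partial mesh", mesh_edges)]:
    survivors = [n for n in nodes if n != "relay_sat"]
    before = reachable_pairs(nodes, edges)
    after = reachable_pairs(survivors, edges)
    print(f"{name}: {len(nodes)} nodes, {before} reachable pairs; "
          f"after losing the relay: {after} reachable pairs")
```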

Consequently, there is a gigantic difference between a network-enhanced warfare system and a network-dependent warfare system. While the former’s performance expands greatly when connected to other force elements via a network, it nevertheless is designed to have a minimum performance that is ‘good enough’ to independently achieve certain critical tasks if network connectivity is unavailable or compromised.[ii] A practical example of this is the U.S. Navy’s Cooperative Engagement Capability (CEC), which extends an individual warship’s air warfare reach beyond its own sensors’ line-of-sight out to its interceptor missiles’ maximum ranges courtesy of other CEC-participating platforms’ sensor data. Loss of the local CEC network may significantly reduce a battle force’s air warfare effectiveness, but the participating warships’ combat systems would still retain formidable self and local-area air defense capabilities.

Conversely, a network-dependent warfare system fails outright when its supporting network is corrupted or denied. For instance, whereas in theory Soviet anti-ship missile-armed bombers of the late 1950s through early 1990s could strike U.S. aircraft carrier battle groups over a thousand miles from the Soviet coast, their ability to do so was predicated upon time-sensitive cueing by the Soviet Ocean Surveillance System (SOSS). SOSS’s network was built around a highly centralized situational picture-development and combat decision-making apparatus, which relied heavily upon remote sensors and long-range radio frequency communications pathways that were ripe for EW exploitation. This meant U.S. efforts to slow down, saturate, block, or manipulate sensor data inputs to SOSS, let alone to do the same to the SOSS picture outputs Soviet bomber forces relied upon in order to know their targets’ general locations, had the potential of cutting any number of critical links in the bombers’ ‘kill chain.’ If bombers were passed a SOSS cue at all, their crews would have had no idea whether they would find a carrier battle group or a decoy asset (and maybe an accompanying aerial ambush) at the terminus of their sortie route. Furthermore, bomber crews firing from standoff-range could only be confident they had aimed their missiles at actual high-priority ships and not decoys or lower-priority ships if they received precise visual identifications of targets from scouts that had penetrated to the battle group’s center. If these scouts failed in this role—a high probability once U.S. rules of engagement were relaxed following a war’s outbreak—the missile salvo would be seriously handicapped and perhaps wasted, if it could be launched at all. Little is different today with respect to China’s nascent Anti-Ship Ballistic Missile capability: undermine the underlying surveillance-reconnaissance network and the weapon loses its combat utility.[iii] This is the risk systems take with network-dependency.

Candidate Principle #3: Contact Detection is Easy, Contact Classification and Identification are Not

The above SOSS analogy leads to a major observation regarding remote sensing: detecting something is not the same as knowing with confidence what it is. It cannot be overstated that no sensor can infallibly classify and identify its contacts: countermeasures exist against every sensor type.

As an example, for decades we have heard the argument ‘large signature’ platforms such as aircraft carriers are especially vulnerable because they cannot readily hide from wide-area surveillance radars and the like. If the only method of carrier concealment was broadband Radar Cross Section suppression, and if the only prerequisite for firing an anti-carrier weapon was a large surface contact’s detection, the assertions of excessive vulnerability would be true. A large surface contact held by remote radar, however, can just as easily be a merchant vessel, a naval auxiliary ship, a deceptive low campaign-value combatant employing signature-enhancement measures, or an artificial decoy. Whereas advanced radars’ synthetic or inverse synthetic aperture modes can be used to discriminate a contact’s basic shape as a classification tool, a variety of EW tactics and techniques can prevent those modes’ effective use or render their findings suspect. Faced with those kinds of obstacles, active sensor designers might turn to Low Probability of Intercept (LPI) transmission techniques to buy time for their systems to evade detection and also delay the opponent’s development of effective EW countermeasures. Nevertheless, an intelligent opponent’s signals intelligence collection and analysis efforts will eventually discover and correctly classify an active sensor’s LPI emissions. It might take multiple combat engagements over several months for them to do this, or it might take them only a single combat engagement and then a few hours of analysis. This means new LPI techniques must be continually developed, stockpiled, and then situationally employed only on a risk-versus-benefit basis if the sensor’s performance is to be preserved throughout a conflict’s duration.

Passive direction-finding sensors are confronted by an even steeper obstacle: a non-cooperative vessel can strictly inhibit its telltale emissions or can radiate deceptive emissions. Nor can electro-optical and infrared sensors overcome the remote sensing problem, as their spectral bands render them highly inefficient for wide-area searches, drastically limit their effective range, and leave them susceptible to natural as well as man-made obscurants.[iv]

If a prospective attacker possesses enough ordnance or is not cowed by the political-diplomatic risks of misidentification, he might not care to confidently classify a contact before striking it. On the other hand, if the prospective attacker is constrained by the need to ensure his precious advanced weapons inventories (and their launching platforms) are not prematurely depleted, or if he is constrained by a desire to avoid inadvertent escalation, remote sensing alone will not suffice for weapons-targeting.[v] Just as was the case with Soviet maritime bombers, a relatively risk-intolerant prospective attacker would be compelled to rely upon close-in (and likely visual) classification of targets following their remote detection. This dependency expands a defender’s space for layering its anti-scouting defenses, and suggests that standoff-range attacks cued by sensor-to-shooter networks will depend heavily upon penetrating (if not persistent) scouts that are either highly survivable (e.g., submarines and low-observable aircraft) or relatively expendable (e.g., unmanned system ‘swarms’ or sacrificial manned assets).

On the expendable scout side, an advanced weapon (whether a traditional missile or an unmanned vehicle swarm) could conceivably provide reconnaissance support for other weapons within a raid, such as by exposing itself to early detection and neutralization by the defender in order to provide its compatriots with an actionable targeting picture via a data link. An advanced weapon might alternatively be connected by data link to a human controller who views the weapon’s onboard sensor data to designate targets for it or other weapons in the raid, or who otherwise determines whether the target selected by the weapon is valid. While these approaches can help improve a weapon’s ability to correctly discriminate valid targets, they will nevertheless still lead to ordnance waste if the salvo is directed against a decoy group containing no targets of value. Likewise, as all sensor types can be blinded or deceived, a defender’s ability to thoroughly inflict either outcome upon a scout weapon’s sensor package—or a human controller—could leave an attacker little better off than if its weapons lacked data link capabilities in the first place.

We should additionally bear in mind that the advanced multi-band sensors and external communications capabilities necessary for a weapon to serve as a scout would be neither cheap nor quickly producible. As a result, an attacker would likely possess a finite inventory of these weapons that would need to be carefully managed throughout a conflict’s duration. Incorporation of highly-directional all-weather communications capabilities in a weapon to minimize its data link vulnerabilities would increase the weapon’s relative expense (with further impact to its inventory size). It might also affect the weapon’s physical size and power requirements on the margins depending upon the distance data link transmissions had to cover. An alternative reliance upon omni-directional LPI data link communications would run the same risk of eventual detection and exploitation over time we previously noted for active sensors. All told, the attacker’s opportunity costs for expending advanced weapons with one or more of the aforementioned capabilities at a given time would never be zero.[vi] A scout weapon therefore could conceivably be less expendable than an unarmed unmanned scout vehicle depending upon the relative costs and inventory sizes of both.

The use of networked wide-area sensing to directly support employment of long-range weapons could be quite successful in the absence of vigorous cyber-electromagnetic (and kinetic) opposition performed by thoroughly trained and conditioned personnel. The wicked, exploitable problems of contact classification and identification are not minor, though, and it is extraordinarily unlikely any sensor-to-shooter concept will perform as advertised if it inadequately confronts them. After all, the cyclical struggle between sensors and countermeasures is as old as war itself. Any advances in one are eventually balanced by advances in the other; the key questions are which one holds the upper hand at any given time, and how long that advantage can endure against a sophisticated and adaptive opponent.

In part three of the series, we will consider how a force network’s operational geometry impacts its defensibility. We will also explore the implications of a network’s capabilities for graceful degradation. Read Part Three here.

Jon Solomon is a Senior Systems and Technology Analyst at Systems Planning and Analysis, Inc. in Alexandria, VA. He can be reached at [email protected]. The views expressed herein are solely those of the author and are presented in his personal capacity on his own initiative. They do not reflect the official positions of Systems Planning and Analysis, Inc. and to the author’s knowledge do not reflect the policies or positions of the U.S. Department of Defense, any U.S. armed service, or any other U.S. Government agency. These views have not been coordinated with, and are not offered in the interest of, Systems Planning and Analysis, Inc. or any of its customers.

[i] David S. Alberts, John J. Garstka, and Frederick P. Stein. Network Centric Warfare: Developing and Leveraging Information Superiority, 2nd Ed. (Washington, D.C.: Department of Defense C4ISR Cooperative Research Program, August 1999), 32-34, 103-105, 250-265.

[ii] For some observations on the idea of network-enhanced systems, see Owen R. Cote, Jr. “The Future of Naval Aviation.” (Cambridge, MA: Massachusetts Institute of Technology Security Studies Program, 2006), 28, 59.

[iii] Solomon, “Defending the Fleet,” 39-78. For more details on Soviet anti-ship raiders’ dependencies upon visual-range (sacrificial) scouts, see Maksim Y. Tokarev. “Kamikazes: The Soviet Legacy.” Naval War College Review 67, No. 1 (Winter 2013): 71, 73-74, 77, 79-80.

[iv] See 1. Jonathan F. Solomon. “Maritime Deception and Concealment: Concepts for Defeating Wide-Area Oceanic Surveillance-Reconnaissance-Strike Networks.” Naval War College Review 66, No. 4 (Autumn 2013): 88-94; 2. Norman Friedman. Seapower and Space: From the Dawn of the Missile Age to Net-Centric Warfare. (Annapolis, MD: Naval Institute Press, 2000), 365-366.

[v] Solomon, “Defending the Fleet,” 94-96.

[vi] Solomon, “Maritime Deception and Concealment,” 95.

Apple believes it is protecting freedom. It’s wrong. Here’s why.

Ed. note: This is an expanded version of a previous article, “We Don’t Need Backdoors.”

By Dave Schroeder

Let me open by saying I’m not for backdoors in encryption. It’s a bad idea, and people who call for backdoors don’t understand how encryption fundamentally works.

Apple has been ordered by a court to assist the FBI in accessing data on an iPhone 5c belonging to the employer of one of the San Bernardino shooters, who planned and perpetrated an international terrorist attack against the United States. Apple has invested a lot in OS security and encryption, but it may be able to comply with this order in this very specific set of circumstances.

Apple CEO Tim Cook penned a thoughtful open letter justifying Apple’s position that it shouldn’t have to comply with this order. However, what the letter essentially argues is that any technical cooperation beyond superficially claiming that “nothing can be done” is tantamount to creating a “backdoor,” irrevocably weakening encryption, and faith in encryption, for everyone.

That is wrong on its face, and we don’t need “backdoors.”

What we do need is this:

A clear acknowledgment that what increasingly exists essentially amounts to virtual fortresses impenetrable by the legal and judicial mechanisms of free society, that many of those systems are developed and employed by US companies, within the US, and that US adversaries use those systems — sometimes specifically and deliberately because they are in the US — against the US and our allies, and for the discussion to start from that point.

The US has a clear and compelling interest in strong encryption, and especially in protecting US encryption systems used by our government, our citizens, and people around the world, from defeat. But the assumption that the only alternatives are either universal strong encryption, or wholesale and deliberate weakening of encryption systems and/or “backdoors,” is a false dichotomy.

How is that so?

Encrypted communication has to be decrypted somewhere in order for it to be utilized by the recipient. That fact can be exploited in various ways. It is done now. It’s done by governments and cyber criminals and glorified script kiddies. US vendors like Apple can be at least a partial aid in that process on a device-by-device, situation-by-situation basis, within clear and specific legal authorities, without doing things we don’t want, like key escrow, wholesale weakening of encryption, creating “backdoors,” or anything similar, with regard to software or devices themselves.

When Admiral Michael Rogers, Director of the National Security Agency and Commander, US Cyber Command, says:

“My position is — hey look, I think that we’re lying that this isn’t technically feasible. Now, it needs to be done within a framework. I’m the first to acknowledge that. You don’t want the FBI and you don’t want the NSA unilaterally deciding, so, what are we going to access and what are we not going to access? That shouldn’t be for us. I just believe that this is achievable. We’ll have to work our way through it. And I’m the first to acknowledge there are international implications. I think we can work our way through this.”

…some believe that is code for, “We need backdoors.” No. He means precisely what he says.

When US adversaries use systems and services physically located in the US, designed and operated by US companies, existing under US law, there are many things — entirely compatible with both the letter and spirit of our law and Constitution — that could be explored, depending on the precise system, service, software, device, and circumstances. Pretending that there is absolutely nothing that can be done, and that it must be either unbreakable, universal encryption for all, or nothing, is a false choice.

To further pretend that it’s some kind of “people’s victory” when a technical system renders itself effectively impenetrable to the legitimate legal, judicial, and intelligence processes of democratic governments operating under the rule of law in free civil society is curious indeed. Would we say the same about a hypothetical physical structure that cannot be entered by law enforcement with a court order?

Many ask why terrorists wouldn’t just switch to something else.

That’s a really easy answer — terrorists use these simple, turnkey platforms for the same reason normal people do: because they’re easy to use. A lot of our techniques, capabilities, sources, and methods have unfortunately been laid bare, but people use things like WhatsApp, iMessage, and Telegram because they’re easy. It’s the same reason that ordinary people — and terrorists — don’t use Ello instead of Facebook, or ProtonMail instead of Gmail. And when people switch to more complicated, non-turnkey encryption solutions — no matter how “simple” the more tech-savvy may think them — they make mistakes that can render their communications security measures vulnerable to defeat.

And as long as the US and its fundamental freedoms engender the culture of innovation which allows companies like Apple to grow and thrive, we will always have the advantage.

Vendors and cloud providers may not always be able to provide assistance; but sometimes they can, given a particular target (person, device, platform, situation, etc.), and they can do so in a way that comports with the rule of law in free society, doesn’t require creating backdoors in encryption, doesn’t require “weakening” their products, does not constitute an undue burden, and doesn’t violate the legal and Constitutional rights of Americans, or the privacy of free peoples anywhere in the world.

Some privacy advocates look at this as a black-and-white, either-or situation, without consideration for national interests, borders, or policy, legal, and political realities. They look at the “law” of the US or UK as fundamentally on the same footing as the “law” of China, Russia, Iran, or North Korea: they’re all “laws,” and people are subject to them. They warn that if Apple provides assistance, even just this once, then someone “bad” — by their own, arbitrary standards, whether in our own government or in a repressive regime — will abuse it.

The problem is that this simplistic line of reasoning ignores other key factors in the debate. The US is not China. Democracy is not the same as Communism. Free states are not repressive states. We don’t stand for, defend, or espouse the same principles. Apple is not a Chinese company. If Apple truly believes that complying with a lawful US court order will set a precedent for nations like China, it should perform a little self-examination and ask why it would seek to operate in China, and thus be subject to such law.

The other argument seems to be that if Apple does this once, it would constitute a “backdoor” for “all” iPhones, and thus the abrogation of the rights of all. That is also categorically false. There are a number of factors here: The iPhone belongs to the deceased individual’s employer. The FBI may have a companion laptop that this specific iPhone considers a “trusted device”, and is thus potentially able to deploy an OS update without a passcode. The specific device and/or OS version may have other vulnerabilities or shortcomings that can be exploited with physical access.

This argument seems to be equivalent to saying that if government has any power or capability, it will be abused, and thus should be denied; and that encryption, or anything related to it, should somehow be considered sacrosanct. It’s like saying that if we grant the government the lawful authority to enter one door, it could enter any door — even yours. Some might be quick to say this is not the same. Oh, but it is. This is not an encryption backdoor, and does not apply to all iPhones, or even all iPhone 5c models, or even most. It applies to this specific set of circumstances — legally and technically.

It is puzzling indeed to assert that the government can try to break this device, or its crypto, on its own, but if the creator of the cryptosystem helps in any way, that is somehow “weakening” the crypto or creating a “backdoor.” It is puzzling, because it is false.

Specific sets of conditions happen to exist that allow Apple to unlock certain older devices. These conditions exist less and less, and in fewer forms, as devices and iOS versions get newer. Unlocking iOS 7 only works, for example, because Apple has the key. The methodology would only work in this case because it’s specifically a pre-iPhone 6 model with a 4-digit passcode and there is a paired laptop in the government’s possession. All of this is moot on iPhone 6 and newer.
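Some back-of-the-envelope arithmetic, using assumed per-attempt delays rather than measured iPhone behavior, shows why the 4-digit passcode is the pivotal detail: its keyspace is only 10,000 combinations, so the practical barrier is whatever retry throttling stands in front of it.

```python
# Back-of-the-envelope arithmetic only: why a 4-digit passcode is the weak
# point once software-enforced retry limits are out of the picture. The
# per-attempt delay figures below are assumptions for illustration, not
# measured behavior of any real device.

keyspace = 10 ** 4          # 0000-9999: every possible 4-digit passcode

scenarios = [
    ("assumed 80 ms per hardware-bound attempt", 0.08),
    ("assumed 1 s per attempt", 1.0),
    ("assumed 1 hour lockout per attempt (throttled)", 3600.0),
]

for label, seconds_per_try in scenarios:
    worst_case = keyspace * seconds_per_try
    print(f"{label}: worst case {worst_case / 3600:.1f} hours "
          f"({worst_case / 86400:.2f} days)")
```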

Apple is welcome to use every legal mechanism possible to fight this court order — that is its absolute right. But to start and grow its company in the United States, to exist here because of the fundamental environment we create for freedom and innovation, and then to act as if it is somehow divorced from the US and owes it nothing, even when ordered by a court to provide assistance, is a puzzling and worrisome position. Apple can’t have it both ways.

If Apple wishes to argue against the application of the All Writs Act — which, while old, is precisely on-point — it needs to make the case that performing the technical steps necessary to comply with this court order creates an “undue burden.” It may be able to make just that argument.


We exist not in an idealized world where the differences of people, groups, and nation-states are erased by the promise of the Internet and the perceived panacea of unbreakable encryption.

We exist in a messy and complicated reality. People seek to do us harm. They use our own laws, creations, and technologies against us. People attack the US and the West, and they use iPhones.

Apple says that breaking this device, even just this once, assuming it is even technically possible in this instance, sets a dangerous precedent.

Refusing to comply with a legitimate court order levied by a democratic society, because of a devotion to some perceived higher ideal of rendering data off-limits under all circumstances to the valid legal processes of that society, is the dangerous precedent.

The national security implications of this case cannot be overstated. By effectively thumbing its nose at the court’s order, Apple is not protecting freedom; it is subverting the protection of it for the sake of a misguided belief in an ideal that does not exist, and is not supported by reality.

Dave Schroeder serves as an Information Warfare Officer in the US Navy. He is also a tech geek at the University of Wisconsin—Madison. He holds a master’s degree in Information Warfare, is a graduate of the Naval Postgraduate School, and is currently in the Cybersecurity Policy graduate program at the University of Maryland University College. He also manages the Navy IWC Self Synchronization effort. Follow @daveschroeder and @IDCsync.

The views expressed in this article do not represent the views of the US Navy or the University of Wisconsin—Madison.

21st Century Maritime Operations Under Cyber-Electromagnetic Opposition Part One

The following article is part of our cross-posting partnership with Information Dissemination’s Jon Solomon.  It is republished here with the author’s permission.  You can read it in its original form here.

By Jon Solomon

Future high-end maritime warfare tends to be described as the use of distributed, networked maritime sensors that ‘seamlessly’ cue the tactical actions of dispersed forces armed with standoff-range guided weapons. Most commentary regarding these ‘sensor-to-shooter’ networks has been based around their hypothesized performances under ‘perfect’ conditions: sensors that see all within their predicted fields of view, processors that unfailingly discriminate and classify targets correctly, communications pathways that reliably and securely transmit data between network nodes, and situational pictures that assuredly portray ground truth to combat decision-makers. While it is not unreasonable to start with such an idealized view in order to grasp these networks’ potential, it is misguided to end analysis there. Regrettably, it is not unusual to come across predictions implying that these networks will provide their operators with an unshakable and nearly-omniscient degree of situational awareness, or that the more tightly-networked a force becomes the more likely the geographic area it covers will become a graveyard for the enemy.

Although we implicitly understand networked maritime warfare relies upon the electromagnetic spectrum and cyberspace, for some reason we tend to overlook the fact that these partially-overlapping domains will be fiercely contested in any major conflict. It follows that we tend not to consider the effects of an adversary’s cyber warfare and Electronic Warfare (EW) when assessing proposed operating concepts and force networking architectures. Part of this stems from the fact that U.S. Navy forces engaged in actual combat over the past seventy years seldom faced severe EW opposition, and have never faced equivalent cyber attacks. Even so, as recently as the 1980s, the Navy’s forward deployed forces routinely operated within intensive EW environments. Though certain specific skill sets and capabilities were highly compartmentalized due to classification considerations, Cold War-era regular Navy units and battle groups were trained not only to fight-through an adversary’s electronic attacks but also to wield intricate EW methods of their own for deception and concealment.[i] The Navy’s EW (and now cyber warfare) prowess lives on within its nascent Information Dominance Corps, but this is not the same as having a broad majority of the overall force equipped and conditioned to operate in heavily contested cyber-electromagnetic warfare environments.

Any theory of how force networking should influence naval procurement, force structure, or doctrine is dangerously incomplete if it inadequately addresses the challenges posed by cyber-electromagnetic opposition. Accordingly, we need to understand whether cyber-electromagnetic warfare principles exist that can guide our debates about future maritime operating concepts. 

This week I’ll be proposing several candidate principles that seem logical based on modern naval warfare systems’ and networks’ general characteristics. The resulting list should hardly be considered comprehensive, and is solely intended to stimulate debate. Needless to say, these candidates (and any others) will need to be subjected to rigorous testing within war games, campaign analyses, fleet exercises, and real world operations if they are to be validated as principles.

Candidate Principle #1: All Systems and Networks are Inherently Exploitable

It is a fact of nature, not to mention engineering, that notwithstanding their security features all complex systems (and especially the ‘systems of systems’ that constitute networks) inherently possess exploitable design vulnerabilities.[ii] Many vulnerabilities are relatively easy to identify and exploit, which conversely increases the chances a defender will uncover and then effectively mitigate them before an attacker can make best use of them. Others are buried deep within a system, which therefore makes them difficult for an adversary to discover let alone directly access. Still others, though perhaps more readily discernable, are only exploitable under very narrow circumstances or if significant resources are committed. It is entirely possible that notwithstanding its inherent vulnerabilities, a given system might survive an entire protracted conflict without being seriously exploited by an adversary. To confidently assume this ideal outcome would in fact occur, though, amounts to a high-stakes gamble at best and technologically unjustified hubris at worst. Instead, system architects and operators must assume that with enough time, an adversary will not only uncover a usable vulnerability but also develop a viable means of exploiting it if the anticipated spoils merit the requisite investments.

A handful of subtle design shortcomings may be enough to enable the blinding, distraction, or deception of a sensor system; disruption or penetration of network infrastructure systems; or manipulation of a Command and Control (C2) system’s situational picture. Systems can also be sabotaged, with ‘insider threats’ such as components received from compromised supply chains—not to mention actions by malevolent personnel—arguably being just as effective as remotely-launched attacks. For example, a successful inside-the-lifelines attack against the industrial controls of a shipboard auxiliary system might have the indirect effect of crippling any warfare systems that rely upon the former’s services. Cyber-electromagnetic indiscipline within one’s own forces might even be viewed as a particularly damaging, though not deliberately malicious, form of insider threat in which the inadequate ‘hygiene’ or ill-considered tactics of a single operator or maintainer can eviscerate an entire system’s or network’s security architecture.[iii]

Moreover, networking can allow an adversary to use their exploitation of a single, easily-overlooked system as a gateway for directly attacking important systems elsewhere, thereby negating the latter’s robust outward-facing cyber-electromagnetic defenses. Any proposed network connection into a system must be cynically viewed as a potential doorway for attack, even if its exploitation would seem to be incredibly difficult or costly to achieve.

This hardly means system developers must build a ‘brick wall’ behind every known vulnerability, if that were even feasible. Instead, a continuous process of searching for and examining potential vulnerabilities and exploits is necessary so that risks can be recognized and mitigation measures prioritized.[v] Operators, however, cannot take solace if told that the risks associated with every ‘critical’ vulnerability known at a given moment have been satisfactorily mitigated. There is simply no way to guarantee that undiscovered critical vulnerabilities do not exist, that all known ‘non-critical’ vulnerabilities’ characteristics are fully understood, that the mitigations are indeed sufficient, or that the remedies themselves do not spawn new vulnerabilities.

The next post in the series will investigate the fallacy of judging a force network’s combat viability by merely counting its number of nodes. We will also examine the challenges in classifying and identifying potential targets, and what that means for the employment of standoff-range weapons. Read Part Two here.

Jon Solomon is a Senior Systems and Technology Analyst at Systems Planning and Analysis, Inc. in Alexandria, VA. He can be reached at [email protected]. The views expressed herein are solely those of the author and are presented in his personal capacity on his own initiative. They do not reflect the official positions of Systems Planning and Analysis, Inc. and to the author’s knowledge do not reflect the policies or positions of the U.S. Department of Defense, any U.S. armed service, or any other U.S. Government agency. These views have not been coordinated with, and are not offered in the interest of, Systems Planning and Analysis, Inc. or any of its customers.

[i] Jonathan F. Solomon. “Defending the Fleet from China’s Anti-Ship Ballistic Missile: Naval Deception’s Roles in Sea-Based Missile Defense.” (master’s thesis, Georgetown University, 2011), 58-62.

[ii] Bruce Schneier. Secrets and Lies: Digital Security in a Networked World. (Indianapolis, IN: Wiley Publishing, 2004), 5-8.

[iii] For elaboration on the currently observed breadth and impacts of insufficient cyber discipline and hygiene, see 1. “FY12 Annual Report: Information Assurance (IA) and Interoperability (IOP).” (Washington, D.C.: Office of the Director, Operational Test and Evaluation (DOT&E), December 2012), 307-309; 2. “FY13 Annual Report: Information Assurance (IA) and Interoperability (IOP).” (Washington, D.C.: Office of the Director, Operational Test and Evaluation (DOT&E), January 2014), 330, 332-334.

[iv] For an excellent discussion of this and other vulnerability-related considerations from U.S. Navy senior leaders’ perspective, see Sydney J. Freedberg Jr. “Navy Battles Cyber Threats: Thumb Drives, Wireless Hacking, & China.” Breaking Defense, 04 April 2013, accessed 1/7/14, http://breakingdefense.com/2013/04/navy-cyber-threats-thumb-drives-wireless-hacking-china/

[v] Schneier, 288-303.