All posts by Guest Author

Sieges, Containerships, and Ecosystems: Rethinking Maritime Cybersecurity

Maritime Cybersecurity Topic Week

By LCDR Ryan Hilger

In feudal times, a monarch measured the security of his or her kingdom by the height of the city walls, the capacity of the granaries, and the skill of the archers. A strong defense meant the ability to withstand a siege and repel attacks while maintaining an acceptable quality of life inside the walls. Siege warfare drove the rise of asymmetric tactics to breach the walls: ballistas, catapults, and trebuchets, tunneling and sappers with explosives, siege engines, boiling pots of oil, and even biological warfare. Siege warfare has a long history, going all the way back to Odysseus and the Trojan Horse – the progenitor of the trojan attack in cybersecurity. But the Trojan Horse revealed the fundamental flaw in early defenses: once the adversary was inside the walls, there was little the defenders could do to stop them short of a heroic effort by knights and militia at the point of the breach. Successfully resisting a siege is not particularly common in history.

At least until a decade or so ago, cybersecurity took a very similar approach to network defense. Strong firewalls, air gaps, intrusion detection systems, and alert network defense personnel were the best defense anyone could proffer. The goal was simple: keep the adversary – amateur hacker or nation-state – out of your systems. The attack methods were analogous to siege warfare: overwhelming systems through denial-of-service attacks, buffer overflows to stop systems cold, trojans, and more.

But as in the ancient and medieval eras, when economic patterns changed, fortified cities found that their walls offered less and less protection. Insider threats, the growth of trade, and the merchants it brought, among other vectors, put more threats inside the city walls and more resources and people outside them, often well before defenders realized an attack was underway, much like in our digital domains today. The bubonic plague, or Black Death, could easily be viewed in cybersecurity terms as a particularly vicious worm that spread easily among the population and caused nearly one in four to die. The plague generally came into cities on fleas and rats, not from an adversary easily seen. In the cyber arena, though, the losses can be much higher, as Saudi Aramco found out the hard way.

It would take several centuries for new forms of defense to emerge and supplant city and castle walls as the preferred means of protecting a nation-state. Defending a country from a cornucopia of attacks is no easy matter, and the problems are not simple, but rather volatile, uncertain, complex, and ambiguous. Perhaps the most iconic failure of legacy defenses came at the outset of World War II, when the Germans simply went over and around the French Maginot Line, circumventing its defenses and moving rapidly on Paris. The French, purportedly fielding one of the best armies in continental Europe, were out of the war in less than two months. But fighting in cities, with their myriad rooms, walls, sewers, potentially hostile populations, and more, proved exponentially harder and bloodier, as both sides learned over the following five years.

In 2017*, the maritime industry collectively shuddered when the NotPetya attack, originally targeting Ukrainian utilities infrastructure, spread beyond the region and into the global information commons. The malware spread through a backend software program developed by the Linkos Group in Ukraine. Like SolarWinds' software in the United States, the program was widely used, and Maersk ran it on its systems. The company's saving grace was a single offline server in Ghana. Not exactly a comforting plan for ensuring resiliency. The crippling attack had economic ramifications on a global scale, costing Maersk alone an estimated $250-300 million in damage and lost revenue, and more than $1.2 billion worldwide. After the attack, Maersk moved rapidly to improve its cybersecurity posture, and the company continues to place a premium on information and cybersecurity.

In the modern cybersecurity age, defenses like firewalls, air gaps, and encryption still have their place, but a reliance on a strong defense to prevent catastrophic defeat only makes the fall that much worse. The best defense, as recent military history shows, is to assume that your position must be dynamic and your system able to respond and continue its mission despite intrusion or attack. In the language of the maritime industry, defenses should be thought of as containerships, not car carriers. Car carriers, as the ill-fated 2002 voyage of the MV Tricolor showed, founder quickly when their hulls are breached: the Tricolor went down in less than an hour and a half as water surged through its voluminous open spaces. The containership it collided with, the MV Kariba, escaped with superficial damage. Containerships are hard to sink, at least as long as they do not lose too many of their containers.

Rethinking Cybersecurity

Today, cybersecurity and information security are effectively siloed throughout the broader community, regardless of which industry they serve. Product teams working to maximize returns do the minimum possible to get their products to market. They rarely, if ever, talk with the IT teams who run the enterprise infrastructure on which they develop their products. When they do, it is to improve services, capacity, and the like, not to improve security or address threats to the product from the enterprise side. Yet that is the attack vector that both NotPetya and SolarWinds exploited, and it shows just how intertwined enterprise environments are with both products and operations.

A modern approach to cybersecurity requires that the maritime industry acknowledge three things. First, that security is complex, and we must treat it as such. Oversimplifying security measures and failing to acknowledge the complex adaptive system in which cybersecurity lives threatens the resiliency of products and reputations. Complex is different from complicated. A complicated system requires understanding and can be fully described and managed, but it does not allow for new or emergent behaviors; complicated systems are deterministic. Complexity acknowledges that systems may be used in ways different from how they were originally intended, or may display emergent capabilities or behaviors that could not have been anticipated.

Second, they must accept that adversaries are already in their networks and control systems and act accordingly. The fundamental attribute of these complex ecosystems must be the absence of implicit trust. This means that systems must be designed to produce resilience and mission assurance in the face of constant attack and be able to continue operating. Zero trust treats all users, assets, and resources as inherently untrustworthy and continuously verifies their credibility and trustworthiness.

Third, that the common element in the first two considerations is people. We do not design systems to operate fully autonomously, and general artificial intelligence is still a long way off. Every system, whether an enterprise network or an operational product, requires people at every step of the process. Currently, cybersecurity practitioners tend to focus primarily on technical solutions and processes to secure products and networks. But attacks require people to launch them, and networks require people to defend, patch, update, and otherwise correctly operate them, even as more becomes automated. Electronic systems, whether embedded in products or deployed at vast scale in the cloud, do not deliver value until people use them to create and maintain business value or desirable outcomes. Therefore, people must be treated as an integral part of the system, prone to failure, irrational or unexpected behaviors, turnover, and fatigue. Systems must be designed with people in mind.

Secure systems require the adoption of an ecosystem-centric approach to cybersecurity. Ecosystems are incredibly dynamic environments where actors – people, animals, microscopic organisms, whatever – continually work to survive, control resources, and at a minimum maintain the status quo and ensure the viability of future generations and operations.

The ecosystem, from a cyber perspective, includes everything discussed thus far: the products and operational systems, the enterprise systems that enable their creation, deployment, and maintenance, adversary systems, the neutral domains between them, and the people operating these systems on both sides. The closest analog is the program level, which encompasses both the enterprise systems and the product lines.

The Department of Defense has recently started to refer to this approach as “mission engineering,” but even that definition does not fully capture the dynamics of an ecosystem. The industry must place operational resilience or mission assurance as the ultimate objective, regardless of what havoc people may bring. Designing for resilience of the ecosystem means accounting meaningfully for the more chaotic events like geopolitical or geoeconomic actions, weather and natural disasters, and perpetual tension and conflict – the black swans and the pink flamingos.

Conclusion

Designing for resilience requires a markedly different approach from designing for security. But as cyberattacks continue to grow in pace, scope, and impact, we must engineer and operate for resilience to ensure that the company or mission does not irrevocably lose the credibility and trust needed to survive in the ecosystem. Beyond practical measures like expansive defense in depth, zero trust architectures, and redundancy or watchdog mechanisms to guard against complex or emergent behaviors, the approach must separate the systems from the information: understanding not only the desired operational outcomes that the coupling of systems and information provides, but also making the data and information flows fully transparent so that both systems and data can be defended resiliently. This must occur at the ecosystem level, not at the level of the individual system or the enterprise alone. Neglecting the defense of the program, and not just the products, courts failure and the consequences that come with it.

The underpinnings of the global economy rely not on the centralized control of a benevolent organization, but on the collective efforts of the global maritime ecosystem to take the actions necessary to keep the maritime commons credible and viable for transporting the world’s goods. But the maritime industry must acknowledge that it is already under siege and act accordingly. As former Commandant of the Marine Corps General Robert Neller stated in 2019, “If you’re asking me if I think we’re at war, I think I’d say yes…We’re at war right now in cyberspace. We’ve been at war for maybe a decade. They’re pouring oil over the castle walls every day.”

*This article originally stated that the NotPetya attack occurred in 2015; it occurred in 2017.

Lieutenant Commander Ryan Hilger is a Navy Engineering Duty Officer stationed in Washington, D.C. He has served onboard USS Maine (SSBN 741), as Chief Engineer of USS Springfield (SSN 761), and ashore at the CNO Strategic Studies Group XXXIII and OPNAV N97. He holds a Master’s degree in Mechanical Engineering from the Naval Postgraduate School. His views are his own and do not represent the official views or policies of the Department of Defense or the Department of the Navy.

Featured Image: Operations Specialist 1st Class Jonathan Hudson, assigned to the Ticonderoga-class guided-missile cruiser USS Shiloh (CG-67), prepares to take tactical air control over an MH-60R Seahawk helicopter attached to the “Warlords” of Helicopter Maritime Strike Squadron Five One (HSM-51). (U.S. Navy Photo by Fire Controlman 2nd Class Kristopher G. Horton/Released)

Bilge Pumps Episode 38: Section 22 – The Forgotten Electronic Warfare Superstars of WWII

By Alex Clarke

Hello everyone! This is the first of our listener submissions to air. Due to the number of people involved, it was originally expected to be one of the hardest episodes to organize, but it came together remarkably quickly. For that we have to thank the three members of the Section 22 research team who joined us:

Trent Telenko (@trenttelenko), Craig Bellamy, and Peter Dunn (@Ozatwar) of Australia at War

They were fantastic guests, and we hope you love this bumper-length, bumper-content episode as much as we did.

#Bilgepumps is still a newish series and new avenue, one which may no longer boast the new car smell; in fact it has decidedly more of a pineapple/Irn-Bru smell, with a hint of mint cake and the faintest whiff of fried chicken. But we’re getting the impression it’s liked, so we’d very much like any comments, topic suggestions, or ideas for artwork to be tweeted to us, the #Bilgepump crew (with #Bilgepumps), at Alex (@AC_NavalHistory), Drach (@Drachinifel), and Jamie (@Armouredcarrier). Or you can comment on our YouTube channels (listed below).

Bilge Pumps Episode 38: Section 22 – The Forgotten Electronic Warfare Superstars of WWII

Links

1. Dr. Alex Clarke’s YouTube Channel
2. Drachinifel’s YouTube Channel
3. Jamie Seidel’s YouTube Channel

Alex Clarke is the producer of The Bilge Pumps podcast.

Contact the CIMSEC podcast team at [email protected].

Bilge Pumps Episode 37: Rating Ships – It Can’t Be Fourth Rate, NATO Nations Don’t Buy Anything But First Rate

By Alex Clarke

So last week, in the Long War discussion of Episode 36, we chatted for a while, as the Bilge Pumps crew is wont to do, about applying the age-of-sail rating system to modern ships. Someone may have mentioned that the Daring class are fourth rates under those terms and that the U.S. Navy is rapidly getting rid of its only first rates. Drach may have pointed out that the Sejong the Greats are first rates, and Jamie may have questioned the rationale for certain ships…

Anyway, this seems to have made several people rather angry, and furthermore, it may have led to a number of contacts where we were firmly told we were wrong as “<insert western/European nation of choice here> doesn’t buy anything but first rate ships for their <insert adjective of choice here> navy” and that we should drop it. So we did, right into the suggestion box for the headline topic of this week. Enjoy! And seriously, for people who claimed to be “regular listeners to our otherwise brilliant show,” it’s as if they didn’t know us at all!

#Bilgepumps is still a newish series and new avenue, one which may no longer boast the new car smell; in fact it has decidedly more of a pineapple/Irn-Bru smell, with a hint of mint cake and the faintest whiff of fried chicken. But we’re getting the impression it’s liked, so we’d very much like any comments, topic suggestions, or ideas for artwork to be tweeted to us, the #Bilgepump crew (with #Bilgepumps), at Alex (@AC_NavalHistory), Drach (@Drachinifel), and Jamie (@Armouredcarrier). Or you can comment on our YouTube channels (listed below).

Bilge Pumps Episode 37: Rating Ships – It Can’t Be Fourth Rate, NATO Nations Don’t Buy Anything But First Rate


Links

1. Dr. Alex Clarke’s YouTube Channel
2. Drachinifel’s YouTube Channel
3. Jamie Seidel’s YouTube Channel

Alex Clarke is the producer of The Bilge Pumps podcast.

Contact the CIMSEC podcast team at [email protected].

The Future is Unmanned: Why the Navy’s Next Generation Fighter Shouldn’t Have a Pilot

By Trevor Phillips-Levine, Dylan Phillips-Levine, and Walker D. Mills

In August 2020, USNI News reported that the Navy had “initiated work to develop its first new carrier-based fighter in almost 20 years.” While the F-35C Lightning II will still be in production for many years, the Navy needs to have another fighter ready to replace the bulk of the F/A-18E/F Super Hornets and EA-18G Growlers by the mid-2030s. This new program will design that aircraft. While this is an important development, it will be to the Navy’s detriment if the Next Generation Air Dominance (NGAD) program yields a manned fighter.

Designing a next-generation manned aircraft would be a critical mistake. Every year, remotely piloted aircraft (RPAs) replace more and more manned aviation platforms, and artificial intelligence (AI) grows ever more capable. By the mid-2030s, when the NGAD platform is expected to begin production, a manned platform will be obsolete on arrival. To make sure the Navy maintains a qualitative and technical edge in aviation, it needs to invest in an unmanned-capable aircraft today. Recent advances and long-term trends in automation and computing make it clear that such an investment is not only prudent but necessary to maintain capability overmatch and avoid falling behind.

Artificial Intelligence

This year, an AI designed by a team from Heron Systems defeated an Air Force pilot, call sign “Banger,” 5-0 in a simulated dogfight run by DARPA. Though the dogfight was simulated and had numerous constraints, it was only the latest in a long string of AI successes in competitions against human masters and experts.

Since 1997, when IBM’s Deep Blue beat reigning world chess champion Garry Kasparov over six games in New York, machines have been on a winning streak against humans. In 2011, IBM’s “Watson” won Jeopardy!. In 2017, Google DeepMind’s “AlphaGo” beat the world’s number one player of Go, the complex Chinese board game. In 2019, DeepMind’s “AlphaStar” beat one of the world’s top-ranked players of StarCraft II, a real-time computer strategy game, 5-0. Later that year, an AI from Carnegie Mellon named “Pluribus” beat six professionals in a game of Texas Hold’em poker. On the lighter side, an AI writing algorithm nearly beat the writing team for the game Cards Against Humanity in a competition to see who could sell more card packs in a Black Friday write-off. After the contest, the company’s statement read: “The writers sold 2% more packs, so their jobs will be replaced by automation later instead of right now. Happy Holidays.”

It’s a joke, but the company is right. AI is getting better every year, and human abilities will continue to be bested by AI in increasingly complex and abstract tasks. History shows that human experts have repeatedly been surprised by AI’s rapid progress, and their predictions of when AI will reach human parity in specific tasks often come true years or even a decade early. We can’t make the same mistake with unmanned aviation.

Feb. 11, 1996 – Garry Kasparov, left, reigning world chess champion, plays the second game of a six-game match against IBM’s Deep Blue in Philadelphia. Moving the chess pieces for Deep Blue is Feng-hsiung Hsu, architect and principal designer of the Deep Blue chess machine. (H. Rumph, Jr./AP File)

Most of these competitive AIs use machine learning. A subset of machine learning is deep reinforcement learning, which uses biologically inspired evolutionary techniques to pit a model against itself over and over. Models that are more successful at accomplishing the specific goal, such as winning at Go or identifying pictures of tigers, continue on. It is like a giant bracket, except that the AI can compete against itself millions or even billions of times in preparation to compete against a human. Heron Systems’ AI, which defeated the human pilot, had run over four billion simulations before the contest. The creators called it “putting a baby in the cockpit.” The AI was given almost no instructions on how to fly, so even basic practices like not crashing into the ground had to be learned through trial and error.
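For readers curious what “learning not to crash through trial and error” looks like in practice, the toy sketch below uses simple tabular Q-learning rather than the deep self-play networks described above, and it is purely illustrative, not Heron Systems’ actual system. A hypothetical agent controls only its altitude, is penalized when altitude hits zero, and, from thousands of simulated episodes, learns on its own to climb when it gets too low.

```python
# Minimal, hypothetical sketch of trial-and-error (reinforcement) learning.
# Toy problem: an "aircraft" picks climb/hold/descend each step and must
# learn, from reward alone, that letting altitude reach zero (crashing) is bad.
import random

ACTIONS = [-1, 0, +1]            # descend, hold, climb
ALTITUDES = range(0, 11)         # discrete altitude states 0..10
q = {(s, a): 0.0 for s in ALTITUDES for a in ACTIONS}  # learned value of each action

alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

def step(alt, action):
    """Apply an action; crashing (altitude 0) ends the episode with a penalty."""
    new_alt = max(0, min(10, alt + action))
    if new_alt == 0:
        return new_alt, -10.0, True      # crashed
    return new_alt, +1.0, False          # survived another step

for episode in range(5000):              # billions of sims in the real contest; thousands here
    alt, done = 5, False
    for _ in range(50):                   # cap episode length
        # explore occasionally, otherwise pick the best-known action
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(alt, x)])
        new_alt, reward, done = step(alt, a)
        best_next = max(q[(new_alt, x)] for x in ACTIONS)
        # Q-learning update: nudge the estimate toward reward + discounted future value
        q[(alt, a)] += alpha * (reward + gamma * best_next - q[(alt, a)])
        alt = new_alt
        if done:
            break

# After training, the learned policy near the ground should be "climb" (+1).
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in ALTITUDES})
```

The point of the sketch is the mechanism, not the scale: no one tells the agent that crashing is bad; it discovers that from the reward signal over many simulated attempts, which is the same logic, vastly scaled up, behind the dogfighting AI.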

This type of ‘training’ has advantages – algorithms can come up with moves that humans have never thought of, or use maneuvers humans would not choose to employ. In the Go matches between Lee Sedol and AlphaGo, the AI made a move on turn 37 of game two that shocked the audience and Sedol. Fan Hui, a three-time European Go champion and a spectator at the match, said, “It’s not a human move. I’ve never seen a human play this move.” It is possible that the move had never been played before in the history of the game. In the AlphaDogfight competition, the AI favored aggressive head-on gun attacks, a tactic considered so high-risk that it is prohibited in training. Most pilots would not attempt it in combat. But an AI could. AI algorithms can develop and employ maneuvers that human pilots would not think of or would not attempt. They can be especially unpredictable in combat against humans because they are not human.

A screen capture from the AlphaDogFight challenge produced by DARPA on Thursday, August 20, 2020. (Photo via DARPA/Patrick Tucker)

An AI also offers significant advantages over humans in piloting an aircraft because it is not limited by biology. An AI can make decisions in fractions of a second and simultaneously receive input from any number of sensors. It never has to move its eyes or turn its head to get a better look. In high-speed combat, where margins are measured in seconds or less, this speed matters. An AI also never gets tired – it is immune to the human factors of being a pilot. It is impervious to emotion, mental stress, and, arguably the most critical inhibitor, the biological stresses of high-G maneuvers. Human pilots have a limit to their continuous high-G endurance. In the AlphaDogfight, both the AI and “Banger,” the human pilot, spent several minutes in continuous high-G maneuvers. While such maneuvers are no problem for an AI, in real combat they would likely induce G-induced loss of consciousness (G-LOC) in human pilots.

Design and Mission Profiles

Aircraft, apart from RPAs, are designed with a human pilot in mind. It is inherent to the platform that it must carry a human pilot and devote space and systems to all the necessary life-support functions. Many of the maximum tolerances the aircraft can withstand are bottlenecked not by the aircraft itself, but by its pilot. An unmanned aircraft does not have to worry about protecting a human pilot or carrying one. It can be designed solely for the mission.

Aviation missions are also limited by the endurance of human pilots; there is a finite number of hours a human can remain combat effective in a cockpit. Using unmanned aircraft changes that equation so that the limit becomes the capability of the aircraft and its systems. Like surveillance drones, AI-piloted aircraft could remain on station far longer than human-piloted aircraft and, with air-to-air refueling, possibly for days.

The future operating environment will be less and less forgiving for human pilots. Decisions will be made at computational speeds that outpace a human OODA loop. Missiles will fly at hypersonic speeds and directed-energy weapons will strike targets at the speed of light. Lockheed Martin has set a goal of mounting lasers on fighter jets by 2025. Autonomous aircraft piloted by AI will have distinct advantages in this environment because of how quickly they can react and how long they can sustain that reaction speed. The Navy designed the Phalanx system to be autonomous in the 1970s and embedded doctrine statements into the Aegis combat system because it did not believe humans could react fast enough in the missile-age threat environment. The future will be even more unforgiving, with a hypersonic threat environment and decisions made at the speed of AI that will often trump those made at human speed.

Unmanned aircraft are also inherently more “risk worthy” than manned aircraft. Commanders with unmanned aircraft can take greater risks and plan more aggressive missions that would have featured an unacceptably low probability of return for manned missions. This increased flexibility will be essential in rolling back and dismantling modern air defenses and anti-access, area-denial networks.

Unmanned is Already Here

The U.S. military already flies hundreds of large RPAs like the MQ-9 Reaper and thousands of smaller RPAs like the RQ-11 Raven. It uses these aircraft for reconnaissance, surveillance, targeting, and strike. The Marine Corps has flown unmanned cargo helicopters in Afghanistan, and other cargo-carrying RPAs and autonomous aircraft have proliferated in the private sector. These aircraft have been displacing human pilots in the cockpit for decades, with human pilots now operating from the ground. The dramatic proliferation of unmanned aircraft over the last two decades has touched every major military and conflict zone. Even terrorists and non-state actors are leveraging unmanned aircraft for both surveillance and strike.

Apart from NGAD, the Navy is going full speed ahead on unmanned and autonomous vehicles. Last year it awarded a $330 million contract for a medium-sized autonomous vessel. In early 2021, the Navy plans to run a large Fleet Battle Problem exercise centered on unmanned vessels. The Navy has also begun to supplement its MH-60S squadrons with the unmanned MQ-8B, whose chief advantage over the manned helicopter is its long on-station time. The Navy continues to invest in its unmanned MQ-4C maritime surveillance drones and has now flight-tested the unmanned MQ-25 Stingray aerial tanker. In fact, the Navy has pursued unmanned and autonomous vehicles so aggressively that Congress has tried to slow its adoption and restrict some funding.

The Air Force, too, has been investing in unmanned combat aircraft. The unmanned “loyal wingman” drone is already being tested, and in 2019 the service released its Artificial Intelligence Strategy, arguing that “AI is a capability that will underpin our ability to compete, deter and win.” The service is also moving forward with testing its “Golden Horde,” an initiative to create a lethal swarm of autonomous drones.


The XQ-58A Valkyrie demonstrator, a long-range, high subsonic unmanned air vehicle, completed its inaugural flight March 5, 2019 at Yuma Proving Grounds, Arizona. (U.S. Air Force video)

The Marine Corps has also decided to bet heavily on an unmanned future. In the recently released Force Design 2030 report, the Commandant of the Marine Corps calls for doubling the Corps’ unmanned squadrons. Marines are also designing unmanned ground vehicles that will be central to their new operating concept, Expeditionary Advanced Base Operations (EABO), as well as new, large unmanned aircraft. Department of the Navy leaders have said that they would not be surprised if as much as 50 percent of Marine Corps aviation is unmanned “relatively soon.” The Marine Corps is also investing in a new “family of systems” to meet its requirement for ship-launched drones. With so much investment in other unmanned and autonomous platforms, why is the Navy not moving forward on an unmanned NGAD?

Criticism

An autonomous, next-generation combat aircraft for the Navy faces several criticisms, some valid and some not. Critics can rightly point out that AI is not ready yet. While this is certainly true, it likely will be ready enough by the mid-2030s, when NGAD is reaching production. In 1997, engineers were proud of building a computer that could beat Garry Kasparov at chess. Today, AIs have mastered ever more complex real-time games and aerial dogfighting. One can only expect AI to make a similar if not greater leap over the next 15 years. We need to future-proof combat aircraft now. So the question should not be, “Is AI ready now?” but, “Will AI be ready in 15 years, when NGAD is entering production?”

Critics of lethal autonomy should note that it is already here. Loitering munitions are only the most recent manifestation of weapons without “a human in the loop.” The U.S. military has employed autonomous weapons ever since Phalanx was deployed on ships in the 1970s, and more recently with anti-ship missiles featuring intelligent seeker heads. The Navy is also simultaneously investing in autonomous surface vessels and unmanned helicopters, proving that there is room for lethal autonomy in naval aviation.

Some have raised concerns that autonomous aircraft can be hacked and that RPAs can have their command-and-control links broken, jammed, or hijacked. But these concerns are no more valid for unmanned aircraft than for manned aircraft. Modern fifth-generation aircraft are full of computers and networked systems and use fly-by-wire controls. A hacked F-35 would be hardly different from a hacked unmanned aircraft, except that there is a human trapped aboard. As for RPAs, they have “lost link” protocols that can return them safely to base if they lose contact with a ground station.

Unfortunately, perhaps the largest obstacle to an unmanned NGAD is imagination. Simply put, it is difficult for Navy leaders, often pilots themselves, to imagine a computer doing a job that they have spent years mastering and often consider as much an art as a science. But these arguments sound eerily similar to those made by mounted cavalry commanders in the lead-up to the Second World War. As late as 1939, Army General John K. Herr argued that tanks could not replace horses on the battlefield. He wrote: “We must not be misled to our own detriment to assume that the untried machine can displace the proved and tried horse.” Similarly, the U.S. Navy was slow to adopt and trust search radars in the Second World War. Of the Navy’s experience at Guadalcanal, historian James D. Hornfischer wrote, “…The unfamiliar power of a new technology was seldom a match for a complacent human mind bent on ignoring it.” Today we cannot make the same mistakes.

Conclusion 

The future of aviation is unmanned aircraft – whether remotely piloted, autonomously piloted, or a combination. There is simply no reason that a human needs to be in the cockpit of a modern aircraft, let alone a next-generation one. AI technology is progressing rapidly and consistently ahead of estimates. If the Navy waits to integrate AI into combat aircraft until the technology is mature, it will put naval aviation a decade or more behind.

Platforms being designed now need to be engineered to incorporate AI and future advances. Human pilots will not be able to compete with mature AI – pilots are already losing to AI in dogfights, arguably the most complex part of their skill set. The Navy needs to design the next generation of combat aircraft for unmanned flight, or it risks making naval aviation irrelevant in the future aerial fight.

Trevor Phillips-Levine is a lieutenant commander in the United States Navy. He has flown the F/A-18 “Super Hornet” in support of Operations New Dawn and Enduring Freedom and is currently serving as a department head in VFA-2. He can be reached on Twitter @TPLevine85.

Dylan Phillips-Levine is a lieutenant commander in the United States Navy. He has flown the T-6B “Texan II” as an instructor and the MH-60R “Seahawk.” He is currently serving as an instructor in the T-34C-1 “Turbo-Mentor” as an exchange instructor pilot with the Argentine navy. He can be reached on Twitter @JooseBoludo.

Walker D. Mills is a captain in the Marine Corps. An infantry officer, he is currently serving as an exchange instructor at the Colombian naval academy. He is an Associate Editor at CIMSEC and an MA student at the Center for Homeland Defense and Security at the Naval Postgraduate School. You can find him on Twitter @WDMills1992.

Featured Image: The XQ-58A Valkyrie demonstrator, a long-range, high subsonic unmanned air vehicle, completed its inaugural flight March 5, 2019 at Yuma Proving Grounds, Arizona. (DoD)