The US Navy recently announced that it will make greater use of submarine drones, contracting with marine technology developer Teledyne Benthos to repurpose the Slocum Glider for military use. The contract is worth $203.7 million.
If you haven’t heard of it yet, the Slocum Glider is a five-foot-long autonomous underwater vehicle capable of traveling to specific locations and descending to depths of 4,000 feet. Driven by variable buoyancy, it can move both horizontally and vertically.
The Slocum Glider can be programmed to patrol for weeks at a time, collecting data on its environment and surfacing at regular intervals to transmit data to shore and download new instructions.
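The patrol cycle just described (glide, collect, surface, transmit, download) can be sketched as a simple loop. This is a minimal illustration only, assuming an invented eight-hour surfacing interval and hypothetical mission logic, not Teledyne's actual control software:

```python
# Illustrative sketch of a buoyancy glider's patrol cycle.
# The surfacing interval and logging scheme are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class GliderMission:
    max_depth_ft: float = 4000       # the Slocum's rated depth, per the article
    surface_interval_h: int = 8      # assumed: surface every 8 hours
    log: list = field(default_factory=list)

    def run(self, hours: int):
        for h in range(hours):
            if h % self.surface_interval_h == 0:
                self.log.append(("surface", "transmit data / download instructions"))
            else:
                # sawtooth glide: buoyancy changes alternate descent and ascent
                phase = "descend" if h % 2 else "ascend"
                self.log.append((phase, f"collect sensor data, max {self.max_depth_ft} ft"))

mission = GliderMission()
mission.run(24)
surfacings = sum(1 for phase, _ in mission.log if phase == "surface")
print(surfacings)  # 3 surfacings in a 24-hour window
```

The real vehicle's buoyancy engine drives a sawtooth dive profile between the surface and its programmed depth; the sketch captures only the schedule, not the hydrodynamics.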
Compared to traditional methods, the drones are relatively cheap to operate: the need for personnel and infrastructure is reduced to a minimum, and the vehicle can work around the clock, year-round. The approach works very well: in November 2012, an autonomous glider set a Guinness World Record by traveling over 14,000 kilometers on an autonomous journey lasting just over one year!
Many navies and ocean research organizations already use a wide variety of gliders, which cost around $100,000 each. But the US Navy now plans to increase the number of these drones from 65 to 150 by 2015. In its 2015 budget request, the US Defense Advanced Research Projects Agency even asked for $19 million to develop drones “that can provide non-lethal effects or situational awareness over large maritime areas.” This represents a spending increase of nearly 60 percent over 2014!
The good news for us is that these submarine drones, unlike the majority of airborne drones, won’t use environmentally unfriendly fuel. Instead, the glider can harvest propulsive energy from the thermocline, the transition layer between the warm upper ocean and the cold deep ocean. Surface water is near atmospheric temperature, while deep ocean water sits between 2 and 4 °C.
These new submarine drones can also be used to improve weather prediction by collecting enormous amounts of data at various spots in the ocean. In 2011, a US Government Accountability Office report warned that without improvements to its earth-monitoring capabilities, the USA would “not be able to provide key environmental data that are important for sustaining climate and space weather measurements”; warnings of extreme events such as hurricanes, storm surges, and floods would then be less accurate and timely. This led the US Navy to agree to share its Navy Ocean Forecast System software with the National Oceanic and Atmospheric Administration.
But that’s not all: another autonomous submarine drone, the Bluefin-21, built by the American company Bluefin Robotics, has scanned just over 300 square kilometers of Indian Ocean seabed searching for the wreckage of the missing Malaysia Airlines flight, which disappeared from radar screens on 8 March. The drone was launched from the Australian Defence Vessel Ocean Shield.
The Bluefin-21 is an autonomous underwater vehicle, 4.93 meters long and 53 centimeters in diameter, designed for detecting, classifying, and surveying objects on the seabed. It is capable of carrying various sensors and payloads. Its side-scan sonar builds a picture of the seabed at depths of up to 4,500 meters.
The drone also has significant endurance, 25 hours at an average of 3 knots, which allows it to carry out extended underwater missions. It weighs 750 pounds, which makes it easily transportable by a wide range of boats.
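Those endurance figures translate directly into search coverage: track length is speed times endurance, and multiplying by the sonar's swath width gives the area surveyed per sortie. A back-of-the-envelope calculation, in which the swath width is an assumed round number rather than a published Bluefin specification:

```python
# Back-of-the-envelope coverage for one Bluefin-21 sortie.
NM_TO_KM = 1.852       # kilometers per nautical mile

speed_kts = 3          # average speed, from the article
endurance_h = 25       # endurance, from the article
swath_km = 0.6         # assumed side-scan swath width (hypothetical value)

track_km = speed_kts * endurance_h * NM_TO_KM   # 75 nm of track, about 139 km
area_km2 = track_km * swath_km                  # area swept along that track
print(round(track_km, 1), round(area_km2, 1))
```

Under these assumptions a single sortie sweeps on the order of 80 square kilometers, so the 300-plus square kilometers scanned in the Indian Ocean search corresponds to only a handful of dives.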
From all this, it is clear that submarine drones will become an important part of the navies’ equipment!
Commercial 3D Imaging in Naval Maintenance and Repair
As with most things in life, a frequent hindrance to quickly fixing degraded systems aboard naval vessels is the inability to communicate effectively – in this case, describing the problem to support facilities sometimes thousands of miles away. Compounding the frustration is how long it can take to ship a replacement part (perhaps soon alleviated by local additive manufacturing hubs) or send a team to perform the repair work, only to discover on arrival a disconnect between what the problem is and what the support facility thought it was. Fortunately, the advent of cheap digital cameras, now nearly ubiquitous in cell phones, has eased the effort, as photos now accompany many of the requests.
Tango and Cache: 3D rendering of a room captured by Google’s Project Tango
A further aid may well soon be at hand. According to the Wall Street Journal, Google plans in June to begin production of a tablet with “two back cameras, infrared depth sensors and advanced software that can capture precise three-dimensional images of objects,” or “to create a kind of three-dimensional map of its user’s surroundings.”
Mobile 3D imaging technology is not new. We here at CIMSEC have previously discussed it in the context of potential tactical naval applications, such as for use by VBSS boarding teams either in a “recon” mode to gain a better picture of their tactical environment, or a “record” mode for later examination, intel exploitation, and lessons learned. Additionally, scientists have earlier noted that sound waves can be used to recreate a 3D representation of a cell phone user’s environment, with intriguing implications for security and spyware. Laser scanners – the peripheral of choice for generating 3D-rendered computer images for 3D printer files – may offer a similar and higher-fidelity solution, at least for the time being.
The advantage with Google’s Project Tango, as the initiative is known, is that a commercial behemoth integrating the technology into a widely used and compact mobile platform makes it much more likely to be available cheaply, and for developers to speed up the cycle of refining applications. Further, the ability to portray a degraded component in situ or the environment in question rather than as a standalone piece is an advantage over most of today’s laser scanners (although some companies have in fact marketed laser scanners as environment mappers).
The normal caveats about this being an immature technology that has yet to prove itself in the real world apply. But, if a picture of a problem is worth a thousand words, a 3D image may soon be worth at least as much to the Navy’s repair reach-back commands.
LT Scott Cheney-Peters is a surface warfare officer in the U.S. Navy Reserve and the former editor of Surface Warfare magazine. He is the founder and vice president of the Center for International Maritime Security (CIMSEC), a graduate of Georgetown University and the U.S. Naval War College, and a member of the Truman National Security Project’s Defense Council.
Drones are a rapidly expanding market in the international arms trade. Intelligence, surveillance, and reconnaissance (ISR) is crucial for operating in the modern battlespace, and drones are the best way to get that information, maximizing loiter time while removing risk to a pilot. Demand is high and supply is low; only a few countries produce the class of drones that are most in demand. This would seem a perfect market for the United States to sell its wares and dominate the exchange, but it is currently hamstrung by policies which discourage their export. The hesitation to export the technology, while done for good reasons like maintaining the United States’ technological advantage and protecting a powerful capability from exploitation by foreign agents, is misguided; without the powerful network of communications satellites and the Global Information Grid (GIG), the drones themselves are little more than complex model airplanes with good cameras. The United States’ efforts are akin to closing Pandora’s Box because of imagined evils without recognizing the good that remains trapped inside.
Exporting drones is a good thing for the United States. First, it promulgates a capability we want our allies and partners to possess. For years, British and Italian MQ-9 Reapers have patrolled the skies over Afghanistan, bringing the twin benefit of additional ISR to the battlefield and eliminating the need for American assets to cover those units. In addition, the British have armed MQ-9s that provide additional strike assets to coalition operations. The United States only stands to gain by exporting more of these assets.
Dominating the supply of drones brings the United States leverage it would not otherwise have. Just as with other aviation assets, drones need a steady stream of supplies to be viable. If the country that operates those assets uses them for purposes that are against the United States’ interests, the United States can then press forward with sanctions and cut off supply of crucial parts needed to keep the assets operational. In a world fraught with fault lines and shifting loyalties, leverage matters.
There are a couple of arguments in favor of restricting drone exports. The first is wishful thinking. The argument holds that by restricting the sale of drones to foreign clients, we will deny them drone capabilities, particularly the ability to conduct strike missions. The problem is that Pandora’s Box is already open. Even though there are few suppliers in the field right now, many others are about to enter the market. A joint European consortium, led by France, is developing the nEUROn. Britain is developing the Taranis. China is aggressively marketing the ASN-209 at international airshows. Chris Rawley highlighted Singapore’s entry into the market in his recent article (https://cimsec.org/unmanned-systems-distributed-operations-one-many/). Even Turkey is developing the Anka. If there are lots of suppliers, the United States will no longer have its privileged negotiating position and will need to make more available to encourage use of its platforms. This means expanding the list of what is exportable and seriously considering exporting armed assets.
Britain is developing the Taranis, one of many competitors the United States will face in the international drone marketplace (image from BAE Systems)
The other argument against exporting drones is rooted in fantasy (as Dave Blair elucidates in his excellent article here: https://cimsec.org/remote-aviation-technology-actually-talking/). The argument goes that the United States should not export drones because they are a revolutionary capability that would unfairly strengthen possible adversaries. This, too, falls short. The aircraft themselves are only a small portion of the equation and of what makes them great tools of war. The real strength of drones is their ability to conduct global operations, which requires the United States’ network of communications satellites to operate in a distributed manner. Without that network, the drones are nothing more than capable model airplanes that linger longer than a fighter or helicopter.
The story of Pandora’s Box ends with Pandora desperately shutting the lid in a vain attempt to keep bad things from entering the world. Unfortunately for Pandora, it was too late; the damage was done. The only effect that she reaped by keeping the box closed was to leave hope penned inside. While the United States did not unleash the desire for countries to acquire drones, it certainly is achieving the same effect as Pandora by ignoring the world in which it lives. The better course of action is to recognize what drones are truly capable of on their own and embrace an export mindset.
Matthew Merighi is a civilian employee with the United States Air Force’s Office of International Affairs (SAF/IA). His views do not reflect those of the United States Government, Department of Defense, or Air Force.
Computer wargames cannot be fully analyzed without scrutinizing the video game systems that power them. The technology that drives these video game systems has transformed dramatically over the past 10-15 years. Initially, leaps in computational power allowed players to control and manipulate hundreds of units and perform an array of functions, as demonstrated in the earliest versions of the Harpoon computer simulation. Subsequently, the graphics behind these games experienced multiple breakthroughs that range from three dimensional features to advanced motion capture systems capable of detecting even the slightest facial animations. Eventually, game consoles and PCs reached the point where they could combine this computational complexity with stunning visuals into a single, effective simulation. Simply, these systems have evolved at a rapid rate.
Yet, as we near the midpoint of the second decade of the 21st century, it is important to ask “What’s next?” What future technologies will impact the design of military simulations? After reaching out to a variety of gamers, there are two technologies that CIMSEC readers should look forward to: 1) virtual reality (VR) headsets, and 2) comprehensive scenario design tools with better artificial intelligence (AI).
Virtual Reality Headsets—A Gamer’s Toy or Useful Tool?
VR headsets are by far one of the most anticipated innovations of the next few years. Gamers are not the only individuals excited for this development; Facebook’s $2 billion purchase of VR developer Oculus VR and Sony’s Project Morpheus demonstrate how VR is a potential revolution. For those unfamiliar with a VR headset, it is a device mounted on the head that features a high definition display and positional tracking (if you turn your head right, your in-game character will turn his head right simultaneously). When worn with headphones, users claim that these headsets give them an immersive, virtual reality experience. One user describes the integration of a space dogfighting game with an Oculus Rift VR headset below:
The imagery is photorealistic to a point that is difficult to describe in text, as VR is a sensory experience beyond just the visual. Being able to lean forward and look up and under your cockpit dashboard due to the new DK2 technology tracking your head movements adds yet another layer of immersion…I often found myself wheeling right while scanning up and down with my head to search for targets like a World War II pilot scanning the sky…The level of detail in the cockpit, the weave of the insulation on the pipes, the frost on the cockpit windows, the gut-punch sound of the autocannons firing, every aspect has been developed with an attention to detail and an intentionality which is often missing in other titles.
An Oculus Rift headset
Even though VR headsets strictly provide a first-person experience, they can still play a serious role in military simulations and wargames. At the tactical level, VR headsets can supplement training by simulating different environments custom built from the ground up. For example, imagine a visit, board, search, and seizure (VBSS) team training for a situation on an oil rig. Developers can create and render a digital model of an oil rig that members of the VBSS team could explore with the assistance of VR headsets in order to better understand the environment. In addition to supplementing training, VR headset technology could potentially be adapted to enhance battlefield helmets. Although this concept is many years away (at least 15), readers should think of the F-35’s Distributed Aperture System for pilot helmets; even though this helmet currently faces development challenges, it demonstrates how a VR system can track and synthesize information for the operator. Essentially, the first-person nature of VR headsets restricts their application to the technical and tactical levels.
Better Tools: Enabling the Construction of Realistic Simulations
Although not as visually impressive as VR headsets, the ability to design complex military scenarios that run on even the simplest laptops is an exciting development that many spectators disregard. Wargames are often judged by their complexity. When crafting scenarios, designers ask “Does the simulation account for _______?”, “What would ________ action trigger?”, and other similar questions that try to factor in as many variables as possible. Their answers to these questions are programmed into the simulation with the assistance of a variety of development tools. Within the next decade, the capabilities of these tools will increase significantly, ultimately giving developers the ability to craft more comprehensive military simulations.
Since these technical tools can be confusing, I am going to use a personal example to demonstrate their abilities. In a game called Arma 2, a retail version built off the Virtual Battlespace 2 engine, I designed a scenario inspired by Frederick Forsyth’s famous novel, The Dogs of War. Human players would assault an African dictator’s palace defended by units commanded by AI. Using the game’s mission editor, I inserted multiple layers of defense, each programmed to respond differently. The AI had multiple contingency plans for different scenarios. If the force was observed in the open, aircraft would be mobilized. If certain defending units did not report in every 15 minutes, then the AI would dispatch a quick reaction force (QRF) to investigate. If the dictator’s palace was assaulted, his nearby loyal armor company would immediately mobilize to rescue him. These are just a few examples, but they illustrate how I was able to detail multiple different scenarios for the AI. Yet, the mission was not completely scripted. When the AI came into contact, it would respond differently based on the attacking force’s actions; during testing, I witnessed the dictator’s armor company conduct a variety of actions ranging from simply surrounding the city to conducting a full assault on the palace using multiple avenues of approach.
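The contingencies just described are essentially event-driven triggers of the kind a mission editor lets a designer wire up without writing code. A minimal sketch of that logic in Python, with all unit names, timings, and responses invented for illustration rather than taken from the actual Arma 2 scenario:

```python
# Minimal event-driven trigger system, loosely modeled on a mission-editor setup.
# Unit names, timings, and responses are invented for illustration.

class TriggerAI:
    def __init__(self):
        self.last_checkin = {"outpost_north": 0, "outpost_south": 0}
        self.dispatched = []

    def tick(self, minute, events):
        # Contingency 1: a unit missing its 15-minute check-in draws a QRF.
        for unit, last in self.last_checkin.items():
            if f"checkin:{unit}" in events:
                self.last_checkin[unit] = minute
            elif minute - last > 15 and f"qrf->{unit}" not in self.dispatched:
                self.dispatched.append(f"qrf->{unit}")
        # Contingency 2: spotting the attackers in the open scrambles aircraft.
        if "force_observed" in events and "aircraft" not in self.dispatched:
            self.dispatched.append("aircraft")
        # Contingency 3: an assault on the palace mobilizes the armor company.
        if "palace_assaulted" in events and "armor_company" not in self.dispatched:
            self.dispatched.append("armor_company")

ai = TriggerAI()
ai.tick(10, {"checkin:outpost_north", "checkin:outpost_south"})
ai.tick(20, {"checkin:outpost_north"})   # south misses its check-in window
ai.tick(30, {"palace_assaulted"})
print(ai.dispatched)                     # QRF to the silent outpost, then armor
```

A real editor layers pathfinding and behavior trees on top, but the core idea is the same: the designer declares conditions and responses, and the engine evaluates them each tick.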
The Arma 2 Mission Editor
When considering the complexity of the above scenario, it may appear that extensive programming knowledge and experience were required. Astoundingly, this is not the case: I do not know how to program, yet after spending one weekend with the system’s mission editor, I was able to craft this comprehensive scenario. In the future, we will witness the development of tools and AI systems that allow for the construction of even more detailed military simulations.
Conclusion
We have identified two technologies—VR headsets and more comprehensive simulation design tools—that will rapidly evolve throughout the next several years. Yet, the challenge is not the development of these technologies, but determining how to effectively harness their power and integrate them into meaningful military simulations that go beyond ‘pilot programs.’ Even as these two technologies improve, they will not substitute for real-world experience; for instance, VR headset users cannot feel the sweat after a long hike, and scenarios cannot be customized to fully depict the active populations in counterinsurgency simulations. Nevertheless, as technology improves and is better leveraged, the utility of military simulations will only increase.
Bret Perry is a student at the Walsh School of Foreign Service at Georgetown University. The views expressed are solely those of the author.