Tag Archives: featured

Awkward Conversations in Naval History

Hey! We TOLD you not to leave behind any lizards!

In 1946, the United States chose Bikini Atoll as the test site for Operation Crossroads, a series of nuclear weapons tests. One of our dear editors heard this clip during the week and imagined that first awkward conversation to “borrow” the island…

Bikini Atoll Conversation

(With permission from the Adam Carolla Show: the pirate ship sailing on mangria)

Leading the Blind: Teaching UCAV to See

In “A Scandal in Bohemia”, Sherlock Holmes laments, “You [Watson] see, but you do not observe. The distinction is clear.” Such is the current lament of America’s fleet of UCAVs, UGVs, and other assorted U_Vs: they have neither concept nor recognition of the world around them. To pass from remote drones living on the edges of combat to automated systems at the front, drones must cross the Rubicon of recognition.

To See

Still can't see a thing.
Help!

The UCAV is the best place to start, as the skies are the cleanest canvas upon which drones could cast their prying eyes. As with any surveillance system, the best ones are multi-faceted. Humans use their five senses and a good portion of deduction. Touch is a bit too close for a UCAV, smell and hearing would be both useless and uncomfortable at high speed, and taste would be awkward. Without that creative deductive spark, drones will need a bit more than a Mk 1 Eyeball. Good examples of how a drone might literally “see” beyond a basic radar picture are the layered optics of the ENVG (Enhanced Night Vision Goggle) or the Artillery Rocket Launch Spotter (RLS).

Operators of typical optical systems switch between different modes to understand a picture. The USN Mk 38 Mod 2 25mm Bushmaster has a camera system with an Electro-Optical System (EOS), Forward-Looking Infrared (FLIR), and a laser range-finder. While a Mod 2 operator switches between the EOS and FLIR, the ENVG combines both modes to create an NVG that is difficult to blind. For a drone, digital combination isn’t even necessary: a computer can perceive all inputs at once. Optical systems can also be mounted at multiple locations on the UCAV to help create a 3-D composite of the contact being viewed. Using an array of both EOS and FLIR systems simultaneously could allow drones to “see” targets in more varied and specific aspects than the human eye.
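As a rough illustration of what “perceiving all inputs at once” might look like in software, the sketch below fuses simultaneous EOS and FLIR detections into single composite contacts whenever they fall on nearly the same bearing and elevation. It is a toy, not a fielded algorithm: the Detection fields, the 1.5-degree association gate, and the confidence handling are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str       # "EOS" or "FLIR" (hypothetical labels)
    bearing: float    # degrees relative to the nose
    elevation: float  # degrees
    confidence: float # 0..1

def fuse(detections, max_sep_deg=1.5):
    """Group EOS and FLIR detections pointing at nearly the same spot
    into single fused contacts seen by both sensors at once."""
    fused = []
    for d in detections:
        for c in fused:
            if (abs(c["bearing"] - d.bearing) < max_sep_deg and
                    abs(c["elevation"] - d.elevation) < max_sep_deg):
                c["sensors"].add(d.sensor)
                c["confidence"] = max(c["confidence"], d.confidence)
                break
        else:  # no existing contact matched: start a new one
            fused.append({"bearing": d.bearing, "elevation": d.elevation,
                          "sensors": {d.sensor}, "confidence": d.confidence})
    return fused

# One aircraft seen by both cameras becomes one contact, not two
print(fuse([Detection("EOS", 45.2, 3.1, 0.7), Detection("FLIR", 45.6, 2.9, 0.9)]))
```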

For the deployment of these sensors, the RLS is a good example of how sensors can “pass” targets to one another. In the RLS, after target data is collected by acoustic and IR sensors, flagged threats are passed to the higher-grade FLIR for further designation and a potential fire-control solution. A UCAV outfitted with multiple camera systems could, in coordination with radar, pass detected targets that meet certain parameters “up” to better sensors. Targets seen only in wide-angle optical scans (stealth aircraft, for instance) could be passed “down” to radar for further scrutiny along that bearing. A UCAV must be given a suite of sensors that serves not merely a remote human operator, but the UCAV itself, taking advantage of a computer’s ability to use every input at once.
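A minimal sketch of that “pass up / pass down” cueing logic follows; every field name and threshold is invented. Optical-only contacts cue the radar onto their bearing, while close radar contacts cue the narrow-field FLIR for identification.

```python
def cue_sensors(contacts):
    """Toy cueing logic: hand each contact to the sensor best suited
    to refine it, based on how it was first detected."""
    tasking = []
    for c in contacts:
        if c["source"] == "wide_optics" and not c["radar_return"]:
            # Optical-only contact (e.g. low-observable): cue radar down that bearing
            tasking.append(("radar", c["bearing"]))
        elif c["source"] == "radar" and c["range_km"] < 20:
            # Close radar contact: cue the narrow-field FLIR for identification
            tasking.append(("flir", c["bearing"]))
    return tasking

print(cue_sensors([
    {"source": "wide_optics", "radar_return": False, "bearing": 310.0, "range_km": None},
    {"source": "radar", "radar_return": True, "bearing": 125.0, "range_km": 12.0},
]))
```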

And Observe

In-game models for real-life comparison.

However, this vast suite of ISR equipment still leaves a UCAV high and dry when it comes to target identification. Another officer suggested to me that, “for a computer to identify an air target, it has to have an infinite number of pictures of every angle and possibility.” With 3-D rendered models of desired aircraft, a UCAV could have that infinite supply of pictures, with varying sets of weapons and angles of light. If a UCAV can identify an aircraft’s course and speed, it can narrow that range of comparison to other aircraft or missiles by orienting the contact’s shape, and all comparative models, along that true-motion axis. Whereas programs like facial-recognition software must build models from head-on pictures, we already have the specifications for most if not all global aircraft. Just as typing “Leading” into this site’s search bar eliminates every return without the word, a UCAV could eliminate all fighter aircraft when looking at a Boeing 747. 3-D modeled comparisons, sharpened by target-angle perspective, could identify an airborne contact from any angle.
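The elimination idea can be made concrete with a toy filter over a model database. The three entries and the crude feature set below are stand-ins; a real system would compare rendered 3-D silhouettes, but the search-bar logic is the same: discard every model whose published specifications cannot match the observed contact.

```python
AIRCRAFT_DB = {  # hypothetical, hugely simplified spec table
    "747-400": {"wingspan_m": 64.4, "max_speed_kts": 533},
    "F-16C":   {"wingspan_m": 9.96, "max_speed_kts": 1147},
    "Su-27":   {"wingspan_m": 14.7, "max_speed_kts": 1380},
}

def candidates(observed_speed_kts, est_wingspan_m, tolerance=0.25):
    """Keep only the models that could plausibly be this contact."""
    out = []
    for name, spec in AIRCRAFT_DB.items():
        if observed_speed_kts > spec["max_speed_kts"]:
            continue  # flying faster than this type ever could
        if abs(est_wingspan_m - spec["wingspan_m"]) / spec["wingspan_m"] > tolerance:
            continue  # perspective-corrected span doesn't fit
        out.append(name)
    return out

print(candidates(observed_speed_kts=480, est_wingspan_m=60))  # -> ['747-400']
```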

A UCAV also need not positively identify every single airborne target. A UCAV could be loaded with a set of parameters as well as a database limited to those aircraft of concern in the operating area. AEGIS flags threats by speed, trajectory, and other factors; so too could a UCAV gauge its interest level in a contact based on target angle and speed in relation to the Carrier Strike Group (CSG). After all, loading every conceivable aircraft into an onboard database makes as little sense as training a pilot to recognize the make and model of every commercial aircraft on the planet. A scope of parameters for “non-military” aircraft could be loaded into a UCAV along with the specific models of regional aircraft-of-interest. The end-around of strapping external weapons to commercial aircraft, or of using those aircraft as weapons themselves, could be defeated by the previously noted course/speed parameters, as well as a database of weapon models.
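Here is a minimal sketch of that kind of parameter triage, under invented thresholds: a contact scores higher the faster and more directly it closes the CSG, and the nearer it is. Nothing here reflects actual AEGIS doctrine.

```python
import math

def interest_level(bearing_deg, course_deg, speed_kts, range_nm,
                   fast_closing_kts=300, close_range_nm=50):
    """Toy triage: 0 = ignore, 3 = interrogate or intercept."""
    # A course pointing back down the contact's bearing means inbound
    inbound_course = (bearing_deg + 180) % 360
    offset = abs((course_deg - inbound_course + 180) % 360 - 180)
    closing_speed = speed_kts * math.cos(math.radians(offset))
    score = 0
    if closing_speed > fast_closing_kts:
        score += 2   # fast and pointed at us
    elif closing_speed > 0:
        score += 1   # inbound, but slowly
    if range_nm < close_range_nm:
        score += 1   # inside the area of immediate concern
    return score

print(interest_level(bearing_deg=90, course_deg=270, speed_kts=450, range_nm=40))  # -> 3
```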

Breaking Open the Black Box

The musings of an intrigued amateur will not solve these problems; our purpose here is to break open the black box of drone operations and start thinking about our next step. We take for granted the remote connections that allow our unmanned operations abroad, but they leave a hideously soft underbelly by which our drones can be compromised, destroyed, or surveilled at the slightest resistance. Success isn’t as simple as building the airframe and programming it to fly. For a truly successful UCAV, autonomy must be a central goal. A whole bevy of internal processes must be mastered, in particular the ability of the UCAV to conceive of and understand the world around it. The more we parse out the problem, the more ideas we may provide to those who can execute them. I’m often told that, “if they could do this, they would have done it”… but there’s always a first time.

Matt Hipple is a surface warfare officer in the U.S. Navy.  The opinions and views expressed in this post are his alone and are presented in his personal capacity.  They do not necessarily represent the views of U.S. Department of Defense or the U.S. Navy.

Video Game AI and the Future UCAV Top Gun

My brother in flight school should be glad we played so much Ace Combat 4.
Alright, Roomba, now start sweeping for enemy units.

A Roomba is useful because it can sweep up regular messes without constant intervention, not because it can exit and enter its docking station independently. Although the Navy’s new X-47B Unmanned Combat Air Vehicle (UCAV) has, by landing on a carrier, executed an astounding feat even for humans, this ability only means our weapons have matured past their typical one-way trips. The real challenge will be getting a UCAV to defend units while sweeping up the enemy without remote guidance (i.e. autonomously). The answer is as close as the games running on your Xbox console.

Player One: Insert Coin

Simulated fighters are UCAVs having an out-of-body experience.

Considering how an air-to-air UCAV might be programmed, recall that multiple generations of America’s youth have already fought untold legions of advanced UCAVs. Developers have created artificial “intelligences” designed to combat a human opponent in operational and tactical scenarios with imperfect information; video games have paved the way for unmanned tactical computers.

A loose application of video game intelligence (VGI) would work because VGI is designed to operate in the same constrained informational environment in which a real-life UCAV would operate. Good (i.e. fun) video game AI exists under the same fog-of-war constraints as its human opponents; the same radar, visual cues, and alerts are provided to computer and human players, and the tools to lift that veil are the same for both. Often, difficulty levels in video games are based not just on the durability and damage of an enemy, but on the governors programmers install on a VGI to make competition with a human opponent fair. This is especially evident in Real-Time Strategy (RTS) games, where the light-speed, all-encompassing force management and resource calculations of a VGI can more often than not overwhelm the subtler but slower finesse of the human mind within the confines of the game. Those who wonder when humans will go to war using autonomous computers fail to see the virtual test-bed in which we already have, billions of times.
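Those governors are easy to picture in code. In this hypothetical sketch the AI always knows the optimal move; the difficulty setting merely adds reaction delay and a chance to blunder on purpose so the human stays competitive.

```python
import random

def governed_reaction(true_best_move, difficulty):
    """Sketch of a VGI governor: compute the best move instantly,
    then degrade it according to the difficulty setting."""
    reaction_delay_s = {"easy": 1.2, "normal": 0.6, "hard": 0.2}[difficulty]
    blunder_chance   = {"easy": 0.30, "normal": 0.10, "hard": 0.02}[difficulty]
    move = "random_move" if random.random() < blunder_chance else true_best_move
    return move, reaction_delay_s  # a UCAV would strip both governors

print(governed_reaction("break_left_and_extend", "easy"))
```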

This Ain’t Galaga

No extra lives, and forget about memorizing the level’s flight patterns.

The uninitiated must understand how VGI has progressed by leaps and bounds from the pre-programmed paths of games such as Galaga, the early-1980s arcade shooter; computer opponents now hunt, take cover, maneuver evasively, and change tactics based on opportunities or a sudden state of peril. The 2000s Half-Life and HALO game series were especially lauded for their revolutions in AI, creating opponents that seemed rational and adapted to a player’s tactics. For the particular case of UCAV air-to-air engagements, flight combat simulators are innumerable, from Fighter Pilot on the Commodore 64 in 1984 to the Ace Combat series. Computers have been executing pursuit curves, displacement rolls, and defensive spirals against their human opponents since before I was born.
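A pursuit curve, for instance, reduces to a few lines: at each step, the pursuer turns toward the target’s present position, limited by a maximum turn rate standing in for a g-limit. The 2-D geometry and units below are made up purely to show the mechanic.

```python
import math

def pursuit_step(px, py, heading, tx, ty, speed=1.0, max_turn=math.radians(10)):
    """One step of a pure-pursuit curve: turn toward the target's
    current position (up to max_turn) and advance."""
    desired = math.atan2(ty - py, tx - px)
    error = (desired - heading + math.pi) % (2 * math.pi) - math.pi
    heading += max(-max_turn, min(max_turn, error))
    return px + speed * math.cos(heading), py + speed * math.sin(heading), heading

# Chase a target crossing ahead of the pursuer
px, py, hdg = 0.0, 0.0, 0.0
for t in range(30):
    tx, ty = 20.0, -10.0 + 0.8 * t  # target's straight-line track
    px, py, hdg = pursuit_step(px, py, hdg, tx, ty)
print(round(px, 1), round(py, 1))  # pursuer ends up closing on the target
```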

However, despite its utility, VGI is still augmented with many “illusions” of intelligence, mere pre-planned responses (PPR); the real prize is a true problem-solving VGI to drive a UCAV, and that requires special programming and far more processing power. In a real UCAV, the VGI would be installed in a suite far more advanced than a single Intel Core i7 or an Xbox. To initiate a learning, adapting, problem-solving tactical computer, the DARPA SyNAPSE program offers new possibilities, especially when short-term analog reasoning is coordinated with messier evolutionary algorithms. Eventually, as different programs learn and succeed, they can be downloaded to replace the lesser adaptations on other UCAVs.
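To make “messier evolutionary algorithms” concrete, the sketch below evolves a two-parameter dogfight “tactic” against a stand-in fitness function. Every name and number is invented; in practice the score would come from thousands of simulated engagements, and the winning parameters would be the ones downloaded to the rest of the fleet.

```python
import random

def dogfight_score(tactic):
    """Stand-in fitness function that rewards an arbitrary sweet spot;
    a real one would fly the tactic in many simulated engagements."""
    aggression, break_range = tactic
    return -(aggression - 0.7) ** 2 - (break_range - 2.0) ** 2

def evolve(generations=50, pop_size=20, mutation=0.1):
    """Minimal evolutionary loop: keep the best tactics, mutate them,
    and let the improved set replace the lesser adaptations."""
    pop = [(random.random(), random.uniform(0, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=dogfight_score, reverse=True)
        survivors = pop[: pop_size // 4]
        pop = [(a + random.gauss(0, mutation), b + random.gauss(0, mutation))
               for a, b in survivors for _ in range(4)]
    return max(pop, key=dogfight_score)

print(evolve())  # converges near aggression=0.7, break_range=2.0
```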

I’ve Got the Need, The Need For Speed

Unlike Maverick, drones will never have to go through motorcycle safety training.

When pilots assert that they are more intuitive than computer programs, they are right; this is, however, like saying an amateur huntsman with an AR-15 is less trained than an Austrian arquebusier. The advantage lies not in the quality of tactical thought, but in the rate of fire of problem solving and the speed of physical action. A VGI executing air-to-air tactics in a UCAV can run the OODA loop across the whole of its inputs much faster than the human mind, even where humans may be faster or more intuitive in solving particular, focused problems thanks to creativity and intuition. Even with the new advanced HUD systems in their helmets, a human being cannot integrate input from all sensors at a single instant (let alone control other drones). Human pilots are also limited in their physical ability to maneuver: G-suits exist because our 4th- and 5th-generation fighters have abilities far in excess of what the human body can withstand, and aircraft tactical performance is artificially lowered to prevent the death or severe injury of the pilot inside.

Pinball Wizard: I Can’t See!

VGI doesn’t have a problem with the how; it’s the who that will be the greatest challenge when the lessons of VGI are integrated into a UCAV. In a video game, the VGI is blessed with instant recognition; its enemy is automatically identified when units are revealed, and their typology is provided instantly to both human and VGI. A UCAV unable to differentiate between radar contacts or identify units via its sensors is at a disadvantage against its human comrades and enemies. Humans still dominate the field of integrating immediate, quality analysis with ISR inside the skull’s own OODA loop. Even during the landing sequence, the UCAV cheated in a way, being fed positional data from the carrier.

We’ve passed the tutorial level of unmanned warfare; we’ve created the unmanned platforms capable of navigating the skies and a vast array of programs designed to drive tactical problems against human opponents. Before we pat ourselves on the back, we need to effectively integrate those capabilities into an independent platform.

Matt Hipple is a surface warfare officer in the U.S. Navy.  The opinions and views expressed in this post are his alone and are presented in his personal capacity.  They do not necessarily represent the views of U.S. Department of Defense or the U.S. Navy.

Wi-Vi: People Radar for Boarding Teams?

In war, we often take for granted the vast array of systems designed to detect the enemy. From the phased array on a U.S. Navy DDG to the infrared scope of a soldier, locating the enemy is the first step in gaining a firing solution or determining one’s peril. There is one place, however, where this technology has been notably absent: indoors. Detection of people indoors is often no more advanced than listening for sound or using a mirror on a stick (which can itself be seen). At the highest end we’ve seen thermal imaging or advanced optics combined with discreet robotics: conceptually, a mirror on a stick mounted on a robot. At MIT, Mr. Fadel Adib and Professor Dina Katabi have developed a potential new weapon for those in the Close Quarters Battle (CQB) environment: Wi-Vi, an affordable and portable system by which a simple WiFi device can detect motion through walls.

Wi-Vi signal return showing three distinct sources of movement.

While more expensive, unwieldy systems do exist, Wi-Vi is remarkable for its affordability and portability. With little more than a typical wireless router, one has the makings of a rudimentary people radar. Using an encrypted WiFi signal to differentiate its 2.4 GHz transmission from white noise, multiple signals are fired into a room, reflected back, and processed. When nothing is moving, the signal is zeroed out; when an object moves, the signal changes. Each moving object produces a separate, discernible change in the return, allowing the system to detect multiple objects or people. USCG and USN boarding teams would find a tactical, deployable version of this system particularly useful.
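That zeroing-out step is essentially background subtraction, which a short sketch with synthetic numbers can illustrate (no real RF processing is attempted here):

```python
# Toy moving-target indication in the spirit of Wi-Vi: subtract the
# static return so only movers remain. All numbers are synthetic.
baseline = [0.80, 0.80, 0.80, 0.80, 0.80]  # room with nothing moving

frames = [
    [0.80, 0.80, 0.80, 0.80, 0.80],  # still room: residue zeroes out
    [0.80, 0.95, 0.80, 0.80, 0.62],  # two separate movers shift the return
]
for frame in frames:
    residue = [round(s - b, 2) for s, b in zip(frame, baseline)]
    movers = sum(1 for r in residue if abs(r) > 0.05)
    print(residue, "->", movers, "moving object(s)")
```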

The ability to detect possible human movement in holds, around hatches, or even in CONEX boxes would be a boon to boarding teams. Tactical movement indoors is often the most dangerous; movement is limited to a small number of paths that an opponent can easily monitor. This applies especially to ships, where rooms and passageways are tightly constrained. With a tactical version of Wi-Vi, boarding teams could detect movement and the number of personnel in a room before entering. Wi-Vi could also potentially detect movement within a certain distance in large cargo holds, or eventually check CONEX boxes for potential victims of human trafficking as they move inside.

Penetration is the major challenge for shipboard use; although Wi-Vi has been tested on 8″ concrete, terrible shipboard cellphone reception has made Navy and Coast Guard personnel aware of the basic problems of signal propagation. Cellphones operate anywhere from a half GHz to 2 GHz, and couldn’t receive a signal inside my patrol craft if life depended on it. The Wi-Vi system operates at 2.4 GHz, even less penetrating than those cellphone signals. Upon inquiry, Mr. Adib elaborated, “The walls with which we tested (i.e. concrete and hollow walls) have metal support; specifically, they are supported by steel frames. Naturally, most walls have metal support, and this is not a problem for the operation of Wi-Vi. However, the device does not work if the wall is fully covered with a metal sheet.” At this stage of development, then, a tactical version of the device would be best suited for wooden dhows, fiberglass fishing boats, or berthing areas with mostly false bulkheads in large commercial vessels.

It is also worth noting the identification limitations of this technology. Wi-Vi can show the number and relative movement of objects in motion in a space, but neither their specific locations nor the presence of immobile objects. “Secure for sea” could well be the enemy of Wi-Vi aboard ships. So could complacency: teams untrained on the device might assume a “clear” reading on Wi-Vi means a room is empty, as opposed to containing a very still and patient gunman.


Wi-Vi is an exciting technology for those engaged in the CQB environment; our Marine Corps brethren may make use of the tech sooner, given the less metallic nature of most urban walls. Wi-Vi may be deployable for hunting stowaways on a commercial vessel or trafficked humans behind false bulkheads on dhows. With further development of lower-frequency devices, Wi-Vi might become usable for CONEX boxes and the lighter-metal areas of ships. The ability to deploy relatively cheap, lightweight human-detection systems to the field could make this novel MIT project the first snowflake in the avalanche of tactical gear to come.

Matt Hipple is a surface warfare officer in the U.S. Navy.  The opinions and views expressed in this post are his alone and are presented in his personal capacity.  They do not necessarily represent the views of U.S. Department of Defense or the U.S. Navy.

h/t to Scott for sending the article.