
Leading the Blind: Teaching UCAV to See

In “A Scandal in Bohemia”, Sherlock Holmes laments, “You [Watson] see, but you do not observe. The distinction is clear.” Such is the current lament of America’s fleet of UCAVs, UGVs, and other assorted U_Vs: they can neither conceive of nor recognize the world around them. To pass from remote drones living on the edges of combat to automated systems at the front, drones must cross the Rubicon of recognition.

To See

Still can't see a thing. Help!

The UCAV is the best place to start, as the skies are the cleanest canvas upon which drones could cast their prying eyes. As with any surveillance system, the best ones are multi-faceted. Humans use their five senses and a good portion of deduction. Touch is a bit too close for a UCAV, smell and hearing would be both useless and uncomfortable at high speed, and taste would be awkward. Without that creative deductive spark, drones will need a bit more than a Mk 1 Eyeball. Beyond a basic radar picture, good examples of how a drone might literally “see” are the layered optics of the ENVG (Enhanced Night Vision Goggle) and the sensors of the RLS (Artillery Rocket Launch Spotter).

Operators of typical optical systems switch between different modes to understand a picture. The camera system of a USN Mk 38 Mod 2 25mm Bushmaster includes an Electro-Optical System (EOS), Forward-Looking Infrared (FLIR), and a laser range-finder. While a Mod 2 operator switches between the EOS and FLIR, the ENVG combines both modes to create an NVG that is difficult to blind. For a drone, digital combination isn’t even necessary; a computer can perceive all inputs at one time. Optical systems can also be mounted at multiple locations on the UCAV to aid in creating a 3D composite of the contact being viewed. Using an array of both EOS and FLIR systems simultaneously could allow drones to “see” targets in more varied and specific aspects than the human eye.
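To make the “all inputs at once” point concrete, here is a minimal Python sketch of the idea: detections from separate EOS and FLIR channels are grouped into a single track when they point down nearly the same bearing and elevation. The Detection class, thresholds, and sensor names are invented for illustration, not drawn from any real fire-control interface.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str       # "EOS", "FLIR", "RADAR" -- illustrative labels
    bearing: float    # degrees
    elevation: float  # degrees
    confidence: float

def fuse(detections, max_sep_deg=0.5):
    """Group detections from different sensors that point at the same angle."""
    tracks = []
    for det in detections:
        for track in tracks:
            ref = track[0]
            if (abs(det.bearing - ref.bearing) < max_sep_deg
                    and abs(det.elevation - ref.elevation) < max_sep_deg):
                track.append(det)
                break
        else:
            tracks.append([det])
    return tracks

# One contact confirmed by both modes simultaneously, one seen by EOS alone.
for track in fuse([Detection("EOS", 45.2, 3.1, 0.7),
                   Detection("FLIR", 45.3, 3.0, 0.9),
                   Detection("EOS", 120.0, 1.0, 0.4)]):
    print([d.sensor for d in track])
```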

For the deployment of these sensors, the RLS is a good example of how sensors can “pass” targets to one another. In RLS, after target data is collected by audio and IR sensors, flagged threats are passed to the higher-grade FLIR for further designation and a potential fire-control solution. A UCAV outfitted with multiple camera systems could, in coordination with radar, pass detected targets within certain parameters “up” to better sensors. Targets viewed only in wide-angle scans (such as stealth aircraft with no radar return) could be passed “down” to the radar for further scrutiny along that bearing. A UCAV must be given a suite of sensors intended not merely to serve a remote human operator, but to serve the UCAV itself, taking advantage of a computer’s broad, simultaneous access to every input.
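A hedged sketch of that cueing logic might look like the following, where a cheap wide-angle pass flags candidates and hands the promising ones “up” to a hypothetical FLIR, while sending lower-scoring bearings “down” for directed radar scrutiny. All function names and the threshold are assumptions.

```python
# Wide-angle sweep returns (bearing, score) candidates; values invented.
def wide_scan():
    return [(45.2, 0.81), (120.0, 0.20), (310.5, 0.65)]

def cue_flir(bearing):
    print(f"FLIR slewing to {bearing:.1f} deg for classification")

def cue_radar(bearing):
    print(f"Directed radar scrutiny along {bearing:.1f} deg")

CUE_THRESHOLD = 0.6  # illustrative confidence cutoff

for bearing, score in wide_scan():
    if score >= CUE_THRESHOLD:
        cue_flir(bearing)   # pass "up" to the higher-grade optic
    else:
        cue_radar(bearing)  # pass "down" for radar confirmation on that bearing
```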

And Observe

In-game models for real-life comparison.

However, this vast suite of ISR equipment still leaves a UCAV high-and-dry when it comes to target identification. Another officer suggested to me that, “for a computer to identify an air target, it has to have an infinite number of pictures of every angle and possibility.” With 3-D rendered models of the aircraft of interest, a UCAV could have that infinite supply of pictures, with varying sets of weapons and angles of light. If a UCAV can identify a contact’s course and speed, it can shrink that “range” of comparison to other aircraft or missiles by orienting the contact’s shape, and all comparative models, along that true-motion axis. Whereas facial-recognition software must build its models from front-on pictures, we already have the specifications for most if not all of the world’s aircraft. Just as typing “Leading” into a search bar while hunting for this article eliminates every return without that word, a UCAV looking at a Boeing 747 could eliminate all fighter aircraft at a stroke. 3-D modeled comparisons, sharpened by target-angle perspective, could identify an airborne contact from any angle.
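The “search-bar elimination” idea reduces to a simple database prune. In this toy example (with invented performance figures), a contact’s observed speed alone strikes the Boeing 747 from the comparison set before any expensive 3-D silhouette matching begins.

```python
# Invented performance figures; a real database would carry full 3-D models.
AIRCRAFT_DB = {
    "B747":  {"type": "airliner", "engines": 4, "max_kts": 510},
    "F-16":  {"type": "fighter",  "engines": 1, "max_kts": 1150},
    "Su-27": {"type": "fighter",  "engines": 2, "max_kts": 1380},
}

def prune(observed_speed_kts):
    """Eliminate any model whose top speed the contact has already exceeded."""
    return [name for name, spec in AIRCRAFT_DB.items()
            if spec["max_kts"] >= observed_speed_kts]

# A contact doing 600 kts cannot be the 747; only the fighters remain
# for fine-grained 3-D silhouette comparison.
print(prune(600))  # ['F-16', 'Su-27']
```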

A UCAV also need not positively identify every single airborne target. A UCAV could be loaded with a set of parameters as well as a database limited to the aircraft of concern in the operating area. AEGIS flags threats by speed, trajectory, and other factors; so too could a UCAV gauge its interest in a contact based on target angle and speed relative to the Carrier Strike Group (CSG). Loading every conceivable aircraft into an onboard database makes as much sense as training a pilot to recognize the make and model of every commercial aircraft on the planet. Instead, a set of parameters defining “non-military” behavior could be loaded into a UCAV along with specific models of regional aircraft-of-interest. The end-around of strapping external weapons to commercial aircraft, or of using those aircraft as weapons themselves, could be defeated by the previously noted course/speed parameters, as well as a database of weapon models.
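As a rough illustration of parameter-based triage, the sketch below scores a contact’s interest level from its speed, target angle, and range to the CSG; anything above a threshold gets investigated, everything else is merely monitored. The weights and thresholds are arbitrary stand-ins, not AEGIS doctrine.

```python
import math

def interest_score(speed_kts, target_angle_deg, range_nm):
    """Crude triage: fast, inbound, and close scores high. Weights invented."""
    closing = max(0.0, math.cos(math.radians(target_angle_deg)))  # 1.0 = nose-on
    speed_factor = min(speed_kts / 600.0, 1.0)
    proximity = max(0.0, 1.0 - range_nm / 200.0)
    return closing * speed_factor * proximity

# (speed kts, target angle deg, range nm) relative to the CSG
for contact in [(450, 10, 80), (250, 150, 120)]:
    score = interest_score(*contact)
    print(contact, "INVESTIGATE" if score > 0.3 else "monitor", round(score, 2))
```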

Breaking Open the Black Box

The musings of an intrigued amateur will not solve these problems; our purpose here is to break open the black box of drone operations and start thinking about our next step. We take for granted the remote connections that allow our unmanned operations abroad, yet they leave a hideously soft underbelly: our drones can be compromised, destroyed, or surveilled at the slightest resistance. Success isn’t as simple as building the airframe and programming it to fly. For a truly successful UCAV, autonomy must be a central goal. A whole bevy of internal processes must be mastered, in particular the ability of the UCAV to conceive of and understand the world around it. The more we parse out the problem, the more ideas we may provide to those who can execute them. I’m often told that, “if they could do this, they would have done it”… but there’s always a first time.

Matt Hipple is a surface warfare officer in the U.S. Navy. The opinions and views expressed in this post are his alone and are presented in his personal capacity. They do not necessarily represent the views of the U.S. Department of Defense or the U.S. Navy.

Video Game AI and the Future UCAV Top Gun

My brother in flight school should be glad we played so much Ace Combat 4.
Alright, Roomba, now start sweeping for enemy units.

A Roomba is useful because it can sweep up regular messes without constant intervention, not because it can exit and enter its docking station independently. Although the Navy’s new X-47B Unmanned Combat Air Vehicle (UCAV) has, by landing on a carrier, executed an astounding feat even for humans, this ability only means our weapons have matured past their typical one-way trips. The real challenge will be getting a UCAV to defend units while sweeping up the enemy without remote guidance (i.e. autonomously). The answer is as close as the games running on your Xbox console.

Player One: Insert Coin

Simulated fighters are UCAVs having an out-of-body experience.

Considering the challenge of how an air-to-air UCAV might be programmed, recall that multiple generations of America’s youth have already fought untold legions of advanced UCAVs. Developers have created artificial “intelligences” designed to combat a human opponent in operational and tactical scenarios with imperfect information; video games have paved the way for unmanned tactical computers.

A loose application of video game intelligence (VGI) would work because VGI is designed to operate in the same constrained informational environment in which a real-life UCAV would operate. Good (i.e. fun) video game AI exists under the same fog-of-war constraints as its human opponents; the same radar, visual cues, and alerts are provided to both computer and human players, and the tools to lift that veil are the same for each. Often, difficulty levels in video games are based not just on the durability and damage of an enemy, but on the governors programmers install on a VGI to keep the competition fair for a human opponent. This is especially evident in Real-Time Strategy (RTS) games, where the light-speed, all-encompassing force management and resource calculations of a VGI can more often than not overwhelm the subtler but slower finesse of the human mind within the confines of the game. Those who wonder when humans will go to war using autonomous computers fail to see the virtual test-bed in which we already have, billions of times.
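The “governor” is easy to picture in code. In this sketch, the same decision logic is throttled to a human-fair action rate on an easy difficulty and left effectively unthrottled for a UCAV; all numbers are invented.

```python
class GovernedAI:
    """Same brain, different leash: only the permitted action rate changes."""
    def __init__(self, max_actions_per_sec):
        self.min_interval = 1.0 / max_actions_per_sec
        self.last_action = float("-inf")

    def try_act(self, now):
        if now - self.last_action >= self.min_interval:
            self.last_action = now
            return True    # the AI decides and acts this tick
        return False       # the governor holds it back

easy = GovernedAI(max_actions_per_sec=2)    # human-fair difficulty setting
ucav = GovernedAI(max_actions_per_sec=200)  # governor effectively removed

ticks = [i * 0.01 for i in range(100)]      # one simulated second
print(sum(easy.try_act(t) for t in ticks), "actions (governed)")
print(sum(ucav.try_act(t) for t in ticks), "actions (ungoverned)")
```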

This Ain’t Galaga

No extra lives, and forget about memorizing the level’s flight patterns.

The uninitiated must understand how VGI has progressed by leaps and bounds from the pre-programmed paths of games such as the early-1980s arcade shooter Galaga; computer opponents now hunt, take cover, maneuver evasively, and change tactics based on opportunities or sudden states of peril. The 2000s Half-Life and HALO game series were especially lauded for their revolutions in AI, creating opponents that seemed rational and adapted to a player’s tactics. For the particular case of UCAV air-to-air engagements, combat flight simulators are innumerable, from Fighter Pilot on the Commodore 64 in 1984 to the Ace Combat series. Computers have been executing pursuit curves, displacement rolls, and defensive spirals against their human opponents since before I was born.
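A pursuit curve, the bread and butter of those early flight-sim opponents, takes only a few lines: steer the interceptor’s heading toward the target’s present position, subject to a turn-rate cap. The values below are illustrative.

```python
import math

def pursue(px, py, heading_deg, tx, ty, max_turn_dps, speed, dt):
    """One step of pure pursuit: turn toward the target, then fly forward."""
    desired = math.degrees(math.atan2(ty - py, tx - px))
    error = (desired - heading_deg + 180) % 360 - 180     # shortest-way turn
    turn = max(-max_turn_dps * dt, min(max_turn_dps * dt, error))
    heading = heading_deg + turn
    px += speed * dt * math.cos(math.radians(heading))
    py += speed * dt * math.sin(math.radians(heading))
    return px, py, heading

x, y, hdg = 0.0, 0.0, 90.0   # interceptor state; target fixed at (1000, 200)
for _ in range(5):
    x, y, hdg = pursue(x, y, hdg, 1000.0, 200.0, 30.0, 250.0, 0.1)
    print(f"pos=({x:7.1f}, {y:6.1f}) heading={hdg:6.1f}")
```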

However, despite its utility, VGI is still augmented with many “illusions” of intelligence, mere pre-planned responses (PPR); the real prize is a true problem-solving VGI to drive a UCAV. That requires special programming and far more processing power. In a real UCAV, this VGI would be installed in a suite far more advanced than a single Intel Core i7 or an Xbox. To initiate a learning, adapting, problem-solving tactical computer, the DARPA SyNAPSE program offers new possibilities, especially when short-term analog reasoning is coordinated with messier evolutionary algorithms. Eventually, as different programs learn and succeed, they can be downloaded to replace the lesser adaptations on other UCAVs.
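The “learn, then copy the winner” loop imagined here is essentially an evolutionary algorithm. This sketch evolves a two-parameter tactical “genome” against a stand-in fitness function (a real one would be scored engagements in a simulator) and pushes the best performer to the rest of the fleet; everything is invented for illustration.

```python
import random

def fitness(genome):
    """Stand-in for scored engagements in a simulator; peak at (0.7, 0.4)."""
    aggression, evasion = genome
    return -(aggression - 0.7) ** 2 - (evasion - 0.4) ** 2

population = [(random.random(), random.random()) for _ in range(20)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                      # keep the five best tacticians
    children = [(min(1, max(0, a + random.gauss(0, 0.05))),
                 min(1, max(0, e + random.gauss(0, 0.05))))
                for a, e in random.choices(parents, k=15)]
    population = parents + children

best = max(population, key=fitness)
print("genome pushed to the rest of the fleet:", [round(g, 2) for g in best])
```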

I’ve Got the Need, The Need For Speed

Unlike Maverick, drones will never have to go through motorcycle safety training.

When pilots assert that they are more intuitive than computer programs, they are right; this is, however, like saying the amateur huntsman with an AR-15 is less trained than an Austrian arquebusier. The advantage is not in the quality of tactical thought, but in the rate of fire of problem solving and the speed of physical action. A VGI executing air-to-air tactics in a UCAV can run the OODA loop across the whole of its inputs much faster than the human mind, even if humans remain faster at particular, focused problems that demand creativity and intuition. Even with the new advanced HUD systems in their helmets, human beings cannot integrate input from all sensors at a single instant in time (let alone control other drones). Human pilots are also limited in their physical ability to maneuver. G-suits exist because our 4th- and 5th-generation fighters have abilities far in excess of what the human body can withstand, artificially lowering aircraft tactical performance to prevent the death or severe injury of the pilot inside.

Pinball Wizard: I Can’t See!

VGI doesn’t have a problem with the how; it’s the who that will be the greatest challenge when the lessons of VGI are integrated into a UCAV. In a video game, the VGI is blessed with instant recognition; its enemy is automatically identified when units are revealed, and their typology is provided instantly to both human and VGI. A UCAV unable to differentiate between radar contacts or identify units via its sensors is at a disadvantage to its human comrades and enemies. Humans still dominate the field of integrating immediate, quality analysis with ISR inside the skull’s own OODA loop. Even during the landing sequence, the X-47B cheated in a way, being fed positional data from the carrier.

We’ve passed the tutorial level of unmanned warfare; we’ve created unmanned platforms capable of navigating the skies and a vast array of programs designed to work tactical problems against human opponents. Before we pat ourselves on the back, we need to effectively integrate those capabilities into an independent platform.

Matt Hipple is a surface warfare officer in the U.S. Navy. The opinions and views expressed in this post are his alone and are presented in his personal capacity. They do not necessarily represent the views of the U.S. Department of Defense or the U.S. Navy.

Thinking Weapons are Closer Than We Think

This piece also at USNI News.

The Defense Advanced Research Projects Agency (DARPA) has constructed a neuromorphic device, one modeled on the functioning structure of a mammalian brain, out of artificial materials. DARPA’s project, SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics), signals a new level for biomimicry in engineering. The project team included IBM, HRL, and their subcontracted universities.

Biomimicry is not new. The most recent example is the undulating “robojelly” developed by the University of Texas at Dallas and Virginia Tech. This new drone swims through the sea like a jellyfish, collecting energy from the oxygen in the water, as does any breathing organism. There is also the graceful Festo SmartBird, an aerial drone that mimics the shape and physical flight of birds; a knockoff was found crashed in Pakistan. If not the shape, at least the actions are often mimicked, as shown by UPenn’s quadrotors being programmed to grip with claws like predatory birds rather than like construction cranes. However, these examples of biomimicry only cover the external actions of an animal. SyNAPSE goes deeper, building a synthetic version of the mind that develops those actions.

In the quest for autonomous machines, the suggestions have been either-or: machines programmed to act like brains, or the integration of biological processors to provide that flexibility. DARPA has found a middle path, constructing a series of synthetic synapses out of nano-scale wire: the device takes the physical form of those biological processors but is built from the base materials of conventional computers. According to James Gimzewski at UCLA, the device manages information through self-organization, a key trait of autonomous action and learning: “Rather than move information from memory to processor, like conventional computers, this device processes information in a totally new way.” Moving past the surface mimicking of physical shape and function, SyNAPSE mimics a living organism’s basic way of processing information.
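The details of DARPA’s device are beyond this post, but the flavor of “self-organizing” computation can be shown with a classic Hebbian rule: a synthetic synapse strengthens whenever its input and output fire together, so memory and processing live in the same element rather than shuttling between RAM and CPU. This is a loose analogy, not SyNAPSE’s actual mechanism; every value below is invented.

```python
import random

weights = [0.1] * 4          # four synthetic synapses
pattern = [1, 0, 1, 1]       # a stimulus that recurs in the device's "world"

for _ in range(200):
    # 95% of the time the recurring pattern appears, otherwise random noise
    inputs = pattern if random.random() < 0.95 else [random.randint(0, 1) for _ in range(4)]
    fired = sum(w * x for w, x in zip(weights, inputs)) > 0.25
    # Hebb's rule with decay: synapses active during firing grow stronger
    weights = [min(1.0, w + (0.05 * x if fired else 0.0) - 0.005 * w)
               for w, x in zip(weights, inputs)]

# The weight vector itself has organized around the recurring stimulus:
# strong on the pattern's active lines, weak on the silent one.
print([round(w, 2) for w in weights])
```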

However, as the possibility of real autonomy approaches, the legal challenge becomes more urgent. An article in Defense News summarizes the catalogue of problems quite well: accidental breaches of airspace or territorial waters, breaches of navigational rules, even accidental deaths, all caused by machines with no direct operator to hold responsible. Yet as the Director of Naval Intelligence, Vice Admiral Kendall Card, noted, “Unmanned systems are not a luxury; they are absolutely imperative to the future of our Navy.” As with the CIA’s armed Predator program, someone will eventually open Pandora’s box and take responsibility for their new machines to gain the operational edge. DARPA’s SyNAPSE project is the next step toward that autonomous reality.

A DARPA scale of the make-up of a neuromorphic circuit and their biological equivalents.

Matt Hipple is a surface warfare officer in the U.S. Navy. The opinions and views expressed in this post are his alone and are presented in his personal capacity. They do not necessarily represent the views of the U.S. Department of Defense or the U.S. Navy.