Tag Archives: drones

MFP 4: Emerging Technology and Naval Warfare

What emerging technology is going to most profoundly change the way naval warfare is conducted, and why?

This is the fourth in our series of posts from our Maritime Futures Project. For more information on the contributors, click here. Note: The opinions and views expressed in these posts are those of the authors alone and are presented in their personal capacity. They do not necessarily represent the views of their parent institutions, the U.S. Department of Defense, the U.S. Navy, any other U.S. government agency, or any foreign government.

Unmanned aviation made many advances in 2012… but will it radically change naval warfare?

CDR Chris Rawley, USNR:

Most of CIMSEC’s readers are familiar with Moore’s Law as it relates to integrated circuits increasing in power while falling in cost. Some may have also heard of Kryder’s Law, which deals with shrinking costs for magnetic memory. Other related concepts include Koomey’s Law, which says that the energy required for a fixed computing load continues to fall, and the Shannon-Hartley theorem, which bounds data-transmission rates over a given channel. The trends these laws describe have produced increased capability and falling prices for commercial and consumer tools reliant on computing power. It’s a given that military hardware is also becoming more high tech and miniaturized. So why does the cost of military technology continue to skyrocket? There are a number of reasons for this dichotomy, the primary one being the U.S. military’s unresponsive and byzantine joint acquisition systems. Those problems aside, the Navy (and DoD) needs to figure out how to leverage these laws of technology to reduce cost inflation in new military hardware. One way to do this is with smaller, more numerous, and cheaper systems – many of them unmanned – which can operate distributed over large geographic areas. At Information Dissemination, I frequently discuss a concept for future naval warfare called distributed maritime operations (DMO). DMO as I see it will use highly distributed, highly connected – but independently commanded – small-footprint fighting elements. In the same way that special operations forces have used similar concepts to fight a global terrorist threat, I believe DMO will allow small naval forces to work together in a variety of scenarios to produce outsized combat effects.
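Of the laws Rawley lists, the Shannon-Hartley theorem is the easiest to make concrete. Below is a minimal sketch of the capacity ceiling it places on any comms link; the 25 kHz bandwidth and the signal-to-noise ratios are made-up figures for illustration, not parameters of any actual naval data link.

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley channel capacity: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative (made-up) link parameters: a 25 kHz channel at two signal-to-noise ratios.
for snr_db in (10, 30):
    snr_linear = 10 ** (snr_db / 10)              # convert dB to a linear power ratio
    capacity = shannon_capacity_bps(25e3, snr_linear)
    print(f"SNR {snr_db} dB -> at most ~{capacity / 1e3:.0f} kbit/s")
```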

LT Drew Hamblen, USN:

Anti-ship ballistic missiles and the implications of Unmanned Aerial System (UAS) proliferation will shake up carrier battle groups – specifically the ability of UASs to numerically overwhelm manned assets. How will a carrier air wing confront 3 air wings’ worth of unmanned aircraft that have twice the on-station time and no pilot-fatigue limitations?

Marc Handelman, WA, U.S.:

– Naval drones (Surface, Sub-surface, Aerial)
– Power-projection exploitation capabilities (battlespace control, sustainment, and attack via drones)
– Tiny sensors known as MEMS (microelectromechanical systems) devices, such as those in DARPA’s SmartDust project, to facilitate ISR exploitation and communication.
– The ONR-funded Sea Jet Advanced Electric Ship (obvious efficiencies in power management, logistics, acoustic signature reduction, et cetera)

Felix Seidler, seidlers-sicherheitspolitik.net, Germany:

Cyber warfare is going to change things soon. The world’s best warships are worth nothing if the IT systems supporting command, control, communications, intelligence, etc. are offline. Hence, navies will have to pay greater attention to safeguarding their IT. For example, malware intrusions into the targeting and control software for sea-launched missiles could cause them not only to miss their targets, but to be redirected to strike their ship of origin instead. For the present and the future, the joint forces approach must also include a nation’s cyber warriors.

YN2(SW) Michael George, USN:

As we are still in the early ages of the internet and wireless technology, I believe both will play an increasingly important role in our country’s defense.

Sebastian Bruns, Fellow, Institute for Security Policy, University of Kiel, Germany:

I think cyber warfare, although more of a concept than a technology, provides the basis for the most profound change in naval warfare. The concept is diffuse, difficult to understand, and impossible to directly feel (cue Donald Rumsfeld’s “known knowns, known unknowns, and unknown unknowns”). In fact, cyber warfare’s challenges, opportunities, and limitations have not been fully grasped. If cyber is understood as a domain, I would compare our current state of mind (and understanding of the subject matter) to the early-1910s perspective on air power: there has not been a full-fledged cyber war, much as there had not been an appreciation of air power until World War I. At the same time, the generation of sailors and flag officers currently rising through the ranks has already been sensitized (largely by growing up with cyber technology) to the subject matter; air power and space power did not provide a comparable perspective. It seems logical to quickly adopt cyber warfare concepts and embrace them as part of institutional and individual, strategic and tactical learning.

Rex Buddenberg, Naval Postgraduate School:

Before projecting forward, it may help to look back an equivalent amount of time to see what technologies changed maritime business (warfare included) in the past half-century – essentially since WWII. Some of these technologies, like radars and fathometers, are gadgets. Others are information systems: radionav systems like Loran, GPS, and differential GPS; AIS and its work-alikes, including USMER, AMVER, and MOVREP; and those built around OTH-Gold and Link 14/11.

Still other technologies constitute the potential components of information systems, chiefly communications. The maritime VHF system has revolutionized the SAR business in the USCG in our lifetimes. And integration with accurate navigation has revolutionized it further. For instance, when I was stationed on the Oregon coast, a distressed mariner could give us a pair of Loran TDs (time-difference readings) and a fathometer reading (essentially as a checksum) and we could fly a helo right to him … regularly. This phenomenon has attracted the term ‘maritime domain awareness’ (MDA), albeit without a decent, usable definition. Now look ahead a bit…

Can I get these in tablet form?

Gadgets: The march of new gadgets will, of course, proceed. The change here is that gadgets will increasingly export their data rather than only provide a local display. To do that, the gadget will have an internet interface (like webcams). Example: remember PDAs … like Palm Pilots? They had no comms ability to speak of, other than a serial line to sync with a local computer. But once the PDA functionality was integrated with the cellphone infrastructure, PDAs morphed into smartphones. I’ve got a PDA … it’s sitting up on a high shelf.

Systems: The implementation of new systems will also proceed. But there is a sea change in the offing, one that has already occurred elsewhere and is about to occur here: integration and interoperability. Most of the systems above are stovepipes. The chief characteristic of a stovepipe is the locking of a single application (e.g. position reporting) to a single comms system (channels 87B and 88B) to yield something like AIS. The comms channels cannot be used for anything else, such as distress or weather comms, and the systems are usually hard to maintain throughout their life-cycle because you can’t form-fit swap in new components without changes cascading through the system. To get a whiff of the future, look in your office or your residence – we have ‘internet plumbing’ which is application-agnostic. It supports a myriad of applications (messaging, video, Scrabble – my wife’s current fixation); the list is long and ever-changing. The appearance of a new application does not require changes in the underlying comms plumbing (see the sketch below). This has partially emerged in the maritime world, but will become ubiquitous, perhaps in the next decade (the technology exists; the problems have to do with infrastructure and mentalities).
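As a toy illustration of Buddenberg’s point (not any real maritime protocol), the sketch below sends three unrelated “applications” – a position report, weather, and a distress message – as self-describing messages over the same UDP socket; adding a fourth application would require no new comms channel. Every message name, field, and address here is hypothetical.

```python
import json
import socket

# One shared "pipe" (a UDP socket) carries many unrelated applications,
# unlike a stovepipe system where one channel is locked to one application.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
DEST = ("127.0.0.1", 9999)   # placeholder address for this sketch

def send(app: str, payload: dict) -> None:
    """Wrap any application's payload in a self-describing envelope."""
    sock.sendto(json.dumps({"app": app, "data": payload}).encode(), DEST)

# Three different "applications" share the same plumbing (all values invented):
send("position_report", {"lat": 44.62, "lon": -124.05, "sog_kts": 12.3})
send("weather", {"wind_kts": 18, "sea_state": 3})
send("distress", {"text": "taking on water", "pos": [44.60, -124.10]})
```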

The telltale here will be the rise of the internet … in this case, the internet’s extension to platforms at sea. We see the harbingers of that now, such as ADNS in the Navy. This is the single biggest enabler of integration of the rest.

The operational effect of the increase and integration of information systems is more intelligent application of industrial capability. In slang, less turning circles in the ocean. And in slogan, we might be able to “take the search out of SAR”.

CDR Chuck Hill, USCG (Ret.):

For the Coast Guard’s operations, in both peace and war, the most important aspect is likely to be processed vessel track information. The ultimate goal would be the ability to track every vessel in the EEZ, identify it, and correlate it with its past history, including the cargoes it has received. Over-the-horizon radar-, satellite-, and AIS (Automatic Identification System)-derived information may eliminate the search in search and rescue (SAR), allow us to know where all the fishing vessels are, and allow us to recognize anomalous voyages that might be smugglers. To do this effectively we need to be able to track small vessels as well as large ones.

In wartime this will also make blockade enforcement more effective, and permit prompt response when vessels are attacked.

Dr. Robert Farley, Professor, University of Kentucky:

The expansion of unmanned vehicles (air, surface, and sub-surface) has the potential to work tremendous changes in how we think about naval warfare. We’re already seeing this in littoral projection, and beginning to see it in ASW (anti-submarine warfare). As navies work through the theoretical implications of unmanned vehicles, they’ll begin to develop platforms capable of taking greatest advantage of the technologies, extending both eyes/ears and reach.

Pew-Pew-Pew!

LCDR Mark Munson, USN:

Earlier this year, Admiral Greenert, the U.S. Navy’s Chief of Naval Operations, declared that payloads are more important than platforms. I’m interested in how this plays out in terms of Intelligence, Surveillance, and Reconnaissance (ISR). Traditionally, the mission of sensors onboard planes, ships, and subs has been subordinated to the operation of those platforms. Is the Navy’s BAMS (Broad Area Maritime Surveillance) UAV going to be just a P-3 without an aircrew onboard, or will it represent a new approach to collecting the information needed to generate actionable intelligence?

It’s been a long time since the U.S. Navy has fought a sustained war at sea, and no one has actual experience in how our current and future sensors need to be used to generate the intelligence required to engage a capable enemy at sea. Unfortunately, the model successfully developed by our counterparts ashore during the last decade assumed a permissive air environment. It allowed lots of UAVs to provide Full Motion Video (FMV) to intel analysts, developing a pattern of life for terrorist targets that could be fused with other data in order to generate actionable targeting data, but this most likely would not apply to a fight at sea against a capable enemy.

Bryan McGrath, Director, Delex Consulting, Studies and Analysis:

Although it is hardly an “emerging” technology, electric drive will profoundly change naval warfare. It will make submarines even quieter than they currently are, and it will serve to reverse the precision-guided munitions (PGM) imbalance with China by enabling future generations of electric weapons.

LT Alan Tweedie, USNR:

Directed energy and rail guns, while requiring massive up-front R&D costs, will produce fantastic combat capability. The ability to have nearly unlimited ammunition without replenishment will make our fleet more capable of conducting sustained operations against enemies.

LT Chris Peters, USN:

I think one of the bigger upcoming changes will come from the installation of rail guns on DDG-1000 and beyond. These could be game-changers in power projection when you combine TLAM (Tomahawk Land Attack Missile)-like range with the cost per round of 5-inch Naval Gunfire Support (NGFS) shells.

LT Scott Cheney-Peters, USNR:

3D printed drone
Drones from desktop 3D printers are quickly becoming reality.

I mentioned the general trend of increasing data integration in MFP 3 – essentially the Navy capitalizing on the spread of what’s possible with the information revolution. On the logistics and design side, we’ve waxed on about the effects 3D printing will have. But as far as actual naval warfare goes, I’m going to have to agree with those thinking about directed energy weapons and rail guns as the most likely to have a nearer-term impact on the tactical level. Both have technical hurdles to overcome, but when they do, they’ll shake up the modern calculus of naval engagements – giving surface vessels a much greater ability to hold their own in a fight, and greatly increasing the potential of drones once components have been sufficiently miniaturized and their energy demands sufficiently reduced to allow their outfit aboard. Bryan McGrath has a good rundown over at Information Dissemination on directed energy and electric weapon systems (DEEWS). Finally, the greatest potential for disruption in naval warfare comes from the use of unmanned systems in myriad combinations that are hard to predict but fascinating to think about – for example, cyber warfare assisted by drones.

LTJG Matt Hipple, USN:

Perhaps Scott Cheney-Peters and I are beating a dead horse here, but 3D printing, in a big way. I know I’m beating an extra-dead horse when I include automation. 3D printing drastically changes the required logistical chain for both ground and naval forces. It changes the way the entire supply system would work, the kinds of people it would employ, and the Navy’s relationship with industry. With an influx of business partners that consider themselves problem “hackers”, the Navy will hopefully get a fresh new perspective on life.

I say automation is the smaller of the big ways because, rather than revolutionizing warfare, it merely ramps up speed and density while decreasing size. Now, my one caveat is that if laser technology becomes sufficiently powerful, fast, and accurate to end missile and aircraft threats at great enough range, we potentially have a game-changer, with the return of naval gunnery and a real emphasis on submarine warfare as the counter.

LT Jake Bebber, USN:

While much will undoubtedly be written about advances in computer network operations, A2/AD systems, and space systems, the most profound impact on naval warfare will come from the navy that best adapts to operating and fighting in a communications-denied environment. When satellites are shot down, when internet communications are blocked, and when radar emissions are masked or jammed, which navy will still be able to pull out the paper charts to get to where it needs to be, fight, and win? So it won’t be an emerging technology that wins the next war. It will be the navy that best adapts to fighting much as we did during World War II, and before.

The Mark II Eyeball

I’m sure I don’t need my peripheral vision…

If you ask Sailors in the U.S. Navy to list the most important sensors for accomplishing their mission, a likely response is the high-tech name for that fundamental, but decidedly low-tech, device – the Mark I Eyeball. In light of recent developments, the Mark II may well be only a short way off.

Earlier this year we reported on the conceptual developments coming out of Google’s Project Glass – Augmented Reality (AR) goggles – and began to address their potential applications for the Navy, specifically for damage control teams and bridge and CIC watchstanders. The benefits of such devices are two-fold. First, they are a way to reduce physical requirements, such as carrying or activating a handheld radio with one’s hand, or walking to a console (and perhaps using hands again) to call up information. Second, as the walk to a console highlights, AR goggles can facilitate quicker data retrieval when time is of the essence, and deliver data more accurately than when shouted by a shipmate.

Yet there are limitations to AR goggles – both practical and technological. 

First, for the time being, AR devices require the same (or more) data exchange capabilities as the handsets they might someday replace.  For now this means WiFi, Bluetooth, interior cellular, or radio.  Unfortunately, as anyone who has enjoyed communicating during fire-fighting or force protection drills can tell you, large, metal, watertight doors are excellent obstructors of transmissions.  While strides have been made over the past decade, range and transmission quality will remain a challenge, especially for roving or expansive interior shipboard watchteams.  While it might make sense to install transmitters in areas such as the bridge or CIC for fixed-use watchstanders, setting up transmitters and relays throughout the ship for coverage for larger watchteams is likely to be a more costly endeavor.

Second, voice recognition is integral to much of the system, yet it is an imprecise interface for highlighting or selecting physical objects encountered in actual reality (“Siri, what is the distance to that sailboat? No, the other one, with the blue hull. Siri, that’s a buoy”). Some of this can be mitigated by auto-designation systems of the type already in use by most radars (“Siri, kill track 223”). However, an auto-designation approach would involve cumbersome doctrine/rule-setting, run the risk of inundating the user with too much data, and still allow a large number of objects to remain outside the system – likely rendering it unmanageable for roving watchstanders.

One solution is letting a Sailor’s eyes do the selecting. This can be achieved by incorporating eye tracking into the AR headsets. Eye tracking is a type of interface that, as the name suggests, allows computers to determine exactly where, and therefore at what, someone’s eyes are looking through the use of miniature cameras and infrared light (for more on the science, read this).
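To make the idea concrete, here is a minimal, hypothetical sketch of gaze-based selection: given the bearing the tracker says the watchstander is looking along and a list of system tracks with bearings, pick the track being looked at. The track list and the angular tolerance are invented purely for illustration.

```python
# Minimal sketch of gaze-based track selection (illustrative data only).
tracks = [
    {"id": 221, "type": "sailboat", "bearing_deg": 47.0},
    {"id": 223, "type": "sailboat", "bearing_deg": 52.5},
    {"id": 230, "type": "buoy",     "bearing_deg": 49.8},
]

def select_track(gaze_bearing_deg: float, max_error_deg: float = 2.0):
    """Return the track closest to where the eyes are pointing, if close enough."""
    def angle_diff(a, b):
        return abs((a - b + 180) % 360 - 180)   # shortest angular distance
    best = min(tracks, key=lambda t: angle_diff(t["bearing_deg"], gaze_bearing_deg))
    return best if angle_diff(best["bearing_deg"], gaze_bearing_deg) <= max_error_deg else None

print(select_track(52.0))   # -> track 223: the sailboat with the blue hull, no dialogue needed
```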

As The Economist reports, the cost of using headset-mounted eye-tracking technology has halved over the past decade, to about $15,000. That’s still a bit pricey. And when the requirements for ruggedizing the eye-tracking headsets for military use and integrating them with AR headsets are taken into account, the price tag is likely to increase to the point of being unaffordable at present. But the fundamental technology is there and operational.

In the Eye of the Sailor

A third AR limitation is that cramming all of the equipment into a headset can make the device pretty unwieldy. Luckily, miniaturization of components is expected to continue apace, and one DARPA project could yield another approach (at the potential loss of an eye-tracking ability). The Soldier Centric Imaging via Computational Cameras (SCENICC) program is funding research to develop contact lenses for both AR and VR displays. As the site describes it, “Instead of oversized virtual reality helmets, digital images are projected onto tiny full-color displays that are very near the eye. These novel contact lenses allow users to focus simultaneously on objects that are close up and far away.” Such contact lenses could be ideal for providing watchstanders a limited heads-up display of the most important data (a ship’s course and speed, for example), but would need an integrated data-receiver.

Beyond AR

Both eye tracking and the DARPA contact lenses point to other potential naval uses.  The Economist describes truckers’ use of eye-tracking to monitor their state of alertness and warn them if they start falling asleep.  If the price of integrated headsets drops enough, watchteam leaders may no longer need to flash a light in the eyes of their team to determine who needs a cup of Joe.  Already, Eye-Com Corporation has designed an “eye-tracking scuba mask for Navy SEALs that detects fatigue, levels of blood oxygen and nitrogen narcosis, a form of inebriation often experienced on deep dives.”

Naval aviation could also get a boost.  A Vermont firm is using eye tracking to help train pilots by monitoring their adherence to checklist procedures.  Pilots flying an aircraft from the cockpit or remotely piloting an unmanned system (aerial or otherwise) from the ground could use eye tracking to control movement or weapons targeting.     

Eye-tracking headsets soon available for jealous girlfriends everywhere.

Likewise, the DARPA lenses point to potential drone use. While the lenses are not designed to provide input to a drone – that would have to be done using a different interface such as a traditional joystick or game controller, until (and if) integrated with eye tracking – VR lenses could provide a more user-friendly, portable way to manually take in the ISR data gathered by the drone (i.e. to see what its camera is seeing). The use of eyeball targeting/selection in naval aviation and naval drones might, however, be surpassed before it is fully developed, by dichotomous forces. On one hand, brain-control interfaces (BCIs) (as demonstrated earlier this year in China) provide more direct control, allowing users essentially to think the aircraft or drone into action. On the other hand, drones will likely become more autonomous, obviating the need for as much direct control (Pete Singer did a good job of breaking down the different drone interface options and levels of autonomy a few years ago in Wired for War).

Yet not everyone is likely to be comfortable plugging equipment into their body. So while BCI technology could similarly work for AR goggles, unless BCI use becomes a military mandate, AR headsets with integrated voice recognition/communication and eye tracking are likely to be the pinnacle goal for shipboard use. There will undoubtedly be delays. For example, widespread adoption of near-term improvements such as voice-activated radios (or other inputs that don’t require hand use) would temporarily upset the cost-benefit calculation of continued AR headset development. However, if industry can bring down the cost enough (and the initial reception to Project Glass indicates there could be widespread consumer demand for the technology), the future will contain integrated headsets of some sort, making the good ol’ eyeball all the more important.

LT Scott Cheney-Peters is a surface warfare officer in the U.S. Navy Reserve and the former editor of Surface Warfare magazine. He is the founding director of the Center for International Maritime Security and holds a master’s degree in National Security and Strategic Studies from the U.S. Naval War College.
 
The opinions and views expressed in this post are his alone and are presented in his personal capacity. They do not necessarily represent the views of the U.S. Department of Defense or the U.S. Navy.

Drones of the Navy SEALs

ScanEagle launched from a Mk V SOC

The mystique of Navy SEALs has been heavily celebrated in the media and films due to recent real-world exploits.  Yet Naval Special Warfare (NSW) Sailors have been heavily engaged in combat operations for more than 11 consecutive years.  Warfare is still a decidedly human endeavor, and America’s naval special warriors are quick to embrace the truth that “humans are more important than hardware.”  Nevertheless, today’s SEALs, Special Warfare Combat Crewmen, and other supporting personnel in the NSW community have benefited greatly from technology, which increasingly includes unmanned systems.

Two primary realizations within the NSW community drove the rapid introduction of UAVs for combat operations in Southwest and Central Asia. The first was that even the best shooters in the world are ineffective if they are unable to locate their targets. Simply put, UAVs are a force multiplier for SEALs and enable an exponential increase in their ability to find, fix, and finish targets. The second was that, as more and more small UAVs were added to the force, NSW began to understand that as valuable as these unmanned systems were, the skills required to operate and maintain them were a distraction for highly trained shooters. This epiphany led to the creation of Unmanned Aircraft Systems Troops at Naval Special Warfare Support Activity (SUPACT) One in Coronado, California, and SUPACT Two at what is now Joint Expeditionary Base Little Creek-Fort Story, Virginia. According to Naval Special Warfare Command, each UAS Troop totals 35 personnel among three detachments of UAS operators, a group of instructors, and military and civilian maintenance technicians.

For some additional first-person historical perspective on the evolution of unmanned air systems (UAS) in NSW, former Navy SEAL UAS expert and current lighter-than-air unmanned systems entrepreneur John Surmount discusses in this podcast the origins of unmanned air systems in Naval Special Warfare during Operation Enduring Freedom. Since those early days, the breadth and depth of unmanned systems used by Naval Special Warfare Operators have expanded tremendously.

The exact tactics, techniques, and procedures for UAS use with NSW are a closely guarded secret (as well they should be), but in general, SEALs use drones to support the four core missions of NSW:

  • Direct Action (DA) – offensive missions to capture/kill enemy targets
  • Special Reconnaissance (SR) – surveillance and monitoring of enemy activity and the littoral environment, including beaches and ports
  • Counter-terrorism (CT) – conducting DA against terrorist networks
  • Foreign Internal Defense (FID) – assisting foreign military partners in developing their own special operations capacity.


UAVs are especially critical for finding and fixing the exact location of an enemy in DA and CT. They also support, and in some cases replace, the eyes of operators in SR missions. On a micro-scale, a demonstration of the utility of UAVs can be seen in the film “Act of Valor,” where a Raven UAV – launched by actual operators from Special Boat Team 22 – provides ISR over-watch of SEAL operators on a mission. A more capable, marinized UAV, the Puma AE, is also part of NSW’s inventory.

The beauty of these rucksack-portable systems is that they can provide organic support to a platoon or smaller-sized group of SEALs. The primary drawback is limited endurance. Enter the Small Tactical UAS (STUAS). NSW has embraced the ScanEagle for missions where long-endurance ISR is a requirement. NSW ScanEagles can be sea-launched from vessels as small as a Mk V Special Operations Craft or based ashore at expeditionary sites. Another example of the value of UAVs in the over-watch role was demonstrated in April 2009, when a ScanEagle provided a real-time feed to assist SEALs in rescuing the Maersk Alabama’s Captain Richard Phillips from his pirate captors.

More recently, NSW has benefited from the Navy’s introduction of the shipboard vertical take-off and landing (VTOL) Fire Scout.  Requirements for the next-generation VTOL UAS, the Fire-X MQ-8C, are also driven by special operations forces.  Future developments in Navy UAS integration for NSW will undoubtedly include armed tactical UAVs providing fire support to operators on the ground and sea.

The same concept of ISR support and armed over-watch applies to more complex operations with larger UAVs. Land-based Air Force Predators and Reapers support NSW missions in Afghanistan and other areas. A low-signature RQ-170 drone reportedly assisted the SEALs who conducted the raid to kill Usama bin Laden in May 2011. NSW is also slowly progressing in the implementation of unmanned undersea vehicles (UUVs). These systems are used for missions such as hydrographic reconnaissance, reducing the risk to operators and letting them focus on other core missions. Much as the Navy’s Explosive Ordnance Disposal community has embraced autonomous underwater vehicles to help them hunt and neutralize mines, SEALs will eventually find themselves reliant on robots to survey beach landing sites.

Along with other underwater assets such as swimmer delivery vehicles, UUVs fall under the auspices of Naval Special Warfare Group Three (NSWG-3).  In 2010, Naval Special Warfare Command ordered some Iver2 autonomous undersea vehicles for experimentation.  NSW has also purchased 18 Semi-autonomous Hydrographic Reconnaissance Vehicles (SAHRV) outfitted with side-scan sonar and an Acoustic Doppler Current Profiler.  SAHRV is an adaptation of the REMUS 100.  On the USV side, earlier this year, Naval Sea Systems Command’s Naval Special Warfare Program Office sponsored a test of a Protector USV armed with Spike missiles.  The application of such a capability in support of NSW missions is unclear.

The combination of the world’s most proficient naval special operators enhanced by modern technology will continue to produce powerful strategic effects through tactical actions.

 

This article was re-posted by permission from NavalDrones.com, where it appeared in its original form.

On the Wings of the Sun? Harnessing Solar Power for Aviation

Solar Impulse HB-SIA in flight
It may be a little gangly, but that’s just a sign of growth spurts

A few months back we had a guest post from NavalDrones on the site discussing power needs for drones, focusing on the advantages of batteries compared to today’s combustion engines. Engines are noisy, limiting drones’ stealthiness, and both engines and batteries require refueling/recharging. Thus, lengthy, days-long on-station operations aren’t in the cards for today’s drones. (For example, the Global Hawk can fly continuously for about 28 hours.) A balloon or dirigible could stay aloft for longer periods, but at the expense of maneuverability and speed. For reasons like these, harvesting solar power during flight has captured the attention of many aerospace engineers.

One challenge terrestrial solar-powered vehicles face is the variability of cloud cover. In contrast with its grounded brethren, solar aircraft can often negate a cloudy day by just climbing to a sufficient altitude. However, night is, of course, still an obstacle to long-term flight (or short-term missions not in the daytime).

Nevertheless, with the aid of batteries, today’s solar drones and UAVs can fly non-stop for weeks. The British-US aerospace and defense company QinetiQ developed the drone Zephyr, which stayed aloft for 14 days in July 2010 (h/t to Solar Impulse). Zephyr is not small (12-m [39-ft] wingspan), as one can see in the following video, but it is light—only 27 kg, or ~60 lbs, hence the hand-launch. It reached an altitude of 21.6 km (13.4 mi) on that first flight, boosting its observational capabilities.

 

[Video: http://www.youtube.com/watch?v=ejXaAwsIDoI]

Meanwhile, the goals of the Solar Impulse team might be even more audacious: a solar-powered flight around the world in 2015 – with a pilot. While it’s perhaps not the most agile aircraft, the HB-SIA has already demonstrated 24-hr flight (with a battery system) and, in the past year, a trip from Switzerland to Morocco. And the team has strong backing; it was launched by Bertrand Piccard, who made his name in aviation by circumnavigating the world in the Breitling Orbiter balloon in 1999. Industrial partners include Solvay, Décision, and Bayer MaterialScience, which increased their funding for the project in October [h/t to Flightglobal]. In contrast to Zephyr, HB-SIA’s mass is 1600 kg (3500 lb), about as much as a car, and its 63-m (208-ft) wingspan is about 60% longer than Global Hawk’s – necessary to fit enough solar cells to lift that mass.

So what’s next for solar aircraft? A higher-density storage system than batteries would help by extending flight time. NASA tested a series of solar UAVs in the early ’00s, among them Helios, which carried an “experimental fuel cell system” that used solar power to regenerate its fuel, storing more energy per pound than batteries. Unfortunately, a crash in 2003 destroyed Helios, but a fuel-cell system remains a possible avenue of advancement. Surface-based lasers can also offer additional illumination for a power boost (also covered in Naval Drones’ post).

Increasing the efficiency of solar cells is another route. Aircraft using solar cells require large wings whose size and shape are driven in part by demands for enough surface area to power the aircraft. These designs limit maneuverability and high-performance (i.e. high-power-demand) attributes like sudden acceleration and changes in direction. Unfortunately, physics principles constrain just how much efficiency can increase. Solar Impulse uses cells with an efficiency of 22.7% – higher than most commercial modules in solar farms. But using only one kind of material in the cell to absorb light means it can harvest only part of the sun’s light, at maximum about 33% (something called the Shockley-Queisser limit).

Multi-junction cells can capture more slices of the solar spectrum, but in practice their complex assembly limits them to two or three absorber materials. So far they are mostly used in spaceflight, where low weight is a bigger driver than low cost. Still, according to the U.S. National Renewable Energy Lab, the record triple-junction cell (without concentrators, which are another topic) has 35.8% efficiency. So assuming for the sake of estimation that these triple-junction cells weigh about the same per unit surface area (not true at present, according to Solar Impulse), they could reduce wing area by about 37%.  Or, depending on the requirements, they could produce 58% more power.
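The 37% and 58% figures follow directly from the ratio of the two efficiencies quoted above; a quick check of the arithmetic:

```python
single_junction = 0.227   # Solar Impulse's cells, as quoted above
triple_junction = 0.358   # NREL record triple-junction cell (no concentrators)

area_reduction = 1 - single_junction / triple_junction   # same power, smaller wing
power_increase = triple_junction / single_junction - 1   # same wing, more power

print(f"Cell area could shrink by ~{area_reduction:.0%}")   # ~37%
print(f"Or power could grow by  ~{power_increase:.0%}")     # ~58%
```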

And power is the big difference between a solar airplane like HB-SIA and a fuel-burner like Global Hawk. HB-SIA’s electric engines produce a maximum of 30 kW (40 hp), whereas Global Hawk’s engine produces at peak 7600 lbs of thrust at a top speed of 357 mph, which works out to 5.4 MW (7200 hp). In part we could say that HB-SIA is more efficient, so it doesn’t need as much power, but on the other hand, Global Hawk can carry a 1360-kg (3000-lb) payload, whereas HB-SIA can carry… one human.
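The 5.4 MW figure is just power = thrust × velocity, converted to SI units; a quick check using the numbers quoted above:

```python
LBF_TO_N = 4.448      # pounds-force to newtons
MPH_TO_MS = 0.447     # miles per hour to metres per second
W_TO_HP = 1 / 745.7   # watts to mechanical horsepower

thrust_n = 7600 * LBF_TO_N        # Global Hawk peak thrust
speed_ms = 357 * MPH_TO_MS        # top speed
power_w = thrust_n * speed_ms     # propulsive power = thrust * velocity

print(f"~{power_w / 1e6:.1f} MW  (~{power_w * W_TO_HP:.0f} hp)")   # ~5.4 MW, ~7200 hp
```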

Doing the math shows the upper limit of improving power capture. The sun provides, at midday, about 1.3 hp per square meter of land surface. This handy figure gives you an idea of the maximum solar power wings of a given size could produce (with magical 100%-efficient cells). Thus, performance improvements may come from vehicle lightweighting rather than from ratcheting up solar cell efficiency. For example, batteries make up one-quarter of the total mass of HB-SIA (400 kg, or about 880 lb). And while modern aircraft bodies are increasingly made of carbon fiber (instead of aluminum), companies such as Nanocomp and TE Connectivity are also beginning to manufacture data and power cables made of carbon nanotubes (CNTs) on the scale of miles. CNTs can match the conductivity of copper while saving ~70% of the weight.
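To make that upper limit concrete: at roughly 1.3 hp (about 1 kW) per square meter, even a perfectly efficient wing the size of HB-SIA’s could harvest only a couple of hundred kilowatts. The wing area used below (~200 m²) is an assumed, illustrative figure, not an official specification.

```python
SOLAR_W_PER_M2 = 1000     # ~1.3 hp per square meter at midday, as quoted above
WING_AREA_M2 = 200        # assumed area for a 63 m wingspan (illustrative only)
CELL_EFFICIENCY = 0.227   # Solar Impulse's cells

ideal_kw = SOLAR_W_PER_M2 * WING_AREA_M2 / 1e3   # magical 100%-efficient cells
real_kw = ideal_kw * CELL_EFFICIENCY             # with the real cells

print(f"Perfect cells: ~{ideal_kw:.0f} kW;  22.7% cells: ~{real_kw:.0f} kW")
# Compare with HB-SIA's 30 kW engines: the margin above engine demand is roughly
# what is left over to charge batteries for night flight, which is why cutting
# mass pays off more than chasing a few extra points of cell efficiency.
```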

Even if it doesn’t displace the combustion engine in aviation when speed and heavy lift are required, solar power’s promise of nearly indefinite sustained flight is likely to expand its role in aeronautics in the near future.

Dr. Joel Abrahamson holds a PhD in chemical engineering from the Massachusetts Institute of Technology (MIT), where he created nanomaterials for lightweight, high-power electricity generators. He currently researches materials for thin-film, flexible solar cells at the University of Minnesota. The opinions and views expressed in this post are his alone and are presented in his personal capacity. They do not necessarily represent the views of the University of Minnesota.