
Ten Principles of Ethical Conduct

By Captain Mark Vandroff, USN

I recently read Dale R. Wilson’s well-written piece “Character is Crumbling in Our Leadership.” I was left, however, wondering about a definition of ethical behavior. Lockheed Martin lists “Do The Right Thing” as the first of its three core values.1 This is a noble sentiment, but how does one determine “The Right Thing?” To be fair to Lockheed Martin, their ethics webpage, on which their value statement is clearly articulated, provides links to several different company publications with more detailed rules for the conduct of company business and training with examples of good and bad ethical behavior. 

The federal government, including the Department of Defense (DoD), provides much of the same. For example, the Naval Sea Systems Command (NAVSEA) is attempting to make its sailors and civilian employees more ethically aware with its “Anchor Yourself In Ethics” campaign.2 This campaign focuses on awareness of the federal government’s “14 Principles Of Ethical Conduct.”3 In both cases, leaders seem to equate ethical behavior with compliance with an established set of rules. While related to the concepts of rule sets and professional conduct, ethical principles are something separate. It would certainly be unprofessional for an Assistant Secretary of the Navy to show up to work at the Pentagon in flip flops or for his Military Assistant to have his warfare pin on upside down, but neither would be unethical. I know of a Major Program Manager who knowingly violated the contracting rule on unauthorized commitments. Because he broke this rule, needed repair work was accomplished on a Navy ship in a timely manner, allowing the ship to begin its basic training phase on time. The commitment was later ratified by an authorized contracting official. The program manager did not benefit financially, immediately informed his chain of command, and in the end the government did not suffer financially. His action broke rules, including one of the 14 Principles above; however, I would find very few who would describe his conduct as “unethical.”

If ethics is not merely following the rules, what is it? A good working definition might be that ethics are the processes and principles used to determine if an action is right or wrong. Even the words “right” and “wrong” are problematic. Using them in this context assumes the existence of some universal standard against which an action may be judged. Theologians and philosophers debate the origins and existence of such a standard. Practitioners take a different stance. As the late Supreme Court Justice Potter Stewart described pornography in Jacobellis v. Ohio, they “know it when they see it.”4 Apart from following established rule sets, ethical action involves honesty, transparency, compassion, dignity, and courage. As the Chief of Naval Operations put it in his recent letter to Flag Officers, “Words about values, no matter how eloquent, can only go so far. My experience is that, like so many parts of our language, these words have become overused, distorted, and diluted. Our behavior, as an organization and as individuals, must signal our commitment to the values we so often proclaim.”5 The question that I believe the CNO raises is how to take noble ideals and from them craft a usable set of principles people can use to evaluate their actions.

CNO Admiral Richardson’s letter on core values and attributes.

Roughly 2,300 years ago, Aristotle wrote ten enormous volumes of The Nicomachean Ethics. While it remains an important work on ethics to this day, it does have a certain lack of brevity. 1,400 years later, Moses Maimonides, a Jewish philosopher who was heavily influenced by Aristotle’s writings, did have the gift of brevity. He synthesized the theological implications of the Hebrew Bible and all the attendant writings of several hundred years of revered Rabbis into 13 principles of faith. While his principles were praised by many and criticized by some, their very publication sparked a healthy and needed debate within the Jewish thinking of the day. In the spirit of both Aristotle and Maimonides, I offer the following 10 principles of ethical conduct. They are not rules but principles, ways of measuring the rightness and wrongness of a given act. They are designed to apply to all whose profession involves the common defense, not solely to military personnel. I offer these 10 principles to the Pentagon bureaucrat, the defense industry executive, the Congressional staffer, and the journalist whose beat covers national security as well as to the Soldier, Sailor, Airman and Marine. My hope is that a spirited public debate of these principles will lead to a healthier understanding of what constitutes ethical conduct.

1. Actions must align with the legitimate interests of the stakeholders.

Everyone in the world of defense acts in the interest of someone else, often multiple people and/or groups, and only rarely is it a direct supervisor. A journalist has a responsibility to the owners of their media outlet to produce publishable content and an additional, sometimes competing, responsibility to their readers to provide content that is factual and relevant. A DoD Program Manager has a duty to produce items of military usefulness to the warfighter and also has a responsibility to the American taxpayer. A stakeholder is the entity in whose interest a person is bound by their position to act. (In the law, this would be called a fiduciary/principal relationship.) When judging the ethics of an action, ask first, “Are these actions furthering the interest of one or more legitimate stakeholders?”

2. Conflicting interests of various stakeholders must be balanced transparently.

An infantry officer calling for artillery fire must balance the need to protect the soldiers under their command (those soldiers are one legitimate stakeholder) with the need to prevent potential civilian casualties (those civilians are the unwitting other legitimate stakeholder). A Service Chief will have to balance the need to invest in the equipment of tomorrow’s force with the need to fund the operations and maintenance of the force he leads today. Many situations will have rule sets for the balancing of these interests, from Rules of Engagement in the field to the Federal Acquisition Regulation in a contract award. Beyond merely following the appropriate rule set, the decision-maker must be open and clear with themselves, their chain of command, and possibly others outside their organization about who the stakeholders are and how he or she is balancing their interests.

3. The financial benefits of an office can only come from legitimate sources, and must be openly communicated to all stakeholders.

This principle covers the innocent gift, the outright bribe, and everything in between. In most cases, there are easily understood rule sets to govern this behavior. However, even in a complicated case, the main principle is to take no money or other item of value in a manner not clearly known to all the relevant stakeholders. As an example, many journalists earn additional income working as ghostwriters. If a journalist covering the DoD and the defense industry ghostwrites a book or an article for a DoD or defense industry leader, that journalist’s readers have a right to know how that may affect his or her reporting.

4. Gain, in any form, personal, institutional, financial, or positional, only legitimately comes through excellence.

It is fine for colonels to want to become generals. There is no ethical violation in a business wanting to maximize its profit. Investors are one of the key stakeholder interests an industry leader must serve. However, gain must never be achieved by trick, fraud, or exploitation of personal relationships. Gain is achieved ethically when a competitor outperforms the competition. For example, many large acquisition programs fund government activities outside their program that advance the state of technology with the intent of eventual incorporation into that program. An O-6 major program manager might be tempted to fund projects favored by an influential flag or general officer, even when the potential program benefit is low compared to other possible investments, in an attempt to win a friend on future promotion boards. This action would violate no rules. It would be unethical because the major program manager is using the program’s resources for personal gain instead of acting in the interests of the program’s legitimate stakeholders.

5. Established rule sets must be followed unless they are either patently unjust or are interfering with achieving a critical stakeholder need that cannot be fulfilled by acting within the rule set. When violated, they are always violated openly and transparently.

This is the encapsulation of the “Rosa Parks” rule: the defense professional’s guideline for civil disobedience. Rules exist for a reason. An ethical person follows established rule sets unless extraordinary circumstances compel deviation. When those circumstances exist, the ethical person does not break rules in secret, for that would defeat the purpose of exposing the unjust or mission-obstructing rule. If a person is breaking rules without telling anyone about it, that person may be presumed unethical.

6. When people have been placed under a leader’s authority, that authority may not be used for personal gain.

This covers the proper interaction of a leader with their team. The leader’s team exists for the accomplishment of stakeholders’ interests, not the leader’s personal interests. For example, commanders of large activities have public affairs staff. That staff is there to promote the public’s knowledge of the organization, not the Commander personally.

7. Respect is due to the innate human dignity of every person.

This principle forms the basis of all personal interactions. People may be tasked, trained, hired, fired, disciplined, and rewarded only in ways that preserve their inherent dignity. Because all human beings possess this dignity, its preservation crosses all racial, ethnic, gender, and religious lines. It does not preclude intense training, preparation for stressful situations, or the correction of substandard performance.  It does, however, require that no person be intentionally humiliated, denigrated, or exploited. 

8. The truth must be provided to any stakeholder with a legitimate claim.

It would be too simple, and even inaccurate, to proclaim a principle like “never lie.” Both war and successful business often require the art of deception. As an example, it has always been a legitimate form of deception to disguise the topside of a warship to make it appear to be some other type of vessel. In a business negotiation, there are legitimate reasons for keeping some items of information private. However, stakeholders that have a legitimate claim on the truth must be given full, unabridged access to the best information and analysis when requested. Other stakeholders, with a lesser claim, may not be lied to but do not always have to be answered in full. As an example, a DoD program manager cannot tell a Congressional Defense Committee staffer that “testing is going great” when asked about testing on a program that is suffering serious delays. That program manager may tell a reporter, “I don’t want to talk about that” or, “I have confidence in the contractor” when asked the same question.

9. Do not assume bad intent without evidence.

The unethical person judges others by their actions and himself by his intent. The ethical person judges himself by his actions and others by their intent. Ethical people will understand that there will be honest differences of opinion among even seasoned practitioners. Just because someone comes to a different judgment does not mean that person is less competent or under a bad influence. For example, an investigator with an inspector general organization is assessing whether a trip was legitimately official, to be properly paid for with government funds, or a personal trip on which business was done only incidentally, such that government funding would be unauthorized. The given facts could logically support either conclusion. The investigator may have a personal interest in a finding of wrongdoing because it would be a demonstration of the investigator’s own thoroughness. Nonetheless, an ethical investigator will decline to find wrongdoing when the facts support either conclusion.

10. An ethical person does not stand idle in the face of wrongdoing.

Great thinkers, from Aristotle to Sir Winston Churchill to Maya Angelou, recognized courage as the primary human virtue, because it is a necessary precursor to all other virtuous acts. Theoretically, a person may be able to behave ethically without courage in an environment free from temptation. However, such environments don’t exist in the world of the defense professional. To be ethical, to follow the first nine principles, one must have the courage to do so even when such action might be unpopular or dangerous.

At the end of The (seemingly endless) Nicomachean Ethics, Aristotle observes that both virtue and laws are needed to have a good society.  Similarly, ethical principles are not a replacement for solid, well understood, and faithfully executed rule sets.  A wise ethics attorney once counseled me, “there is no right way to do the wrong thing, but there are lots of wrong ways to do the right thing.”   These ethical principles are, for our actions, like a well-laid foundation to a house.  They are the necessary precursor to a sound structure of ethical conduct.  

Captain Mark Vandroff is the Program Manager for DDG-51 Class Shipbuilding. The views expressed herein are solely those of the author. They do not reflect the official positions of the Department of the Navy, Department of Defense, or any other U.S. Government agency.

1. Lockheed Martin Corporation, “Ethics – Lockheed Martin,” 6 July 2016.

2. Vice Admiral William H. Hilarides, “Anchor Yourself with Ethics – NAVSEA Ethics & Integrity,” Naval Sea Systems Command, 22 June 2015.

3. “The 14 General Principles of Ethical Conduct” 5 C.F.R §2635.101 (b).

4. Jacobellis v. Ohio. 378 U.S. 184 (1964).  The Oyez Project at IIT Chicago-Kent College of Law.

5. Admiral John M. Richardson, “Message to Flag Officers and Senior Civilians,” Department of the Navy, 12 May 2016.

Featured Image: MECHANICSBURG, Pa. (June 1, 2016) Naval Supply Systems Command (NAVSUP) employees learn about Character from U.S. Naval Academy Distinguished Military Professor of Ethics Capt. Rick Rubel, guest speaker for the NAVSUP leadership seminar series. (U.S. Navy photo by Dorie Heyer/Released)

Lethal Autonomy in Autonomous Unmanned Vehicles

Guest post written for UUV Week by Sean Welsh.

Should robots sink ships with people on them in time of war? Will it be normatively acceptable and technically possible for robotic submarines to replace crewed submarines?

These debates are well-worn in the UAV space. Ron Arkin’s classic work Governing Lethal Behavior in Autonomous Robots has generated considerable attention since its publication in 2009. The centre of his work is the “ethical governor” that would give normative approval to lethal decisions to engage enemy targets. He claims that International Humanitarian Law (IHL) and Rules of Engagement can be programmed into robots in machine-readable language. He illustrates his work with a prototype that engages in several test cases. The drone does not bomb the Taliban because they are in a cemetery and targeting “cultural property” is forbidden. Because its target (a T-80 tank) is too close to civilian objects, the drone selects an “alternative release point” (i.e., it waits for the tank to move a certain distance) and only then fires a Hellfire missile.
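In essence, the governor acts as a pre-release constraint check: every encoded IHL or ROE rule must be satisfied before weapon release is approved. A minimal sketch of that idea in Python (the rule names, threshold, and structure here are illustrative assumptions, not Arkin’s actual implementation):

```python
from dataclasses import dataclass

@dataclass
class Target:
    kind: str                              # e.g. "T-80 tank"
    is_military: bool
    near_cultural_property: bool
    distance_to_civilian_objects_m: float

# Hypothetical standoff threshold; a real rule encoding would be
# far more involved than a single number.
MIN_STANDOFF_M = 200.0

def governor_permits_engagement(t: Target) -> bool:
    """Return True only if every encoded constraint is satisfied."""
    if not t.is_military:
        return False   # discrimination: military objectives only
    if t.near_cultural_property:
        return False   # cultural property is protected
    if t.distance_to_civilian_objects_m < MIN_STANDOFF_M:
        return False   # wait for an alternative release point
    return True

tank = Target("T-80 tank", True, False, distance_to_civilian_objects_m=150.0)
print(governor_permits_engagement(tank))   # False: too close to civilian objects
```

Arkin’s actual architecture is considerably richer (it reasons over evidence and constrains behavior rather than simply vetoing), but the veto-style check captures the core idea of machine-readable normative control.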

Could such an “ethical governor” be adapted to submarine conditions? One would think that the lethal targeting decisions a Predator UAV would have to make above the clutter of land would be far more difficult than the targeting decisions a UUV would have to make. The sea has far fewer civilian objects in it. Ships and submarines are relatively scarce compared to cars, houses, apartment blocks, schools, hospitals and indeed cemeteries. According to the IMO there are only about 100,000 merchant ships in the world. The number of warships is much smaller, a few thousand.

Diagram of the ‘ethical governor’

There seems to be less scope for major targeting errors with UUVs. Technology to recognize shipping targets is already installed in naval mines. At its simplest, developing a hunter-killer UUV would be a matter of putting the smarts of a mine programmed to react to distinctive acoustic signatures into a torpedo – which has already been done. If a UUV were to operate at periscope depth, it is plausible that object recognition technology (Treiber, 2010) could be used, as warships are large and distinctive objects. Discriminating between a prawn trawler and a patrol boat is far easier than discriminating human targets in counter-insurgency and counter-terrorism operations. There are no visual cues to distinguish between regular shepherds in Waziristan, who have beards, wear robes, carry AK-47s, face Mecca to pray, etc., and Taliban combatants who look exactly the same. Targeting has to be based on protracted observations of behaviour. Operations against a regular Navy in a conventional war on the high seas would not have such extreme discrimination challenges.
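The mine-style logic described above amounts to a signature match: compare a measured acoustic spectrum against stored class templates and act only on a sufficiently close match. A toy sketch, with invented signature vectors and an invented threshold:

```python
# Toy acoustic-signature matcher in the spirit of mine logic.
# All numbers are invented for illustration.

KNOWN_SIGNATURES = {
    "merchant": [0.9, 0.2, 0.1, 0.4],
    "warship":  [0.3, 0.8, 0.7, 0.2],
}

def classify(measured, threshold=0.95):
    """Return the best-matching class, or None if no template is close enough."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb)

    best_name, best_score = None, 0.0
    for name, template in KNOWN_SIGNATURES.items():
        score = cosine(measured, template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

print(classify([0.88, 0.22, 0.12, 0.38]))   # "merchant": very close to that template
```

Real systems match against measured spectral lines and transients rather than four-element vectors, but the decision structure – template library, similarity score, reject option – is the same.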

A key difference between the UUV and the UAV is the viability of telepiloting. Existing communications with submarines are restricted to VLF and ELF frequencies because of the properties of radio waves in salt water. These frequencies require large antennas and offer very low transmission rates, so they cannot be used to transmit complex data such as video. VLF can support a few hundred bits per second. ELF is restricted to a few bits per minute (Baker, 2013). Thus at the present time remote operation of submarines is limited to the length of a cable. UAVs by contrast can be telepiloted via satellite links. Drones flying over Afghanistan can be piloted from Nevada.
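Back-of-the-envelope arithmetic makes the telepiloting gap concrete. Consider how long a single compressed video frame would take to send over each link (the ~50 kB frame size and the exact link rates are illustrative assumptions consistent with the figures above):

```python
# Rough transmission time for one ~50 kB compressed video frame
# over submarine radio links versus a satellite link.
FRAME_BITS = 50_000 * 8            # one ~50 kB frame = 400,000 bits

links_bps = {
    "ELF (~5 bits/min)": 5 / 60,       # a few bits per minute
    "VLF (~300 bit/s)": 300,           # a few hundred bits per second
    "satellite (~1 Mbit/s)": 1_000_000,
}

for name, rate in links_bps.items():
    seconds = FRAME_BITS / rate
    if seconds > 3600:
        print(f"{name}: {seconds / 3600:.0f} hours")
    else:
        print(f"{name}: {seconds:.1f} seconds")
```

At VLF a single frame takes over twenty minutes; at ELF, weeks. Video telepiloting of an untethered submarine is not merely impractical but physically off the table with current radio links.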

For practical purposes this means the “in the loop” and “on the loop” variants of autonomy would only be viable for tethered UUVs. Untethered UUVs would have to run in “off the loop” mode. Were such systems to be tasked with functions such as selecting and engaging targets, they would need something like Arkin’s ethical governor to provide normative control.

DoD policy directive 3000.09 (Department of Defense, 2012) would apply to the development of any such system by the US Navy. It may be that a Protocol VI of the Convention on Certain Conventional Weapons (CCW) emerges that may regulate or ban “off the loop” lethal autonomy in weapons systems. There are thus regulatory risks involved with projects to develop UUVs capable of offensive military actions.

Even so, in a world in which a small naval power such as Ecuador can knock up a working USV from commodity components for anti-piracy operations (Naval-technology.com, 2013), the main obstacle is not technical but lies in persuading military decision makers to trust the autonomous options. Trust of autonomous technology is a key issue. As the Defense Science Board (2012) puts it:

A key challenge facing unmanned system developers is the move from a hardware-oriented, vehicle-centric development and acquisition process to one that addresses the primacy of software in creating autonomy. For commanders and operators in particular, these challenges can collectively be characterized as a lack of trust that the autonomous functions of a given system will operate as intended in all situations.

There is evidence that military commanders have been slow to embrace unmanned systems. Many will mutter sotto voce: to err is human but to really foul things up requires a computer. The US Air Force dragged their feet on drones and yet the fundamental advantages of unmanned aircraft over manned aircraft have turned out to be compelling in many applications. It is frequently said that the F-35 will be the last manned fighter the US builds. The USAF has published a roadmap detailing a path to “full autonomy” by 2049 (United States Air Force, 2009).

Similar advantages of unmanned systems apply to ships. Just as a UAV can be smaller than a regular plane, so a UUV can be smaller than a regular ship. This reduces requirements for engine size and for the elements of the vehicle that support human life at altitude or depth. UAVs do not need toilets, galleys, pressurized cabins and so on. In UUVs, there would be no need to generate oxygen for a crew and no need for sleeping quarters. Such savings would reduce operating costs and risks to the lives of crew. In war, as the Spanish captains said, victory goes to him who has the last escudo. Stress on reducing costs is endemic in military thinking, and political leaders are highly averse to casualties coming home in flag-draped coffins. If UUVs can effectively deliver more military bang for fewer bucks and no risk to human crews, then they will be adopted in preference to crewed alternatives as the capabilities of vehicles controlled entirely by software are proven.

Such a trajectory is arguably as inevitable as that of Garry Kasparov vs Deep Blue. However, in the shorter term, it is not likely that navies will give up on human crews. Rather, UUVs will be employed as “force multipliers” to increase the capability of human crews and to reduce risks to humans. UUVs will have uncontroversial applications in mine countermeasures and in intelligence and surveillance operations. They are more likely to be deployed as relatively short-range weapons performing tasks that are non-lethal. Submarine-launched USVs attached to their “mother” subs by tethers could provide video communications of the surface without the sub having to come to periscope depth. Such USVs could in turn launch small UAVs to enable the submarine to engage in reconnaissance from the air. The Raytheon SOTHOC (Submarine Over the Horizon Organic Capabilities) launches a one-shot UAV from a launch platform ejected from the sub’s waste disposal lock. Indeed, small UAVs such as Switchblade (Navaldrones.com, 2015) could be weaponized with modest payloads and used to attack the bridges or rudders of enemy surface ships as well as to increase the range of the periscope beyond the horizon. Future aircraft carriers may well be submarines.

AeroVironment Switchblade UAV

In such cases, the UUV, USV and UAV “accessories” to the human crewed submarine would increase capability and decrease risks. As humans would pilot such devices, there are no requirements for an “ethical governor” though such technology might be installed anyway to advise human operators and to take over in case the network link failed.

However, a top priority in naval warfare is the destruction or capture of the enemy. Many say that it is inevitable that robots will be tasked with this mission and that robots will be at the front line in future wars. The key factors will be cost, risk, reliability and capability. If military capability can be robotized and deliver the same functionality at similar or better reliability and at less cost and less risk than human alternatives, then in the absence of a policy prohibition, sooner or later it will be.

Sean Welsh is a Doctoral Candidate in Robot Ethics at the University of Canterbury. His professional experience includes  17 years working in software engineering for organizations such as British Telecom, Telstra Australia, Fitch Ratings, James Cook University and Lumata. The working title of Sean’s doctoral dissertation is “Moral Code: Programming the Ethical Robot.”

References

Arkin, R. C. (2009). Governing Lethal Behavior in Autonomous Robots. Boca Raton: CRC Press.

Baker, B. (2013). Deep secret – secure submarine communication on a quantum level.   Retrieved 13th May, 2015, from http://www.naval-technology.com/features/featuredeep-secret-secure-submarine-communication-on-a-quantum-level/

Defense Science Board. (2012). The Role of Autonomy in DoD Systems. from http://fas.org/irp/agency/dod/dsb/autonomy.pdf

Department of Defense. (2012). Directive 3000.09: Autonomy in Weapons Systems.   Retrieved 12th Feb, 2015, from http://www.dtic.mil/whs/directives/corres/pdf/300009p.pdf

Navaldrones.com. (2015). Switchblade UAS.   Retrieved 28th May, 2015, from http://www.navaldrones.com/switchblade.html

Naval-technology.com. (2013). No hands on deck – arming unmanned surface vessels.   Retrieved 13th May, 2015, from http://www.naval-technology.com/features/featurehands-on-deck-armed-unmanned-surface-vessels/

Treiber, M. (2010). An Introduction to Object Recognition: Selected Algorithms for a Wide Variety of Applications. London: Springer.

United States Air Force. (2009). Unmanned Aircraft Systems Flight Plan 2009-2047.   Retrieved 13th May, 2015, from http://fas.org/irp/program/collect/uas_2009.pdf

Drones, Ethics, and The Indispensable Pilot

The on-going conversation about the ethics of drones (or of remotely piloted aircraft[1]) is quickly becoming saturated. The ubiquity of the United States’ remotely piloted aircraft program has arisen so suddenly that ethicists have struggled just to keep up. The last decade, though, has provided sufficient time for thinkers to grapple with the difficult questions involved in killing from thousands of miles away.

In a field of study as fertile as this one, cultivation is paramount, and distinctions are indispensable. Professor Gregory Johnson of Princeton offers a helpful lens through which to survey the landscape. Each argument about drone ethics is concerned with one of three things: the morality, legality, or wisdom of drone use.[2]

Arguments about the wisdom (or lack thereof) of drones typically make value judgments on drones based upon their efficacy.[3] One common example argues that, because of the emotional response drone strikes elicit in the targets’ family and friends, drone strikes may create more terrorists than they kill.

Legal considerations take a step back from the question of efficacy. These ask whether drone policies conform to standing domestic and international legal norms. These questions are not easily answered for two reasons. First, some argue that remote systems have changed the nature of war, requiring changes to the legal norms.[4] Second, the U.S. government is not forthcoming with details on its drone programs.[5]

The moral question takes a further step back even from the law. It asks, regardless of the law, whether drones are right or wrong–morally good, or morally bad. A great deal has been written on broad questions of drone morality, and sufficient summaries of it already exist in print.[6]

If there is a void in the literature, I think it is centered on the frequent failure to include the drone operator in the ethical analysis. That is, most ethicists who address the question of “unmanned” aircraft tend to draw a border around the area of operations (AOR) and consider in their analysis everything in it–enemy combatants, civilians, air power, special operations forces (SOF), tribal leaders, hellfire missiles, etc. They are also willing to take one giant step outside the AOR to include Washington–lawmakers, The Executive, military leaders, etc. Most analyses of the ethics of drones, then, include everyone involved except the operator.[7] This is problematic for a number of reasons discussed below.

Bradley Strawser, for example, argues in favor of remote weapons from a premise that leaders ought to reduce risk to their forces wherever possible. He therefore hangs his argument on the claim that drone pilots are not “present in the primary theater of combat.”[8] While this statement is technically correct, it is misleading. The pilot, while not collocated with the aircraft, plays a crucial role in the ethical analysis.

Sarah Kreps and John Kaag argue that the U.S.’s capability to wage war without risk may make the decision to go to war too easy. Therefore, any decision to go to war under such circumstances may be unjust.[9] This view is contingent upon a war without risk, which fails to consider the operator and the ground unit the operator supports.

Paul Kahn goes so far as to call remote warfare “riskless.” But suggesting that remote war is riskless supposes that at least one side in the conflict employs no people at all. Where there are people conducting combat operations, there is risk. Contrary to Kahn’s position, drones are controlled by people, in support of people, and thus war (as we know it) is not riskless.

The common presupposition throughout these arguments, namely that remote war does not involve people in an ethically meaningful way, is detrimental to a fruitful discussion of the ethics of remote warfare for three reasons.

First, the world has not yet seen, and it may never see, a drone-only war. What that means is that even though the drone operator may face no risk to him or herself, the supported unit on the ground faces mortal risk.[10] The suggestion, then, that a remote warfare capability produces war without risk is empirically untenable.

Second, there exist in this world risks that are non-physical. Cases of psychological distress (both in the military and outside it) abound, and the case has been made in other fields that psychological wounds are as real as physical ones.[11] There have already been a small number of documented post-traumatic stress disorder (PTSD) cases among drone operators.[12] Though the number of cases may be small, consider what is being asked of these individuals. Unlike their counterparts, RPA crews are asked to take life for reasons other than self-defense. It is possible, and I think plausible, to suggest that killing an enemy in such a way that one cannot ground the justification of one’s actions in self-defense may carry long-term, and latent, psychological implications. The psychological risk to drone operators is, then, present but indeterminate.

Finally, there is the often-neglected point that a government which chooses to conduct remote warfare from home changes the status of its domestic military bases. That government effectively re-draws the battlespace such that it includes the drone operators within its borders. RPA bases within the Continental United States (CONUS) become military targets that carry tremendous operational and tactical significance, and are thereby likely targets.

There is a fine point to be made here about the validity of military targets. According to international norms, any violent action carried out by a terror network is illegal. So what would be a valid military target for a state in wartime is still an illegal target for al Qaeda. Technically, then, a U.S. drone base cannot be called a valid military target for a terrorist organization, but the point here about risk is maintained if we consider such bases attractive targets. Because the following claims are applicable beyond current overseas contingency operations against terror networks, the remaining discussion will assume the validity of U.S. drone bases as targets.[13]

The just war tradition, and derivatively the international laws of war, recognize that collateral damage is acceptable as long as that damage does not exceed the military value of the target.[14] The impact of this fact on domestically operated drones is undeniable.

Suppose an F-15E[15] pilot is targeted by the enemy while she sleeps on a U.S. base in Afghanistan. The collateral damage will undoubtedly include other military members. Now suppose a drone operator is targeted while she sleeps in her home near a drone base in the U.S. In this scenario, the collateral damage may include her spouse and children. If it can be argued that such a target’s military value exceeds the significance of the collateral damage (and given the success of the U.S. drone program, perhaps it can), then killing her, knowing that her family may also die, becomes legally permissible.[16] Nations with the ability to wage war from within their own domestic boundaries, then, ought to consider the consequences of doing so.[17]

There will be two responses to these claims. First, someone will object that the psychological effects on the drone operator are overstated. Suppose this objection is granted, for the moment. The world of remote warfare, though, is a dynamic one, and one must consider the relationship between technology and distance. The Earth’s sphere creates a boundary to the physical distance from which one person can kill another person. If pilots are in the United States, and targets are in Pakistan, then the geometric boundary has already been reached.

It cannot be the case, now that physical distance has reached a maximum, that technology will cease to develop. Technology will continue to develop, and with that development, physical distance will not increase; but information transmission rates will. The U.S. Air Force is already pursuing high definition cameras,[18] wide area motion imagery sensors,[19] and increased bandwidth to transmit all this new data.[20]

If technology has driven the shooter (the drone pilot, in this case) as far from the weapons effects as Earth’s geometry allows, then future technological developments will not increase physical distance, but they will increase video quality, time on station and sensor capability. Now that physical distance has reached its boundary, future technological developments will erode a different kind of distance. That is, the psychological distance between killers and those they kill will decrease.[21]

The future of drone operations will see a resurgence of elements from the old wars. Crews will look in a man’s face, seeing his eyes and his fear; the killer must shoot at a person and kill a specific individual.[22] Any claim that RPA pilots are not shooting at people, but only at pixels, will become obsolete. The command “don’t fire until you see the whites of their eyes” may soon become as meaningful in drone operations as it was at Breed’s Hill in 1775.[23]

As this technology improves, the RPA pilots will see a target, not as mere pixels, but as a human, as a person, as a husband and father, as one who was alive, but is now dead. Increased psychological effects are inevitable.

A second objection will claim that, although RPA bases may make attractive targets, the global terror networks with whom the U.S. is currently engaged lack the capability to strike such targets. But this objection also views a dynamic world as though it were static. Even if the current capabilities of our enemies are knowable today, we cannot know what they will be tomorrow. Likewise, we cannot know where the next war will be, nor the capabilities of the next enemy. We have learned in this young century that strikes against the continental United States are still possible.

The question of whether drones are, or can be, ethical is far too big a question to be tackled in this brief essay. What we can know for certain, though, is that any serious discussion of the question must include the RPA pilot in its ethical analysis. Wars change. Enemies change. Tactics change. It would seem, though, that remotely piloted weapons will remain for the foreseeable future.

Joe Chapa is a veteran of the U.S. Air Force. He served as a pilot and instructor pilot in Oklahoma, Nevada and Missouri, and completed two deployments to Afghanistan and Europe. He earned a B.A. in Philosophy from Boston University, an M.A. in Theological Studies from Liberty Baptist Theological Seminary and an M.A. in Philosophy from Boston College (anticipated 2014). The views expressed here are those of the author, and do not necessarily reflect those of the Air Force, the DoD or the U.S. government.

[1] Throughout this essay, I will use the terms ‘remotely piloted aircraft’ and ‘drone’ synonymously. With these terms I am referring to U.S. aircraft which have a human pilot not collocated with the aircraft, which are capable of releasing kinetic ordnance.

[2] This distinction comes from a Rev. Michael J. McFarland, S.J. Center for Religion, Ethics, and Culture panel discussion held at The College of The Holy Cross. Released Mar 13, 2013. https://itunes.apple.com/us/institution/college-of-the-holy-cross/id637884273. (Accessed February 25, 2014).

[3] The following contain arguments on the wisdom of drones. Audrey Kurth Cronin, “Why Drones Fail: When Tactics Drive Strategy,” Foreign Affairs, July/August 2013; Eric Patterson and Teresa Casale, “Targeting Terror: The Ethical and Practical Implications of Targeted Killing,” International Journal of Intelligence and Counterintelligence 18:4, 21 Aug 2006; and Jeff McMahan, “Preface” in Killing by Remote Control: The Ethics of an Unmanned Military, Bradley Strawser, ed. (Oxford: Oxford University Press, 2013).

[4] For example, Mark Bowden, “The Killing Machines,” The Atlantic (8/16/13): 3. Others disagree. See Matthew W. Hillgarth, “Just War Theory and Remote Military Technology: A Primer,” in Killing by Remote Control: The Ethics of an Unmanned Military, Bradley Strawser, ed. (Oxford: Oxford University Press, 2013): 27.

[5] Rosa Brooks, “The War Professor,” Foreign Policy, (May 23, 2013): 7.

[6] For an excellent overview of the on-going discussion of drone ethics, see Bradley Strawser’s chapter “Introduction: The Moral Landscape of Unmanned Weapons” in his edited book Killing By Remote Control (Oxford: Oxford University Press, 2013): 3-24.

[7] This point highlights the merits of the Air Force’s term ‘remotely piloted aircraft’ (RPA). The aircraft are not unmanned. Etymologically, the term “unmanned” most nearly means “autonomous.”  While there are significant ethical questions surrounding autonomous killing machines, they are distinct from the questions of remotely piloted killing machines. It is only because the popular term “drone” is so pervasive that I have decided to use both terms interchangeably throughout this essay.

[8] Bradley Strawser, “Moral Predators: The Duty to Employ Uninhabited Aerial Vehicles,” Journal of Military Ethics 9, no. 4 (16 Dec 2010): 356.

[9] Though I do not have the space to develop it fully, this argument is well-grounded in the just war tradition, and is one of the stronger arguments against a military use of remote warfare technology.

[10] Since September 11, 2001, U.S. “drone strikes” have been executed under the Authorization for Use of Military Force, signed by Congress in 2001. From a legal perspective, then, all drone strikes, even those outside Iraq and Afghanistan, have been against targets who pose an imminent threat to the United States. Thus, even any reported “targeted killings” in Yemen, Somalia, Pakistan, or elsewhere, were conducted in self-defense, and therefore involved risk.

[11] By way of example, consider cases of hate speech, bullying and ‘torture lite’ in Rae Langton, “Beyond Belief: Pragmatics in Hate Speech and Pornography,” in Speech & Harm: Controversies Over Free Speech, ed. Ishani Maitra and Mary Kate McGowan (Oxford: Oxford University Press, May 2012), 76-77; Ishani Maitra, “Subordinating Speech,” in Speech & Harm: Controversies Over Free Speech, ed. Ishani Maitra and Mary Kate McGowan (Oxford: Oxford University Press, May 2012), 96; Jessica Wolfendale, “The Myth of ‘Torture Lite’,” Carnegie Council on Ethics in International Affairs (2009), 50.

[12] James Dao, “Drone Pilots Found to Get Stress Disorders Much as Those in Combat Do,” New York Times, (February 22, 2013).

[13] The question of whether organizations like al Qaeda are to be treated as enemy combatants (as though they were equivalent to states) or criminals remains open. For more on the distinction between combatants and criminals, see Michael L. Gross, “Assassination and Targeted Killing: Law Enforcement, Execution or Self-Defense?” Journal of Applied Philosophy, vol. 23, no. 3, (2006): 323-335.

[14] Avery Plaw, “Counting the Dead: The Proportionality of Predation in Pakistan,” in Killing by Remote Control, Bradley Strawser, ed. (Oxford: Oxford University Press, 2013): 135.

[15] A traditionally manned U.S. Air Force asset capable of delivering kinetic ordnance.

[16] This statement is only true of enemy states. As discussed above, all terror network targets are illegal targets.

[17] I have developed this argument more fully in “The Ethics of Remotely Piloted Aircraft” Air and Space Power Journal, Spanish Edition, vol. 25, no. 4, (2013): 23-33.

[18] Exhibit R-2, RDT&E Budget Item Justification, MQ-9 Development and Fielding, February 2012, (page 1). (http://www.dtic.mil/descriptivesum/Y2013/AirForce/stamped/0205219F_7_PB_2013.pdf) accessed 30 July 2013.

[19] Lance Menthe, Amado Cordova, Carl Rhodes, Rachel Costello, and Jeffrey Sullivan, The Future of Air Force Motion Imagery Exploitation Lessons from the Commercial World, Rand Project Air Force, (page iii). (http://www.rand.org/content/dam/rand/pubs/technical_reports/2012/RAND_TR1133.pdf) accessed 30 July 2013.

[20] Grace V. Jean, “Remotely Piloted Aircraft Fuels Demand for Satellite Bandwidth,” National Defense Magazine, July 2011. (http://www.nationaldefensemagazine.org/archive/2011/July/Pages/RemotelyPilotedAircraftFuelsDemandforSatelliteBandwidth.aspx) accessed 30 July 2013.

[21] Ibid., 97-98.

[22] Ibid., 119.

[23] George E. Ellis, Battle of Bunker’s Hill, (Boston: Rockwell and Churchill, 1895), 70.

Remote Aviation Technology – What are We Actually Talking About?

This is the first article of our “Drone Week”, which has been slightly truncated by the Crimean Crisis.

In most ‘drone’ conferences, there comes an awkward moment when a panelist realizes that the category ‘drone’ has very little to do with the question that they’re asking.  To quote the Renaissance philosopher Inigo Montoya, “I don’t think that word means what you think it means.”  In order to improve the remote aviation technology discussion, we need to be clear what we’re actually talking about. 

What we should be talking about is ‘remote aviation technology,’ which is simply a fusion of the air and cyber domains through the ubiquitous technologies of datalinks, autopilots, and performance airframes.  The fundamental tension is not between risk and responsibility, the two things over which the pop-sci-strat ‘drone’ debate obsesses, but between latency and performance.  To the risk point, a military has a moral obligation to reduce risk to its warfighters, so reducing risk through tech is not new; to the responsibility point, professionalism and integrity are the roots for the warfighter’s seriousness about their duties, not risk.  We find that we’ve actually been dealing with these questions for a while – so we have some pretty effective models already, which we can use as soon as we get the definitions straight. 

First, we must take all the conceptual rocks out of the ‘drones’ rucksack.  We can say definitively what we aren’t talking about.  We are looking only for questions that are new or fundamentally altered by remote aviation technology: any discussion that can be understood through extant tech or literature probably should be.  What is not changed by the advent of remote aviation technology?

  • The ethics of airstrikes and targeting – kinetics are no more intrinsic to remote aviation than they are to manned aircraft.  The same weapons deployed from Reapers are also launched from Apaches and F-16s.  The idea of ‘drone strikes’ as distinct from ‘air strikes’ is a distraction.  The choice to apply force comes from a chain of command, not from a circuit board.
  • The effectiveness of air campaigns – calling persistent airpower a ‘drone campaign’ is as reductionist as calling landpower a ‘carbine campaign.’  Certainly, long-dwell sensor-shooter remote aircraft have greatly expanded the possibilities for persistent airpower, but AC-47 gunships conducted a major persistent air campaign over the Ho Chi Minh trail – we would do better to remember this historical precedent rather than treat the capability as new, strange, or different.    
  • The nature of sovereignty in the modern international system – There is some very difficult homework that remains to be done about how best to deal with the export of violence from ungoverned or poorly governed spaces, and about the conduct of conflict against global, networked non-state actors.  Though some answers to these Westphalian questions involve persistent remote air platforms, these questions are themselves not a function of the technology. For instance, the British used airpower in these ways well before the Second World War. 
  •  The cultural issues and experience of remote killing.  These questions are foregrounded by remote aviation technology, but they are not intrinsic to this technology.  Artillerists, SWOs and manned airmen similarly wrestle with these sorts of questions – this issue is as old as arrows and siege engines. 

With these big rocks removed, we find two things left in this analytical rucksack of ‘drones.’  At the bottom of the pack, there’s a pile of emotional sediment in the shape of scary killer robots, and autonomous, invincible sci-fi nightmares that make war risk-free at the cost of our humanity.  Using these fictions to reason about actual remote aircraft is much like using the Easter Bunny to think about the role of rabbits in ecosystems.  Since these tropes and this misguided inter-subjectivity drive much of the public pop-discourse, we are certainly not talking about this ontological flotsam.

This leaves only the aircraft themselves, which is precisely what we want.  We’ve argued in other works that, for most discussions, we should consider Predators, Reapers, Global Hawks, UCLASS and so on the same way we consider any other aircraft – by mission, not by control system.  For almost all intents and purposes, Reapers are persistent reconnaissance-attack aircraft.  Similarly, we generally don’t consider the F-16 and the C-17 ‘the same thing’ because they both have fly-by-wire systems.  But sometimes it matters that they have fly-by-wire systems vice electro-hydraulic control cables – for example, during an EMP event.  And sometimes it matters that a ‘fly-by-wireless’ control system drives the Predator, Reaper, Global Hawk, the BQ-8 (modified B-24), the SAGE F-106, the Sukhoi-15TM, and so on.

How, then, does a ‘fly-by-wireless’ system matter?  The presumed tension for this technology is risk vs. responsibility – long-range datalinks reduce risk to the pilot, and since the pilot has ‘no skin in the game,’ they are presumed to be less invested in their choices.  This is deeply problematic – a military has a moral imperative to reduce risk to its warfighters.  Secretary Gates continually and rightly obsessed over body armor, MEDEVAC, and other risk mitigation technologies – this was a testament to his integrity.

While it is certainly true that increasing distance reduces risk, this does not inherently change a warrior’s perception of his or her own responsibility to the mission and to comrades.  A lack of responsibility about killing results from a lack of professionalism or integrity, poor training, or other personnel problems.  SSBN crews isolate their weapons from risk through technology, and are similarly distant from their potential acts of killing.  I trust that our submarine community sees their duties with the deadly seriousness that they deserve.  Risk reduction through technology is ubiquitous, and these reductions do not undermine warfighter responsibilities: this is not truly a tension.

Similarly, advocates of ‘supply-side war control’ cite this risk point – the theory being that, without having to put constituents at risk, policymakers will be more willing to go to war.  If the risk vs. responsibility logic plays out on a strategic level (and if this is so, it is due to the political construct of ‘drone warfare’ rather than the technology itself), this tension is better answered through accountability for strategic choices rather than by inducing risk on our warfighters.  Just as Creighton Abrams’ attempt to downgrade the Special Operations community did little to keep the United States out of small wars, this approach is unlikely to deter policymakers.  For jus ad bellum questions, it is far better to focus on the pen of policymakers than on the red button of warfighters; better to locate risk at the ballot-box than in soldiers’ lives.

These points are covered at length by BJ Strawser and his co-authors in Killing by Remote Control: air warfare has no special moral problems inherent to the technology.   So we will have to look further to understand how and why the tech matters. 

What, then, is the actual tension of remote aviation technology?  Latency versus performance.  On one hand, a ‘fly-by-wireless’ control system allows the aircraft to keep weighty, expensive and risky components of the aircraft on the ground, where the performance constraints are far less pressing.  Accordingly, without the limitations of a human body and without the cost of life support systems, designs that would otherwise be impossible can be fielded.  This performance can be cashed out as:

  • Persistence: A long-dwell design, such as the Predator or the Reaper, allows for sorties much longer than crew rest would normally allow – these designs focus on optimizing persistence, typically at the expense of survivability in high-threat environments.  These aircraft share bloodlines with persistent sensor-shooter craft such as the Gunship. 
  • Survivability:  A survivable design, such as the Taranis, makes use of small size, stealth and high maneuverability.  Without the size requirements for human habitation, these craft have new tactical options that pair well with advanced tactical aircraft.  They are cousins to F-22 fifth generation fighters. 
  • Affordability:  A low-cost design best fits the traditional definition of ‘drone’ – like the Firebee, a semi-disposable aircraft intended for ‘dull, dirty and dangerous’ jobs.  Quad-copters and the proposed Amazon delivery ‘drones’ fit this category well – these generally perform simple tasks and are not economical to remotely pilot in the traditional direct sense.  Swarming adds a new twist to these ‘drones’ – distributed capabilities makes a flock of these vehicles capable in its own right as air players.  Notably, the risk-reduction logic applies best to these craft – a survivable or a persistent aircraft will generally be too costly to be used as disposable assets, but if a design is built to be cheap from the outset, then it can be used in these ways.  (The same logic applies to missiles, which could be themselves considered ‘drones.’) 

The downside is latency.  For ‘fly-by-wireless’ control systems to work, there must be a way to port human control and judgment to the craft.  In a manned aircraft, the crew builds situational awareness in an expanding ‘bubble’ around the craft; in a remote craft, the crew must ‘drill’ from their control station, through a web of datalinks, into their craft.  The negative result of this process is that the remote aircraft will typically be slower to react than an equivalent manned aircraft; this is offset by the ease with which a remote aircraft can link to offboard assets for situational awareness.  Still, the fundamental problem of the link remains.  There are two approaches to solving this problem:

  • Physics: Increasing gain and decreasing distance both increase the strength of the link between the remote operator and the aircraft.  Conversely, a contested Electronic Warfare environment seeks to degrade this link.  Accordingly, in the ‘physics’ solution, we anticipate a world with airborne RPA pilots, who fly their craft from aboard a ‘mothership’ craft.  Such a world hearkens back to the idea of an interlocking B-17 ‘Combat Box’ formation.
  • Automation:  The second approach ‘bottles’ human judgment and agency into an algorithm, and sends the remote craft on its way with these instructions.  When the craft can no longer maintain link, it executes these algorithms, performs its mission, and returns to base (if possible).  This is essentially what already happens with advanced missiles.  The difficulty of this approach is the risk of ‘complex failure,’ if the craft is asked to perform a task whose complexity exceeds these algorithms.  For precisely scripted missions, this approach works well; for ‘improvisational’ missions such as CAS, it falters.
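The scale of the latency problem, and of the ‘gain and distance’ lever behind the physics approach, can be sketched with back-of-the-envelope numbers. The figures below are illustrative textbook physics, not data from any actual program: they assume a minimum-geometry geostationary relay, free-space propagation, and an assumed Ku-band carrier of 14 GHz.

```python
import math

# Illustrative textbook values only; not figures from any actual system.
C = 299_792_458.0          # speed of light (m/s)
GEO_ALT_M = 35_786_000.0   # geostationary altitude (m): minimum slant range

def one_way_delay_s(legs: int = 2) -> float:
    """Minimum one-way signal delay across `legs` ground-satellite legs.
    A control station -> GEO satellite -> aircraft path is two legs."""
    return legs * GEO_ALT_M / C

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss: FSPL(dB) = 20log10(d) + 20log10(f) + 20log10(4*pi/c)."""
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / C))

# Command out, sensor video back: roughly half a second of unavoidable
# round-trip latency, before any processing or network overhead.
round_trip = 2 * one_way_delay_s()

# Doubling the link distance costs about 6 dB of margin; halving it buys
# 6 dB back: this is the 'gain and distance' lever of the physics approach.
# The delta is frequency-independent, so the assumed 14 GHz does not matter here.
loss_delta = fspl_db(2 * GEO_ALT_M, 14e9) - fspl_db(GEO_ALT_M, 14e9)
```

On these assumptions, the speed of light alone imposes nearly half a second of control-loop delay through a geostationary relay, which is why shortening the path (the airborne ‘mothership’) or pre-loading judgment (automation) are the two natural escape routes.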

If latency vs. performance is the fundamental tension of this technology, then much of the contemporary debate misses the mark.  For example, ‘optionally manned’ aircraft are touted to bridge the gap between manned and remote craft.  From a risk-vs-responsibility frame, this makes perfect sense – if you want to send the craft on a high-risk mission, leave the pilot at home.  But from a latency-vs-performance frame, it recalls the old joke about Washington, DC: a town with Southern efficiency and Northern charm.  Since leaving the pilot on the ground does not recover the weight of the life support systems and the like, optionally manned aircraft have the latency of an RPA and the performance of a manned aircraft – the worst of both worlds.

‘Complement,’ as described by my friend and classmate Rich Ganske, is a much better answer.  If humans excel at judgment, and robots excel at math, then when the robots can do more math, it frees up the humans to do more judgment.  The partnership between humans and hardware – both onboard and offboard hardware – is, and long has been, the key to dominating the battlespace.  The natural contours of remotely-piloted aviation tech complement well the natural contours of directly-piloted aviation tech – they are each strong where the other is weak, and together are better than either is alone.  How does this look, in practice?  For two non-exhaustive examples: 

  • Aerial Dominance Campaign:  In this world, low-cost autonomous craft, much like the TACIT RAINBOW or countermeasures, would complicate an adversary’s air defense tasks, while high-end survivable craft would link as ‘loyal wingmen’ to similarly survivable manned craft.  In this war, every aircraft is a squadron, and every pilot a combat squadron commander.  Accordingly, the art of socio-technical systems command begins to take precedence over technical tasks for the future aviator.
  • Vertical Dominance Campaign: A persistent air campaign team would use both remote and manned aircraft jointly to vertically dominate a battlespace from a persistent air environment.  The manned and remote aircraft that inhabit this space sacrifice maneuverability and speed for endurance and payload.   The craft we most often associate with remote technology inhabit this world, but we do the discussion a disservice by assuming the vulnerabilities of persistent aircraft are inherent to the design of remote aircraft. 

We’ve described a number of things that are only orthogonally related to remote aviation technology: air strikes, air campaigns, sovereignty and remote killing.  Once we removed those rocks from our rucksack, we were left with ‘fly-by-wireless’ control system technology.  We wrestled with the supposed primary tension of the technology, risk vs. responsibility, which we reject; our proposed alternative is latency vs. performance.  There are three ways to gain improved performance from a remote control system: persistence, survivability and affordability; each of these has strengths and weaknesses in different environments, and they are generally in tension with each other.  There are two ways to solve the remote latency problem: physics, which may involve partnering manned aircraft, and automation, which has problems dealing with complexity.  Ultimately, we argue that the best answers pair manned and remotely piloted aircraft together.  Remote aircraft add tremendous performance to the team, while manned aircraft provide essential situational awareness and judgment to complex combat.

Dave Blair is an active duty officer in the United States Air Force and a PhD student at Georgetown University.