Killing in War: Responsibility, Liability and Lethal Autonomous Robots

Heather M. Roff

Abstract

Arguments over moral responsibility and liability for killing during war typically fall into two camps: either only the guilty may be killed, or all combatants are considered morally equal and thus equally liable to kill and be killed. This debate, though, now faces a new challenge: what if the combatants are not human beings, that is, not the traditional moral agents whom we are accustomed to analyzing? Are moral responsibility and liability for killing during war possible when the combatants are lethal autonomous robots (LARs)? Consider that in both law and ethics, the requirement for consent (that is, making a decision that is voluntary, intentional and informed) is conceptually linked to the prerequisites for acting autonomously. Yet when attempting to understand where responsibility and liability for killing lie with respect to LARs, we are left adrift. For not only are robots externally determined by their programming, but roboticists, and even some ethicists, refuse to engage with the philosophical notion of autonomy when discussing or offering normative judgments about LARs. The first part of this chapter argues that we must engage with the traditional definitions and ideas about autonomy if we are to have any moral guidance when it comes to LARs. When we hand over the decision to target and to fire to a machine, we jeopardize the moral bedrock of just war theory, for we move from “who is responsible?” to “is there any potential for responsibility?” LARs will never be truly autonomous in the philosophical sense of the word, and because of this, the degree to which they can act without human control threatens a cornerstone of just war theory: the moral equality of soldiers and the liability for killing.
The final part counters objections that responsibility for LARs' killing in war automatically lies with the software programmers, politicians and military commanders.

1. Introduction

To operate in complex and uncertain environments, the autonomous system must be able to sense and understand the environment… The perception system must be able to perceive and infer the state of the environment from limited information and be able to assess the intent of other agents in the environment. This understanding is needed to provide future autonomous systems with the flexibility and adaptability for planning and executing missions in a complex, dynamic world.
– United States Department of Defense (DoD), “Unmanned Systems Integrated Roadmap 2011-2036”

The DoD's goal of creating an autonomous system that is able to assess the intent of other agents on the battlefield may not lie in the distant future, as we are already at a critical juncture in the development of autonomous systems. Recently, Georgia Tech researchers fielded two pilotless planes and one driverless vehicle that gathered information about a target, shared and communicated that information with each other, and then located the target. L. G. Weiss, “Autonomous Robots in the Fog of War,” IEEE Spectrum, August 2011, pp. 31-57. Online. Available at HTTP <http://ieeeexplore.ieee.org/ielx5/6/5960134/05960163.pdf?tp=&arnumber=5960163&isnumberi=5960134>. These machines determined their own routes and achieved their goal without human direction. Yet what is to stop us from arming such autonomous and interoperable systems, thereby placing the power of life or death with them? Such a question is not far off base, as all branches of the U.S. military are seeking to do exactly that. We must, then, attempt to locate moral and legal responsibility for killing in war when the warfighter targeting and firing is no longer a human being but an autonomous machine.
While many keep warning that such weapons are “a long way off,” so that any deep philosophical consideration can be shelved for a later date, Georgia Tech's success and the DoD's goals tell a different tale. Indeed, Rear Adm. Matthew Klunder notes that “autonomy” is on the docket for future naval achievements. “Navy says autonomy is key to robotic submarines,” Los Angeles Times, 9 February 2012. And the U.S. Air Force claims that “advances in AI [artificial intelligence] will enable systems to make combat decisions and act within legal and policy constraints without necessarily requiring human input.” United States Air Force, “Unmanned Aircraft Systems Flight Plan 2009-2047,” p. 41. Of course, the USAF also concedes that “authorizing a machine to make lethal combat decisions is contingent upon political and military leaders resolving legal and ethical questions” (ibid., p. 41). Academics heatedly debate whether resolving such questions is even possible. Politicians and military officials merely gesture at such problems without really grappling with the deep issues involved. For instance, the U.S. Congress formed the “Congressional Unmanned Systems Caucus” to “seek fair and equitable solutions to challenges created by UAV [unmanned aerial vehicle] operations in the U.S. National Air Space (NAS).” Yet the Caucus seems to place more weight on “acknowledging” the value of these systems and on developing and producing more of them than on fully engaging with any ethical or legal problems such weapons might create. Congressional Unmanned Systems Caucus, “Mission and Main Goals.” Online. Available at HTTP <http://unmannedsystemscaucus.mckeon.house.gov/about/purpose-mission-goals.shtml>. The future of fully autonomous weapons is nearer than we would like to admit. This chapter seeks to meet the challenge by addressing questions of moral responsibility and legal liability for killing in combat when the warfighter is not a human being but a Lethal Autonomous Robot (LAR).
I argue that we must engage with the traditional definitions and ideas about autonomy if we are to have any moral guidance when it comes to LARs, especially if we view LARs not simply as weapons but as a class of combatants. Furthermore, I contend that when we hand over the decision to target and to fire to a machine, we jeopardize a moral bedrock of just war theory, for we move from the central question of “who is responsible?” to “is there any potential for responsibility?” I suggest that LARs will never be truly autonomous in the philosophical sense of the word, and because of this, the degree to which they can act without human control threatens to undermine the moral equality of soldiers, thus placing any hope of accountability or responsibility for killing on jus ad bellum considerations. Finally, I counter objections that responsibility and liability for LARs automatically lie with the software programmers, politicians and military commanders.

2. Autonomy vs. Autonomous

The possibility of moral responsibility derives from a capacity for freedom. That is, acts that agents undertake must be voluntary for those acts to be imputable. As Kant reminds us, “imputation (imputatio) in the moral sense is the judgment by which someone is regarded as the author (causa libera) of an action.” I. Kant, The Metaphysics of Morals, M. Gregor (ed.), Cambridge: Cambridge University Press, 2006, p. 19. To be the author of an action means that no other cause (such as the will or act of another person, or a physical hindrance) can be attributed to it. When an agent decides to undertake an action, she exercises her will. That is, she exercises her “faculty of choosing.” I. Kant, The Grounding for the Metaphysics of Morals, J. Ellington (trans.), Hackett Publishing Co., 1993, p. 23. Only when one exercises this faculty, free of any other determining force, can we call that act “free.” The notion of acting free from any other determining force is for Kant crucial.
The only ground that is allowed to move us to act is the moral law, or duty. Other considerations, such as hunger, thirst, desire, glory or fear, are not sufficient for the act to be considered “free” and thus moral. This freedom, in the philosophical (moral) sense, is called “autonomy.” Autonomy of the will is the property that the will has of being a law to itself (independently of any property of the objects of volition). The principle of autonomy is this: “Always choose in such a way that in the same volition the maxims of the choice are at the same time present as universal law. […] [T]he above principle of autonomy is the sole principle of morals” (Kant, The Grounding for the Metaphysics of Morals, pp. 44-45). The idea of willing a maxim universally is one formulation of Kant's categorical imperative. To will a maxim universally means to will that all other agents in the world also will this same maxim of action. If one's action would be frustrated by everyone doing the same thing, then the act (and the maxim) is considered morally wrong. Autonomy is the principle of morality because through it we require agents to do or forbear from particular actions. We can require this of agents by way of a reciprocal recognition of one another's freedom. For instance, I understand that I have the capacity to choose to do X, and I see that Jane, like me, also has projects she would like to pursue, so I reason that she too has the capacity to choose. Waldron attempts to understand the difference between the modern notions of “personal autonomy,” that is, merely choosing projects for oneself regardless of the content of said projects, and “moral autonomy,” or choosing personal projects in connection with a conception of “the good.” His argument finds that Kant's theory of autonomy, and indeed even more contemporary liberal theories of autonomy, cannot keep both forms of autonomy separate.
The discussion is rather detailed on both the Kantian and the contemporary accounts, but Waldron's conclusion is that the capacity for choice, along with the capacities for self-reflection and consciousness and the recognition that one is acting towards some conception of “the good,” is bound up in both accounts. Contemporary liberal theorists thus draw on Kantian premises, and these theorists cannot maintain a strict logical division between the two accounts. Thus for our purposes in this essay, we can rely on Kant's account without failing to acknowledge contemporary liberal accounts such as those of Raz, Rawls or Dworkin. Cf. J. Waldron, “Moral Autonomy and Personal Autonomy,” in J. Christman and J. Anderson (eds), Autonomy and the Challenges to Liberalism, Cambridge: Cambridge University Press, 2005, pp. 308-314. I recognize that all other agents with this capacity are like me: they do not want their freedom or choices hindered. This brief example highlights Kant's “fact of reason.” I cannot here enter into the debate about the coherence and use of Kant's fact of reason, as it is outside the scope of this paper. Cf. I. Kant, Critique of Practical Reason, M. Gregor (ed.), Cambridge: Cambridge University Press, 2010, pp. 28-29. From this basic tenet, we can derive principles of morality to which we can then attach moral evaluations of praise or blame. The point is the same: for any action to be imputed to me (with the attending moral evaluations), that action must be freely undertaken. Moreover, by freely choosing an action, I am intending that act. It is not merely accidental.

Robotic “autonomy” is, however, understood differently. Some view robotic autonomy as “the capacity to operate in the real-world environment without any form of external control.” P. Lin, G. Bekey and K. Abney, “Autonomous Military Robotics: Risk, Ethics, Design,” US Department of the Navy, Office of Naval Research, 2008, p. 4; R. Arkin, “The Ethical Case for Unmanned Systems,” Journal of Military Ethics, 9:4, 2010, pp. 332-341; C. Allen, G. Varner and J. Zinser, “Prolegomena to Any Future Artificial Moral Agent,” Journal of Experimental and Theoretical Artificial Intelligence, 12, 2000, pp. 251-261. This definition is rather broad and opens the door to differing levels of autonomous action, such as a plane taking off and landing on its own, or elevators and trains operating without human conductors. This definition, though, misses more nuanced levels of uncontrolled action. Others therefore distinguish between “automatic” systems and “autonomous” ones. Automatic systems, like the train mentioned above, “are fully preprogrammed and act repeatedly and independently of external influence or control,” and can be “self-steering” or “self-regulating,” but cannot “define” or “dictate” their own paths. U.S. Department of Defense, “Unmanned Systems Integrated Roadmap FY 2011-2036,” 2011. Online. Available at HTTP <http://www.aviationweek.com/media/pdf/UnmannedHorizons/usroadmap2011.pdf>. “Autonomous” machines, on the other hand, are “self-directed toward a goal in that they do not require outside control,” and are “able to make a decision on a set of rules and/or limitations” based on information that the machine deems important to the decision process. Ibid., p. 43. Making such decisions about rules, however, is accomplished by programming algorithms into software, whereby the machine can “learn” and thus “choose” which means to use, or perhaps which goals to pursue, depending on the programming platform and approach and on a series of information inputs over repeated interactions. There are, for instance, “top-down,” “bottom-up” and “mixed” approaches to software programming and learning. Top-down approaches provide a set of rules that the machine cannot violate, while bottom-up approaches provide no such rules but allow the machine to learn through experience.
A mixed approach would allow the machine to learn, but when the machine is confronted with a particularly defined case it would either permit or forbid certain actions. Cf. W. Wallach and C. Allen, Moral Machines: Teaching Robots Right from Wrong, Oxford University Press, 2009, pp. 83-124. With an autonomous machine, then, the machine is merely finding a route that it deems correct, given its programming structures and experience. Machine autonomy is therefore fundamentally different from philosophical or moral autonomy. Machine autonomy is merely the ability to act in the world without someone or something immediately directing that action. Some do not hold this view, and instead regard machine autonomy as an open question, or hold that there might be precursors available to programmers for developing machines with intentionality and perhaps consciousness. James Moor, for instance, believes that in the future it might be possible to program machines to be virtuous agents, and thus to evaluate their actions in moral terms. Presently, however, we can still evaluate or assess the actions of machines based on how well they perform their functions. The notion of programming a virtuous robot is also taken up by Lin, Abney and Bekey in their 2008 report. Others, like Luciano Floridi, believe that one can create artificial machine agents that can independently attach semantic content to symbols. This, one might argue, seems like a precursor to some form of intentionality and perhaps consciousness, as the ability to attach semantic content to symbols allows machines to learn and communicate. It is not my intention to enter that debate here; for our purposes we can respond to both sides rather briefly. For Moor, the project of creating or programming virtuous robots appears to be self-undermining: programming a machine to “act virtuously” undermines the very notion of acting virtuously.
One must choose to act a certain way – in Aristotle's terms, in the right way, at the right time, with the right disposition. Programming a machine to do this vitiates the act's moral worth. Second, in response to Floridi: while the ability to create machines that learn and communicate by interacting with their environment is quite novel, that does not mean such machines are autonomous in the moral sense either. Other types of beings, such as animals, interact with and learn from their environment. Moreover, they develop ways of communicating with each other shaped by evolutionary pressures, but we would not want to call them morally autonomous. Cf. J. H. Moor, “The Nature, Importance, and Difficulty of Machine Ethics,” IEEE Intelligent Systems, 21:4, 2006, pp. 18-21; L. Floridi, The Philosophy of Information, Oxford: Oxford University Press, 2011. There is no discussion of intent, of consciousness, or of the type of freedom that admits of moral operators of praise or blame. The robotic notion of autonomy is radically minimalist, as it removes ethical evaluation by definitional fiat.

Yet the ethical regulation of warfare is premised on the fact that the agents doing the fighting are moral agents, i.e. agents to whom responsibility for actions can be attributed. Principles of jus in bello (justice in war) are not merely about whether a soldier can distinguish a combatant from a noncombatant (the principle of distinction) or calculate whether the destruction and suffering imposed by a particular strategy are proportional to the military goal to be achieved (the principle of proportionality). Rather, these principles are normative prescriptions about how a soldier ought to fight, and about how we can praise or blame (or punish) him when he fails to abide by them. If we cannot hold soldiers morally accountable because they lack the very capacity for moral agency, then just war theory is threatened with what Anthony Beavers calls “ethical nihilism.” A. Beavers, “Moral Machines and the Threat of Ethical Nihilism,” in P. Lin, K. Abney and G. A. Bekey (eds), Robot Ethics: The Ethical and Social Implications of Robotics, MIT Press, 2012, pp. 333-344. This is because even if LARs are able to perform better than human beings in warfare, their actions cannot be morally evaluated, and we must look for another locus of responsibility. R. Sparrow, “Robotic Weapons and the Future of War,” in P. Tripodi and J. Wolfendale (eds), New Wars and New Soldiers: Military Ethics in the Contemporary World, Aldershot: Ashgate Press, 2012, p. 123.

3. LARs: Responsibility and Liability for Killing in War

Granting that LARs are not moral agents because they are not autonomous in the moral sense presents us with a variety of problems. Foremost among them is where to place responsibility when the combatants are no longer human beings but machines. Traditional notions of responsibility for killing in war follow the logical division of jus ad bellum and jus in bello. Jus ad bellum stipulates the conditions for a war to be considered just (such as self-defense or defense of others), whereas jus in bello dictates the principles of just conduct during warfare (such as noncombatant immunity and proportionality). These principles are typically considered logically distinct because the way one prosecutes a war can be evaluated separately from the decision to wage it. One can go to war for the right reasons and still manage to violate the rights of others while doing so. Soldiers, therefore, are evaluated not on the decision to go to war, as that is beyond their purview, but on how they fight. Customarily, soldiers are held responsible for failing to uphold the principles of jus in bello, either by intentionally flouting them or by obeying unlawful commands. This idea is bound up with the principle of the moral equality of soldiers (MES).
MES holds that soldiers are “morally equal.” That is, we cannot attribute responsibility for the war to them, as they are innocent of that decision; we can hold them responsible only for their actions during war. We cannot blame soldiers for fighting for their country, even if they fight on an unjust side, as they are “forced to fight” for various reasons (such as duress or ignorance). Nevertheless, while ordinary soldiers on both sides ought to be considered morally equal in their blamelessness, we still hold that they are equally open to attack by virtue of their status as an active threat. Soldiers are combatants, and are considered a “dangerous class” because they are “trained to fight, provided with weapons, [and] required to fight on command.” M. Walzer, Just and Unjust Wars: A Moral Argument with Historical Illustrations, 3rd edn, Basic Books, 2000, p. 144. Noncombatants, such as civilians, are not considered dangerous and so cannot be targeted. There is of course the objection that the Doctrine of Double Effect (DDE) permits civilians to be killed when their deaths are foreseeable but unintended effects of targeting a legitimate military target. DDE is a conceptual minefield that I cannot enter into here. At work here is not the notion of “innocence” or “guilt” but active threat. Civilians can certainly be considered “guilty” of supporting an unjust war, but that fact does not make them liable to be killed. Recently, the validity of MES has come under close scrutiny. Jeff McMahan holds the principle to be false, arguing that ordinary soldiers on the unjust side of a conflict are in fact responsible for the decision to go to war, and that there can be no division between jus ad bellum and jus in bello. Ordinary soldiers make conscious decisions to sign up for military service, or they allow themselves to be conscripted, and so find themselves liable to be killed on the basis of this decision.
What all of this amounts to is that if one holds MES to be false, then one can target combatants and noncombatants based on some notion of their responsibility for the decision to wage war or for its prosecution. Soldiers, and possibly some civilians, on the unjust side are liable to be killed due to their decision to wage or support an unjust war. The result is that soldiers and civilians on the just side of a conflict are not liable to attack, as they are morally innocent (or, in McMahan's terms, “nonresponsible”). McMahan does argue that targeting civilians is not very effective for prosecuting war, and so civilians should typically not be liable to be killed. While I cannot here offer a lengthy retort, this argument seems to me false. Given McMahan's principles, civilian support for the war seems no different from an ordinary soldier's decision to support the war or allow himself to be conscripted. McMahan's conclusion that civilians are not liable is based not on the same principles of moral responsibility but rather on the observation that, “unlike unjust combatants, civilians do not generally pose a threat of wrongful harm.” J. McMahan, Killing in War, Oxford: Oxford University Press, 2009, p. 225. This stands in sharp contradiction to McMahan's earlier statement that he has “explicitly rejected the view that posing a threat is the basis of liability to attack in war” (ibid., p. 34). Thus McMahan seems to be playing with two different sets of rules for combatants and noncombatants in order to reach his conclusions. Rodin comes to the opposite conclusion in his reading of MES. Rodin finds that the falsity of MES does not entitle one to target civilians or claim increased war rights. Rather, he argues that though MES is false, what follow are further restrictions on the conduct of war: greater care must be taken to avoid targeting civilians. D. Rodin, “The Moral Inequality of Soldiers,” in D. Rodin and H. Shue (eds), Just and Unjust Warriors, Oxford University Press, 2008, pp. 44-68.

Yet what does MES have to do with LARs? Whether one holds MES to be conceptually true or merely true for practical purposes, the doctrine is functionally necessary. Walzer believes that MES is conceptually true, and Mapel has argued that both sides are equally made into “attackers.” McMahan and Rodin argue that MES is a conceptual fiction, but practically necessary for the laws of war to function. Cf. Walzer, Just and Unjust Wars; D. R. Mapel, “Coerced Moral Agents? Individual Responsibility for Military Service,” Journal of Political Philosophy, 6:2, 1998, pp. 171-189; McMahan, Killing in War, p. 108; McMahan, “The Ethics of Killing in War,” Ethics, 114:4, 2004, p. 703; Rodin, “The Moral Inequality of Soldiers.” Soldiers on both sides of a conflict are deemed moral agents; that is, they are in some manner free enough to be held accountable for their actions. Whether we hold them accountable for the decision to fight does not simultaneously affect whether we hold them accountable for upholding principles of jus in bello or the current laws of armed conflict. MES posits that soldiers are capable of making decisions and reflecting on their own autonomy, and thus we can hold them responsible only for those actions they choose to undertake. LARs, on the other hand, lack this capacity for moral autonomy, and thus for responsibility. The only way one could even begin to model the features of a moral agent would be to create an artificially intelligent machine. Artificial intelligence (AI) in this case would have to be “strong”: the machine would have to match and possibly exceed human intelligence. We might also require it to have intentions, the capacity for reflection, and “consciousness.” To program a machine to have such capacities, computer scientists attempt a variety of approaches.
Krishnan identifies nine different programming approaches: top-down programming, bottom-up programming, expert systems, Cyc, neural networks, genetic algorithms, the autonomous agent approach, nouvelle AI, and evolutionary robotics. Cf. A. Krishnan, Killer Robots, Aldershot: Ashgate Press, 2009, pp. 46-53. Andreas Matthias focuses on five different types of AI programming: symbolic systems, connectionism and neural networks, reinforcement learning, genetic algorithms, and genetic programming. A. Matthias, “The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata,” Ethics and Information Technology, 6, 2004, pp. 175-183. The current consensus, though, is that a level of strong AI is to be achieved by systems that “learn” either by example, by trial and error, or by combining these mechanisms and “evolving.” Strong AIs in this sense tend to learn like human beings – through different patterns and experiences – and, like human beings, their actions cannot be fully controlled or predicted. Andreas Matthias calls this effect the “responsibility gap.” Ibid. The responsibility gap is the result of creating learning machines. Programmers start off as coders, in which role the programmer has the capacity to control “the machine in every single detail.” Ibid., p. 181. In AI systems, however, the use of symbolic logic, neural networks, or reinforcement learning obscures the programmer's ability to follow or control the flow of information within the system. Thus autonomous systems, like the kind the DoD aspires to create, “deprive [the programmer] of the spatial link between him and his product,” as the “agent [in this case a LAR] acts outside the observation horizon of its creator, who in the case of a fault might be unable to intervene manually” to fix the problem. Ibid., p. 182. Santoro, Marino and Tamburrini argue that the “responsibility gap” is not actually a problem with regard to autonomous learning systems.
They claim that legal frameworks designed around liability can distribute responsibility and costs to the programmers and manufacturers of these systems. Unfortunately, their argument does little to address moral responsibility for these systems. Thus Matthias's concern that effective control is necessary for the attribution of responsibility gains more ground when viewed from a moral, rather than a legal, standpoint. Cf. M. Santoro, D. Marino and G. Tamburrini, “Learning Robots Interacting with Humans: From Epistemic Risk to Responsibility,” AI & Society, 22, 2008, pp. 309-311. This loss of control is the expected result of this type of system; it is how the machine becomes autonomous. To return to MES: to create a fully autonomous weapon that has the capacity to learn, evolve, match human intelligence and “assess the intent of other agents in the environment” is to create a machine that can act upon the world but does not act as a moral agent within it. Modeling the ability of a machine to act intelligently does not make that machine a moral agent. While the machine has “learned,” the underlying structure of the machine is still, in Kant's terms, “determined.” And while the programmer has lost the ability to control the machine, that does not change the machine's moral status. We face, therefore, two perverse outcomes from creating a class of soldiers who are nonmoral agents. First, we cannot morally evaluate actions undertaken by them during war. This outcome follows Beavers' charge of “ethical nihilism.” We might be able to say that a particular LAR adequately discriminated between combatants and noncombatants, but that holds no moral weight. The LAR cannot be praised or blamed for doing so, for attaching moral operators to such an evaluation would be like blaming a robotic pool cleaner because it failed to stay underwater and instead started sweeping the backyard.
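Matthias's “responsibility gap” can be made concrete with a toy sketch. In the following Python fragment (the action names and reward values are hypothetical, and the learner is a deliberately minimal stand-in for the systems discussed above, not a model of any real weapon), the programmer authors only a learning rule; which behavior the agent settles on emerges from its particular stream of noisy experience:

```python
import random

# Hypothetical action names, chosen only for illustration.
ACTIONS = ["A", "B"]

def learn_policy(seed, episodes=500):
    """Tiny epsilon-greedy learner.

    The programmer writes only this update rule (the "bottom-up"
    learning of the chapter's taxonomy); the preference the agent
    ends up with depends on its history of experience, not on any
    single line of code.
    """
    rng = random.Random(seed)

    def reward(action):
        # Two actions with similar, noisy payoffs (hypothetical values).
        return rng.gauss(1.0 if action == "A" else 0.95, 1.0)

    value = {a: 0.0 for a in ACTIONS}
    count = {a: 0 for a in ACTIONS}
    for _ in range(episodes):
        # Mostly exploit the current estimate, sometimes explore.
        if rng.random() < 0.1:
            action = rng.choice(ACTIONS)
        else:
            action = max(value, key=value.get)
        r = reward(action)
        count[action] += 1
        # Incremental mean update: this line is the entire "program".
        value[action] += (r - value[action]) / count[action]
    return max(value, key=value.get)

# Identical code, different histories of experience.
policies = {seed: learn_policy(seed) for seed in range(5)}
print(policies)
```

Running the sketch with several seeds may yield different learned preferences from the very same source code, which is the sense in which the programmer “loses” fine-grained control over the system's behavior even while remaining the author of every line.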
Since LARs are not moral agents, by definition there can be no moral equality between soldiers on a battlefield (unless both sides employ LARs, in which case we might say there is a moral equality of soldiers in that both lack moral status). One result of this conclusion is that if we hold either the conceptual or the practical arguments for MES to be true, then soldiers have a right of self-defense against their counterparts. However, if MES is not even possible because one side is not a moral agent, then justifications for self-defense seem to fall apart. A lethal autonomous robot is not defending a “self,” and so lethal force cannot be justified on those grounds. The only apparent justification seems to be one of “collective” self-defense of a nation. However, this line of reasoning is fraught with its own problems. Cf. D. Rodin, War and Self-Defense, Oxford: Oxford University Press, 2002. Second, and more importantly, if we cannot attribute moral or legal responsibility to the agents on the battlefield, then we must step back from jus in bello considerations and return to jus ad bellum ones. In other words, if we cannot hold the warfighter responsible, we must look instead to the manufacturers and programmers, the military, political leaders, or perhaps even the civilian population. Such a conclusion might not seem to present us with a problem. We hold domestic populations and political and military leaders morally, and sometimes legally, responsible for wars all the time. Indeed, we sanction leaders (through economic or legal means), tax populations, and try military leaders in domestic or international courts. We certainly praise or blame them for wars. However, as we will see in the next section, holding such people accountable for the actions of LARs is not so straightforward, and it actually confronts us with the possibility that holding anyone responsible may be impossible.

4. Responsibility and Liability

If the combatants in war are incapable of moral responsibility, then we must decide who bears the responsibility for their actions. I will try to answer the question of who is morally as well as legally responsible by looking to just war theory and contemporary law, particularly through the legal construct of vicarious liability. Few authors engage with the question of who is morally responsible for LARs (Sparrow being one); however, several attempt to delineate the legal responsibility of those who deploy such weapons. For a good explanation of the differing positions and the ways in which liability is traditionally attributed, as well as a canvass of the most recent literature, cf. U. Pagallo, “Robots of Just War: A Legal Perspective,” Philosophy and Technology, 24, 2011, pp. 307-323. Vicarious liability arises when “contributory fault, or some element of it[,] is ascribed to one party, but liability is ascribed to a different party.” J. Feinberg, “Collective Responsibility,” Journal of Philosophy, 65:21, 1968, p. 675. Thus while LARs may do the acting, and thus be the “cause” of harm, some other party will bear responsibility for the act. Much of the discussion about assigning vicarious responsibility for military robots focuses on software programmers and military commanders, but if we are truly to identify responsible parties, then we must also look to jus ad bellum considerations, such as the role that political leaders and civilians play.

4.1 Software Programmers

Scholars typically note that even though LARs may achieve “autonomy,” they will not be capable of anything more than “quasi-agent” status. That is, they will “enjoy only partial rights and duties.” P. Lin, G. Bekey and K. Abney, “Autonomous Military Robotics: Risk, Ethics, Design,” p. 55. While granting LARs “quasi-agent” status seems to assume their moral and legal standing already, many point to the first step in the process: the programmer.
Only moral agents have rights and duties, even partial ones, and they hold those rights against, and owe those duties to, other moral agents. As the programmer, or manufacturer, creates the system, responsibility for creating the machine and liability for negligence might be placed with him. Legal liability in this sense can be the result of a failure to take proper care, to avoid foreseeable risks or to warn. P. Lin, G. Bekey, K. Abney, "Autonomous Military Robotics: Risk, Ethics, Design", pp. 56-58. In other words, the programmer ought to take due care while creating a lethal machine, and if he has not, then the machine's actions are attributable (in whole or in part) to him. Yet in the case of LARs, does the argument for programmers' responsibility actually work? The answer is a resounding "no." In the case of vicarious liability, there must exist a "special relationship" between the two parties, and in the case of LARs, there is no legally (or morally) relevant fact that situates the robot and its programmer in such a relationship. First, we cannot attribute vicarious responsibility by using a parent/child analogy, for two reasons. One, the programmer is not considered the "guardian" of the robot. Two, and more importantly, the notion of parental liability can only apply when wrongful actions are the result of a parent's negligent failure to control the acts of her child. The idea is that a parent must have a sufficient degree of control over the acts of her child. Yet as Matthias has pointed out, the programming structures that enable learning in autonomous systems create a "responsibility gap" in which the programmers do not have control over the system. And as Sparrow notes, holding programmers responsible for the acts of robots they cannot control would be like "holding parents responsible for the actions of their children once they have left their care". R. Sparrow, "Killer Robots" Journal of Applied Philosophy, 24:1, 2007, p. 70. 
One might counter that we can still use the doctrine of Respondeat Superior to hold programmers responsible, because this account does not presuppose control. Lokhorst and van den Hoven explicitly invoke this route. G-J. Lokhorst and J. van den Hoven, "Responsibility for Military Robots" in P. Lin, K. Abney and G. A. Bekey (eds.) Robot Ethics: The Ethical and Social Implications of Robotics, MIT Press, 2012, p. 151. In this relationship, an employer is held responsible for acts committed by an employee in the course of employment. F. B. Sayre, "Criminal Responsibility for the Acts of Another" Harvard Law Review, 43:5, 1930, p. 693. The employer is held liable for the employee's actions even if there is no causal contribution or any direct order from the employer. Unfortunately, this too fails as a basis of liability for LARs. The modern notion of Respondeat Superior is not that of a master and slave, where consent plays no part, but of a particular relationship in which both parties consent to carry out certain roles and perform particular duties. The role of consent here is crucial. Even though the employer may not order or condone the action undertaken by the employee, or even be thought to control the acts of the employee, through a mutual agreement both parties undertook particular role responsibilities, and it is only when the employee acts within the scope of his employment that the employer is held liable. In the case of LARs, there is no mutual contract or agreement, as LARs cannot be said to 'consent.' Moreover, in this instance the sticky conceptual difference between tort and criminal law rears its head. Vicarious responsibility is typically a civil, not a criminal, matter. But in the case of LARs, violations of jus in bello would not be considered tortious acts but criminal ones. 
If we were to hold a principal liable, we would have to show that, where no causal contribution to the harmful act was made, knowledge of the criminal act plus acquiescence in it would be required to attribute liability to the principal. Ibid., p. 706. This, of course, looks less like vicarious liability, given the epistemic constraints, and more like a doctrine of command responsibility. 4.2 Military Commanders Perhaps we can attribute responsibility to military commanders. Indeed, the military doctrine of command responsibility is broad enough that some might claim that officers can be held morally and legally responsible for the actions of their subordinates, where the commander should have known what would happen. In re Yamashita, 327 U.S. 1 (1946); Case No. 72: German High Command Trial: Trial of Wilhelm Von Leeb and Thirteen Others, 12 U.N. War Crimes Comm'n, Law Reports of the Trials of War Criminals 1 (1949). Yamashita held that, though no direct evidence linked General Yamashita affirmatively to his subordinates' crimes, he failed in his duties to control, prevent or punish his subordinates. Here one could rely on a framework similar to due care: by failing to take due care, or to foresee possible outcomes, one could be held responsible. Lin, Abney and Bekey take this route. They claim that an autonomous robot, though not a "Kantian-autonomous-robot" (a machine with moral autonomy), would be programmed such that it would be incapable of choosing its own ends, and thus of violating the rules of war, and so any responsibility for the robot's malfunction should be placed with military commanders. This response is unsatisfactory, though, as their argument appears to contemplate only the programming of unlawful orders, and not a rewriting of software by the machine itself. They seem to dismiss this possibility, though it is in fact not only possible but probable given autonomous systems. In the case of malfunction, they claim, it is a product liability issue. Cf. 
Lin, Bekey and Abney, "Autonomous Military Robotics: Risk, Ethics, Design", p. 66. Unfortunately, this is not the case. Contemporary usage and prosecutions based on this doctrine require more than the vague epistemic "should have known." Modern case law requires the "effective control" of subordinates by superiors. ICTR Statute Article 6(3): "The fact that any of the acts referred to in articles 2 to 4 of the present Statute was committed by a subordinate does not relieve his or her superior of criminal responsibility if he or she knew or had reason to know that the subordinate was about to commit such acts or had done so and the superior failed to take the necessary and reasonable measures to prevent such acts or to punish the perpetrators thereof." Online. Available at HTTP <http://www.un.org/ictr/statute.html>. Similarly, ICTY Statute Article 7(3): "The fact that any of the acts referred to in articles 2 to 5 of the present Statute was committed by a subordinate does not relieve his superior of criminal responsibility if he knew or had reason to know that the subordinate was about to commit such acts or had done so and the superior failed to take the necessary and reasonable measures to prevent such acts or to punish the perpetrators thereof." Online. Available at HTTP <http://www.icty.org/x/file/Legal%20Library/Statute/statute_sept08_en.pdf>. Command responsibility is premised on the fact that there is a superior-subordinate relationship, and the "test to determine whether a person is a superior is […] one of 'effective control'". S. Boelaert-Suominen, "Prosecuting Superiors for Crimes Committed by Subordinates: A Discussion of the First Significant Case Law Since the Second World War" Virginia Journal of International Law, 41:4, 2001, p. 762. Superiors must have the "material ability to prevent and punish the commission of these offenses". Ibid., footnote 71. 
There need not be formal documentation, rigid formalized hierarchies or direct orders to hold "superiors" to account, merely the ability to halt, prevent and punish a subordinate's action. Since effective control is the primary criterion, it also serves to exculpate superiors from prosecution, where a "person who is formally a superior in the line of command may be excluded from criminal liability if that superior does not exercise actual control". Ibid., p. 765. Looking to the case of LARs, the effective control criterion would actually exculpate leaders from legal responsibility because of the commanders' inability to control the machines. An autonomous machine is "impossible" to control "by a human in real-time due to its processing speed and the multitude of operational variables involved". Matthias, "The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata", p. 183. Thus if control is an impossibility, then there is no way that a commander could "prevent or punish" a violation of jus in bello by machines. There can be no prevention because there can be no foresight as to what the machine will actually do, and there can be no "punishment" because punishment presupposes moral personhood and the capacity to suffer. Sparrow, "Killer Robots," pp. 71-73. Lokhorst and van den Hoven object to Sparrow's claim that punishment is necessary for responsibility. They charge that Sparrow's argument that suffering is necessary for punishment is overstated. Robots could, they think, suffer (given strong AI), and even if they could not suffer, punishment is not the only effective means of behavior modification. Other alternatives exist, and so could equally be employed. While Lokhorst and van den Hoven's objections are noted, they miss the mark. Punishment is still about moral responsibility for a wrong imposed on another. 
If one is not a moral agent, then one cannot be punished – regardless of the question of suffering – otherwise any 'harm' done is mere harm, and does not comport with the concept of punishment. Cf. Lokhorst and van den Hoven, "Responsibility for Military Robots", pp. 148-150. Indeed, even Lin, Abney and Bekey concede that punishing a robot makes little sense. P. Lin, G. Bekey, K. Abney, "Autonomous Military Robotics: Risk, Ethics, Design", p. 60. Asaro, too, notes that punishment of autonomous systems seems doubtful. P. M. Asaro, "A Body to Kick, but Still no Soul to Damn: Legal Perspectives on Robotics" in P. Lin, K. Abney and G. A. Bekey (eds.) Robot Ethics: The Ethical and Social Implications of Robotics, MIT Press, 2012, p. 181. Punishment is an act done to moral agents who violate a right, and punishment is not about harm but guilt. I can harm someone without wronging him, and I can equally suffer harm without being wronged or punished. Moral responsibility for the deployment of LARs is also a difficult issue, and if we take any cues from the legal discussion, it appears that military commanders might not be held morally responsible for the acts of the machines either. We might want to attribute some responsibility to them, as surely they decided when and where to deploy the machines through their strategies and tactics, but this level of moral responsibility is rather low. Perhaps they were ordered to deploy them (and thus were under some sort of command or duress), or, if they had no foresight into what the machine would do or believed that the machine would really act in accordance with the laws of war, then we would claim that their moral responsibility is mitigated by epistemic constraints. They are excused in some way. 
4.3 Civilians and Politicians If we cannot hold software programmers morally or legally responsible, and we might have difficulty assigning legal liability and moral responsibility to military commanders, then perhaps we should look to those individuals who make the decisions for the development, deployment and use of these weapons. In other words, perhaps we should look to the politicians who issue commands to the military and procure such weapons, as well as to those individuals who place the politicians in their offices. Civilians, after all, vote for these leaders, and they also pay taxes whereby the government has the material means to fund wars and procure weaponry. Thus we have two avenues left for ascribing responsibility for acts committed by LARs: collective responsibility of the civilian population and individual role responsibility of political leaders. Civilian responsibility for the acts committed by LARs, though, is a tough road to travel. Typically, when any discussion of civilian responsibility enters just war theory, it is with regard to the decision to wage aggressive war, not the means used in war (just or otherwise). This is not to say we could not use this framework. In discussions about holding civilians responsible, the principle tends to follow Walzer's line of reasoning: "The greater the possibility of free action in the command sphere, the greater the degree of guilt for evil deeds done in the name of everyone". Walzer, Just and Unjust Wars, p. 298. But given that even in democratic societies such free action is either impossible or quite inefficacious, citizens have little to no effect on foreign policy decisions, as the recent decision by President Obama to assassinate Anwar al-Awlaki shows. One might counter that in free democratic societies, citizens ultimately vote for political officials, and so their decision to place such officials in office makes them morally responsible for the acts committed by those officials. 
Yet this too is a bitter pill to swallow. Voting is not an indication of anything more than mere preference. It does not obligate one to obey the laws of the land, nor does it mean that one undertakes the moral responsibility for another party's actions. Simmons notes that voting is merely a sign of preference or approval, but does not obligate a citizen to obey the laws. When one votes this is not an act of consent, as consent requires that an act be voluntary, intentional and informed; and acts of tacit or implied consent require that the situation one is presented with is clearly one where consent is appropriate, that there is a definite period of time when objections or dissent might be noted, and that there is a specified period of time after which dissent is no longer acceptable. Cf. A. J. Simmons, Moral Principles and Political Obligation, Princeton University Press, 1979, pp. 75-93. One might have no indication at the time of casting one's ballot that a political official will deploy autonomous weapons in warfare, or any indication that through the various avenues of defense spending one's tax dollars will end up buying this weaponry. Moreover, the time-honored excuse of non-culpable ignorance may also exculpate civilians from any sort of responsibility. They had no idea that the government was using this type of technology; or even if they did know that such weapons existed and were used, they were not informed of the consequences of such action; or perhaps they were intentionally misled by the government. In any case, holding civilians morally responsible for the actions of LARs is fraught with difficulty. We might only be able to say that select civilians are morally responsible, that is, those who lobby for the use of such weapons, have an effect on policy outcomes, and publicly support the policies. 
Yet even here there is little we could do other than express our disapprobation, and the degree of causal contribution is so low that we could not even hope to hold these individuals legally liable. Politicians, however, do seem the only solid locus of responsibility for the actions of LARs. Foreign policy elites, heads of state, and the like are the only ones truly making the decisions to deploy such weapons. They are the "source rather than the recipient of superior orders." Walzer, Just and Unjust Wars, p. 291. They are assumed to have all of the available knowledge, much more than the ordinary citizen, and they issue orders to military commanders. Thus we would assume that political leaders are aware of the uncontrollability of LARs, and so a decision to field them makes them morally and legally responsible for the machines' actions. Unfortunately, such attribution of responsibility is tenuous. We might not be able to hold them legally liable given the current legal constraints on command responsibility and the lack of any international governance structures on autonomous weapons. We can use moral operators of praise or blame, but absent a shift in the requirement for effective control of subordinates and the creation of international arms regulation for autonomous weapons, legal responsibility will remain unachievable. 5. Conclusion The deployment of LARs in combat presents us with an unprecedented challenge to just war theory. First, it divorces jus in bello judgments of responsibility from the behavior of combatants, as the combatants are no longer moral agents possessed of moral standing. By doing so, it forces any evaluations of responsibility back to jus ad bellum considerations of who decides to initiate war and to use LARs in combat. 
Instead of deciding whether political officials started an aggressive war, and thus can be charged with a crime of aggression, we must now discern whether those officials can be held morally and legally responsible for the conduct of autonomous machines. Moreover, because "effective control" is a criterion for legal responsibility in cases of command responsibility and some instances of vicarious liability, the creation and use of LARs leaves us with the perverse outcome that no one can be held legally responsible for their actions. The landscape of just war has changed, and it is no longer true that "soldiers can never be transformed into mere instruments of war," and that the "trigger is always part of the gun, not part of the man". Ibid., p. 311. In the case of LARs the exact opposite holds: a mere instrument of war is now a soldier, and the trigger is part and parcel of it. References Allen, C., G. Varner, and J. Zinser. "Prolegomena to Any Future Artificial Moral Agent" Journal of Experimental and Theoretical Artificial Intelligence, 12, 2000, pp. 251-261. Arkin, R. "The Ethical Case for Unmanned Systems" Journal of Military Ethics, 9:4, 2010, pp. 332-341. Asaro, P. M. "A Body to Kick, but Still no Soul to Damn: Legal Perspectives on Robotics" in P. Lin, K. Abney and G. A. Bekey (eds.) Robot Ethics: The Ethical and Social Implications of Robotics, MIT Press, 2012, pp. 169-186. Beavers, A. "Moral Machines and the Threat of Ethical Nihilism" in P. Lin, K. Abney and G. A. Bekey (eds.) Robot Ethics: The Ethical and Social Implications of Robotics, MIT Press, 2012, pp. 333-344. Boelaert-Suominen, S. "Prosecuting Superiors for Crimes Committed by Subordinates: A Discussion of the First Significant Case Law Since the Second World War" Virginia Journal of International Law, 41:4, 2001, pp. 747-785. Congressional Unmanned Systems Caucus "Mission and Main Goals" Online. 
Available at HTTP <http://unmannedsystemscaucus.mckeon.house.gov/about/purpose-mission-goals.shtml>. Feinberg, J. "Collective Responsibility" Journal of Philosophy, 65:21, 1968, pp. 674-688. Floridi, L. The Philosophy of Information, Oxford University Press, 2011. Hennigan, W. J. "Navy says autonomy is key to robotic submarines" Los Angeles Times, February 9, 2012. Online. Available at HTTP <http://articles.latimes.com/2012/feb/10/business/la-fi-0210-drone-submarine-20120210>. International Criminal Tribunal for Rwanda: Online. Available at HTTP <http://www.un.org/ictr/statute.html>. International Criminal Tribunal for the Former Yugoslavia: Online. Available at HTTP <http://www.icty.org/x/file/Legal%20Library/Statute/statute_sept08_en.pdf>. Kant, I. Critique of Practical Reason, Mary Gregor (ed.), Cambridge University Press, 2010. -- The Metaphysics of Morals, Mary Gregor (ed.), Cambridge University Press, 2006. -- The Grounding for the Metaphysics of Morals, James Ellington (trans.), Hackett Publishing Co, 1993. Krishnan, A. Killer Robots, Ashgate Press, 2009. Lin, P., G. Bekey and K. Abney. "Autonomous Military Robotics: Risk, Ethics, Design" US Department of the Navy, Office of Naval Research. Lokhorst, G-J and J. van den Hoven. "Responsibility for Military Robots" in P. Lin, K. Abney and G. A. Bekey (eds.) Robot Ethics: The Ethical and Social Implications of Robotics, MIT Press, 2012, pp. 145-156. Mapel, D. R. "Coerced Moral Agents? Individual Responsibility for Military Service" Journal of Political Philosophy, 6:2, 1998, pp. 171-189. Matthias, A. "The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata" Ethics and Information Technology, 6, 2004, pp. 175-183. McMahan, J. Killing in War, Oxford University Press, 2009. -- "The Ethics of Killing in War" Ethics, 114:4, 2004, pp. 693-733. Moor, J. H. "The Nature, Importance, and Difficulty of Machine Ethics" IEEE Intelligent Systems, 21:4, 2006, pp. 18-21. Pagallo, U. 
“Robots of Just War: A Legal Perspective” Philosophy and Technology, 24, 2011, pp. 307-323. Rodin, D. "The Moral Inequality of Soldiers" in D. Rodin and H. Shue (eds.) Just and Unjust Warriors, Oxford University Press, 2008, pp. 44-68. -- War and Self Defense, Oxford University Press, 2002. Santoro, M., D. Marino and G. Tamburrini. "Learning Robots Interacting with Humans: From Epistemic Risk to Responsibility" AI & Society, 22, 2008, pp. 301-314. Sayre, F. B. "Criminal Responsibility for the Acts of Another" Harvard Law Review, 43:5, 1930, pp. 689-723. Simmons, A. J. Moral Principles and Political Obligation, Princeton University Press, 1979. Sparrow, R. "Robotic Weapons and the Future of War" in P. Tripodi and J. Wolfendale (eds.) New Wars and New Soldiers: Military Ethics in the Contemporary World, Aldershot: Ashgate Press, 2011. -- "Killer Robots" Journal of Applied Philosophy, 24:1, 2007, pp. 62-77. U.S. Department of Defense. "Unmanned Systems Integrated Roadmap FY 2011-2036", 2011. Online. Available at HTTP <http://www.aviationweek.com/media/pdf/UnmannedHorizons/usroadmap2011.pdf>. United States Air Force. "Unmanned Aircraft Systems Flight Plan 2009-2047" 2009. Online. Available at HTTP <http://www.govexec.com/pdfs/072309kp1.pdf>. Waldron, J. "Moral Autonomy and Personal Autonomy" in J. Christman and J. Anderson (eds.) Autonomy and the Challenges to Liberalism, Cambridge: Cambridge University Press, 2005, pp. 308-314. Wallach, W. and C. Allen. Moral Machines: Teaching Robots Right from Wrong, Oxford: Oxford University Press, 2009. Walzer, M. Just and Unjust Wars: A Moral Argument with Historical Illustrations, 3d edn, Basic Books, 2000. Weiss, L. G. "Autonomous Robots in the Fog of War" IEEE Spectrum, August, 2011, pp. 31-57. Online. Available at HTTP <http://ieeeexplore.ieee.org/ielx5/6/5960134/05960163.pdf?tp=&arnumber=5960163&isnumberi=5960134>. In re Yamashita, 327 U.S. 1 (1946). Case No. 
72: German High Command Trial: Trial of Wilhelm Von Leeb and Thirteen Others, 12 U.N. War Crimes Comm’n, Law Reports of the Trials of War Criminals 1, 1949.