Lethal autonomous weapons, sometimes referred to as "killer robots", present unique problems for the future of warfighting. Weapons capable of identifying and firing on targets without human control raise moral, legal and operational problems for states and their militaries. Moreover, the use and proliferation of these weapons will likely have adverse effects on international peace and stability. The book examines each of these key areas: morality, law and operational considerations, and argues that the harms of developing and fielding these weapons outweigh the potential benefits. Looking to the current trajectory of weapons systems, the state of artificial intelligence and international relations theory, the book suggests that a preemptive ban on such weapons is the best way forward. Following the current attempt to "ban killer robots" through the United Nations Convention on Conventional Weapons, the monograph provides the most current state of the debate. However, absent a binding international agreement, I suggest that it is incumbent upon private technology companies to forbear from creating lethal autonomous weapons.
This book argues that the duty to protect is best considered a "provisional duty of justice" in what amounts to a state of nature. The debate over how to categorize a duty of intervention as either an imperfect duty of benevolence or a perfect duty of justice is misguided, largely due to a confusion about deontic categories. By first examining the classic Kantian taxonomy of duties, the work argues that Kant’s account is not consistent and that a "provisional" duty must be included in his framework. Next, it applies this reconstructed framework to the problem of R2P and argues that R2P is best considered a provisional duty of justice, that is, a duty conditional on the capacity of individual actors in the international system. To move beyond the provisionality of protection, R2P must be institutionalized. The author argues that duties of justice require juridical institutions for their fulfillment, and thus R2P requires the creation of the requisite executive, legislative and judicial authorities to move beyond its provisional status. Drawing on Kant’s political theory, the book argues that his concept of a "permissive law" authorizes the coercion of states into such an institution. Practically speaking, the United Nations Security Council should be the only agent to undertake the task of such coercion.
“is uninterested in dialogue with educated outsiders representing the subaltern . . . and who [are] unwilling to take his views seriously. A right-wing poster makes the bigot’s point perfectly: ‘It doesn’t matter what this sign says, you’ll call it racist anyway!’” (p. 22). I recently saw a related sign on a shop door responding to the Black Lives Matter movement: “All lives matter! Nuf said.” I assumed that I knew what the sign meant, but my confidence about “knowing” this troubled me. I thought about speaking to the shop owner(s), but did not do that, partly because “Nuf said” signaled that they were “uninterested in dialogue.” Was I any better? These considerations might complicate Bronner’s insight into a “cosmopolitan education” that would cultivate genuine mutual respect across cultures and identities (pp. 181–82). “Any new approach,” he says, “will need to navigate and integrate . . . cultural practices that foster a cosmopolitan sensibility; political action that provides recognition for the disenfranchised and the outsider,” while recognizing how class differences “cut across identity constructions” (p. 186). Here we arrive at a conclusion supported by all three books: Achieving racial justice and a “democratic refounding” in the United States cannot be left to existing political practices. Working toward these goals will require multifaceted cultural politics, along with “self-work” among American citizens, for a solid majority to become more deeply “awakened” to racism and bigotry.
This is a reply to: Finn, Peter D. 2015. “Franz Jägerstätter as social critic.” Global Discourse 5 (2): 286–296. http://dx.doi.org/10.1080/23269995.2015.1018665.
Groups of humans are often able to find ways to cooperate with one another in complex, temporally extended social dilemmas. Models based on behavioral economics are only able to explain this phenomenon for unrealistic stateless matrix games. Recently, multi-agent reinforcement learning has been applied to generalize social dilemma problems to temporally and spatially extended Markov games. However, this has not yet generated an agent that learns to cooperate in social dilemmas as humans do. A key insight is that many, but not all, human individuals have inequity-averse social preferences. This promotes a particular resolution of the matrix game social dilemma wherein inequity-averse individuals are personally pro-social and punish defectors. Here we extend this idea to Markov games and show that it promotes cooperation in several types of sequential social dilemma, via a profitable interaction with policy learnability. In particular, we find that inequity aversion improves temporal credit assignment for this class of dilemmas.
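The inequity-aversion mechanism this abstract describes is commonly formalized with the Fehr–Schmidt utility model: an agent's subjective reward is its extrinsic payoff, penalized in proportion to how far others' payoffs sit above its own (envy) and below its own (guilt). The sketch below is a minimal illustration of the stateless (matrix-game) form only; the function name, the example payoffs and the alpha/beta values are assumptions for exposition, not the paper's implementation.

```python
import numpy as np

def inequity_averse_rewards(rewards, alpha=1.0, beta=0.25):
    """Fehr-Schmidt inequity-averse utilities for one round.

    rewards: extrinsic payoffs, one per agent (needs >= 2 agents).
    alpha:   weight on disadvantageous inequity (envy).
    beta:    weight on advantageous inequity (guilt).
    """
    r = np.asarray(rewards, dtype=float)
    n = len(r)
    utilities = np.empty(n)
    for i in range(n):
        # Penalty when others earn more than agent i (envy) ...
        envy = np.maximum(r - r[i], 0.0).sum()
        # ... and when agent i earns more than others (guilt).
        guilt = np.maximum(r[i] - r, 0.0).sum()
        utilities[i] = r[i] - (alpha / (n - 1)) * envy - (beta / (n - 1)) * guilt
    return utilities

# Example: the middle agent defects and earns more; guilt reduces its
# subjective reward (3.0 -> 2.5), making exploitation less attractive.
print(inequity_averse_rewards([1.0, 3.0, 1.0]))  # [0.0, 2.5, 0.0]
```

In the sequential (Markov game) extension, as I understand it, the same penalty is applied to temporally smoothed reward traces rather than to raw single-step payoffs, so that inequity is assessed over extended interactions rather than at each tick.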
Fifteen years on, the Responsibility to Protect (R2P) doctrine is still facing questions over its content, scope and attendant obligations. Recent conflicts in Syria, Ukraine and Iraq show that how, when and if states intervene is a matter of political will and calculation. Yet the question of political will remains largely unaddressed, and many ignore the conceptual and practical distance between stating that the international community should encourage and assist states to fulfill R2P obligations and requiring third parties to use diplomatic, humanitarian or ‘other’ means to protect populations when states fail to do so. I propose we acknowledge this distance and minimize it through covert action. Embracing the reality that some states cannot intervene due to political constraints entails that we can theorize about other ways to uphold R2P. Moreover, covert action involves a range of means and types of targets and is a flexible option for R2P.
We want artificial intelligence (AI) to be beneficial. This is the grounding assumption of most attitudes towards AI research. We want AI to be "good" for humanity. We want it to help, not hinder, humans. Yet what exactly this entails in theory and in practice is not immediately apparent. Theoretically, this declarative statement subtly implies a commitment to a consequentialist ethics. Practically, some of the more promising machine learning techniques for creating a robust AI, and perhaps even an artificial general intelligence (AGI), also commit one to a form of utilitarianism. In both dimensions, the logic of the beneficial AI movement may not in fact create "beneficial AI", in either narrow applications or in the form of AGI, if the ethical assumptions are not made explicit and clear. Additionally, as it is likely that reinforcement learning (RL) will be an important technique for machine learning in this area, it is also important to interrogate how RL smuggles in these ethical assumptions.
The following organisations are named on the report: Future of Humanity Institute, University of Oxford, Centre for the Study of Existential Risk, University of Cambridge, Center for a New American Security, Electronic Frontier Foundation, OpenAI. The Future of Life Institute is acknowledged as a funder.
To adequately estimate the beneficial and harmful effects of artificial intelligence (AI), we must first have a clear understanding of what AI is and what it is not. We need to draw important conceptual and definitional boundaries to ensure we accurately estimate and measure the impacts of AI from both empirical and normative standpoints. This essay argues that we should not conflate AI with automation or autonomy but keep them conceptually separate. Moreover, it suggests that once we have a broad understanding of what constitutes AI, we will see that it can be applied to all sectors of the economy and in warfare. However, it cautions that we must be careful where we apply AI, for in some cases there are serious epistemological concerns about whether we have an appropriate level of knowledge to create such systems. Opening the aperture to include such questions allows us to further see that AI systems will be deployed in a myriad of forms, with greater or lesser cognitive abilities.
Introduction: Kant, Global Justice and R2P
1. Kantian Provisional Duties
2. Provisional Protection: R2P as a Provisional Duty
3. Kant's Permissive Laws
4. Permissible Coercion
5. Provisional to Peremptory: Institutionalizing a Duty to Protect
6. Conclusion: R2P and the Real World - Libya and Syria
Recently, the United States Defense Advanced Research Projects Agency (DARPA) hosted its “Robotics Challenge.” The explicit goal of this challenge is to develop robots capable of “executing complex tasks in dangerous, degraded, human engineered environments.” However, the competitors’ choice to build humanoid robots tells a different narrative. In particular, through the physical design choices, the giving of names and the tasking of roles, the competing teams perpetuated a gendered narrative. This narrative in turn reifies gendered norms of warfighting, and ultimately leads to an accretion of gendered practices in militaries, politics and society, despite contemporary attempts at minimizing these practices through policies of inclusion. I argue that though much work on gender and technology exists, the autonomous humanoid robot – the one currently sought by DARPA – is something entirely new, and must be addressed on its own terms. In particular, this machine exceeds even Haraway's conception of the post-human cyborg, and rather than emancipating human beings from gender hierarchy, further reifies its practices. Masculine humanoid robots will be deemed ideal warfighters, while feminine humanoid robots will be tasked with research or humanitarian efforts, thereby reinstituting gendered roles.
The present debate over the creation and potential deployment of lethal autonomous weapons, or ‘killer robots’, is garnering more and more attention. Much of the argument revolves around whether such machines would be able to uphold the principle of noncombatant immunity. However, much of the present debate fails to take into consideration the practical realities of contemporary armed conflict, particularly the generation of military objectives and adherence to a targeting process. This paper argues that we must look to the targeting process if we are to gain a fuller picture of the consequences of creating or fielding lethal autonomous robots. Once we look to how militaries actually create military objectives, and thus identify potential targets, we face an additional problem: the Strategic Robot Problem. The ability to create targeting lists using military doctrine and targeting processes is inherently strategic, and handing this capability over to a machine undermines existing command and control structures and renders the role of humans redundant. The Strategic Robot Problem provides prudential and moral reasons for caution in the race for increased autonomy in war.
Much of the debate over the moral permissibility of using autonomous weapons systems (AWS) focuses on issues related to their use during war (jus in bello), and whether those systems can uphold the principles of proportionality and distinction. This essay, however, argues that we ought to consider how a state's portended use of AWS in conflict would affect jus ad bellum principles, particularly the principle of proportionality. The essay argues that even the clearest case of a defensive war against an unjust aggressor would prohibit going to war if the war were waged with AWS. The use of AWS to fight an unjust aggressor would adversely affect the prospects for peaceful settlement and negotiation, as well as have negative second-order effects on the international system and third-party states. In particular, the use of AWS by one state would likely start an arms race and proliferate such weapons throughout the system.
Exchange on autonomous weapons systems.
Recently, the United States Defense Advanced Research Projects Agency (DARPA) hosted its “Robotics Challenge.” The explicit goal of this challenge is to develop robots capable of “executing complex tasks in dangerous, degraded, human engineered environments.” However, the competitors’ choice to build humanoid robots tells a different narrative. In particular, through the physical design choices, the giving of names and the tasking of roles, the competing teams perpetuated a gendered narrative. This narrative in turn reifies gendered norms of warfighting, and ultimately leads to an accretion of gendered practices in militaries, politics and society, despite contemporary attempts at minimizing these practices through policies of inclusion. I argue that though much work on gender and technology exists, the autonomous humanoid robot – the one currently sought by DARPA – is something entirely new, and must be addressed on its own terms. In particular, this machine exceeds even Haraway’s (1990) conception of the post-human cyborg, and rather than emancipating human beings from gender hierarchy, further reifies its practices. Masculine humanoid robots will be deemed ideal warfighters, while feminine humanoid robots will be tasked with research or humanitarian efforts, thereby reinstituting gendered roles.
The present debate over the creation and potential deployment of lethal autonomous weapons, or “killer robots,” is garnering more and more attention. Much of the argument revolves around whether such machines would be able to uphold the principle of noncombatant immunity. However, much of the present debate fails to take into consideration the practical realities of contemporary armed conflict, particularly the generation of military objectives and adherence to a targeting process. This paper argues that we must look to the targeting process if we are to gain a fuller picture of the consequences of creating or fielding lethal autonomous robots. Once we look to how militaries actually create military objectives, and thus identify potential targets, we face an additional problem: the Strategic Robot Problem. The ability to create targeting lists using military doctrine and targeting processes is inherently strategic, and handing this capability over to a machine undermines existing command and control structures and renders the role of humans redundant. The Strategic Robot Problem provides prudential and moral reasons for caution in the race for increased autonomy in war.
Pop-up talk at the Cybersecurity Initiative launch at the New America Foundation.
You can see my presentation from the AI futures conference held in January 2015 in San Juan, Puerto Rico. The conference was put on by the Future of Life Institute and generously supported by Jaan Tallinn.
Panel on Lethal Autonomous Weapons
Oxford University's Institute for Ethics, Law and Armed Conflict (ELAC) annual workshop, 2014.
A pingback from Patrick Tucker at Defense One regarding my China dual-use strategy blog post.
Interview with Dominic Laurie about killer robots and the AI letter calling for a ban.
Interview on the accidental killing of American citizens by drones.
An interview at the Hague Institute of Global Justice
Testimony for the "Mapping Autonomy" session at the 2016 Informal Meeting of Experts on Lethal Autonomous Weapons at the United Nations Convention on Conventional Weapons.
Side Event talk given at the 2016 UN Convention on Conventional Weapons Informal Meeting of Experts, Geneva April 11, 2016.
My testimony to the United Nations on the operational and technical issues regarding autonomous weapons.
Traditionally, students in an American political thought course examine liberal, conservative and radical ideologies. The typical pedagogy is to trace the history of these ideologies by reading the great historical texts, making sure to counterbalance each point consistently. This course, however, is constructed differently. Students in this course will not follow the traditional trajectory. Instead, they will follow a specific American ideal, "that all men are created equal," throughout the philosophical and political debates in American history. By the end of this course, students should have familiarity with a few of the major historical figures in American political thought, and they should also have a deeper appreciation of what it means to construct, criticize and defend American political, philosophical and moral ideals from a variety of standpoints. This course is not so much about the "isms" of American political theory; it is rather about American political principles. America is a diverse country, with many voices, opinions, ideologies and viewpoints; this course seeks to include some of those voices. We will begin by reading foundational texts that support the principle that "all men are created equal." These include Common Sense, Rights of Man, the Federalist Papers, and the Declaration of Independence.
For the 2015-2016 year, I will be a nonresidential fellow in the New America Foundation's Cybersecurity Initiative. I am very excited about this great opportunity and look forward to getting out some great work on cyber soon!
Roundtable discussion about banning and regulating lethal autonomous weapons in the Bulletin of the Atomic Scientists (2015).
I argue that companies ought not to be permitted to hack back when they have been hacked.
Bessma Momani and I argue that the Syrian crisis will never look like the Libyan intervention due also to tactical, not merely political, considerations (2011).
Bessma Momani and I debate the morality of robotic warfare (2011).
The project addresses the relationships between artificial intelligence (AI), weapons systems and society. In particular, the project provides a framework for meaningful human control (MHC) of autonomous weapons systems. In international discussions, a number of governments and organizations adopted MHC as a tool for approaching problems and potential solutions raised by autonomous weapons. However, the content of MHC was left open. While useful for policy reasons, the international community, academics and practitioners are calling for further work on this issue. This project responds to that call by bringing together a multidisciplinary and multi-stakeholder team to address key questions. For example, we question the values associated with MHC, what rules should inform the design of the systems, both in software and hardware, and how existing and currently developing weapons systems advance possible relationships between human control, autonomy and AI. To achieve impact across academic, industry and policy arenas, we will produce academic publications, policy briefs and an open access database on 'semi-autonomous' weapons, and will sponsor multi-sector stakeholder discussions on how human values can be maintained as systems develop. Furthermore, the organization Article 36 will channel outputs directly into the international diplomatic community to achieve impact in international legal and policy forums.
Talk presented in March 2016 at Magdalen College, Oxford, to the Oxford Consortium for Human Rights, on the implications of autonomous weapons, artificial intelligence and cyber.
Talk given to the Future of Humanity Institute (FHI), May 2016.
The preceding chapter argued that if we created AWS capable of complying with contemporary targeting doctrine, we would be creating strategic actors and not merely force multipliers. This chapter takes a view from a different direction. In particular, this chapter asks: how would the use of AWS challenge a right to self-defense under jus ad bellum? Given that AWS have no "self" to defend, since they are not moral agents and are incapable of being killed or harmed, does their use change or limit our justification to use lethal force? This chapter argues that the ability to use AWS in the stead of human warfighters does challenge justifications to use lethal force on two fronts. First, it prohibits militaries from using lethal force in response to attacks against their robotic warfighters. If there is no lethal threat, one cannot justify using lethal force in response. This radical asymmetry, in turn, affects the way in which collectivities may justify using force on grounds of a right of national self-defense. In other words, the potential to use AWS to fight wars affects our jus ad bellum proportionality calculations, even in the face of attack against them, with a rather perverse result: the possession and ability to use AWS prohibits their use. The argument proceeds in four sections.