Lethal autonomous weapons, sometimes referred to as "killer robots", present unique problems for the future of warfighting. Weapons that are capable of identifying and firing on targets without human control raise moral, legal and operational problems for states and their militaries. Moreover, the use and proliferation of these weapons will likely have adverse effects on international peace and stability. The book looks to each of these key areas (morality, law and operational considerations) to argue that the harms of developing and fielding these weapons outweigh the potential benefits. Looking to the current trajectory of weapons systems, the state of artificial intelligence and international relations theory, the book suggests that a preemptive ban on such weapons is the best way forward. Following the current attempt to "ban killer robots" in the United Nations Convention on Conventional Weapons, the monograph provides the most current account of the debate. However, absent a binding international agreement, I suggest that it is incumbent upon private technology companies to forbear from creating lethal autonomous weapons.
This book argues that the duty to protect is best considered a "provisional duty of justice" in what amounts to a state of nature. The debate over whether to categorize a duty of intervention as an imperfect duty of benevolence is largely misguided, owing to a confusion about deontic categories. By first examining the classic Kantian taxonomy of duties, the work argues that Kant’s account is not consistent and that a "provisional" duty must be included in his framework. Next, it applies this reconstructed framework to the problem of R2P and argues that R2P is best considered a provisional duty of justice, that is, a duty that is conditional on the capacity of individual actors in the international system. In order to move beyond the provisionality of protection, R2P must be institutionalized. The author argues that duties of justice require juridical institutions for their fulfillment, and thus R2P requires the creation of the requisite executive, legislative and judicial authorities to move beyond its provisional status. Drawing on Kant’s political theory, the book argues that his concept of a "permissive law" authorizes the coercion of states into such an institution. Practically speaking, the United Nations Security Council should be the only agent to undertake the task of such coercion.
“is uninterested in dialogue with educated outsiders representing the subaltern . . . and who [are] unwilling to take his views seriously. A right-wing poster makes the bigot’s point perfectly: ‘It doesn’t matter what this sign says, you’ll call it racist anyway!’” (p. 22). I recently saw a related sign on a shop door responding to the Black Lives Matter movement: “All lives matter! Nuf said.” I assumed that I knew what the sign meant, but my confidence about “knowing” this troubled me. I thought about speaking to the shop owner(s), but did not do that, partly because “Nuf said” signaled that they were “uninterested in dialogue.” Was I any better? These considerations might complicate Bronner’s insight into a “cosmopolitan education” that would cultivate genuine mutual respect across cultures and identities (pp. 181–82). “Any new approach,” he says, “will need to navigate and integrate . . . cultural practices that foster a cosmopolitan sensibility; political action that provides recognition for the disenfranchised and the outsider,” while recognizing how class differences “cut across identity constructions” (p. 186). Here we arrive at a conclusion supported by all three books: Achieving racial justice and a “democratic refounding” in the United States cannot be left to existing political practices. Working toward these goals will require multifaceted cultural politics, along with “self-work” among American citizens, for a solid majority to become more deeply “awakened” to racism and bigotry.
This is a reply to: Finn, Peter D. 2015. “Franz Jagerstatter as social critic.” Global Discourse 5 (2): 286–296. http://dx.doi.org/10.1080/23269995.2015.1018665.
Groups of humans are often able to find ways to cooperate with one another in complex, temporally extended social dilemmas. Models based on behavioral economics are only able to explain this phenomenon for unrealistic stateless matrix games. Recently, multi-agent reinforcement learning has been applied to generalize social dilemma problems to temporally and spatially extended Markov games. However, this has not yet generated an agent that learns to cooperate in social dilemmas as humans do. A key insight is that many, but not all, human individuals have inequity-averse social preferences. This promotes a particular resolution of the matrix game social dilemma wherein inequity-averse individuals are personally pro-social and punish defectors. Here we extend this idea to Markov games and show that it promotes cooperation in several types of sequential social dilemma, via a profitable interaction with policy learnability. In particular, we find that inequity aversion improves temporal ...
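The inequity-averse social preferences referenced in this abstract are commonly formalized with the Fehr–Schmidt utility model, in which an agent's payoff is discounted both by "envy" (others earning more) and by "guilt" (the agent earning more). The sketch below is an illustrative Python rendering of that model under assumed parameter names and values; it is not code from the paper.

```python
def inequity_averse_utility(payoffs, i, alpha=1.0, beta=0.5):
    """Fehr-Schmidt inequity-averse utility for player i.

    alpha penalizes disadvantageous inequity (others earn more than i);
    beta penalizes advantageous inequity (i earns more than others).
    Parameter values are illustrative, not taken from the paper.
    """
    n = len(payoffs)
    x_i = payoffs[i]
    # Disadvantageous inequity ("envy") term.
    envy = sum(max(x_j - x_i, 0.0) for j, x_j in enumerate(payoffs) if j != i)
    # Advantageous inequity ("guilt") term.
    guilt = sum(max(x_i - x_j, 0.0) for j, x_j in enumerate(payoffs) if j != i)
    return x_i - alpha * envy / (n - 1) - beta * guilt / (n - 1)
```

With equal payoffs, utility just equals the raw payoff; an unequal split is penalized in either direction, which is the mechanism that makes punishing defectors personally worthwhile for inequity-averse agents.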
Fifteen years on, the Responsibility to Protect (R2P) doctrine is still facing questions over its content, scope and attendant obligations. Recent conflicts in Syria, Ukraine and Iraq show that how, when and if states intervene is a matter of political will and calculation. Yet the question of political will remains largely unaddressed, and many ignore the conceptual and practical distance between stating that the international community should encourage and assist states to fulfill R2P obligations and requiring third parties to use diplomatic, humanitarian or ‘other’ means to protect populations when states fail to do so. I propose we acknowledge this distance and minimize it through covert action. Embracing the reality that some states cannot intervene due to political constraints entails that we can theorize about other ways to uphold R2P. Moreover, covert action involves a range of means and types of targets and is a flexible option for R2P.
We want artificial intelligence (AI) to be beneficial. This is the grounding assumption of most of the attitudes towards AI research. We want AI to be "good" for humanity. We want it to help, not hinder, humans. Yet what exactly this entails in theory and in practice is not immediately apparent. Theoretically, this declarative statement subtly implies a commitment to a consequentialist ethics. Practically, some of the more promising machine learning techniques to create a robust AI, and perhaps even an artificial general intelligence (AGI), also commit one to a form of utilitarianism. In both dimensions, the logic of the beneficial AI movement may not in fact create "beneficial AI" in either narrow applications or in the form of AGI if the ethical assumptions are not made explicit and clear. Additionally, as it is likely that reinforcement learning (RL) will be an important technique for machine learning in this area, it is also important to interrogate how RL smu...
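To make the abstract's point about RL concrete: a standard RL agent ranks policies solely by expected cumulative discounted reward, collapsing all outcomes into one scalar. The toy sketch below is illustrative (not from the text); on the argument above, it is this aggregation step that structurally parallels a utilitarian calculus over consequences.

```python
def discounted_return(rewards, gamma=0.99):
    """Objective a standard RL agent maximizes: G = sum over t of gamma**t * r_t.

    Everything the agent 'cares about' is collapsed into a single scalar
    aggregate of outcomes; competing values survive only insofar as they
    are encoded in the reward numbers themselves.
    """
    return sum((gamma ** t) * r for t, r in enumerate(rewards))
```

Any ethical constraint not expressed in the reward sequence simply does not figure in this objective, which is one way the choice of RL can smuggle in a consequentialist commitment.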
The following organisations are named on the report: Future of Humanity Institute, University of Oxford, Centre for the Study of Existential Risk, University of Cambridge, Center for a New American Security, Electronic Frontier Foundation, OpenAI. The Future of Life Institute is acknowledged as a funder.
You can see my presentation for the AI futures conference from Jan. 2015 in San Juan, Puerto Rico. The conference was put on by the Future of Life Institute and supported generously by Jaan Tallinn.
Testimony for "Mapping Autonomy" session at the 2016 Informal Meeting of Experts on Lethal Autonomous Weapons at the United Nations Convention on Conventional Weapons
Side Event talk given at the 2016 UN Convention on Conventional Weapons Informal Meeting of Experts, Geneva, April 11, 2016.
My testimony to the United Nations on the operational and technical issues regarding autonomous weapons.
Traditionally, students in an American political thought course examine liberal, conservative and radical ideologies. The typical pedagogy is to trace the history of these ideologies by reading the great historical texts, making sure to counterbalance each point consistently. This course, however, is constructed differently. Students in this course will not follow the traditional trajectory. Instead, students will follow a specific American ideal, "that all men are created equal", throughout the philosophical and political debates in American history. By the end of this course, students should have familiarity with a few of the major historical figures in American political thought, and they should also have a deeper appreciation of what it means to construct, criticize and defend American political, philosophical and moral ideals from a variety of standpoints. This course is not so much about the "isms" of American political theory; it is rather about American political principles. America is a diverse country, with many voices, opinions, ideologies and viewpoints; this course seeks to include some of those voices. We will begin by reading foundational texts that support the principle "all men are created equal." These include Common Sense, Rights of Man, the Federalist Papers, and the Declaration of Independence.
For the 2015-2016 year, I will be a nonresidential fellow in the New America Foundation's Cybersecurity Initiative. I am very excited about this great opportunity and look forward to getting out some great work on cyber soon!
Roundtable discussion about banning and regulating lethal autonomous weapons in the 2015 Bulletin of the Atomic Scientists
Bessma Momani and I argue that the Syrian crisis will never look like the Libyan intervention due to tactical, and not merely political, considerations (2011).
The project addresses the relationships between artificial intelligence (AI), weapons systems and society. In particular, the project provides a framework for meaningful human control (MHC) of autonomous weapons systems. In international discussions, a number of governments and organizations adopted MHC as a tool for approaching problems and potential solutions raised by autonomous weapons. However, the content of MHC was left open. While useful for policy reasons, the international community, academics and practitioners are calling for further work on this issue. This project responds to that call by bringing together a multidisciplinary and multi-stakeholder team to address key questions. For example, we ask what values are associated with MHC, what rules should inform the design of the systems (both in software and hardware), and how existing and currently developing weapons systems advance possible relationships between human control, autonomy and AI. To achieve impact across academic, industry and policy arenas, we will produce academic publications, policy briefs, and an open access database on 'semi-autonomous' weapons, and will sponsor multi-sector stakeholder discussions on how human values can be maintained as systems develop. Furthermore, the organization Article 36 will channel outputs directly into the international diplomatic community to achieve impact in international legal and policy forums.
Talk presented in March 2016 at Magdalene College Oxford, Oxford Consortium for Human Rights, on the implications of autonomous weapons, artificial intelligence and cyber.
The preceding chapter argued that if we created AWS capable of complying with contemporary targeting doctrine, we would be creating strategic actors and not merely force multipliers. This chapter takes a view from a different direction. In particular, it asks: how would the use of AWS challenge a right to self-defense under jus ad bellum? Given that AWS have no "self" to defend, since they are not moral agents and are incapable of being killed or harmed, does their use change or limit our justification to use lethal force? This chapter argues that the ability to use AWS in the stead of human warfighters does challenge justifications to use lethal force on two fronts. First, it proscribes militaries from using lethal force in response to attacks against their robotic warfighters. If there is no lethal threat, one cannot justify using lethal force in response. This radical asymmetry, in turn, affects the way in which collectivities may justify using force on grounds of a right of national self-defense. In other words, the potential to use AWS to fight wars affects our jus ad bellum proportionality calculations, even in the face of attack against them, with a rather perverse result: the possession and ability to use AWS prohibit their use. If there is no lethal threat, there can be no use of lethal force in response. The argument proceeds in four sections.
Books by Heather M Roff
Papers by Heather M Roff