The growing use of military AI has amplified an already heated debate in which proponents and opponents of lethal autonomous weapons clash over the legal, ethical, and practical upshots of this new technology. Yet these debates still lag behind accelerated efforts to replace human decision making with AI wherever possible in military operations. This chapter argues that such developments in military AI reflect a prioritisation of 'know-how' over 'know-what', which in turn jeopardises not only global security, but also the very integrity of human ethical reasoning. In particular, the chapter tracks present-day forays into full lethal autonomy in weapons systems, noting their deleterious impact on the ability of military personnel to take responsibility for acts of technologically-mediated violence, whether intended or accidental. The chapter closes by noting important linkages between the 'know-how' perspective and the private sector, arguing that the growing prevalence of such a perspective is likely to lessen the restraint on harm in warfare going forward.
DRAFT CHAPTER: Over a decade's worth of discussions on the ethical and legal implications of autonomous weapons systems have yielded limited results. Complicating matters is the fact that, to date, the debate has been marred by unhelpful conflations, imprecision, and a lack of agreement about the meaning of key terms. This begins with the term 'autonomy' and stretches to concepts such as trust and responsibility. This is perhaps not surprising given the speed with which the technology develops and the inherently trans-disciplinary nature of the debate. Trust, for example, has very different layers of meaning in an engineering context than it does from the perspective of a social scientist, psychologist or philosopher. Increasingly, however, discussions of moral agency and responsibility take on the markers of technical discourse, even when they are held within a philosophical register. In such discussions, both human agency and machine agency are read through a technological register in which functional equivalences are drawn between the two, to make one fit the other. In this chapter, I examine these discourses and their logical foundations and argue that, rather than helping to make sense of the specific demands of moral agency and responsibility in the context of LAWS, they take us further away from understanding moral concerns as distinctly human social concerns, and further into the terrain of thinking about ethics as a purely technological problem that can be solved with more attentiveness to technology rather than to human relations. This creates critical blind spots in the debate on moral responsibility for LAWS.
Thinking about ‘posthuman security’ is no easy task. To begin with, it requires a clear notion of what we mean by ‘posthuman’. There are various projects underway to understand what this term can or should signal, and what it ought to comprise. To bring a broadened understanding of ‘security’ into the mix complicates matters further. In this essay, I argue that a focus on the relation of the human to new technologies of war and security provides one way in which IR can fruitfully engage with contemporary ideas of posthumanism.
New technologies in communications and networking have shaped the way political movements can be mobilised and coordinated in important ways. Recent uprisings have shown dramatically how a people can communicate its cause effectively beyond borders, through online social networking channels and mobile phone technologies. Hannah Arendt, as an eminent scholar of power and politics in the modern era, offers a relevant lens with which to theoretically examine the implications and uses of online social networks and their impact on politics as praxis. This article creates an account of how Arendt might have evaluated virtual social networks in the context of their potency to create power, spaces and possibilities for political action. With an Arendtian lens the article examines whether these virtual means of ‘shared appearances’ facilitate or frustrate efforts in the formation of political power and the creation of new beginnings. Based on a contemporary reading of her writings, the article concludes that Arendt’s own assessment of online social networks, as spheres for political action, would likely have been very critical.
In this article, I explore the (im)possibility of human control and question the presupposition that we can be morally adequately or meaningfully in control over AI-supported LAWS. Taking seriously Wiener’s warning that “machines can and do transcend some of the limitations of their designers and that in doing so they may be both effective and dangerous,” I argue that in the LAWS human-machine complex, technological features and the underlying logic of the AI system progressively close the spaces and limit the capacities required for human moral agency.
Across the world, militaries are racing to acquire and develop new capabilities based on the latest in machine learning, neural networks, and artificial intelligence (AI). In this paper, I argue that the shift into military AI is shaping human behaviour in heretofore unacknowledged and morally significant ways. Following Anders, I argue that as the human becomes digitally co-machinistic (mitmaschinell), they are compelled to adopt a logic of speed and optimisation in their ethical reasoning. The consequence of this is a form of moral de-skilling, whereby military personnel working with digital infrastructures and interfaces become less able to act and decide as moral agents. This is an especially concerning development when it comes to the conduct of war, where the moral stakes could not be higher.
Our contemporary condition is deeply infused with scientific-technological rationales. These influence and shape our ethical reasoning on war, including the moral status of civilians and the moral choices available to us. In this article, I discuss how technology shapes and directs the moral choices available to us by setting parameters for moral deliberation. I argue that technology has moral significance for just war thinking, yet this is often overlooked in attempts to assess who is liable to harm in war and to what extent. This omission produces an undue deference to technological authority, reducing combatants, civilians and scenarios to data points. If we are to develop a maximally restrictive framework for harming civilians in war, which in my view should be a goal of just war thinking, then it is imperative that the scientific-technological dimension of contemporary war is given due attention.
As military technologies progress at a pace that challenges human cognitive and reasoning capacities, it is becoming ever more difficult to appraise the ethics of their use. In this article, I argue that the contours of ethical killing are shaped and constrained by a medical discourse that has its basis in a deeper regime of techno-biopolitical expertise. Narratives and representations of drones as surgical, ethical and wise instruments for counter-terrorism activities rely not only on the rendering neutral of both technology and practice, but also on a conflation of technology with practice as a biopolitical necessity. In this conflation, I argue, the practice of targeted killing is adiaphorized. Images and metaphors of the body politic turn drone-strikes into a form of medicine that experts prescribe as a means of treating or preventing political cancers, diseases and illnesses. Ethics, in turn, is treated as a primarily technical matter – something to be technologically clarified and administered from an expert space beyond the zone of ethical contestation. As long as this is the case, ethics will remain but a cog in our new killing machines.
Artificial Intelligence as a buzzword and a technological development is presently cast as the ultimate ‘game changer’ for economy and society; a technology of which we cannot be the master, but which nonetheless will have a pervasive influence on human life. The fast pace with which the multi-billion-dollar AI industry advances toward the creation of human-level intelligence is accompanied by an increasingly exaggerated chorus of the ‘incredible miracle’, or the ‘incredible horror’, that intelligent machines will constitute for humanity, as the human is gradually replaced by a technologically superior proxy, destined to be configured as a functional (data) component at best, a relic at worst. More than half a century ago, Günther Anders sketched out this path toward technological obsolescence, and his work on ‘Promethean shame’ and ‘Promethean discrepancy’ provides an invaluable means with which to recognise and understand the relationship of the modern human to his/her technological pr...
The ongoing conflict in the war on terrorism puts two emblematic modes of violence into sharp relief: the drone, as an ostensibly rational, clinical and measured weapon of war, and suicide bombings, frequently portrayed as the horrid deeds of fanatics. In this article, I seek to challenge this juxtaposition and instead suggest that both modalities of killing are part of the same technologically-mediated ecology of violence. To do this, I examine the material-semiotic assemblage of the drone and of the suicide bomber, paying attention to the technological production of each mode of violence, as well as the narratives that render each figure intelligible in the war on terrorism. I argue that the strongly divergent narratives found in Western discourse serve as a politically expedient sense-making device, whereby suicide bombing is pathologised, thereby justifying ever more intrusive violent acts with seemingly rational technologies like the drone. Rather than “solving” the problem of terrorism, this creates counter-productive, or iatrogenic, effects, in which technological mediation escalates rather than diminishes cycles of violence. By way of response, I suggest that a better understanding of the relational nature of violence in the war on terrorism might be gained by reading the two not as antithetical figures, but instead as operating in the same technological key.
Torturing the human body is in principle located at the opposite end of the liberal credo. The human as a site of inviolability thus stands in stark contrast to the human as a site of information-gathering through torture in modernity. How is it possible, then, that the two spectres begin to coexist in 21st-century liberal society? In seeking to analyse this paradox, the precarious basis of law in the protection of human rights and the human body is as important to consider as the biopolitical technologies that in liberal society facilitate the dichotomous categorisation of human and inhuman within the human species, in the name of the security of a population, and the drawing of exclusionary boundaries that render the bodies of some fundamentally violable. While humanity in modernity seeks to be inclusive in the aspirational concept of international human rights, it remains hitherto a concept that lacks both political and philosophical reality.
As military technologies progress at a pace that challenges human cognitive and reasoning capacities, and in ways that leave little room to adequately contemplate and assess the very real consequences of the often virtualised and abstracted means of conducting acts of war, the ethics of the use of these technologies remain obscure, occluded by the shadow of technological acceleration. The call to clarify the ethics of recent military technologies such as UAVs is loud and widespread and yet, it seems, remains unanswered to date. This paper examines and critiques how the lethal use of unmanned military technologies – specifically UAVs – is framed, depicted, justified and legitimised as ethical within a (post)modern context. Narratives and representations of UAVs as surgical, ethical and wise instruments for counter-terrorism activities rely not only on the rendering neutral of both technology and practice, but also on a conflation of technology with practice as a biopolitical necessity. I argue in this paper that, in this conflation, and within the biopolitical mandate, the practice of targeted killing is adiaphorised and shifted outside the possibility of meaningful ethical contestation. The ethics produced in such a context rely on medical metaphors that posit the body politic and the resulting political practices in terms of prophylaxis, and ethics becomes a technical subject.
New technologies in communications and networking have shaped the way political power can be generated in important ways. Recent popular uprisings have shown dramatically how a people can communicate its cause effectively beyond borders through social networking channels and mobile phone technologies. Hannah Arendt, as one of the most eminent scholars of power and politics in the modern era, would surely have a word or two to say about the implications and uses of social networks and modern means of communication and their impact on politics as praxis. This paper creates an account of how Hannah Arendt might have conceived of virtual social networks and modern means of communications technology in the context of creating spaces and possibilities for political action. It explores whether Arendt might have considered these virtual means of “shared appearances” to either facilitate or frustrate efforts in the formation of political power and the creation of new beginnings. Based on a contemporary reading of her writings, the paper considers the pros and cons of modern means of communications and their impact on both popular power and state power, and consequently offers an analysis of the use and instrumentality of such means for effective political action.
Our contemporary condition is deeply infused with scientific-technological rationales. These influence and shape our moral reasoning on war, including the moral status of civilians. In this paper, I discuss how technology shapes and directs the moral choices available to us by setting parameters for moral deliberation. I argue that technology is an actant of moral significance to just war thinking, yet this is often overlooked in attempts to assess who is liable to harm in war and to what extent. This omission produces an undue deference to technological authority, reducing combatants, civilians, and scenarios to data points. If we are to develop a maximally restrictive framework for harming civilians in war, which I think should be a goal of just war thinking, then it is imperative that the scientific-technological dimension of contemporary war is given due attention.
Political responses to 9/11 have bolstered an approach to ethics that gives priority to the translation of abstract principles into concrete dilemmas, rather than one that sees ethics as a transient encounter. Yet both politics and ethics arise precisely from this encounter and the responsibility that emerges through it. Unlike applied ethics, an ethics of encounter must consider the unpredictability that arises out of political and social situations, each of which require unique interpretations of plural contexts. Traditional discourses of political theory struggle to think ethics and political action in these terms. In order to overcome these limits and think anew the ethics of encounter, this paper turns to an unlikely resource: musical improvisation. It argues that the principles and practices of radical improvisation in music provide new avenues for theorising contingency, alterity and potentiality. Musical improvisation embraces infinite contingency via an interactive dialogue that encompasses both the sonic and the corporeal. In so doing, it provides an idea and an instantiation of an ethics that is grounded in the encounter, and that might challenge hegemonic orders of practical ethics and the politics these produce.
As automated military technologies advance with unprecedented speed and enthusiasm, war becomes an increasingly virtual affair. Yet this virtual reality stands in stark contrast to the very embodied reality of those on the receiving end of such modern military technologies. At work here is a process that turns war from a social human activity into a trans- or post-human endeavour in which machines are said to outpace man by a mile. The heated debates over autonomous military technologies – such as drones, killer robots, and enhanced cyborg soldiers – reflect this shift. They also, however, reveal a deep unease regarding the role of the human in a context of war that increasingly relies on technology as an authority. In this article, I unpack the contemporary human-technology complex and interrogate the place and role of the 'failing' or 'fallible' human in relation to his Promethean creations. I argue that the emergent narrative of technology-driven warfare produces an essentially dissociative form of human subjectivity, which frames the act of war as inherently ethical while at the same time considerably diminishing the space for meaningful ethical and political contestation.