Robotic Ethics

Glenn Rikowski, London, 18th June 2007

Introduction: the Ethics of Human-Robot Interaction

When I studied philosophy at the University of East Anglia over 30 years ago, I was not that keen on Moral Philosophy. Since then, I have tended to avoid articles on “ethical issues” in newspapers such as The Guardian. However, when I taught philosophy A-level at Epping Forest College (from 1990-94), the students liked the Ethics theme most of all – even though, in my view, I taught it worse than any of the other options. All that apparent soul-searching over animal rights, abortion and so on just left me irritated, as people tried to justify their own prejudices on the basis of one ethical theory or another. Delving into morality and ethics never seemed to get anywhere much, in my view.

Now it seems we have a whole new set of ethical issues to deal with: the ethics of human-robot interaction (see BBC News, 2007; Evans, 2007; and Davoudi, 2007). Just when we seem to have so much trouble in how we relate to each other and to animals, we must now consider how to treat our robotic acquaintances and friends.

Them and Us

The government of South Korea has drawn up a ‘code of ethics to prevent human abuse of robots – and vice versa’ (Lovgren, 2007, p.1). Why South Korea? Well, South Korea is one of the leading countries in the manufacture of robots, and its government wants to have a robot in every South Korean household by 2020 (Ibid.). The South Korean Ministry of Commerce, Industry and Energy noted in early March, on the significance of a Robot Ethics Charter, that:

“The move anticipates the day when robots, particularly intelligent service robots, could become a part of daily life as greater technological advancements are made” (Lovgren, 2007, p.1).
The Robot Ethics Charter or Code was to be released to the general public this year (BBC News, 2007, p.1); in April, in Rome, apparently (BBC News, 2007, p.2) – though I’ve not noticed anything in the press about it yet.

Robots will play an increasingly important part in our everyday lives, it seems. Not only will they help us in the home, but they will also be soldier-substitutes in wars, and will perform routine surgery by 2018 (BBC News, 2007, p.1). In Japan, surveillance robots can be bought today for home safety: burglars beware! Robbie the robot has you in his sights! Samsung has developed ‘sentry robots’ which act as border guards; they have a ‘500-metre range and a “shoot-to-kill” policy’ (Davoudi, 2007). They might be ‘deployed on the border between North and South Korea’ (Ibid.).

There are three major themes in the robot ethics debate. First, there is the issue of robots harming humans – and, with it, the question of who is to blame: the designers, the humans affected (especially if they are wrong-doers, or doing something silly), or the robots themselves? Secondly, if robots are to be programmed to be “moral”, there are tricky questions regarding what moral codes they are to be given, and the priority of the moral principles they have in their robotic heads. Finally, there is the embarrassing issue of robot abuse. For example: the issues of sex with robots (would they have an ‘age of consent’?), rape, physical abuse and so on – the possibilities are mind-boggling.

Asimov’s Laws of Robotics

Isaac Asimov was a famous science fiction writer. Projecting into the future, he saw that ethical codes regarding human-robot interaction would be required, and in the light of this he formulated three ‘laws of robotics’: fundamental principles that grounded the ethics of human-robot interaction.
These ‘laws’ were as follows:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law (Evans, 2007, pp.1-2).

Thus, it seems from the Third Law that ‘robot rights’ come at the bottom of the pile. Certainly, this is likely to cause the most heart-searching, for Evans (2007) has noted that Asimov’s Three Laws of Robotics fail to protect robots adequately:

“If robots can feel pain, should they be granted certain rights? If robots develop emotions, as some experts think they will, should they be allowed to marry humans? Should they be allowed to own property? These questions might sound far-fetched, but debates over animal rights would have seemed equally far-fetched to many people just a few decades ago. Now, however, such questions are part of mainstream public debate” (p.2).

Scientists themselves are not necessarily supporting robot rights, however, for:

“The scientists attacked a [UK] government-commissioned report last year that supported the idea that artificially intelligent robots should have human rights” (Davoudi, 2007).

After all, the point regarding animal rights is that animals have rights consonant with their status as animals; not that they have ‘human’ rights. Thus, presumably, robots should have ‘robotic rights’ – not human ones. Perhaps humans will become more able to relate to robots and their rights and ethics as we ourselves become ever more like a ‘capitorg’, or ‘capitalist organism’ (see Rikowski, 2011).

Capital’s Robots

It is important to keep an eye on the constitution of society, and perhaps to pay less attention to the mush and froth of the “ethical” debate. The key point is that, as we live in a capitalist society, these robots will basically be capital’s robots.
First, robots have been employed in the capitalist labour process for some time. Many years ago – in 1988, I believe – I took a group of students round the Ford Dagenham plant. We saw some of the robots, though there were not that many there compared with Japanese car plants of the day. One image in particular sticks in my mind: in one cage a robot was spraying car parts, and in the cage next to it a man was doing the same. It just looked so demoralising.

Anyway, I would wager that robot rights will not be allowed to encroach on the rights of capital. Will robots get public holidays? Will they have trade union rights? In capitalist society, robots are brought into factories to cut socially necessary labour-time and to boost productivity. Human representatives of capital will surely fight against robot rights at work wherever these can be seen to affect their capacity to increase relative surplus-value production. It is ironic that, as workers’ rights in the UK have taken a series of blows in the last 25 years – with the UK having some of the most draconian labour laws – we should be considering the rights of robots at work. But my guess is that any rights robots get will have no jurisdiction once a robot goes through the factory gate and begins work.

Secondly, robots will be commodities – to be bought and sold. Some will be sold to ‘die’ in imperialist wars; some will be unwaged slaves in factories and offices, whilst others will have more secure lives as household pets and servants. However, certain similarities between the sale of robots and the sale of another, unique and human commodity – labour-power – will become stark and challenging for human representatives of capital, and a source of shame and anger for those who labour in capital’s workplaces.
Furthermore, as robots come to be made in factories by other robots, the parallels between this and human labour-power being socially produced in capital’s education and training factories (schools, universities and so on) will become disturbing and revealing for human representatives of both capital and labour.

References

BBC News (2007) Robotic age poses ethical dilemma, BBC News (Technology), 7th March: http://news.bbc.co.uk/2/hi/technology/6425927.stm

Davoudi, S. (2007) Scientists call for robot ethics debate, Financial Times, 24th April, p.5.

Evans, D. (2007) The ethical dilemmas of robotics, BBC News (Technology), 9th March: http://news.bbc.co.uk/2/hi/technology/6432307.stm

Lovgren, S. (2007) Robot Code of Ethics to Prevent Android Abuse, Protect Humans, National Geographic News, 16th March: http://news.nationalgeographic.com/news/2007/03/070316-robot-ethics.html

Rikowski, G. (2011) Capitorg: Education and the Constitution of the Human in Contemporary Society, a paper prepared for the Praxis & Pedagogy Research Seminar, The Graduate School of Creative Arts and Media (GradCAM), Dublin, Ireland, 25th May 2011, online at Academia: http://www.academia.edu/5985145/Capitorg_Education_and_the_Constitution_of_the_Human_in_Contemporary_Society

Dr. Glenn Rikowski

All that is Solid for Glenn Rikowski: http://rikowski.wordpress.com