Science and Engineering Ethics
Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism (2019)
Can robots have significant moral status? This is an emerging topic of debate among roboticists and ethicists. This paper makes three contributions to this debate. First, it presents a theory, 'ethical behaviourism', which holds that robots can have significant moral status if they are roughly performatively equivalent to other entities that have significant moral status. This theory is then defended from seven objections. Second, taking this theoretical position on board, it argues that the performative threshold robots need to cross in order to be afforded significant moral status may not be that high, and that they may soon cross it (if they haven't done so already). Finally, it considers the implications for our procreative duties to robots, arguing that we may need to take seriously a duty of 'procreative beneficence' towards robots.
Connection Science
Can robots be responsible moral agents? And why should we care?
In this chapter, I ask whether we can coherently conceive of robots as moral agents and as moral patients. I answer both questions negatively but conditionally: for as long as robots lack certain features, they can be neither moral agents nor moral patients. These answers, of course, are not new. They have, however, recently been the object of sustained critical attention (Coeckelbergh 2014; Gunkel 2014). The novelty of this contribution, then, lies in arriving at these answers by way of arguments that avoid those recent challenges. This is achieved by considering the psychological and biological bases of moral practices and arguing that the relevant differences in those bases are sufficient, for the time being, to exclude robots from adopting both an active and a passive moral role.
for Plenary Lecture (2016)
The prospect of intelligent robotic agents taking increasingly significant roles in our human society suggests that it would be prudent for robot designers to ensure that robot behavior is governed by some sort of morality and ethics: that robots should be trustworthy. But what would this actually mean? Research in artificial intelligence has profited from, and contributed to, the study of human cognition, including the fields of cognitive science and cognitive neuroscience. Likewise, we might hope that efforts to design moral and ethical systems for robots will both draw upon, and contribute to, a deeper understanding of morality, ethics, and trust among human beings [18, 11]. Many of the benefits of society come from cooperation, which in turn depends on trust between cooperating partners. "Trust is a psychological state comprising the intention to accept vulnerability based upon positive expectations of the intentions or behavior of another." [15] For an intel...
Robots and Moral Obligations. In: What Social Robots Can and Should Do: Proceedings of Robophilosophy 2016/TRANSOR 2016, 290
Robots and moral obligations (2016)
Using Roger Crisp's [1] arguments for well-being as the ultimate source of moral reasoning, this paper argues that there are no ultimate, non-derivative reasons to program robots with moral concepts such as moral obligation, morally wrong, or morally right. Although these moral concepts should not be used to program robots, humans should not abandon them, since there are still reasons to keep using them: as an assessment of the agent, to take a stand, or to motivate and reinforce behaviour. Because robots are completely rational agents, they don't need these additional motivations; a concept of what promotes well-being suffices. How a robot knows which action promotes well-being to the greatest degree is still up for debate, but a combination of top-down and bottom-up approaches seems to be the best way.
This is the text of a lecture that I delivered at Tilburg University on 24 September 2019, as part of the 25th anniversary celebrations for TILT (the Tilburg Institute for Law, Technology and Society). The lecture is based on my longer academic article 'Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism' but contains some new ideas. I also include a follow-up section responding to criticisms from the audience on the evening of the lecture, as well as some criticisms I received online after posting. My thanks to all those involved in organizing the event (Aviva de Groot, Merel Noorman and Silvia de Conca in particular).
Interacting with Computers
Sometimes it's hard to be a robot: A call for action on the ethics of abusing artificial agents (2008)