Research article
DOI: 10.1145/3514094.3534140

Causal Framework of Artificial Autonomous Agent Responsibility

Published: 27 July 2022

Abstract

Recent empirical work on people's attributions of responsibility toward artificial autonomous agents (such as Artificial Intelligence agents or robots) has delivered mixed findings. The conflicting results reflect differences in context, in the roles of the AI and human agents, and in the domain of application. In this article, we propose a causal framework of responsibility attribution that integrates these findings. The framework identifies nine factors that influence responsibility attribution - causality, role, knowledge, objective foreseeability, capability, intent, desire, autonomy, and character - and specifies the causal relationships between these factors and responsibility. To test the framework empirically, we discuss some initial findings and outline an approach that uses serious games for causal cognitive research on responsibility attribution. Specifically, we propose a game that uses a generative approach to create different scenarios, in which participants can freely inspect different sources of information before making judgments about human and artificial autonomous agents.
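The paper publishes no code, but a minimal sketch can make the proposed generative approach concrete. Assuming the nine factors are treated as manipulable scenario parameters (the factor names come from the abstract; the Scenario class, the "low"/"high" levels, and the sampling scheme are hypothetical illustrations, not the authors' implementation), scenario generation might look like this:

```python
# Hypothetical sketch, not the authors' implementation: only the nine factor
# names are taken from the abstract; the data structure and sampling scheme
# are illustrative assumptions.
import random
from dataclasses import dataclass

# The nine factors the framework proposes as influences on responsibility attribution.
FACTORS = (
    "causality", "role", "knowledge", "objective foreseeability",
    "capability", "intent", "desire", "autonomy", "character",
)

@dataclass
class Scenario:
    agent_type: str         # agent under judgment: "human" or "artificial"
    levels: dict[str, str]  # factor name -> sampled level ("low" or "high")

def generate_scenarios(n: int, seed: int = 0) -> list[Scenario]:
    """Generate n scenarios by crossing agent type with random factor levels."""
    rng = random.Random(seed)
    return [
        Scenario(
            agent_type=rng.choice(["human", "artificial"]),
            levels={f: rng.choice(["low", "high"]) for f in FACTORS},
        )
        for _ in range(n)
    ]

if __name__ == "__main__":
    for s in generate_scenarios(3):
        print(s.agent_type, s.levels)
```

Each sampled scenario would then be rendered as a game vignette whose information sources (for example, what the agent knew or intended) participants can inspect before giving responsibility judgments.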


Published In

AIES '22: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society
July 2022
939 pages
ISBN: 978-1-4503-9247-1
DOI: 10.1145/3514094
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. attribution
  2. blame
  3. causal cognition
  4. responsibility

Conference

AIES '22: AAAI/ACM Conference on AI, Ethics, and Society
August 1 - 3, 2022
Oxford, United Kingdom

Acceptance Rates

Overall Acceptance Rate 61 of 162 submissions, 38%

Cited By

  • The Moral Psychology of Artificial Intelligence. Annual Review of Psychology 75:1 (2024), 653-675. https://doi.org/10.1146/annurev-psych-030123-113559
  • Not in Control, but Liable? Attributing Human Responsibility for Fully Automated Vehicle Accidents. Engineering 33 (2024), 121-132. https://doi.org/10.1016/j.eng.2023.10.008
  • Making It Possible for the Auditing of AI: A Systematic Review of AI Audits and AI Auditability. Information Systems Frontiers (2024). https://doi.org/10.1007/s10796-024-10508-8
  • Challenge of Criminal Imputation for Negligence Crime Involving AI to the Traditional Criminal Imputation Theory. In Principle of Criminal Imputation for Negligence Crime Involving Artificial Intelligence (2024), 1-24. https://doi.org/10.1007/978-981-97-0722-5_1
  • People devalue generative AI's competence but not its advice in addressing societal and personal challenges. Communications Psychology 1:1 (2023). https://doi.org/10.1038/s44271-023-00032-x