DOI: 10.1145/3491102.3517732
Research article
Open access

Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making

Published: 28 April 2022

Abstract

While artificial intelligence (AI) is increasingly applied to decision-making processes, ethical decisions pose challenges for AI applications. Given that humans cannot always agree on the right thing to do, how would ethical decision-making by AI systems be perceived, and how would responsibility be ascribed in human-AI collaboration? In this study, we investigate how the expert type (human vs. AI) and the level of expert autonomy (adviser vs. decider) influence trust, perceived responsibility, and reliance. We find that participants consider humans to be more morally trustworthy but less capable than their AI equivalent. This is reflected in participants’ reliance on AI: AI recommendations and decisions are accepted more often than those of the human expert. However, AI team experts are perceived to be less responsible than humans, while programmers and sellers of AI systems are deemed partially responsible instead.

Supplemental Material

MP4 File
Talk Video
Transcript for: Talk Video

Published In

CHI '22: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems
April 2022
10459 pages
ISBN: 9781450391573
DOI: 10.1145/3491102
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 28 April 2022

Badges

  • Honorable Mention

Author Tags

  1. Ethical AI
  2. Human-AI Collaboration
  3. Responsibility
  4. Trust

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • Swiss National Science Foundation
  • Armasuisse Science and Technology (S+T)

Conference

CHI '22: CHI Conference on Human Factors in Computing Systems
April 29 - May 5, 2022
New Orleans, LA, USA

Acceptance Rates

Overall Acceptance Rate 6,199 of 26,314 submissions, 24%

Article Metrics

  • Downloads (last 12 months): 2,179
  • Downloads (last 6 weeks): 248
Reflects downloads up to 01 Feb 2025

Other Metrics

Citations

Cited By

  • (2025) Human-AI collaboration is not very collaborative yet: a taxonomy of interaction patterns in AI-assisted decision making from a systematic review. Frontiers in Computer Science 6. https://doi.org/10.3389/fcomp.2024.1521066. Online publication date: 6-Jan-2025.
  • (2024) A Systematic Review on Fostering Appropriate Trust in Human-AI Interaction: Trends, Opportunities and Challenges. ACM Journal on Responsible Computing 1, 4, 1-45. https://doi.org/10.1145/3696449. Online publication date: 14-Nov-2024.
  • (2024) Facing LLMs: Robot Communication Styles in Mediating Health Information between Parents and Young Adults. Proceedings of the ACM on Human-Computer Interaction 8, CSCW2, 1-37. https://doi.org/10.1145/3687036. Online publication date: 8-Nov-2024.
  • (2024) Good Intentions, Risky Inventions: A Method for Assessing the Risks and Benefits of AI in Mobile and Wearable Uses. Proceedings of the ACM on Human-Computer Interaction 8, MHCI, 1-28. https://doi.org/10.1145/3676507. Online publication date: 24-Sep-2024.
  • (2024) Everyday Life Challenges and Augmented Realities: Exploring Use Cases For, and User Perspectives on, an Augmented Everyday Life. Proceedings of the Augmented Humans International Conference 2024, 52-62. https://doi.org/10.1145/3652920.3652921. Online publication date: 4-Apr-2024.
  • (2024) Enhancing AI-Assisted Group Decision Making through LLM-Powered Devil's Advocate. Proceedings of the 29th International Conference on Intelligent User Interfaces, 103-119. https://doi.org/10.1145/3640543.3645199. Online publication date: 18-Mar-2024.
  • (2024) Investigating and Designing for Trust in AI-powered Code Generation Tools. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 1475-1493. https://doi.org/10.1145/3630106.3658984. Online publication date: 3-Jun-2024.
  • (2024) Toward Tone-Aware Explanations in Recommender Systems. Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization, 261-266. https://doi.org/10.1145/3627043.3659572. Online publication date: 22-Jun-2024.
  • (2024) Practising Appropriate Trust in Human-Centred AI Design. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1-8. https://doi.org/10.1145/3613905.3650825. Online publication date: 11-May-2024.
  • (2024) Spiritual AI: Exploring the Possibilities of a Human-AI Interaction Beyond Productive Goals. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1-8. https://doi.org/10.1145/3613905.3650743. Online publication date: 11-May-2024.
