DOI: 10.1145/3514094.3534165
research-article
Open access

How Does Predictive Information Affect Human Ethical Preferences?

Published: 27 July 2022

Abstract

Artificial intelligence (AI) is increasingly involved in decision making in high-stakes domains, including loan applications, employment screening, and assistive clinical decision making. Involving AI in these high-stakes decisions has raised ethical concerns about how to balance competing trade-offs while respecting human values. One approach to aligning AI with human values is to elicit human ethical preferences and incorporate this information into the design of computer systems. In this work, we explore how human ethical preferences are affected by the information shown to humans during elicitation. In particular, we contrast verifiable information (e.g., patient demographics or blood test results) with predictive information (e.g., the probability of organ transplant success). Using kidney transplant allocation as a case study, we conduct a randomized experiment that elicits human ethical preferences on scarce resource allocation, to understand how these preferences are affected by verifiable and predictive information. We find that the presence of predictive information significantly changes how humans weigh other verifiable information in their ethical preferences. We also find that the source of the predictive information (e.g., whether the predictions are made by AI or by human doctors) plays a key role in how humans incorporate it into their own ethical judgements.

Supplementary Material

MP4 File (aies122.mp4)
Video Presentation for the Paper "How Does Predictive Information Affect Human Ethical Preferences?"


Cited By

  • (2023) "Exploring the Effect of AI Assistance on Human Ethical Decisions." Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 939--940. DOI: 10.1145/3600211.3604750. Online publication date: 8 August 2023.
  • (2023) "How does Value Similarity affect Human Reliance in AI-Assisted Ethical Decision Making?" Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 49--57. DOI: 10.1145/3600211.3604709. Online publication date: 8 August 2023.

Published In

AIES '22: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society
July 2022
939 pages
ISBN:9781450392471
DOI:10.1145/3514094
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. ethical preference
  2. ai ethics
  3. preference elicitation

Conference

AIES '22
Sponsor:
AIES '22: AAAI/ACM Conference on AI, Ethics, and Society
May 19 - 21, 2021
Oxford, United Kingdom

Acceptance Rates

Overall Acceptance Rate 61 of 162 submissions, 38%
