DOI: 10.1145/3411763.3451654

Understanding User Attitudes Towards Negative Side Effects of AI Systems

Published: 08 May 2021

Abstract

Artificial Intelligence (AI) systems deployed in the open world may produce negative side effects: unanticipated, undesirable outcomes that occur in addition to the intended outcomes of the system's actions. These negative side effects affect users directly or indirectly by violating their preferences or altering their environment in an undesirable, potentially harmful manner. While the existing literature has begun to explore techniques for overcoming the impacts of negative side effects in deployed systems, there have been no prior efforts to determine how users perceive and respond to them. We surveyed 183 participants to develop an understanding of user attitudes towards side effects and how side effects impact user trust in the system; the surveys targeted two domains, an autonomous vacuum cleaner and an autonomous vehicle, each with 183 respondents. The results indicate that users are willing to tolerate side effects that are not safety-critical but prefer that they be minimized as much as possible. Furthermore, users are willing to assist the system in mitigating negative side effects by providing feedback and by reconfiguring the environment. Trust in the system diminishes if it fails to minimize the impacts of negative side effects over time. These results support key assumptions underlying existing techniques and facilitate the development of new methods for overcoming negative side effects of AI systems.
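The abstract notes that users tolerate side effects that are not safety-critical but still want them minimized. As a purely illustrative sketch, not taken from the paper, the Python snippet below shows one simple way a decision-making system could encode that preference: candidate plans are scored as task reward minus a weighted side-effect penalty, and any plan that risks a safety-critical side effect is ruled out entirely. The plan names, numeric values, and the additive trade-off are assumptions made for illustration.

# Illustrative sketch (not from the paper): trading off task completion
# against anticipated negative side effects when selecting a plan.
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    task_reward: float        # value of completing the intended task
    side_effect_cost: float   # estimated severity of undesirable side effects
    safety_critical: bool     # True if any side effect threatens safety

def plan_score(plan: Plan, penalty_weight: float = 2.0) -> float:
    # Safety-critical side effects are never tolerated; others are penalized.
    if plan.safety_critical:
        return float("-inf")
    return plan.task_reward - penalty_weight * plan.side_effect_cost

candidates = [
    Plan("vacuum the whole floor, may scuff the hardwood", 10.0, 4.0, False),
    Plan("vacuum carpeted areas only", 7.0, 0.5, False),
    Plan("vacuum near the stairs without a ledge check", 10.0, 1.0, True),
]

best = max(candidates, key=plan_score)
print("Selected plan:", best.name)  # selects the carpet-only plan

Real mitigation techniques in the literature learn or elicit such penalties from user feedback or from the state of the environment rather than hard-coding them, which aligns with the finding that users are willing to help by providing feedback and reconfiguring the environment.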



Published In

CHI EA '21: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems
May 2021, 2965 pages
ISBN: 9781450380959
DOI: 10.1145/3411763

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Artificial intelligence systems
  2. Case study
  3. Negative side effects

Qualifiers

  • Poster
  • Research
  • Refereed limited


