DOI: 10.1145/3411763.3441342 · CHI Conference Proceedings · Extended Abstract

Operationalizing Human-Centered Perspectives in Explainable AI

Published: 08 May 2021

Abstract

Artificial Intelligence (AI)’s impact on our lives is far-reaching. As AI systems proliferate in high-stakes domains such as healthcare, finance, mobility, and law, they must be able to explain their decisions to diverse end-users in a comprehensible manner. Yet the discourse of Explainable AI (XAI) has been predominantly focused on algorithm-centered approaches, which fall short of meeting user needs and can exacerbate issues of algorithmic opacity. To address these issues, researchers have called for human-centered approaches to XAI. There is a need to chart the domain and shape the discourse of XAI through reflective discussions among diverse stakeholders. The goal of this workshop is to examine how human-centered perspectives in XAI can be operationalized at the conceptual, methodological, and technical levels. Encouraging holistic (historical, sociological, and technical) approaches, we put an emphasis on “operationalizing”: producing actionable frameworks, transferable evaluation methods, and concrete design guidelines, and articulating a coordinated research agenda for XAI.


Cited By

  • AI, the New-Age Lawyer. In Powering Industry 5.0 and Sustainable Development Through Innovation (2024), 198–217. DOI: 10.4018/979-8-3693-3550-5.ch014
  • Explicabilité et conditions d’appropriation de l’intelligence artificielle : une ressource au service du management ? [Explainability and the conditions for appropriating artificial intelligence: a resource for management?]. Question(s) de management 49, 2 (2024), 131–141. DOI: 10.3917/qdm.229.0131
  • An Intrinsically Explainable Method to Decode P300 Waveforms from EEG Signal Plots Based on Convolutional Neural Networks. Brain Sciences 14, 8 (2024), 836. DOI: 10.3390/brainsci14080836
  • A Roadmap of Explainable Artificial Intelligence: Explain to Whom, When, What and How? ACM Transactions on Autonomous and Adaptive Systems 19, 4 (2024), 1–40. DOI: 10.1145/3702004
  • Human-centered AI Technologies in Human-robot Interaction for Social Settings. In Proceedings of the International Conference on Mobile and Ubiquitous Multimedia (2024), 501–505. DOI: 10.1145/3701571.3701610
  • Explainable AI Reloaded: Challenging the XAI Status Quo in the Era of Large Language Models. In Proceedings of the Halfway to the Future Symposium (2024), 1–8. DOI: 10.1145/3686169.3686185
  • Designing multi-model conversational AI financial systems: understanding sensitive values of women entrepreneurs in Brazil. In Proceedings of the 2024 ACM International Conference on Interactive Media Experiences Workshops (2024), 11–18. DOI: 10.1145/3672406.3672409
  • Reassuring, Misleading, Debunking: Comparing Effects of XAI Methods on Human Decisions. ACM Transactions on Interactive Intelligent Systems 14, 3 (2024), 1–36. DOI: 10.1145/3665647
  • Seamful XAI: Operationalizing Seamful Design in Explainable AI. Proceedings of the ACM on Human-Computer Interaction 8, CSCW1 (2024), 1–29. DOI: 10.1145/3637396
  • Meaningful Transparency for Clinicians: Operationalising HCXAI Research with Gynaecologists. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (2024), 1268–1281. DOI: 10.1145/3630106.3658971


Published In

CHI EA '21: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems
May 2021, 2965 pages
ISBN: 9781450380959
DOI: 10.1145/3411763
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. Algorithmic Fairness
  2. Artificial Intelligence
  3. Critical Technical Practice
  4. Explainable Artificial Intelligence
  5. Human-centered Computing
  6. Interpretability
  7. Interpretable Machine Learning
  8. Trust in Automation

Qualifiers

  • Extended-abstract
  • Research
  • Refereed limited

Conference

CHI '21

Acceptance Rates

Overall acceptance rate: 6,164 of 23,696 submissions (26%)



