Abstract
Explanations—a form of post-hoc interpretability—play an instrumental role in making systems accessible as AI continues to proliferate across complex and sensitive sociotechnical domains. In this paper, we introduce Human-centered Explainable AI (HCXAI) as an approach that puts the human at the center of technology design. It develops a holistic understanding of “who” the human is by considering the interplay of values, interpersonal dynamics, and the socially situated nature of AI systems. In particular, we advocate for a reflective sociotechnical approach. We illustrate HCXAI through a case study of an explanation system for non-technical end-users, showing how technical advancements and the understanding of human factors co-evolve. Building on the case study, we lay out open research questions pertaining to further refining our understanding of “who” the human is and extending beyond one-to-one human-computer interactions. Finally, we propose that a reflective HCXAI paradigm—mediated through the perspective of Critical Technical Practice and supplemented with strategies from HCI, such as value-sensitive design and participatory design—not only helps us understand our intellectual blind spots, but can also open up new design and research spaces.
Acknowledgements
Sincerest thanks to all past and present teammates of the Human-centered XAI group at the Entertainment Intelligence Lab whose hard work made the case study possible—Brent Harrison, Pradyumna Tambwekar, Larry Chan, Chenhann Gan, and Jiahong Sun. Special thanks to Dr. Judy Gichoya for her informed perspectives on the medical scenarios. We’d also like to thank Ishtiaque Ahmed, Malte Jung, Samir Passi, and Phoebe Sengers for conversations throughout the years that have constructively added to the notion of a ‘Reflective HCXAI’. We are indebted to Rachel Urban and Lara J. Martin for their amazing proofreading assistance. We are grateful to reviewers for their useful comments and critique. This material is based upon work supported by the National Science Foundation under Grant No. 1928586.
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Ehsan, U., Riedl, M.O. (2020). Human-Centered Explainable AI: Towards a Reflective Sociotechnical Approach. In: Stephanidis, C., Kurosu, M., Degen, H., Reinerman-Jones, L. (eds) HCI International 2020 - Late Breaking Papers: Multimodality and Intelligence. HCII 2020. Lecture Notes in Computer Science(), vol 12424. Springer, Cham. https://doi.org/10.1007/978-3-030-60117-1_33
Print ISBN: 978-3-030-60116-4
Online ISBN: 978-3-030-60117-1