Notions of explainability and evaluation approaches for explainable artificial intelligence

Published: 01 December 2021

Abstract

Explainable Artificial Intelligence (XAI) has experienced significant growth over the last few years. This is due to the widespread application of machine learning, particularly deep learning, which has led to the development of highly accurate models that lack explainability and interpretability. A plethora of methods to tackle this problem have been proposed, developed and tested, coupled with several studies attempting to define the concept of explainability and its evaluation. This systematic review contributes to the body of knowledge by clustering the scientific studies via a hierarchical system that classifies theories and notions related to the concept of explainability and the evaluation approaches for XAI methods. The structure of this hierarchy builds on an exhaustive analysis of existing taxonomies and peer-reviewed scientific material. Findings suggest that scholars have identified numerous notions and requirements that an explanation should meet in order to be easily understandable by end-users and to provide actionable information that can inform decision making. They have also suggested various approaches to assess the degree to which machine-generated explanations meet these demands. Overall, these approaches can be clustered into human-centred evaluations and evaluations with more objective metrics. However, despite the vast body of knowledge developed around the concept of explainability, there is no general consensus among scholars on how an explanation should be defined, nor on how its validity and reliability should be assessed. Finally, the review concludes by critically discussing these gaps and limitations, and it defines future research directions with explainability as the starting component of any artificially intelligent system.

Highlights

Proposal of a hierarchical system to organise the notions of explainability.
Proposal of a hierarchical system to organise the evaluation approaches for XAI methods.
Summary of approaches for defining an explanation and its effectiveness.
Summary of scattered evaluation approaches for assessing XAI methods.
Proposal of a framework with explainability as its core element.

        Published In

        Information Fusion, Volume 76, Issue C, December 2021, 430 pages

        Publisher

        Elsevier Science Publishers B.V., Netherlands

        Author Tags

        1. Explainable artificial intelligence
        2. Notions of explainability
        3. Evaluation methods

        Qualifiers

        • Research-article

        Cited By

        • (2024) Towards Citizen-Centric Multiagent Systems Based on Large Language Models. Proceedings of the 2024 International Conference on Information Technology for Social Good, pp. 26–31. https://doi.org/10.1145/3677525.3678636. Online publication date: 4-Sep-2024.
        • (2024) Time for an Explanation: A Mini-Review of Explainable Physio-Behavioural Time-Series Classification. Companion of the 2024 ACM International Joint Conference on Pervasive and Ubiquitous Computing, pp. 885–889. https://doi.org/10.1145/3675094.3679001. Online publication date: 5-Oct-2024.
        • (2024) A User-Centric Exploration of Axiomatic Explainable AI in Participatory Budgeting. Companion of the 2024 ACM International Joint Conference on Pervasive and Ubiquitous Computing, pp. 126–130. https://doi.org/10.1145/3675094.3677599. Online publication date: 5-Oct-2024.
        • (2024) Evaluating Human-Centered AI Explanations: Introduction of an XAI Evaluation Framework for Fact-Checking. Proceedings of the 3rd ACM International Workshop on Multimedia AI against Disinformation, pp. 91–100. https://doi.org/10.1145/3643491.3660283. Online publication date: 10-Jun-2024.
        • (2024) XAIport: A Service Framework for the Early Adoption of XAI in AI Model Development. Proceedings of the 2024 ACM/IEEE 44th International Conference on Software Engineering: New Ideas and Emerging Results, pp. 67–71. https://doi.org/10.1145/3639476.3639759. Online publication date: 14-Apr-2024.
        • (2024) TabEE: Tabular Embeddings Explanations. Proceedings of the ACM on Management of Data, 2(1), pp. 1–26. https://doi.org/10.1145/3639329. Online publication date: 26-Mar-2024.
        • (2024) Exploring the Usability and Trustworthiness of AI-Driven User Interfaces for Neurological Diagnosis. Adjunct Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization, pp. 627–634. https://doi.org/10.1145/3631700.3665192. Online publication date: 27-Jun-2024.
        • (2024) Trust Issues: Discrepancies in Trustworthy AI Keywords Use in Policy and Research. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, pp. 2222–2233. https://doi.org/10.1145/3630106.3659035. Online publication date: 3-Jun-2024.
        • (2024) The Role of Explainability in Collaborative Human-AI Disinformation Detection. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, pp. 2157–2174. https://doi.org/10.1145/3630106.3659031. Online publication date: 3-Jun-2024.
        • (2024) Unraveling the Dilemma of AI Errors: Exploring the Effectiveness of Human and Machine Explanations for Large Language Models. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, pp. 1–20. https://doi.org/10.1145/3613904.3642934. Online publication date: 11-May-2024.