DOI: 10.1145/3450613.3456846

Model-Agnostic Counterfactual Explanations of Recommendations

Published: 21 June 2021

Abstract

Explanations for algorithmically generated recommendations are an important requirement for transparent and trustworthy recommender systems. When the internal recommendation model is not inherently interpretable (e.g., most contemporary systems are complex and opaque), or when access to the system is not available (e.g., recommendation as a service), explanations have to be generated post hoc, i.e., after the system is trained. In this common setting, the standard approach is to provide plausible interpretations of the observed outputs of the system, e.g., by building a simple surrogate model that is inherently interpretable and explaining that model. This, however, has several drawbacks. First, such explanations are not truthful: they are rationalizations of the observed inputs and outputs constructed by another system. Second, there are privacy concerns: to train a surrogate model, one has to know the interactions of users other than the one who seeks an explanation. Third, such explanations may not be scrutable and actionable: they typically return weights for items or other users that are difficult to comprehend and hard to act upon so as to improve the quality of one's recommendations.
In this work, we present a model-agnostic explanation mechanism that is truthful, private, scrutable, and actionable. The key idea is to provide counterfactual explanations, defined as those small changes to the user’s interaction history that are responsible for observing the recommendation output to be explained. Without access to the internal recommendation model, finding concise counterfactual explanations is a hard search problem. We propose several strategies that seek to efficiently extract concise explanations under constraints. Experimentally, we show that these strategies are more efficient and effective than exhaustive and random search.
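To make the search problem concrete, here is a minimal sketch (not the paper's proposed strategies) of a counterfactual search against a black-box recommender: scan subsets of the user's history in order of increasing size until removing one changes the top recommendation. All names here (`CATALOG`, `SIMILARITY`, `top_recommendation`) are hypothetical toy stand-ins for an opaque system that can only be queried.

```python
from itertools import combinations

# Toy data standing in for a real system: a tiny item catalog and a
# hand-made item-item similarity table (both hypothetical).
CATALOG = ["a", "b", "c", "d", "e"]
SIMILARITY = {("a", "c"): 1.0, ("b", "c"): 1.5, ("b", "d"): 2.0}

def top_recommendation(history):
    """Hypothetical black-box recommender: scores each unseen item by its
    similarity to the user's history and returns the best one. Stands in
    for any opaque model we can only query."""
    scores = {i: sum(SIMILARITY.get((h, i), 0.0) for h in history)
              for i in CATALOG if i not in history}
    return max(scores, key=scores.get)

def counterfactual_explanation(history, max_size=2):
    """Find the smallest subset of the history whose removal changes the
    top recommendation, scanning subsets in order of increasing size
    (i.e., exhaustive search over small subsets -- the kind of baseline
    the paper's strategies aim to improve on)."""
    original = top_recommendation(history)
    for size in range(1, max_size + 1):
        for subset in combinations(history, size):
            reduced = [h for h in history if h not in subset]
            if reduced and top_recommendation(reduced) != original:
                return list(subset), original
    return None, original

explanation, rec = counterfactual_explanation(["a", "b"])
# Reads as: "you were recommended `rec` because of `explanation`;
# without those interactions, the recommendation would differ."
```

Note that each candidate subset costs one query to the black box, so the number of queries grows combinatorially with the explanation size, which is why naive exhaustive search quickly becomes infeasible.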

Supplementary Material

MP4 File (UMAP21-sp163.mp4)
Presentation Video


Published In

UMAP '21: Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization
June 2021, 325 pages
ISBN: 9781450383660
DOI: 10.1145/3450613

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. counterfactuals
  2. explainability
  3. recommender systems

Qualifiers

  • Short-paper
  • Research
  • Refereed limited

Conference

UMAP '21

Acceptance Rates

Overall Acceptance Rate: 162 of 633 submissions, 26%

Cited By

  • (2024) Counterfactual Explanations and Algorithmic Recourses for Machine Learning: A Review. ACM Computing Surveys 56(12), 1–42. https://doi.org/10.1145/3677119
  • (2024) A Counterfactual Framework for Learning and Evaluating Explanations for Recommender Systems. Proceedings of the ACM Web Conference 2024, 3723–3733. https://doi.org/10.1145/3589334.3645560
  • (2024) Attention Is Not the Only Choice: Counterfactual Reasoning for Path-Based Explainable Recommendation. IEEE Transactions on Knowledge and Data Engineering 36(9), 4458–4471. https://doi.org/10.1109/TKDE.2024.3373608
  • (2024) Reinforced Path Reasoning for Counterfactual Explainable Recommendation. IEEE Transactions on Knowledge and Data Engineering 36(7), 3443–3459. https://doi.org/10.1109/TKDE.2024.3354077
  • (2024) Learning-based counterfactual explanations for recommendation. Science China Information Sciences 67(8). https://doi.org/10.1007/s11432-023-3974-2
  • (2024) Fairness and Explainability for Enabling Trust in AI Systems. In A Human-Centered Perspective of Intelligent Personalized Environments and Systems, 85–110. https://doi.org/10.1007/978-3-031-55109-3_3
  • (2023) Climbing the Freelance Ladder? A Freelancer-Centric Approach for Recommending Tasks and Reskilling Strategies. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4669635
  • (2023) A Reusable Model-agnostic Framework for Faithfully Explainable Recommendation and System Scrutability. ACM Transactions on Information Systems 42(1), 1–29. https://doi.org/10.1145/3605357
  • (2023) Prototype-Guided Counterfactual Explanations via Variational Auto-encoder for Recommendation. In Machine Learning and Knowledge Discovery in Databases: Applied Data Science and Demo Track, 652–668. https://doi.org/10.1007/978-3-031-43427-3_39
  • (2022) XRecSys: A framework for path reasoning quality in explainable recommendation. Software Impacts 14, 100404. https://doi.org/10.1016/j.simpa.2022.100404
