DOI: 10.1145/3386392.3399995
Keynote

Building User Trust in Recommendations via Fairness and Explanations

Published: 13 July 2020

Abstract

Modern Artificial Intelligence (AI) techniques, based on the statistical analysis of large volumes of data, are quickly gaining traction across various domains. Recommender Systems are a class of AI techniques that extract preference patterns from large traces of human behavior. Recommenders assist people in making decisions that range from harmless everyday dilemmas, e.g., which shoes to buy, to seemingly innocuous choices with long-term, hidden consequences, e.g., which news article to read, up to more critical decisions, e.g., which person to hire.
As more and more aspects of our everyday lives are influenced by automated decisions made by recommender systems, it becomes natural to question whether these systems are trustworthy, particularly given the opaqueness and complexity of their internal workings. These questions arise in the broader, timely context of concerns about the societal and ethical implications of applying AI techniques, concerns that have also brought about new regulations, such as the EU's "Right to Explanation".
In this talk, we discuss techniques for increasing the user's trust in the decisions of a recommender system, focusing on fairness aspects and explanation approaches. On the one hand, fairness means that the system exhibits certain desirable ethical traits, such as being non-discriminatory, diversity-aware, and bias-free. On the other hand, explanations provide human-understandable interpretations of the inner workings of the system. Both mechanisms can be used in tandem to promote trust in the system. In addition, we examine user trust from the standpoint of different stakeholders, who may have varying levels of technical background and diverse needs.
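
To make the fairness notion above concrete, the following is a minimal sketch, not taken from the talk, of one way such a property could be quantified over a recommender's ranked output: exposure-based fairness across item groups (e.g., content providers), using a standard logarithmic position discount. The function names (exposure_by_group, exposure_disparity), the grouping scheme, and the example data are illustrative assumptions, not part of the keynote.

```python
# Illustrative sketch (not from the keynote): quantifying exposure-based
# fairness of a single ranked recommendation list. Item exposure is modeled
# with a logarithmic position discount (as in DCG); a ranking is considered
# more fair when item groups receive comparable total exposure.
import math
from collections import defaultdict

def exposure_by_group(ranking, item_groups):
    """Sum position-discounted exposure per group for one ranked list.

    ranking: list of item ids, best first.
    item_groups: dict mapping item id -> group label (e.g., provider).
    """
    exposure = defaultdict(float)
    for rank, item in enumerate(ranking, start=1):
        exposure[item_groups[item]] += 1.0 / math.log2(rank + 1)
    return dict(exposure)

def exposure_disparity(ranking, item_groups):
    """Ratio of least- to most-exposed group (1.0 = equal exposure)."""
    exposure = exposure_by_group(ranking, item_groups)
    return min(exposure.values()) / max(exposure.values())

# Hypothetical example: five items from two providers, "A" and "B".
groups = {"i1": "A", "i2": "A", "i3": "B", "i4": "B", "i5": "B"}
ranking = ["i1", "i2", "i3", "i4", "i5"]
print(exposure_by_group(ranking, groups))   # exposure per provider
print(exposure_disparity(ranking, groups))  # ~0.81 for this ranking
```

A disparity close to 1.0 indicates that groups receive comparable exposure, while values well below 1.0 flag rankings in which one group is systematically under-exposed; such signals are the kind of fairness evidence that can be surfaced to users and other stakeholders alongside explanations of why particular items were ranked highly.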

Supplementary Material

VTT File (3386392.3399995.vtt)
MP4 File (3386392.3399995.mp4)
Supplemental Video

    Published In

    UMAP '20 Adjunct: Adjunct Publication of the 28th ACM Conference on User Modeling, Adaptation and Personalization
    July 2020
    395 pages
    ISBN:9781450379502
    DOI:10.1145/3386392
    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. explanations
    2. fairness
    3. recommender systems
    4. user trust

    Qualifiers

    • Keynote

    Conference

    UMAP '20

    Acceptance Rates

    Overall Acceptance Rate 162 of 633 submissions, 26%
