https://doi.org/10.1145/3523227.3547373
tutorial

Improving Recommender Systems with Human-in-the-Loop

Published: 13 September 2022

Abstract

Today, most recommender systems employ Machine Learning to recommend posts, products, and other items, usually produced by the users. Despite the impressive progress in Deep Learning and Reinforcement Learning, we observe that recommendations made by such systems still do not correlate well with actual human preferences. In our tutorial, we will bridge the gap between the crowdsourcing and recommender systems communities by showing how one can incorporate a human-in-the-loop component into a recommender system to gather real human feedback on ranked recommendations. We will discuss the ranking data lifecycle and run through it step by step. A significant portion of the tutorial is devoted to hands-on practice, in which attendees will, under our guidance, sample recommendations, build a ground-truth dataset using crowdsourced data, and compute offline evaluation scores.
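The hands-on workflow described above hinges on two steps: aggregating crowdsourced pairwise judgments into a ground-truth ranking, and scoring a recommender against that ranking offline. The following is a minimal sketch of those two steps, not code from the tutorial itself; the function names and toy data are our own. It uses the classic Bradley-Terry model for aggregation and NDCG for offline evaluation, both standard choices for this kind of pipeline.

```python
import math
from collections import defaultdict

def bradley_terry(comparisons, items, iters=100):
    """Estimate item quality scores from pairwise (winner, loser) judgments
    using the Bradley-Terry minorization-maximization update.
    Assumes every item wins at least one comparison."""
    scores = {i: 1.0 for i in items}
    wins = defaultdict(int)   # wins[i]: comparisons item i won
    pairs = defaultdict(int)  # pairs[{i, j}]: times i and j were compared
    for winner, loser in comparisons:
        wins[winner] += 1
        pairs[frozenset((winner, loser))] += 1
    for _ in range(iters):
        new = {}
        for i in items:
            denom = 0.0
            for j in items:
                if i == j:
                    continue
                n = pairs[frozenset((i, j))]
                if n:
                    denom += n / (scores[i] + scores[j])
            new[i] = wins[i] / denom if denom else scores[i]
        total = sum(new.values())
        scores = {i: s / total for i, s in new.items()}  # normalize to sum 1
    return scores

def ndcg_at_k(ranked, relevance, k):
    """NDCG@k of a system ranking against graded relevance scores."""
    dcg = sum(relevance.get(item, 0.0) / math.log2(rank + 2)
              for rank, item in enumerate(ranked[:k]))
    ideal = sorted(relevance.values(), reverse=True)[:k]
    idcg = sum(rel / math.log2(rank + 2) for rank, rel in enumerate(ideal))
    return dcg / idcg if idcg else 0.0

# Toy example: three items, five pairwise judgments from crowd workers
comparisons = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B"), ("C", "B")]
scores = bradley_terry(comparisons, ["A", "B", "C"])
print(ndcg_at_k(["A", "C", "B"], scores, k=3))  # offline score of one system ranking
```

In practice, purpose-built crowdsourcing quality-control toolkits (such as Crowd-Kit, by the tutorial's authors) provide more robust aggregation methods than this sketch, including worker-skill modeling.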




Published In

RecSys '22: Proceedings of the 16th ACM Conference on Recommender Systems
September 2022
743 pages
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. crowdsourcing
  2. human-in-the-loop
  3. offline evaluation
  4. recommender systems

Qualifiers

  • Tutorial
  • Research
  • Refereed limited

Acceptance Rates

Overall Acceptance Rate 254 of 1,295 submissions, 20%

Article Metrics

  • Downloads (last 12 months): 54
  • Downloads (last 6 weeks): 2

Reflects downloads up to 27 Jan 2025

Cited By
  • (2024) Investigating Characteristics of Media Recommendation Solicitation in r/ifyoulikeblank. Proceedings of the ACM on Human-Computer Interaction 8, CSCW2, 1–23. https://doi.org/10.1145/3687041
  • (2024) 8–10% of algorithmic recommendations are ‘bad’, but… an exploratory risk-utility meta-analysis and its regulatory implications. International Journal of Information Management 75, C. https://doi.org/10.1016/j.ijinfomgt.2023.102743
  • (2024) Human-in-the-loop in artificial intelligence in education: A review and entity-relationship (ER) analysis. Computers in Human Behavior: Artificial Humans 2, 1 (100053). https://doi.org/10.1016/j.chbah.2024.100053
  • (2023) Recommender Algorithms Do No Harm ~90% But… An Exploratory Risk-Utility Meta-Analysis of Algorithmic Audits. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4426783
