DOI: 10.1145/2484028.2484170

A document rating system for preference judgements

Published: 28 July 2013

Abstract

High quality relevance judgments are essential for the evaluation of information retrieval systems. Traditional methods of collecting relevance judgments are based on collecting binary or graded nominal judgments, but such judgments are limited by factors such as inter-assessor disagreement and the arbitrariness of grades. Previous research has shown that it is easier for assessors to make pairwise preference judgments. However, unless the preferences collected are largely transitive, it is not clear how to combine them in order to obtain document relevance scores. Another difficulty is that the number of pairs that need to be assessed is quadratic in the number of documents. In this work, we consider the problem of inferring document relevance scores from pairwise preference judgments by analogy to tournaments using the Elo rating system. We show how to combine a linear number of pairwise preference judgments from multiple assessors to compute relevance scores for every document.
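The abstract names the Elo rating system, by analogy to chess tournaments, as the mechanism for turning pairwise preferences into per-document relevance scores. The paper's exact update rule and parameters are not given on this page, so the following Python sketch only illustrates a standard Elo update applied to document preference pairs; the K-factor of 32, the initial rating of 1500, and all function and variable names are illustrative assumptions, not taken from the paper.

# Minimal Elo-style sketch for document preference judgments (illustrative only).
# expected_score and elo_update follow the standard Elo formulas; K = 32 and the
# initial rating of 1500 are conventional defaults, not values from the paper.

def expected_score(rating_a, rating_b):
    # Probability, under the Elo model, that document A wins the comparison.
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def elo_update(rating_a, rating_b, a_preferred, k=32.0):
    # Update both documents' ratings after a single pairwise preference judgment.
    e_a = expected_score(rating_a, rating_b)
    s_a = 1.0 if a_preferred else 0.0
    new_a = rating_a + k * (s_a - e_a)
    new_b = rating_b + k * ((1.0 - s_a) - (1.0 - e_a))
    return new_a, new_b

# Judgments are (preferred_document, other_document) pairs, possibly collected
# from different assessors.
judgments = [("d1", "d2"), ("d2", "d3"), ("d1", "d3")]

ratings = {}  # document id -> current Elo rating
for winner, loser in judgments:
    ra = ratings.setdefault(winner, 1500.0)
    rb = ratings.setdefault(loser, 1500.0)
    ratings[winner], ratings[loser] = elo_update(ra, rb, a_preferred=True)

print(sorted(ratings.items(), key=lambda kv: -kv[1]))  # higher rating = inferred more relevant

Because each judgment updates only the two documents involved, a linear number of preference pairs, rather than all quadratic pairs, is enough to move every document's score away from its starting value, which mirrors the claim in the abstract.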

    Published In

    SIGIR '13: Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval
    July 2013
    1188 pages
    ISBN:9781450320344
    DOI:10.1145/2484028
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. evaluation
    2. preference judgment

    Qualifiers

    • Short-paper

    Conference

    SIGIR '13

    Acceptance Rates

    SIGIR '13 paper acceptance rate: 73 of 366 submissions (20%)
    Overall acceptance rate: 792 of 3,983 submissions (20%)

    Cited By

    • (2024) Reliable Information Retrieval Systems Performance Evaluation: A Review. IEEE Access, 12, 51740-51751. DOI: 10.1109/ACCESS.2024.3377239. Online publication date: 2024.
    • (2023) A Preference Judgment Tool for Authoritative Assessment. Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, 3100-3104. DOI: 10.1145/3539618.3591801. Online publication date: 19-Jul-2023.
    • (2022) Preferences on a Budget: Prioritizing Document Pairs when Crowdsourcing Relevance Judgments. Proceedings of the ACM Web Conference 2022, 319-327. DOI: 10.1145/3485447.3511960. Online publication date: 25-Apr-2022.
    • (2020) Graded Relevance. Evaluating Information Retrieval and Access Tasks, 1-20. DOI: 10.1007/978-981-15-5554-1_1. Online publication date: 2-Sep-2020.
    • (2018) Improved Fuzzy Rank Aggregation. International Journal of Rough Sets and Data Analysis, 5(4), 74-87. DOI: 10.4018/IJRSDA.2018100105. Online publication date: Oct-2018.
    • (2017) Merge-Tie-Judge. Proceedings of the ACM SIGIR International Conference on Theory of Information Retrieval, 277-280. DOI: 10.1145/3121050.3121095. Online publication date: 1-Oct-2017.
    • (2017) Transitivity, Time Consumption, and Quality of Preference Judgments in Crowdsourcing. Advances in Information Retrieval, 239-251. DOI: 10.1007/978-3-319-56608-5_19. Online publication date: 8-Apr-2017.
    • (2015) Listwise Approach for Rank Aggregation in Crowdsourcing. Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, 253-262. DOI: 10.1145/2684822.2685308. Online publication date: 2-Feb-2015.
    • (2014) Methods for ordinal peer grading. Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, 1037-1046. DOI: 10.1145/2623330.2623654. Online publication date: 24-Aug-2014.
