DOI: 10.1145/2063576.2063861
Poster

A nugget-based test collection construction paradigm

Published: 24 October 2011

Abstract

The problem of building test collections is central to the development of information retrieval systems such as search engines. Starting with a few relevant "nuggets" of information manually extracted from existing TREC corpora, we implement and test a methodology that finds, and correctly assesses, the vast majority of the relevant documents found by TREC assessors, as well as up to four times more additional relevant documents. Our methodology produces highly accurate test collections that promise to address the issues of scalability, reusability, and applicability.
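The approach invites a concrete illustration. The Python sketch below is a minimal rendering of the idea under strong simplifying assumptions: a document is labeled relevant when enough of a nugget's tokens appear in it. The tokenizer, the overlap score, the 0.8 threshold, and the example nuggets and corpus are all invented here for illustration; the paper's actual extraction, matching, and assessment procedure is not specified in this abstract.

```python
import re

def tokenize(text: str) -> set[str]:
    """Lowercase a string and return its set of word tokens."""
    return set(re.findall(r"\w+", text.lower()))

def nugget_score(nugget: str, document: str) -> float:
    """Fraction of the nugget's tokens that occur in the document.
    A deliberately crude stand-in for a real matcher."""
    nugget_tokens = tokenize(nugget)
    if not nugget_tokens:
        return 0.0
    return len(nugget_tokens & tokenize(document)) / len(nugget_tokens)

def assess(nuggets: list[str], corpus: dict[str, str],
           threshold: float = 0.8) -> dict[str, bool]:
    """Mark a document relevant if any nugget matches it strongly enough.
    The 0.8 threshold is invented for this sketch."""
    return {
        doc_id: any(nugget_score(n, text) >= threshold for n in nuggets)
        for doc_id, text in corpus.items()
    }

# Hypothetical topic: two nuggets and a three-document corpus.
nuggets = [
    "the Hubble telescope was repaired in 1993",
    "astronauts replaced the wide field camera",
]
corpus = {
    "doc1": "In December 1993 the Hubble telescope was repaired by a shuttle crew.",
    "doc2": "Stock markets rallied on Friday after the jobs report.",
    "doc3": "During the servicing mission, astronauts replaced the Wide Field Camera.",
}
print(assess(nuggets, corpus))
# {'doc1': True, 'doc2': False, 'doc3': True}
```

A real system would substitute a more careful matcher (shingling, passage-level similarity, or human verification of borderline cases), but the overall flow, from a handful of manually extracted nuggets to document-level relevance labels over a large corpus, is the part the abstract describes.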

References

[1]
33rd ACM SIGIR Workshop on Crowdsourcing for Search Evaluation, Geneva, Switzerland, 2010.
[2]
Javed A. Aslam and Emine Yilmaz. Inferring document relevance via average precision. In Susan Dumais, Efthimis N. Efthimiadis, David Hawking, and Kalervo Jarvelin, editors, Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 601--602. ACM Press, August 2006.
[3]
Peter Bailey, Nick Craswell, Ian Soboroff, Paul Thomas, Arjen P. de Vries, and Emine Yilmaz. Relevance assessment: Are judges exchangeable and does it matter? In Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval, SIGIR '08, pages 667--674, New York, NY, USA, 2008. ACM.
[4]
Andrei Z. Broder. Identifying and filtering near-duplicate documents. In Proceedings of the 11th Annual Symposium on Combinatorial Pattern Matching, COM '00, pages 1--10, London, UK, 2000. Springer-Verlag.
[5]
Ben Carterette, James Allan, and Ramesh K. Sitaraman. Minimal test collections for retrieval evaluation. In Proceedings of SIGIR, pages 268--275, 2006.
[6]
Ben Carterette, Virgil Pavlu, Evangelos Kanoulas, Javed A. Aslam, and James Allan. Evaluation over thousands of queries. In Sung-Hyon Myaeng, Douglas W. Oard, Fabrizio Sebastiani, Tat-Seng Chua, and Mun-Kew Leong, editors, Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 651--658. ACM Press, July 2008.
[7]
Steve Krenzel. Finding blurbs. Website. http://www.stevekrenzel.com/articles/blurbs.
[8]
Filip Radlinski, Madhu Kurup, and Thorsten Joachims. How does clickthrough data reflect retrieval quality? In Proceeding of the 17th ACM conference on Information and knowledge management, CIKM '08, pages 43--52, New York, NY, USA, 2008. ACM.
[9]
Ellen M. Voorhees. Variations in relevance judgments and the measurement of retrieval effectiveness. Inf. Process. Manage., 36:697--716, September 2000.

Cited By

  • On enhancing the robustness of timeline summarization test collections. Information Processing and Management 56(5): 1815-1836, September 2019. DOI: 10.1016/j.ipm.2019.02.006
  • Pseudo test collections for training and tuning microblog rankers. In Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 53-62, July 2013. DOI: 10.1145/2484028.2484063
  • IR system evaluation using nugget-based test collections. In Proceedings of the Fifth ACM International Conference on Web Search and Data Mining, pages 393-402, February 2012. DOI: 10.1145/2124295.2124343


Published In

CIKM '11: Proceedings of the 20th ACM International Conference on Information and Knowledge Management
October 2011, 2712 pages
ISBN: 9781450307178
DOI: 10.1145/2063576

Publisher

Association for Computing Machinery, New York, NY, United States

    Author Tag

    1. information nuggets

    Qualifiers

    • Poster

    Conference

    CIKM '11

    Acceptance Rates

Overall acceptance rate: 1,861 of 8,427 submissions (22%)

