DOI: 10.1145/2766462.2767740

Untangling Result List Refinement and Ranking Quality: a Framework for Evaluation and Prediction

Published: 09 August 2015
  Abstract

    Traditional batch evaluation metrics assume that user interaction with search results is limited to scanning down a ranked list. However, modern search interfaces come with additional elements supporting result list refinement (RLR) through facets and filters, making user search behavior increasingly dynamic. We develop an evaluation framework that takes a step beyond the interaction assumption of traditional evaluation metrics and allows for batch evaluation of systems with and without RLR elements. In our framework we model user interaction as switching between different sublists. This provides a measure of user effort based on the joint effect of user interaction with RLR elements and result quality. We validate our framework by conducting a user study and comparing model predictions with real user performance. Our model predictions show a significant positive correlation with real user effort. Further, in contrast to traditional evaluation metrics, our framework's predictions of when users stand to benefit from RLR elements reflect the findings of our user study.
    Finally, we use the framework to investigate under what conditions systems with and without RLR elements are likely to be effective. We simulate varying conditions concerning ranking quality, user, task, and interface properties, demonstrating a cost-effective way to study whole-system performance.
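    The framework's core idea lends itself to a small simulation: a user scans a ranked list and may, at some interaction cost, switch to a refined sublist produced by an RLR element, so total effort reflects both ranking quality and interface use. The sketch below illustrates that idea in Python; it is a minimal illustration under assumed parameters, not the authors' actual model, and every name and value in it (simulate_effort, switch_cost, switch_prob, wanted) is hypothetical.

```python
import random

# Minimal sketch of the sublist-switching idea from the abstract
# (not the authors' actual model). A simulated user scans a ranked
# list of 0/1 relevance labels and may, once, pay `switch_cost` to
# jump to a refined sublist produced by an RLR element (a facet or
# filter). Effort = number of results examined + any switch cost.
# All names and parameter values are illustrative assumptions.

def simulate_effort(ranked, refined, switch_cost=2.0,
                    switch_prob=0.3, wanted=3, rng=random):
    """Total effort for one simulated user to find `wanted` relevant results."""
    effort, found, switched = 0.0, 0, False
    current, pos = ranked, 0
    while found < wanted and pos < len(current):
        effort += 1.0                        # cost of examining one result
        found += current[pos]
        pos += 1
        if not switched and rng.random() < switch_prob:
            effort += switch_cost            # cost of operating the RLR element
            current, pos, switched = refined, 0, True
    return effort

if __name__ == "__main__":
    original = [0, 0, 1, 0, 0, 1, 0, 0, 0, 1]   # mediocre ranking
    refined = [1, 1, 0, 1, 0, 0]                # filtered, higher-precision sublist
    runs = [simulate_effort(original, refined) for _ in range(10000)]
    print("mean simulated effort:", sum(runs) / len(runs))
```

    In the paper's framework relevance can be graded and sublists may overlap; the sketch ignores both for brevity, but it already shows how ranking quality and switch cost jointly determine the effort a simulated user expends.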


    Cited By

    • (2023) A Systematic Review of Cost, Effort, and Load Research in Information Search and Retrieval, 1972–2020. ACM Transactions on Information Systems, 42(1), 1–39. DOI: 10.1145/3583069. Online publication date: 18-Aug-2023.
    • (2016) Optimizing Nugget Annotations with Active Learning. Proceedings of the 25th ACM International Conference on Information and Knowledge Management, 2359–2364. DOI: 10.1145/2983323.2983694. Online publication date: 24-Oct-2016.


      Published In

      SIGIR '15: Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval
      August 2015, 1198 pages
      ISBN: 9781450336215
      DOI: 10.1145/2766462


      Publisher

      Association for Computing Machinery

      New York, NY, United States



      Author Tags

      1. evaluation
      2. faceted search
      3. search behavior
      4. simulation

      Qualifiers

      • Research-article


      Acceptance Rates

      SIGIR '15 Paper Acceptance Rate: 70 of 351 submissions, 20%
      Overall Acceptance Rate: 792 of 3,983 submissions, 20%

