DOI: 10.1145/2684822.2685319

Understanding and Predicting Graded Search Satisfaction

Published: 02 February 2015

Abstract

Understanding and estimating satisfaction with search engines is an important aspect of evaluating retrieval performance. Research to date has modeled and predicted search satisfaction on a binary scale, i.e., the searchers are either satisfied or dissatisfied with their search outcome. However, users' search experience is a complex construct and there are different degrees of satisfaction. As such, binary classification of satisfaction may be limiting. To the best of our knowledge, we are the first to study the problem of understanding and predicting graded (multi-level) search satisfaction. We examine sessions mined from search engine logs, where searcher satisfaction was also assessed on a multi-point scale by human annotators. Leveraging these search log data, we observe rich and non-monotonic changes in search behavior in sessions with different degrees of satisfaction. The findings suggest that we should predict finer-grained satisfaction levels. To address this issue, we model search satisfaction using features indicating search outcome, search effort, and changes in both outcome and effort during a session. We show that our approach can predict subtle changes in search satisfaction more accurately than state-of-the-art methods, affording greater insight into search satisfaction. The strong performance of our models has implications for search providers seeking to accurately measure satisfaction with their services.
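The abstract describes modeling satisfaction from session-level features capturing search outcome and search effort. As a rough illustration of what such features might look like, here is a minimal sketch. All names, thresholds, and the toy 3-level grading rule are assumptions for illustration, not the authors' actual features or model; the 30-second dwell cutoff for a "satisfied click" is a common heuristic in search-log analysis, not something taken from this paper.

```python
# Hypothetical sketch of outcome/effort session features for graded
# satisfaction prediction. Feature names, thresholds, and the grading
# rule are illustrative assumptions, not the paper's actual model.

def session_features(session):
    """Compute outcome and effort features for one search session.

    `session` is a list of (action, dwell_seconds) pairs, where action
    is "query" or "click". Clicks with dwell >= 30s are treated as
    satisfied (SAT) clicks, a common log-analysis heuristic.
    """
    clicks = [d for a, d in session if a == "click"]
    queries = sum(1 for a, _ in session if a == "query")
    sat_clicks = sum(1 for d in clicks if d >= 30)
    return {
        # outcome: how much utility the searcher appears to have gained
        "sat_clicks": sat_clicks,
        "click_count": len(clicks),
        # effort: how much work the searcher expended
        "query_count": queries,
        "total_dwell": sum(clicks),
    }

def grade_satisfaction(feats):
    """Toy 3-level grading rule: high outcome at low effort scores best."""
    if feats["sat_clicks"] >= 2 and feats["query_count"] <= 2:
        return 3  # fully satisfied
    if feats["sat_clicks"] >= 1:
        return 2  # partially satisfied
    return 1      # dissatisfied

session = [("query", 0), ("click", 45), ("click", 60), ("query", 0), ("click", 5)]
print(grade_satisfaction(session_features(session)))
```

In the paper's setting the hand-written rule would be replaced by a classifier trained on the human-annotated multi-point satisfaction labels, with additional features tracking how outcome and effort change over the session.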




Published In

WSDM '15: Proceedings of the Eighth ACM International Conference on Web Search and Data Mining
February 2015
482 pages
ISBN:9781450333177
DOI:10.1145/2684822


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. effort
  2. evaluation
  3. search satisfaction
  4. session
  5. utility

Qualifiers

  • Research-article

Conference

WSDM 2015

Acceptance Rates

WSDM '15 Paper Acceptance Rate 39 of 238 submissions, 16%;
Overall Acceptance Rate 498 of 2,863 submissions, 17%

Cited By

  • (2024) Individual Persistence Adaptation for User-Centric Evaluation of User Satisfaction in Recommender Systems. IEEE Access, 12:23626-23635. DOI: 10.1109/ACCESS.2024.3360693
  • (2023) A modeling of repurchase intention in Sharia hotels: An integrated model of price, location, religiosity, trust, and satisfaction. International Journal of Advanced and Applied Sciences, 10(12):161-171. DOI: 10.21833/ijaas.2023.12.018
  • (2023) Understanding and Predicting User Satisfaction with Conversational Recommender Systems. ACM Transactions on Information Systems, 42(2):1-37. DOI: 10.1145/3624989
  • (2023) Group Fairness for Content Creators: the Role of Human and Algorithmic Biases under Popularity-based Recommendations. In RecSys '23: 863-870. DOI: 10.1145/3604915.3608841
  • (2023) A Systematic Review of Cost, Effort, and Load Research in Information Search and Retrieval, 1972-2020. ACM Transactions on Information Systems, 42(1):1-39. DOI: 10.1145/3583069
  • (2023) Representing Tasks with a Graph-Based Method for Supporting Users in Complex Search Tasks. In CHIIR '23: 378-382. DOI: 10.1145/3576840.3578279
  • (2023) Practice and Challenges in Building a Business-oriented Search Engine Quality Metric. In SIGIR '23: 3295-3299. DOI: 10.1145/3539618.3591841
  • (2023) The role of luck in the success of social media influencers. Applied Network Science, 8(1). DOI: 10.1007/s41109-023-00573-4
  • (2023) Constructing and meta-evaluating state-aware evaluation metrics for interactive search systems. Information Retrieval Journal, 26(1-2). DOI: 10.1007/s10791-023-09426-1
  • (2022) Understanding User Satisfaction with Task-oriented Dialogue Systems. In SIGIR '22: 2018-2023. DOI: 10.1145/3477495.3531798
