Abstract
Clickthrough data has attracted increasing attention as an implicit indicator of user feedback. Previous analysis has suggested that user click behaviour is subject to a quality bias: users click at different rank positions when viewing effective search results than when viewing less effective ones. Based on this observation, it should be possible to use click data to infer the quality of the underlying search system. In this paper we carry out a user study to systematically investigate how click behaviour changes across different levels of search system effectiveness, as measured by information retrieval performance metrics. Our results show that click behaviour does not vary systematically with the quality of search results. However, click behaviour does vary significantly between individual users, and between search topics. This suggests that using direct click behaviour (click rank and click frequency) to infer the quality of the underlying search system is problematic. Further analysis of our click data indicates that the correspondence between clicks in a search result list and subsequent confirmation that the clicked resource is actually relevant is low. Clicks should therefore be used as implicit indications of relevance only with caution.
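The "direct click behaviour" measures named in the abstract, click rank and click frequency, can be illustrated with a minimal sketch. The function name and log format below are hypothetical, chosen for illustration; they are not the instrumentation used in the paper's study.

```python
from collections import defaultdict

def click_stats(click_log):
    """Aggregate per-query click statistics from (query, rank) click events.

    click_log: iterable of (query_id, clicked_rank) pairs, ranks 1-based.
    Returns {query_id: (click_count, mean_click_rank)}.
    """
    ranks = defaultdict(list)
    for query, rank in click_log:
        ranks[query].append(rank)
    return {q: (len(r), sum(r) / len(r)) for q, r in ranks.items()}

# Example: two clicks on results for q1 (ranks 1 and 3), one for q2 (rank 2).
log = [("q1", 1), ("q1", 3), ("q2", 2)]
stats = click_stats(log)  # {"q1": (2, 2.0), "q2": (1, 2.0)}
```

The paper's finding is that such per-query statistics vary more with the user and the topic than with the effectiveness of the ranking, which is why comparing them across systems is problematic.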
Copyright information
© 2008 Springer-Verlag Berlin Heidelberg
Cite this paper
Scholer, F., Shokouhi, M., Billerbeck, B., Turpin, A. (2008). Using Clicks as Implicit Judgments: Expectations Versus Observations. In: Macdonald, C., Ounis, I., Plachouras, V., Ruthven, I., White, R.W. (eds) Advances in Information Retrieval. ECIR 2008. Lecture Notes in Computer Science, vol 4956. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-78646-7_6
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-78645-0
Online ISBN: 978-3-540-78646-7
eBook Packages: Computer Science (R0)