Abstract
This paper asks to what extent querying, clicking, and text editing behavior can predict the usefulness of the search results retrieved during essay writing. To render the usefulness of a search result directly observable for the first time in this context, we cast the writing task as “essay writing with text reuse,” where text reuse serves as a usefulness indicator. Based on 150 essays written by 12 writers using a search engine to find sources for reuse, while their querying, clicking, reuse, and text editing activities were recorded, we build linear regression models for the two indicators (1) the number of words reused from clicked search results, and (2) the number of times text is pasted, which explain 69% and 90% of the variance, respectively. The three best predictors in each model account for 91–95% of the explained variance. By demonstrating that straightforward models can predict retrieval success, our study constitutes a first step towards incorporating usefulness signals into retrieval personalization for general writing tasks.
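To make the modeling setup concrete, the following is a minimal sketch (not the authors' code) of the kind of analysis the abstract describes: an ordinary least squares regression of a per-session usefulness indicator, such as the number of words reused from clicked results, on behavioral features logged during writing. The feature names and synthetic data below are illustrative assumptions, not the study's actual variables.

```python
# Sketch: predicting a text-reuse indicator from session behavior with OLS.
# All feature names and data here are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_sessions = 150  # one row per essay-writing session, mirroring the 150 essays

# Hypothetical per-session behavioral predictors.
X = np.column_stack([
    rng.poisson(8, n_sessions),    # number of queries issued
    rng.poisson(15, n_sessions),   # number of results clicked
    rng.poisson(30, n_sessions),   # number of text-editing events
])

# Usefulness indicator (1): words reused from clicked results (synthetic).
y = 40 * X[:, 1] + 5 * X[:, 0] + rng.normal(0, 50, n_sessions)

model = LinearRegression().fit(X, y)
print("R^2:", model.score(X, y))       # analogous to the variance explained reported above
print("coefficients:", model.coef_)    # relative weight of each behavioral predictor
```

In such a setup, comparing the coefficients (or the R² contribution of individual predictors) is what allows statements like "the three best predictors account for most of the explained variance."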
Cite this paper
Vakkari, P., Völske, M., Potthast, M., Hagen, M., Stein, B.: Predicting retrieval success based on information use for writing tasks. In: Méndez, E., Crestani, F., Ribeiro, C., David, G., Lopes, J. (eds.) Digital Libraries for Open Knowledge. TPDL 2018. Lecture Notes in Computer Science, vol. 11057. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-00066-0_14