Query dependent pseudo-relevance feedback based on Wikipedia

Y Xu, GJF Jones, B Wang - Proceedings of the 32nd international ACM …, 2009 - dl.acm.org
Pseudo-relevance feedback (PRF) via query expansion has been proven to be effective in many information retrieval (IR) tasks. In most existing work, the top-ranked documents from an initial search are assumed to be relevant and used for PRF. One problem with this approach is that one or more of the top retrieved documents may be non-relevant, which can introduce noise into the feedback process. Moreover, existing methods generally do not take into account the significantly different types of queries that are often entered into an IR system. Intuitively, Wikipedia can be seen as a large, manually edited document collection which could be exploited to improve document retrieval effectiveness within PRF. It is not obvious how we might best utilize information from Wikipedia in PRF, and to date, the potential of Wikipedia for this task has been largely unexplored. In our work, we present a systematic exploration of the utilization of Wikipedia in PRF for query-dependent expansion. Specifically, we classify TREC topics into three categories based on Wikipedia: 1) entity queries, 2) ambiguous queries, and 3) broader queries. We propose and study the effectiveness of three methods for expansion term selection, each modeling the Wikipedia-based pseudo-relevance information from a different perspective. We incorporate the expansion terms into the original query and use language modeling IR to evaluate these methods. Experiments on four TREC test collections, including the large web collection GOV2, show that retrieval performance of each type of query can be improved. In addition, we demonstrate that the proposed method outperforms the baseline relevance model in terms of precision and robustness.
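To illustrate the general PRF-via-query-expansion idea the abstract describes (not the authors' specific Wikipedia-based methods), the following is a minimal sketch: treat the top-ranked documents from an initial search as pseudo-relevant, score candidate expansion terms by their score-weighted frequency in those documents, and append the best terms to the original query. The function name, scoring formula, and example data are illustrative assumptions, not taken from the paper.

```python
from collections import Counter

def prf_expand(query_terms, top_docs, num_expansion_terms=5):
    """Naive PRF query expansion (illustrative sketch, not the paper's method).

    query_terms: list of terms in the original query.
    top_docs: list of (doc_terms, retrieval_score) pairs for the
        top-ranked documents, which PRF assumes to be relevant.
    Returns the original query extended with the highest-scoring
    expansion terms.
    """
    scores = Counter()
    for doc_terms, doc_score in top_docs:
        tf = Counter(doc_terms)
        doc_len = len(doc_terms)
        for term, count in tf.items():
            # Weight each candidate term by the document's retrieval
            # score and its relative frequency in that document.
            if term not in query_terms:
                scores[term] += doc_score * count / doc_len
    expansion = [t for t, _ in scores.most_common(num_expansion_terms)]
    return list(query_terms) + expansion
```

As the abstract notes, if a top-ranked document is actually non-relevant, its terms still contribute to `scores`, injecting noise into the expanded query; the paper's query-dependent use of Wikipedia is aimed at mitigating exactly this.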