
Boosting Search Performance Using Query Variations

Published: 04 October 2019
  • Abstract

    Rank fusion is a powerful technique that allows multiple sources of information to be combined into a single result set. Query variations covering the same information need represent one way in which different sources of information might arise. However, when implemented in the obvious manner, fusion over query variations is not cost-effective, at odds with the usual web-search requirement for strict per-query efficiency guarantees. In this work, we propose a novel solution to query fusion by splitting the computation into two parts: one phase that is carried out offline, to generate pre-computed centroid answers for queries addressing broadly similar information needs, and then a second online phase that uses the corresponding topic centroid to compute a result page for each query. To achieve this, we make use of score-based fusion algorithms whose costs can be amortized via the pre-processing step and that can then be efficiently combined during subsequent per-query re-ranking operations. Experimental results using the ClueWeb12B collection and the UQV100 query variations demonstrate that centroid-based approaches allow improved retrieval effectiveness at little or no loss in query throughput or latency and within reasonable pre-processing requirements. We additionally show that queries that do not match any of the pre-computed clusters can be accurately identified and efficiently processed in our proposed ranking pipeline.
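    The two-phase pipeline the abstract describes can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names `combsum` and `rerank`, the min-max score normalisation, and the interpolation weight `alpha` are all placeholders standing in for the score-based fusion and per-query re-ranking machinery the paper actually evaluates.

    ```python
    from collections import defaultdict

    def combsum(run_lists):
        """Offline phase (sketch): fuse the scored runs of a topic's query
        variations into a single pre-computed 'centroid' ranking by summing
        min-max normalised scores (CombSUM-style fusion)."""
        fused = defaultdict(float)
        for run in run_lists:                      # run: {doc_id: score}
            lo, hi = min(run.values()), max(run.values())
            span = (hi - lo) or 1.0                # avoid divide-by-zero
            for doc, score in run.items():
                fused[doc] += (score - lo) / span
        return dict(fused)

    def rerank(query_run, centroid, alpha=0.5):
        """Online phase (sketch): interpolate a new query's scores with the
        pre-computed centroid of its topic cluster, then sort."""
        docs = set(query_run) | set(centroid)
        combined = {d: alpha * query_run.get(d, 0.0)
                       + (1 - alpha) * centroid.get(d, 0.0)
                    for d in docs}
        return sorted(combined, key=combined.get, reverse=True)

    # Toy example: two variations of one topic, fused offline ...
    variations = [{"d1": 2.0, "d2": 1.0}, {"d1": 3.0, "d3": 1.0}]
    centroid = combsum(variations)
    # ... then a new query matching this cluster is re-ranked online.
    print(rerank({"d2": 1.0}, centroid))  # → ['d1', 'd2', 'd3']
    ```

    The cost split is the point: the expensive multi-run fusion is paid once per topic cluster offline, while each matching online query pays only the cheap interpolation and sort.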




    Published In

    ACM Transactions on Information Systems, Volume 37, Issue 4 (October 2019), 299 pages.
    ISSN: 1046-8188
    EISSN: 1558-2868
    DOI: 10.1145/3357218
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 04 October 2019
    Accepted: 01 July 2019
    Revised: 01 May 2019
    Received: 01 November 2018
    Published in TOIS Volume 37, Issue 4


    Author Tags

    1. Rank fusion
    2. dynamic pruning
    3. effectiveness
    4. efficiency
    5. experimentation

    Qualifiers

    • Research-article
    • Research
    • Refereed

    Funding Sources

    • Australian Research Training Program Scholarship
    • Australian Research Council’s Discovery Projects Scheme
    • RMIT Vice Chancellors PhD Scholarship
    • Amazon Research Award
    • Google Faculty Research Award


    Cited By

    • (2024) Is your search query well-formed? A natural query understanding for patent prior art search. World Patent Information 76, 102254. DOI: 10.1016/j.wpi.2023.102254. Online publication date: Mar-2024.
    • (2023) How do Human and Contextual Factors Affect the Way People Formulate Queries? In Proceedings of the 2023 Conference on Human Information Interaction and Retrieval, 499–503. DOI: 10.1145/3576840.3578336. Online publication date: 19-Mar-2023.
    • (2023) Improving Content Retrievability in Search with Controllable Query Generation. In Proceedings of the ACM Web Conference 2023, 3182–3192. DOI: 10.1145/3543507.3583261. Online publication date: 30-Apr-2023.
    • (2023) Offline Pseudo Relevance Feedback for Efficient and Effective Single-pass Dense Retrieval. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2209–2214. DOI: 10.1145/3539618.3592028. Online publication date: 19-Jul-2023.
    • (2023) Searching Parameterized Retrieval & Verification Loss for Re-Identification. IEEE Journal of Selected Topics in Signal Processing 17, 3, 560–574. DOI: 10.1109/JSTSP.2023.3250989. Online publication date: May-2023.
    • (2023) Performance prediction of multivariable linear regression based on the optimal influencing factors for ranking aggregation in crowdsourcing task. Data Technologies and Applications 58, 2, 176–200. DOI: 10.1108/DTA-09-2022-0346. Online publication date: 4-Jul-2023.
    • (2023) Index-Based Batch Query Processing Revisited. In Advances in Information Retrieval, 86–100. DOI: 10.1007/978-3-031-28241-6_6. Online publication date: 16-Mar-2023.
    • (2022) Where Do Queries Come From? In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2850–2862. DOI: 10.1145/3477495.3531711. Online publication date: 6-Jul-2022.
    • (2022) sMARE: a new paradigm to evaluate and understand query performance prediction methods. Information Retrieval 25, 2, 94–122. DOI: 10.1007/s10791-022-09407-w. Online publication date: 1-Jun-2022.
    • (2022) Validating Simulations of User Query Variants. In Advances in Information Retrieval, 80–94. DOI: 10.1007/978-3-030-99736-6_6. Online publication date: 10-Apr-2022.
