Learning Online Trends for Interactive Query Auto-Completion

Published: 01 November 2017

Abstract

Query auto-completion (QAC) is widely used by modern search engines to assist users by predicting their intended queries. Most QAC approaches rely on deterministic batch-learning algorithms trained on past query-log data. However, query popularities change constantly, and QAC operates in a real-time setting where users interact with the search engine continually; ideally, QAC should therefore be timely and adaptive enough to capture time-sensitive changes in an online fashion. Moreover, because of vertical position bias, a higher-ranked query suggestion tends to attract more clicks regardless of the user's original intention. In the long run, it is thus important to promote some lower-ranked yet potentially more relevant queries to higher positions in order to collect more valuable user feedback. To tackle these issues, we formulate QAC as a ranked multi-armed bandit (MAB) problem, which enjoys theoretical soundness. To exploit prior knowledge from query logs, we solve this MAB problem with Bayesian inference and Thompson sampling. Extensive experiments on large-scale datasets show that our QAC algorithm adaptively learns temporal trends and outperforms existing QAC algorithms in ranking quality.
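The abstract describes combining query-log priors with Thompson sampling over candidate completions. The paper's actual model is not given here; as a rough illustration of the general idea only, the sketch below uses a hypothetical Beta-Bernoulli bandit where each candidate query is an arm, its Beta posterior is seeded from query-log click counts, and ranking by posterior samples trades off exploiting popular queries against exploring uncertain ones. The class name `ThompsonQAC` and the prior scheme are assumptions, not the authors' method.

```python
import random

class ThompsonQAC:
    """Hypothetical Beta-Bernoulli Thompson-sampling sketch for QAC.

    Each candidate completion is a bandit arm whose click probability
    has a Beta(alpha, beta) posterior seeded from query-log counts.
    """

    def __init__(self, log_counts, prior_strength=1.0):
        # Prior from query logs: alpha grows with past clicks,
        # beta starts at a flat pseudo-count (an assumed scheme).
        self.alpha = {q: prior_strength + c for q, c in log_counts.items()}
        self.beta = {q: prior_strength for q in log_counts}

    def suggest(self, candidates, k=3):
        # Draw one click-rate sample per arm from its posterior and
        # rank by the samples; uncertain arms occasionally sample high,
        # which yields the exploration of lower-ranked queries.
        sampled = {q: random.betavariate(self.alpha[q], self.beta[q])
                   for q in candidates}
        return sorted(candidates, key=sampled.get, reverse=True)[:k]

    def update(self, query, clicked):
        # Bayesian update from user feedback: a click raises alpha,
        # a skip raises beta, so posteriors track shifting popularity.
        if clicked:
            self.alpha[query] += 1
        else:
            self.beta[query] += 1

# Usage: seed from (hypothetical) log counts, then learn from clicks.
qac = ThompsonQAC({"weather": 5, "weather radar": 2, "weather tomorrow": 1})
top2 = qac.suggest(["weather", "weather radar", "weather tomorrow"], k=2)
```

Repeated `update` calls shift the posteriors, so a completion whose clicks surge is promoted online without retraining a batch model; this captures the adaptivity the abstract argues for, though the real system would also need the submodular-coverage and position-bias machinery cited below.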


Cited By

  • (2024) "Effective Generalized Low-Rank Tensor Contextual Bandits," IEEE Transactions on Knowledge and Data Engineering, vol. 36, no. 12, pp. 8051–8065, 1 Dec. 2024. doi:10.1109/TKDE.2024.3469782
  • (2023) "Deep Learning Methods for Query Auto Completion," Advances in Information Retrieval, pp. 341–348, 2 Apr. 2023. doi:10.1007/978-3-031-28241-6_35
  • (2021) "Exploratory Search of GANs with Contextual Bandits," Proc. 30th ACM International Conference on Information & Knowledge Management, pp. 3157–3161, 26 Oct. 2021. doi:10.1145/3459637.3482103
  • (2020) "Learning to Generate Personalized Query Auto-Completions via a Multi-View Multi-Task Attentive Approach," Proc. 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 2998–3007, 23 Aug. 2020. doi:10.1145/3394486.3403350

Published In

IEEE Transactions on Knowledge and Data Engineering, Volume 29, Issue 11, Nov. 2017, 255 pages.
Publisher: IEEE Educational Activities Department, United States.
