DOI: 10.1145/3308558.3313727
Research article

Dynamic Ensemble of Contextual Bandits to Satisfy Users' Changing Interests

Published: 13 May 2019

Abstract

Recommender systems must operate in a highly non-stationary environment, because users' interests change quickly over time. Traditional solutions periodically rebuild their models at high computational cost, and even then cannot automatically adjust to abrupt changes in trends driven by timely information. It is important to note that the changes in reward distributions caused by a non-stationary environment can also be context dependent: when a change is orthogonal to the given context, previously maintained models should be reused for better recommendation prediction.
In this work, we focus on contextual bandit algorithms for making adaptive recommendations. We capitalize on this context-dependent property of reward changes to handle the challenging non-stationary environment during model update. In particular, we maintain a dynamic ensemble of contextual bandit models, in which each model's reward-estimation quality is monitored with respect to the given context and possible environment changes, and only the models admissible to the current environment are used for recommendation. We provide a rigorous upper regret bound analysis of the proposed algorithm. Extensive empirical evaluations on a synthetic dataset and three real-world datasets confirm the algorithm's advantage over existing non-stationary solutions that simply create new models whenever an environment change is detected.
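To make the ensemble mechanism concrete, below is a minimal Python sketch of the idea described in the abstract, not the paper's actual algorithm: a pool of LinUCB-style linear bandit models, each monitored through its recent reward-estimation error; models whose error stays within a tolerance are treated as admissible to the current environment, and a fresh model is spawned when none qualify. All class names, the sliding-window error test, and the thresholds (alpha, window, tol) are illustrative assumptions.

```python
import numpy as np

class LinUCBModel:
    """A single linear contextual bandit (LinUCB-style): ridge regression
    plus an upper-confidence exploration bonus."""

    def __init__(self, d, alpha=0.25, lam=1.0, window=50):
        self.A = lam * np.eye(d)   # regularized design matrix
        self.b = np.zeros(d)       # accumulated reward-weighted contexts
        self.alpha = alpha         # exploration weight
        self.window = window       # how many recent errors to monitor
        self.errors = []           # recent squared reward-estimation errors

    def score(self, x):
        """UCB score of context vector x under this model."""
        theta = np.linalg.solve(self.A, self.b)
        bonus = self.alpha * np.sqrt(x @ np.linalg.solve(self.A, x))
        return x @ theta + bonus

    def update(self, x, r):
        """Record the estimation error, then absorb the new observation."""
        pred = x @ np.linalg.solve(self.A, self.b)
        self.errors = (self.errors + [(r - pred) ** 2])[-self.window:]
        self.A += np.outer(x, x)
        self.b += r * x

    def admissible(self, tol=0.09):
        """Hypothetical admissibility test: the model fits the current
        environment if its recent mean estimation error stays below tol."""
        return len(self.errors) < 5 or float(np.mean(self.errors)) < tol


class DynamicEnsemble:
    """Pool of bandit models; only admissible ones drive recommendations."""

    def __init__(self, d):
        self.d = d
        self.models = [LinUCBModel(d)]

    def select(self, contexts):
        """Pick an arm using the admissible models' UCB scores."""
        live = [m for m in self.models if m.admissible()]
        if not live:                   # nothing fits: environment has changed
            fresh = LinUCBModel(self.d)
            self.models.append(fresh)
            live = [fresh]
        scores = [max(m.score(x) for m in live) for x in contexts]
        return int(np.argmax(scores))

    def update(self, x, r):
        for m in self.models:          # every model keeps tracking its error
            m.update(x, r)


# Toy usage: 5-dimensional contexts, 10 candidate arms per round.
rng = np.random.default_rng(0)
ens = DynamicEnsemble(d=5)
for t in range(100):
    arms = rng.normal(size=(10, 5))
    i = ens.select(list(arms))
    reward = float(arms[i] @ np.ones(5) / 5 + rng.normal(scale=0.1))
    ens.update(arms[i], reward)
```

In the paper, admissibility is judged from each model's reward-estimation quality with respect to the given context and possible environment changes; the fixed error tolerance above is a simplified stand-in for that test.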


Published In

WWW '19: The World Wide Web Conference
May 2019
3620 pages
ISBN:9781450366748
DOI:10.1145/3308558
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

In-Cooperation

  • IW3C2: International World Wide Web Conference Committee

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 13 May 2019

Permissions

Request permissions for this article.


Author Tags

  1. Non-stationary bandits
  2. recommender systems
  3. regret analysis

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

WWW '19
WWW '19: The Web Conference
May 13 - 17, 2019
San Francisco, CA, USA

Acceptance Rates

Overall Acceptance Rate: 1,899 of 8,196 submissions, 23%


Bibliometrics

Article Metrics

  • Downloads (Last 12 months): 26
  • Downloads (Last 6 weeks): 2
Reflects downloads up to 16 Oct 2024


Cited By

  • (2024) Contextual Distillation Model for Diversified Recommendation. Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 5307-5316. DOI: 10.1145/3637528.3671514. Online publication date: 25-Aug-2024.
  • (2024) Contextual Multi-Armed Bandit With Costly Feature Observation in Non-Stationary Environments. IEEE Open Journal of Signal Processing, 5, 820-830. DOI: 10.1109/OJSP.2024.3389809. Online publication date: 2024.
  • (2024) Non-Stationary Linear Bandits With Dimensionality Reduction for Large-Scale Recommender Systems. IEEE Open Journal of Signal Processing, 5, 548-558. DOI: 10.1109/OJSP.2024.3386490. Online publication date: 2024.
  • (2024) Online Nonstationary and Nonlinear Bandits with Recursive Weighted Gaussian Process. 2024 IEEE 48th Annual Computers, Software, and Applications Conference (COMPSAC), 11-20. DOI: 10.1109/COMPSAC61105.2024.00012. Online publication date: 2-Jul-2024.
  • (2023) Exploring Scenarios of Uncertainty about the Users' Preferences in Interactive Recommendation Systems. Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1178-1187. DOI: 10.1145/3539618.3591684. Online publication date: 19-Jul-2023.
  • (2023) Disentangled Representation for Diversified Recommendations. Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining, 490-498. DOI: 10.1145/3539597.3570389. Online publication date: 27-Feb-2023.
  • (2023) Contextual and Nonstationary Multi-armed Bandits Using the Linear Gaussian State Space Model for the Meta-Recommender System. 2023 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 3138-3145. DOI: 10.1109/SMC53992.2023.10394517. Online publication date: 1-Oct-2023.
  • (2022) Multi-Armed Bandits in Recommendation Systems. Expert Systems with Applications: An International Journal, 197:C. DOI: 10.1016/j.eswa.2022.116669. Online publication date: 18-May-2022.
  • (2021) A gang of adversarial bandits. Proceedings of the 35th International Conference on Neural Information Processing Systems, 2265-2279. DOI: 10.5555/3540261.3540435. Online publication date: 6-Dec-2021.
  • (2021) Interactive Information Retrieval with Bandit Feedback. Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2658-2661. DOI: 10.1145/3404835.3462810. Online publication date: 11-Jul-2021.
