DOI: 10.1145/3397271.3401148 (SIGIR conference proceedings, research article)

Reinforcement Learning to Rank with Pairwise Policy Gradient

Published: 25 July 2020

Abstract

    This paper concerns reinforcement learning (RL) of document ranking models for information retrieval (IR). One branch of RL approaches to ranking formalizes the ranking process as a Markov decision process (MDP) and determines the model parameters with policy gradient. Though these approaches have shown preliminary success, they are still far from achieving their full potential. Existing policy gradient methods directly use the absolute performance scores (returns) of the sampled document lists in their gradient estimations, which causes two limitations: 1) the estimations fail to reflect the relative goodness of documents within the same query, which is central to the nature of IR ranking; 2) the estimations have high variance, resulting in slow learning and low ranking accuracy. To address these issues, we propose a novel policy gradient algorithm in which the gradients are determined using pairwise comparisons of two document lists sampled within the same query. The algorithm, referred to as Pairwise Policy Gradient (PPG), repeatedly samples pairs of document lists, estimates the gradients with pairwise comparisons, and updates the model parameters. Theoretical analysis shows that PPG produces unbiased, low-variance gradient estimations. Experimental results demonstrate performance gains over state-of-the-art baselines in search result diversification and text retrieval.
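    The procedure the abstract describes (sample two document lists for the same query, compare their returns, and weight the difference of their log-probability gradients by the return difference) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the Plackett-Luce sampling policy, the linear scoring model, DCG as the return, and all function names are assumptions made for the example.

    ```python
    import math
    import random

    def softmax(scores):
        """Numerically stable softmax over a list of scores."""
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        return [e / z for e in exps]

    def sample_ranking(theta, docs):
        """Sample a ranking from a Plackett-Luce policy over documents and
        return it together with the gradient of its log-probability w.r.t. theta."""
        remaining = list(range(len(docs)))
        ranking = []
        grad = [0.0] * len(theta)
        while remaining:
            scores = [sum(t * x for t, x in zip(theta, docs[i])) for i in remaining]
            probs = softmax(scores)
            k = random.choices(range(len(remaining)), weights=probs)[0]
            chosen = remaining[k]
            # grad of log pi(chosen) = x_chosen - E_pi[x] for a linear scorer
            for d in range(len(theta)):
                grad[d] += docs[chosen][d] - sum(
                    p * docs[j][d] for p, j in zip(probs, remaining))
            ranking.append(chosen)
            remaining.pop(k)
        return ranking, grad

    def dcg(ranking, labels):
        """Discounted cumulative gain of a ranked list of document indices."""
        return sum(labels[doc] / math.log2(pos + 2)
                   for pos, doc in enumerate(ranking))

    def ppg_update(theta, docs, labels, lr=0.1):
        """One pairwise policy-gradient step: the return *difference* of two
        lists sampled for the same query weights the difference of their
        log-probability gradients, instead of each list's absolute return."""
        rank_a, grad_a = sample_ranking(theta, docs)
        rank_b, grad_b = sample_ranking(theta, docs)
        delta = dcg(rank_a, labels) - dcg(rank_b, labels)
        return [t + lr * delta * (ga - gb)
                for t, ga, gb in zip(theta, grad_a, grad_b)]

    # Toy query: document 0 is relevant; its feature vector points along dim 0.
    random.seed(0)
    docs = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
    labels = [1.0, 0.0, 0.0]
    theta = [0.0, 0.0]
    for _ in range(300):
        theta = ppg_update(theta, docs, labels)
    ```

    Because both lists are sampled for the same query, any query-level offset in the return cancels inside `delta`, which is the intuition behind the low-variance, within-query comparison the abstract claims.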




    Published In

    SIGIR '20: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval
    July 2020, 2548 pages
    ISBN: 9781450380164
    DOI: 10.1145/3397271

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. learning to rank
    2. policy gradient
    3. reinforcement learning


    Funding Sources

    • National Natural Science Foundation of China
    • Beijing Outstanding Young Scientist Program
    • Youth Innovation Promotion Association CAS
    • Fundamental Research Funds for the Central Universities, and Research Funds of Renmin University of China
    • Beijing Academy of Artificial Intelligence

    Conference

    SIGIR '20

    Acceptance Rates

    Overall Acceptance Rate 792 of 3,983 submissions, 20%

    Article Metrics

    • Downloads (last 12 months): 109
    • Downloads (last 6 weeks): 11
    Reflects downloads up to 09 Aug 2024

    Cited By

    • (2024) Passage-aware Search Result Diversification. ACM Transactions on Information Systems 42(5), 1-29. DOI: 10.1145/3653672. Online publication date: 13-May-2024.
    • (2024) A Review of Explainable Recommender Systems Utilizing Knowledge Graphs and Reinforcement Learning. IEEE Access 12, 91999-92019. DOI: 10.1109/ACCESS.2024.3422416. Online publication date: 2024.
    • (2023) Unified off-policy learning to rank. In Proceedings of the 37th International Conference on Neural Information Processing Systems, 19887-19907. DOI: 10.5555/3666122.3666995. Online publication date: 10-Dec-2023.
    • (2023) Reinforcement Re-ranking with 2D Grid-based Recommendation Panels. In Proceedings of the Annual International ACM SIGIR Conference on Research and Development in Information Retrieval in the Asia Pacific Region, 282-287. DOI: 10.1145/3624918.3625311. Online publication date: 26-Nov-2023.
    • (2023) GDESA: Greedy Diversity Encoder with Self-attention for Search Results Diversification. ACM Transactions on Information Systems 41(2), 1-36. DOI: 10.1145/3544103. Online publication date: 3-Apr-2023.
    • (2023) Modeling Global-Local Subtopic Distribution with Hypergraph to Diversify Search Results. In 2023 International Joint Conference on Neural Networks (IJCNN), 1-8. DOI: 10.1109/IJCNN54540.2023.10191529. Online publication date: 18-Jun-2023.
    • (2023) Deep reinforcement learning in recommender systems: A survey and new perspectives. Knowledge-Based Systems 264, 110335. DOI: 10.1016/j.knosys.2023.110335. Online publication date: Mar-2023.
    • (2023) AMRank. Expert Systems with Applications 211. DOI: 10.1016/j.eswa.2022.118512. Online publication date: 1-Jan-2023.
    • (2023) Intrinsically motivated reinforcement learning based recommendation with counterfactual data augmentation. World Wide Web 26(5), 3253-3274. DOI: 10.1007/s11280-023-01187-7. Online publication date: 15-Jul-2023.
    • (2023) An in-depth study on adversarial learning-to-rank. Information Retrieval Journal 26(1). DOI: 10.1007/s10791-023-09419-0. Online publication date: 28-Feb-2023.
