DOI: 10.1145/3447548.3467233
Public Access

Data Poisoning Attack against Recommender System Using Incomplete and Perturbed Data

Published: 14 August 2021

Abstract

Recent studies reveal that recommender systems are vulnerable to data poisoning attacks due to their open nature. In a data poisoning attack, the attacker typically recruits a group of controlled users to inject well-crafted user-item interaction data into the recommendation model's training set, steering the model parameters as desired. Existing attack approaches therefore usually require full access to the training data in order to infer item characteristics and craft the fake interactions for the controlled users. However, such approaches may not be feasible in practice because of the attacker's limited data collection capability and restricted access to the training data, which may even be perturbed by the service provider's privacy-preserving mechanisms. This gap between design assumptions and reality can cause attacks to fail. In this paper, we fill the gap by proposing two novel adversarial attack approaches that handle incompleteness and perturbations in user-item interaction data. First, we propose a bi-level optimization framework that incorporates a probabilistic generative model to identify users and items whose interaction data is sufficient and has not been significantly perturbed, and then leverages those users' and items' data to craft fake user-item interactions. Second, we reverse the learning process of recommendation models and develop a simple yet effective approach that can incorporate context-specific heuristic rules to handle data incompleteness and perturbations. Extensive experiments on two datasets against three representative recommendation models show that the proposed approaches achieve better attack performance than existing approaches.
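The injection setting the abstract describes can be illustrated with a toy sketch. This is not the paper's bi-level method; it is a minimal, fully illustrative example (all model choices, sizes, and hyperparameters here are assumptions): fake "controlled users" are appended to an implicit-feedback interaction matrix, a small weighted matrix-factorization recommender in the spirit of one-class collaborative filtering is retrained, and the target item's predicted score among genuine users rises.

```python
import numpy as np

def train_wmf(R, k=3, epochs=1500, lr=0.02, reg=0.02, w0=0.05, seed=0):
    """Tiny weighted matrix factorization for implicit feedback:
    observed interactions are fit with weight 1, unobserved entries with a
    small weight w0 (soft negatives), via full-batch gradient descent."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    W = np.where(R > 0, 1.0, w0)              # confidence weights
    for _ in range(epochs):
        G = W * (R - U @ V.T)                 # weighted residual
        U, V = U + lr * (G @ V - reg * U), V + lr * (G.T @ U - reg * V)
    return U, V

def inject_fake_users(R, target, n_fakes=5, n_filler=2):
    """Shilling-style profiles: each fake user interacts with the target
    item plus the most popular ('filler') items, so the fakes resemble
    genuine users and pull the target item's factors toward them."""
    fillers = np.argsort(-R.sum(axis=0))[:n_filler]
    fakes = np.zeros((n_fakes, R.shape[1]))
    fakes[:, target] = 1.0
    fakes[:, fillers] = 1.0
    return np.vstack([R, fakes])

# Deterministic toy data: 12 genuine users x 8 items.
# Items 0-2 are popular; item 7 is the attack target with no interactions.
R = np.zeros((12, 8))
R[:10, 0:3] = 1.0                             # popular head items
for u in range(12):
    R[u, 3 + (u % 4)] = 1.0                   # one long-tail item per user
target = 7

U, V = train_wmf(R)
clean_score = float((U @ V[target]).mean())   # target's score before attack

R_poisoned = inject_fake_users(R, target)
Up, Vp = train_wmf(R_poisoned)
poisoned_score = float((Up[:12] @ Vp[target]).mean())  # genuine users only

print(f"clean={clean_score:.3f}  poisoned={poisoned_score:.3f}")
```

Note that this sketch assumes the attacker sees the full interaction matrix; the paper's contribution is precisely to construct such injections when the attacker has only an incomplete and perturbed view of that data.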


Published In

KDD '21: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining
August 2021
4259 pages
ISBN:9781450383325
DOI:10.1145/3447548
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. adversarial learning
  2. data poisoning
  3. recommender system

Qualifiers

  • Research-article

Conference

KDD '21

Acceptance Rates

Overall Acceptance Rate 1,133 of 8,635 submissions, 13%

Article Metrics

  • Downloads (Last 12 months)508
  • Downloads (Last 6 weeks)65
Reflects downloads up to 12 Sep 2024

Cited By

  • (2024) A Sampling-Based Method for Detecting Data Poisoning Attacks in Recommendation Systems. Mathematics 12(2):247. DOI: 10.3390/math12020247. Online publication date: 12-Jan-2024
  • (2024) Manipulating Recommender Systems: A Survey of Poisoning Attacks and Countermeasures. ACM Computing Surveys. DOI: 10.1145/3677328. Online publication date: 25-Jul-2024
  • (2024) Attacking Click-through Rate Predictors via Generating Realistic Fake Samples. ACM Transactions on Knowledge Discovery from Data 18(5):1-24. DOI: 10.1145/3643685. Online publication date: 28-Feb-2024
  • (2024) Revisit Targeted Model Poisoning on Federated Recommendation: Optimize via Multi-objective Transport. Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1722-1732. DOI: 10.1145/3626772.3657764. Online publication date: 10-Jul-2024
  • (2024) Uplift Modeling for Target User Attacks on Recommender Systems. Proceedings of the ACM Web Conference 2024, 3343-3354. DOI: 10.1145/3589334.3645403. Online publication date: 13-May-2024
  • (2024) Toward Adversarially Robust Recommendation From Adaptive Fraudster Detection. IEEE Transactions on Information Forensics and Security 19:907-919. DOI: 10.1109/TIFS.2023.3327876. Online publication date: 1-Jan-2024
  • (2024) Recent Developments in Recommender Systems: A Survey [Review Article]. IEEE Computational Intelligence Magazine 19(2):78-95. DOI: 10.1109/MCI.2024.3363984. Online publication date: 8-Apr-2024
  • (2024) Unraveling Attacks to Machine-Learning-Based IoT Systems: A Survey and the Open Libraries Behind Them. IEEE Internet of Things Journal 11(11):19232-19255. DOI: 10.1109/JIOT.2024.3377730. Online publication date: 1-Jun-2024
  • (2024) A Comprehensive Analysis of Poisoning Attack and Defence Strategies in Machine Learning Techniques. 2024 IEEE International Conference on Computing, Power and Communication Technologies (IC2PCT), 1662-1668. DOI: 10.1109/IC2PCT60090.2024.10486736. Online publication date: 9-Feb-2024
  • (2024) Poisoning QoS-aware cloud API recommender system with generative adversarial network attack. Expert Systems with Applications 238(PB). DOI: 10.1016/j.eswa.2023.121630. Online publication date: 27-Feb-2024
