DOI: 10.1145/3298689.3347031
RecSys Conference Proceedings · Public Access

Adversarial attacks on an oblivious recommender

Published: 10 September 2019

  • Abstract

    Can machine learning models be easily fooled? Despite the recent surge of interest in learned adversarial attacks in other domains, in the context of recommendation systems this question has mainly been answered using hand-engineered fake user profiles. This paper attempts to reduce this gap. We provide a formulation for learning to attack a recommender as a repeated general-sum game between two players, i.e., an adversary and a recommender oblivious to the adversary's existence. We consider the challenging case of poisoning attacks, which focus on the training phase of the recommender model. We generate adversarial user profiles targeting subsets of users or items, or generally the top-K recommendation quality. Moreover, we ensure that the adversarial user profiles remain unnoticeable by preserving proximity between the real user rating/interaction distribution and the adversarial fake user distribution. To cope with the challenge of the adversary not having access to the gradient of the recommender's objective with respect to the fake user profiles, we provide a non-trivial algorithm building upon zero-order optimization techniques. We offer a wide range of experiments, instantiating the proposed method for the classic and popular low-rank recommender, and illustrating the extent of the recommender's vulnerability to a variety of adversarial intents. These results can serve as a motivating point for more research into recommender defense strategies against machine learned attacks.
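    The paper's exact attack algorithm is not reproduced on this page, but the two ingredients the abstract names — a low-rank recommender that obliviously retrains on real-plus-fake profiles, and a two-point zero-order gradient estimate used because the adversary cannot differentiate through the recommender's training — can be sketched roughly as follows. All names, parameters, and the tiny synthetic rating matrix here (`train_mf`, `attack_loss`, `zero_order_grad`) are illustrative assumptions, not taken from the paper:

```python
# Illustrative sketch of a zero-order poisoning attack on a low-rank
# recommender -- NOT the paper's actual algorithm or hyperparameters.
import numpy as np

def train_mf(R, mask, k=2, iters=150, lr=0.02, reg=0.1, seed=0):
    """Fit a low-rank recommender R ~ U V^T by gradient descent on observed entries."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    for _ in range(iters):
        E = mask * (R - U @ V.T)  # error on observed ratings only
        U, V = U + lr * (E @ V - reg * U), V + lr * (E.T @ U - reg * V)
    return U, V

def attack_loss(fake_profiles, R, mask, target_item):
    """Adversary's objective: push target_item up for the real users.

    The recommender is 'oblivious': it simply retrains on the union of real
    and fake profiles.  Returns the negative mean predicted score of the
    target item over real users (a loss the adversary minimizes)."""
    R_aug = np.vstack([R, fake_profiles])
    mask_aug = np.vstack([mask, np.ones_like(fake_profiles)])
    U, V = train_mf(R_aug, mask_aug)
    n_real = R.shape[0]
    return -(U[:n_real] @ V[target_item]).mean()

def zero_order_grad(f, x, mu=1e-2, n_dirs=20, seed=0):
    """Two-point zero-order gradient estimate: the adversary can only *query*
    the loss, never differentiate through the recommender's training."""
    rng = np.random.default_rng(seed)
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return g / n_dirs

# Tiny demo: one zero-order attack step with a single fake user profile.
rng = np.random.default_rng(42)
R = rng.integers(1, 6, size=(8, 6)).astype(float)  # 8 real users, 6 items
mask = (rng.random((8, 6)) < 0.7).astype(float)    # ~70% of ratings observed
fake = 3.0 * np.ones((1, 6))                       # initial fake profile
loss = lambda z: attack_loss(z.reshape(1, 6), R, mask, target_item=0)
g = zero_order_grad(loss, fake.ravel(), n_dirs=10)
fake = np.clip(fake - 0.5 * g.reshape(1, 6), 1.0, 5.0)  # keep ratings in [1, 5]
```

    The estimator averages `(f(x + mu*u) - f(x - mu*u)) / (2*mu) * u` over random directions `u`, which approximates the gradient of the attack loss without any gradient access; note that every single query retrains the oblivious recommender from scratch on the poisoned data, which is exactly why gradient-free methods are needed here.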





      Published In

      RecSys '19: Proceedings of the 13th ACM Conference on Recommender Systems
      September 2019
      635 pages
      ISBN:9781450362436
      DOI:10.1145/3298689
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. learned adversarial attacks
      2. recommender systems

      Qualifiers

      • Research-article


      Conference

      RecSys '19
      RecSys '19: Thirteenth ACM Conference on Recommender Systems
      September 16 - 20, 2019
      Copenhagen, Denmark

      Acceptance Rates

      RecSys '19 Paper Acceptance Rate 36 of 189 submissions, 19%;
      Overall Acceptance Rate 254 of 1,295 submissions, 20%

      Article Metrics

      • Downloads (last 12 months): 331
      • Downloads (last 6 weeks): 47
      Reflects downloads up to 26 Jul 2024


      Cited By

      • (2024) Manipulating Recommender Systems: A Survey of Poisoning Attacks and Countermeasures. ACM Computing Surveys. DOI: 10.1145/3677328. Online publication date: 25-Jul-2024.
      • (2024) A Research on Shilling Attacks Based on Variational Graph Auto-encoders for Improving the Robustness of Recommendation Systems. Proceedings of the 2024 International Conference on Generative Artificial Intelligence and Information Security, 120-126. DOI: 10.1145/3665348.3665370. Online publication date: 10-May-2024.
      • (2024) Attacking Click-through Rate Predictors via Generating Realistic Fake Samples. ACM Transactions on Knowledge Discovery from Data, 18(5), 1-24. DOI: 10.1145/3643685. Online publication date: 28-Feb-2024.
      • (2024) Shilling Black-Box Recommender Systems by Learning to Generate Fake User Profiles. IEEE Transactions on Neural Networks and Learning Systems, 35(1), 1305-1319. DOI: 10.1109/TNNLS.2022.3183210. Online publication date: Jan-2024.
      • (2024) Toward Adversarially Robust Recommendation From Adaptive Fraudster Detection. IEEE Transactions on Information Forensics and Security, 19, 907-919. DOI: 10.1109/TIFS.2023.3327876. Online publication date: 2024.
      • (2024) Secure and Enhanced Online Recommendations: A Federated Intelligence Approach. IEEE Transactions on Consumer Electronics, 70(1), 2500-2507. DOI: 10.1109/TCE.2023.3335156. Online publication date: Mar-2024.
      • (2024) Recent Developments in Recommender Systems: A Survey. IEEE Computational Intelligence Magazine, 19(2), 78-95. DOI: 10.1109/MCI.2024.3363984. Online publication date: May-2024.
      • (2024) Detecting the adversarially-learned injection attacks via knowledge graphs. Information Systems, 125, 102419. DOI: 10.1016/j.is.2024.102419. Online publication date: Dec-2024.
      • (2024) An empirical study on metamorphic testing for recommender systems. Information and Software Technology, 169, 107410. DOI: 10.1016/j.infsof.2024.107410. Online publication date: May-2024.
      • (2024) Robustness in Fairness Against Edge-Level Perturbations in GNN-Based Recommendation. Advances in Information Retrieval, 38-55. DOI: 10.1007/978-3-031-56063-7_3. Online publication date: 23-Mar-2024.
