DOI: 10.1145/3397271.3401087

Certifiable Robustness to Discrete Adversarial Perturbations for Factorization Machines

Published: 25 July 2020

Abstract

Factorization machines (FMs) have been widely adopted to model discrete feature interactions in recommender systems. Despite their great success, their robustness to discrete adversarial perturbations has not been studied: does modifying a certain number of the discrete input features have a dramatic effect on the FM's prediction? Although robust training methods for FMs exist, they neglect the discrete nature of the input features and lack an effective mechanism to verify model robustness.
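For context, the standard second-order FM (Rendle, ICDM 2010) scores an input x as ŷ(x) = w0 + Σ_i w_i·x_i + Σ_{i<j} ⟨v_i, v_j⟩·x_i·x_j, where each feature i has a latent vector v_i and discrete fields are typically one-hot encoded. A minimal NumPy sketch of this scoring function is given below; the variable names are illustrative and not taken from the paper.

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order factorization machine score.

    x  : (n,) feature vector (discrete fields are usually one-hot/binary)
    w0 : scalar bias
    w  : (n,) linear weights
    V  : (n, k) latent factors; <V[i], V[j]> weights the interaction x_i * x_j
    """
    linear = w0 + w @ x
    # O(n*k) identity for the pairwise term:
    # sum_{i<j} <v_i, v_j> x_i x_j
    #   = 0.5 * sum_f [ (sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2 ]
    s = V.T @ x                    # (k,)
    s_sq = (V ** 2).T @ (x ** 2)   # (k,)
    return linear + 0.5 * np.sum(s ** 2 - s_sq)
```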
In our work, we propose the first method for certifying the robustness of factorization machines with respect to discrete perturbations of the input features. If an instance is certifiably robust, it is guaranteed to be robust within the considered perturbation space, regardless of the perturbation or attack model. Likewise, we provide certificates of non-robustness by exhibiting discrete adversarial perturbations that change the FM's prediction. Through such robustness certificates, we show that FMs and the existing robust training methods are vulnerable to discrete adversarial perturbations. This vulnerability makes their outcomes unreliable and restricts the application of FMs. To enhance the FM's robustness against such perturbations, we present a robust training procedure whose core idea is to increase the number of instances that are certifiably robust. Extensive experiments on three real-world datasets demonstrate that our method significantly enhances the robustness of factorization machines with little impact on predictive accuracy.
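To make the idea of a robustness certificate concrete, the toy check below exhaustively enumerates every perturbation that flips at most `budget` binary features of an instance and tests whether the FM's decision changes. Finding such a flip is exactly a witness of non-robustness; finding none certifies robustness within that budget. This brute-force search only illustrates what the certificates assert and is not the paper's method; `fm_score`, `budget`, and `threshold` are illustrative names rather than the paper's notation.

```python
from itertools import combinations

def certify_by_enumeration(x, budget, fm_score, threshold=0.0):
    """Exhaustive robustness check for a binary instance x (NumPy 0/1 vector).

    Returns (True, None) if no perturbation flipping at most `budget` features
    changes the decision, or (False, flipped_indices) as a witness of
    non-robustness. Exponential in `budget`; for illustration only.
    """
    base_decision = fm_score(x) > threshold
    n = len(x)
    for k in range(1, budget + 1):
        for idx in combinations(range(n), k):
            x_adv = x.copy()
            x_adv[list(idx)] = 1 - x_adv[list(idx)]  # flip the selected features
            if (fm_score(x_adv) > threshold) != base_decision:
                return False, idx  # certificate of non-robustness
    return True, None  # certifiably robust within the flip budget

# Example usage with the sketch above:
#   score = lambda z: fm_predict(z, w0, w, V)
#   robust, witness = certify_by_enumeration(x, budget=2, fm_score=score)
```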




      Published In

      SIGIR '20: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval
      July 2020
      2548 pages
      ISBN: 9781450380164
      DOI: 10.1145/3397271

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 25 July 2020

      Author Tags

      1. adversarial examples
      2. factorization machine
      3. robustness
      4. sparse prediction

      Qualifiers

      • Research-article

      Conference

      SIGIR '20

      Acceptance Rates

      Overall Acceptance Rate 792 of 3,983 submissions, 20%
