
Security and Privacy Issues in Deep Reinforcement Learning: Threats and Countermeasures

Published: 23 February 2024

Abstract

Deep Reinforcement Learning (DRL) is an essential subfield of Artificial Intelligence (AI), in which agents interact with environments to learn policies for solving complex tasks. In recent years, DRL has achieved remarkable breakthroughs in various tasks, including video games, robotic control, quantitative trading, and autonomous driving. Despite these accomplishments, security- and privacy-related issues still prevent the deployment of trustworthy DRL applications. For example, by manipulating the environment, an attacker can influence an agent’s actions and mislead it into behaving abnormally. Additionally, an attacker can infer private training data and environmental information by maliciously interacting with DRL models, causing a privacy breach. In this survey, we systematically investigate recent progress on security and privacy issues in the context of DRL. First, we present a holistic review of security-related attacks on DRL systems from both single-agent and multi-agent perspectives, and we review privacy-related attacks. Second, we review and classify defense methods that address security-related challenges, including robust learning, anomaly detection, and game-theoretic approaches. Third, we review and classify privacy-preserving technologies, including encryption, differential privacy, and policy confusion. We conclude the survey by discussing open issues and possible directions for future research in this field.
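
To make the environment-manipulation threat concrete, here is a minimal, illustrative sketch of an FGSM-style perturbation of an agent's observation. It is not drawn from any particular surveyed work: it assumes PyTorch and a hypothetical `policy` network that maps a batch of observations to action logits.

```python
import torch
import torch.nn.functional as F

def fgsm_observation_attack(policy, obs, epsilon=0.01):
    """Perturb a (batched) observation so the clean action loses probability.

    One fast-gradient-sign step on the observation: move in the direction
    that increases the loss of the action the unperturbed policy would take.
    """
    obs = obs.clone().detach().requires_grad_(True)
    logits = policy(obs)
    clean_action = logits.argmax(dim=-1).detach()   # action under the clean input
    loss = F.cross_entropy(logits, clean_action)    # high loss suppresses that action
    loss.backward()
    # Stay inside a small L-infinity ball so the change is hard to notice.
    return (obs + epsilon * obs.grad.sign()).detach()
```

Because the perturbation is bounded by `epsilon`, the manipulated observation stays close to the original while still steering the agent toward abnormal behavior.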
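On the privacy side, a sketch of the differential-privacy building block the survey covers: clip each per-sample gradient to bound any single sample's influence, then add Gaussian noise before the parameter update (DP-SGD style). All names and hyperparameters here are illustrative stand-ins under those assumptions, not a reference implementation from the surveyed literature.

```python
import torch

def dp_sgd_step(model, loss_fn, xs, ys, clip_norm=1.0, noise_mult=1.1, lr=1e-3):
    """One differentially private update over a batch of (x, y) samples."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(xs, ys):
        # Per-sample gradient, so each sample's sensitivity can be bounded.
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (norm + 1e-12)).clamp(max=1.0)  # clip to clip_norm
        for s, g in zip(summed, grads):
            s.add_(g * scale)
    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = torch.randn_like(s) * noise_mult * clip_norm  # Gaussian mechanism
            p.add_(-(lr / len(xs)) * (s + noise))
```

Tying the noise scale to `clip_norm` is what makes the privacy guarantee analyzable: no single sample can shift the summed gradient by more than the clipping bound.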

Cited By

  • Mitigating Deep Reinforcement Learning Backdoors in the Neural Activation Space. In 2024 IEEE Security and Privacy Workshops (SPW), 76–86. DOI: 10.1109/SPW63631.2024.00013. Online publication date: 23 May 2024.
  • From farm to market: Research progress and application prospects of artificial intelligence in the frozen fruits and vegetables supply chain. Trends in Food Science & Technology 153 (2024), 104730. DOI: 10.1016/j.tifs.2024.104730. Online publication date: November 2024.

      Published In

ACM Computing Surveys, Volume 56, Issue 6
June 2024, 963 pages
EISSN: 1557-7341
DOI: 10.1145/3613600

      Publisher

Association for Computing Machinery, New York, NY, United States

      Publication History

      Published: 23 February 2024
      Online AM: 12 January 2024
      Accepted: 01 December 2023
      Revised: 19 October 2023
      Received: 13 September 2022
      Published in CSUR Volume 56, Issue 6

      Author Tags

      1. Deep reinforcement learning
      2. adversarial attack
      3. defense

      Qualifiers

      • Survey

      Funding Sources

      • National Natural Science Foundation of China for Joint Fund Project
      • National Natural Science Foundation of China
      • Natural Science Foundation of Guangdong Province of China
