
Machine learning security attacks and defense approaches for emerging cyber physical applications: A comprehensive survey

Published: 01 August 2022

Abstract

Cyber physical systems (CPS) integrate sensing, computation, control, and networking into physical objects and infrastructure, which are connected through the Internet to execute a common task. CPS are applied in domains such as healthcare, transportation, industrial production, environment and sustainability, and security and surveillance. However, the tight coupling of cyber and physical components introduces challenges in addressing stability, security, efficiency, and reliability. Machine learning (ML) security is the application of cyber security mechanisms to protect ML models against various cyber attacks. ML models are built through the traditional training and testing pipeline; however, this pipeline may not function effectively once a system is connected to the Internet, because online attackers can circumvent deployed security mechanisms and poison the data that the ML models then take as input. In this article, we detail various machine learning security attacks in cyber physical systems and discuss defense mechanisms to protect against them. We also present a threat model of ML security mechanisms deployed in cyber systems, and discuss the issues and challenges these mechanisms face. Finally, we provide a detailed comparative study of ML model performance under the influence of various ML attacks in cyber physical systems.
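The data-poisoning scenario described above can be illustrated with a minimal sketch (hypothetical toy data and a deliberately simple nearest-centroid classifier, not any model from the surveyed works): an attacker who flips a few training labels drags a class centroid toward the opposite cluster and degrades accuracy on borderline test points.

```python
from statistics import mean

def centroids(points, labels):
    """Mean point of each class in the training set."""
    by_class = {}
    for p, y in zip(points, labels):
        by_class.setdefault(y, []).append(p)
    return {y: (mean(px for px, _ in ps), mean(py for _, py in ps))
            for y, ps in by_class.items()}

def predict(cents, p):
    """Assign p to the class with the nearest centroid (squared distance)."""
    return min(cents, key=lambda y: (p[0] - cents[y][0]) ** 2
                                  + (p[1] - cents[y][1]) ** 2)

def accuracy(cents, points, labels):
    return sum(predict(cents, p) == y
               for p, y in zip(points, labels)) / len(labels)

# Two well-separated training clusters: class 0 near the origin,
# class 1 near (10, 10); the test points sit between the clusters.
train_x = [(0, 0), (1, 0), (0, 1), (1, 1),
           (10, 10), (9, 10), (10, 9), (9, 9)]
train_y = [0, 0, 0, 0, 1, 1, 1, 1]
test_x, test_y = [(4, 4), (6, 6)], [0, 1]

clean = centroids(train_x, train_y)
print("clean accuracy:", accuracy(clean, test_x, test_y))        # 1.0

# The attacker flips two class-1 training labels to class 0, dragging
# the class-0 centroid toward the class-1 cluster.
poisoned_y = train_y[:]
poisoned_y[6] = poisoned_y[7] = 0
poisoned = centroids(train_x, poisoned_y)
print("poisoned accuracy:", accuracy(poisoned, test_x, test_y))  # 0.5
```

With clean labels both test points are classified correctly; after the flip, the shifted class-0 centroid captures the borderline point (6, 6) and accuracy halves, which is the essence of the poisoning attacks surveyed here.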



              Published In

Computer Communications, Volume 192, Issue C
              Aug 2022
              431 pages

              Publisher

              Elsevier Science Publishers B. V.

              Netherlands


              Author Tags

              1. Cyber physical systems (CPS)
              2. Machine learning (ML) security
              3. Intrusion detection
              4. Authentication
              5. Privacy and security

              Qualifiers

              • Research-article

