Artificial Intelligence Security: Threats and Countermeasures

Published: 23 November 2021
    Abstract

    In recent years, with rapid technological advances in both computing hardware and algorithms, Artificial Intelligence (AI) has demonstrated significant advantages over humans in a wide range of fields, such as image recognition, education, autonomous vehicles, finance, and medical diagnosis. However, AI-based systems are generally vulnerable to a variety of security threats throughout their lifecycle, from initial data collection and preparation, through training and inference, to final deployment. In an AI-based system, the data collection and pre-processing phases are vulnerable to sensor spoofing attacks and scaling attacks, respectively, while the training and inference phases of the model are subject to poisoning attacks and adversarial attacks, respectively. To address these severe security threats against AI-based systems, this article reviews the challenges and recent research advances in AI security, so as to depict an overall blueprint for the field. More specifically, we first take the lifecycle of an AI-based system as a guide and introduce the security threats that emerge at each stage, followed by a detailed summary of the corresponding countermeasures. Finally, we discuss future challenges and opportunities for security issues in AI.
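    As a concrete illustration of the inference-phase threat mentioned above, the following is a minimal sketch of a gradient-based adversarial example attack (the fast gradient sign method, FGSM), written here in PyTorch. The model, loss, input range, and epsilon value are illustrative assumptions and are not details taken from the article itself.

        import torch
        import torch.nn.functional as F

        def fgsm_attack(model, x, y, epsilon=0.03):
            """Craft adversarial examples with the fast gradient sign method (FGSM).

            x: input batch (e.g., images scaled to [0, 1]); y: ground-truth labels;
            epsilon: maximum per-pixel perturbation (illustrative value).
            """
            x_adv = x.clone().detach().requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            loss.backward()
            # Step in the direction that increases the loss, then clamp to the valid input range.
            x_adv = x_adv + epsilon * x_adv.grad.sign()
            return x_adv.clamp(0.0, 1.0).detach()

    Evaluating a trained classifier on x_adv instead of x typically causes a sharp drop in accuracy even though the perturbation is visually negligible, which is exactly the inference-phase vulnerability the abstract refers to.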

    Published In

    ACM Computing Surveys, Volume 55, Issue 1
    January 2023
    860 pages
    ISSN: 0360-0300
    EISSN: 1557-7341
    DOI: 10.1145/3492451

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 23 November 2021
    Accepted: 01 September 2021
    Revised: 01 September 2021
    Received: 01 February 2021
    Published in CSUR Volume 55, Issue 1

    Author Tags

    1. Adversarial example attack
    2. artificial intelligence security
    3. poisoning attack
    4. image scaling attack
    5. data collection related attack

    Qualifiers

    • Survey
    • Refereed

    Funding Sources

    • National Natural Science Foundation of China
    • Science and Technology Project of Department of Communications of Hunan Provincial
    • Hunan Natural Science Foundation for Distinguished Young Scholars
    • Hunan Science and Technology Innovation Leading Talents Project
    • Natural Science Foundation of Fujian Province
    • Key R & D Projects of Changsha
    • National Natural Science Foundation of JiangSu

    Article Metrics

    • Downloads (Last 12 months): 4,805
    • Downloads (Last 6 weeks): 337
    Reflects downloads up to 09 Aug 2024

    Cited By

    • (2024) Explainable AI for Cybersecurity. Advances in Explainable AI Applications for Smart Cities. DOI: 10.4018/978-1-6684-6361-1.ch002 (31-97). Online publication date: 18-Jan-2024
    • (2024) An LLM-Based Inventory Construction Framework of Urban Ground Collapse Events with Spatiotemporal Locations. ISPRS International Journal of Geo-Information. DOI: 10.3390/ijgi13040133 13:4 (133). Online publication date: 16-Apr-2024
    • (2024) Generative AI in Medical Practice: In-Depth Exploration of Privacy and Security Challenges. Journal of Medical Internet Research. DOI: 10.2196/53008 26 (e53008). Online publication date: 8-Mar-2024
    • (2024) Responsible Development of Internal GenAI Systems. SSRN Electronic Journal. DOI: 10.2139/ssrn.4834767. Online publication date: 2024
    • (2024) Critical review of self‐diagnosis of mental health conditions using artificial intelligence. International Journal of Mental Health Nursing. DOI: 10.1111/inm.13303 33:2 (344-358). Online publication date: 12-Feb-2024
    • (2024) Toward a Critical Evaluation of Robustness for Deep Learning Backdoor Countermeasures. IEEE Transactions on Information Forensics and Security. DOI: 10.1109/TIFS.2023.3324318 19 (455-468). Online publication date: 1-Jan-2024
    • (2024) Towards Trustworthy Autonomous Systems: Taxonomies and Future Perspectives. IEEE Transactions on Emerging Topics in Computing. DOI: 10.1109/TETC.2022.3227113 12:2 (601-614). Online publication date: Apr-2024
    • (2024) Securing Artificial Intelligence: Exploring Attack Scenarios and Defense Strategies. 2024 12th International Symposium on Digital Forensics and Security (ISDFS). DOI: 10.1109/ISDFS60797.2024.10527288 (1-6). Online publication date: 29-Apr-2024
    • (2024) REN-A.I.: A Video Game for AI Security Education Leveraging Episodic Memory. IEEE Access. DOI: 10.1109/ACCESS.2024.3377699 12 (47359-47372). Online publication date: 2024
    • (2024) Artificial intelligence driving perception, cognition, decision‐making and deduction in energy systems: State‐of‐the‐art and potential directions. Energy Internet. DOI: 10.1049/ein2.12010. Online publication date: 14-Aug-2024
    • Show More Cited By
