DOI: 10.1145/3427228.3427268
research-article

StegoNet: Turn Deep Neural Network into a Stegomalware

Published: 08 December 2020

Abstract

Deep Neural Networks (DNNs) now deliver human-level performance on many real-world applications, and DNN-based intelligent services are becoming increasingly popular across all aspects of our lives. Unfortunately, this ever-growing deployment of DNN services carries a dangerous and as-yet little-studied implication: existing malware can be married to a DNN model for any pre-defined malicious purpose. In this paper, we comprehensively investigate how to turn a DNN into a new breed of evasive, self-contained stegomalware, namely StegoNet, which uses model parameters as a novel payload injection channel, causes no degradation of service quality (i.e., inference accuracy), and ties its triggering events to the physical world through specified DNN inputs. We develop a series of payload injection techniques that exploit unique properties of neural networks, such as their complex structure, high error resilience, and huge parameter size, covering both uncompressed models (with model redundancy) and deeply compressed models tailored for resource-limited devices (without model redundancy): LSB substitution, resilience training, value mapping, and sign mapping. We also propose a set of triggering techniques, including the logits trigger, rank trigger, and fine-tuned rank trigger, which activate StegoNet upon specific physical events under realistic environmental variations. We implement a StegoNet prototype on an Nvidia Jetson TX2 testbed. Extensive experimental results and discussions on the evasiveness and integrity of the proposed payload injection techniques, and on the reliability and sensitivity of the triggering techniques, demonstrate the feasibility and practicality of StegoNet.
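To make the payload injection channel concrete, the sketch below illustrates LSB substitution on float32 model parameters: payload bits overwrite the low-order mantissa bits of each weight, so the numerical perturbation stays tiny and inference accuracy is essentially unchanged. This is a minimal, hypothetical Python/NumPy sketch under stated assumptions (8 payload bits per 32-bit weight; all function names are illustrative), not the authors' implementation.

import numpy as np

def embed_lsb(weights: np.ndarray, payload: bytes, n_bits: int = 8) -> np.ndarray:
    """Overwrite the n_bits low-order mantissa bits of each float32 weight
    with payload bits (MSB-first). Touching only low mantissa bits changes
    each weight by at most ~2**-15 relative error for n_bits = 8."""
    w = weights.astype(np.float32).ravel().copy()
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    assert bits.size <= w.size * n_bits, "payload too large for this tensor"
    raw = w.view(np.uint32)                      # reinterpret the float bits
    mask = np.uint32((1 << n_bits) - 1)
    for i in range(0, bits.size, n_bits):
        chunk = bits[i:i + n_bits]
        value = 0
        for b in chunk:                          # pack up to n_bits bits
            value = (value << 1) | int(b)
        value <<= n_bits - chunk.size            # pad a short final chunk
        j = i // n_bits
        raw[j] = (raw[j] & ~mask) | np.uint32(value)
    return raw.view(np.float32).reshape(weights.shape)

def extract_lsb(weights: np.ndarray, n_bytes: int, n_bits: int = 8) -> bytes:
    """Recover n_bytes of payload previously hidden by embed_lsb."""
    raw = weights.astype(np.float32).ravel().view(np.uint32)
    out = []
    for v in raw:
        for k in range(n_bits - 1, -1, -1):      # same MSB-first bit order
            out.append((int(v) >> k) & 1)
        if len(out) >= n_bytes * 8:
            break
    return np.packbits(np.array(out[:n_bytes * 8], dtype=np.uint8)).tobytes()

# Round-trip check on a random conv-style weight tensor.
w = np.random.randn(64, 3, 3, 3).astype(np.float32)
stego = embed_lsb(w, b"payload")
assert extract_lsb(stego, len(b"payload")) == b"payload"

At 8 bits per parameter, a model with roughly 15M float32 parameters could carry a payload on the order of 15 MB; the paper's resilience training, value mapping, and sign-mapping techniques exist precisely because deeply compressed models lack the redundancy this simple substitution relies on.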





          Published In

          ACSAC '20: Proceedings of the 36th Annual Computer Security Applications Conference
          December 2020
          962 pages
ISBN: 9781450388580
DOI: 10.1145/3427228

          Publisher

          Association for Computing Machinery

          New York, NY, United States


          Qualifiers

          • Research-article
          • Research
          • Refereed limited

          Funding Sources

          • National Science Foundation

          Conference

          ACSAC '20

          Acceptance Rates

          Overall Acceptance Rate 104 of 497 submissions, 21%


          Article Metrics

• Downloads (last 12 months): 178
• Downloads (last 6 weeks): 7
          Reflects downloads up to 01 Sep 2024


          Cited By

• (2024) Attacks on Machine Learning Models Based on the PyTorch Framework. Автоматика и телемеханика (Automation and Remote Control, Russian edition). DOI: 10.31857/S0005231024030038. Online publication date: 15-Dec-2024.
• (2024) Attacks on Machine Learning Models Based on the PyTorch Framework. Automation and Remote Control, 85(3). DOI: 10.31857/S0005117924030045. Online publication date: Mar-2024.
• (2024) Steganalysis of AI Models LSB Attacks. IEEE Transactions on Information Forensics and Security, 19, 4767–4779. DOI: 10.1109/TIFS.2024.3383770. Online publication date: 2024.
• (2024) A Comparative Analysis on Exploration of Stegosploits across Various Media Formats. 2024 International Conference on Knowledge Engineering and Communication Systems (ICKECS), 1–8. DOI: 10.1109/ICKECS61492.2024.10616568. Online publication date: 18-Apr-2024.
• (2024) Exploiting Neural Network Model for Hiding and Triggering Malware. Proceedings of World Conference on Information Systems for Business Management, 209–220. DOI: 10.1007/978-981-99-8346-9_18. Online publication date: 1-Mar-2024.
• (2023) FreePart: Hardening Data Processing Software via Framework-based Partitioning and Isolation. Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 4, 169–188. DOI: 10.1145/3623278.3624760. Online publication date: 25-Mar-2023.
• (2023) Calibration-based Steganalysis for Neural Network Steganography. Proceedings of the 2023 ACM Workshop on Information Hiding and Multimedia Security, 91–96. DOI: 10.1145/3577163.3595100. Online publication date: 28-Jun-2023.
• (2023) Reusing Deep Learning Models: Challenges and Directions in Software Engineering. 2023 IEEE John Vincent Atanasoff International Symposium on Modern Computing (JVA), 17–30. DOI: 10.1109/JVA60410.2023.00015. Online publication date: 5-Jul-2023.
• (2023) Disarming Attacks Inside Neural Network Models. IEEE Access, 11, 124295–124303. DOI: 10.1109/ACCESS.2023.3330141. Online publication date: 2023.
• (2022) House of Cans. Proceedings of the 36th International Conference on Neural Information Processing Systems, 24838–24850. DOI: 10.5555/3600270.3602071. Online publication date: 28-Nov-2022.
