
On the Neural Backdoor of Federated Generative Models in Edge Computing

Published: 22 October 2021

Abstract

Edge computing, a recent evolution of the cloud computing architecture, allows enterprises to distribute computational power and reduce repeated requests to central servers. In the edge computing environment, Generative Models (GMs) have proven valuable for machine learning tasks such as data augmentation and data pre-processing. Federated learning and distributed learning refer to training machine learning models across the edge computing network. However, they also introduce additional risks to GMs, since every peer in the network has access to the model under training. In this article, we study the vulnerability of federated GMs to data-poisoning-based backdoor attacks mounted through gradient uploading. We further enhance the attack to reduce the number of poisonous samples required and to cope with dynamic network environments. Finally, we formally prove that the attacks are stealthy and effective against federated GMs. Our experiments show that a neural backdoor can be successfully embedded with as little as 5% poisonous samples in the attacker's local training dataset.
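To make the attack setting concrete, the following is a minimal sketch, not the authors' implementation, of how a malicious federated client could behave as the abstract describes: it stamps a trigger onto roughly 5% of its local samples, pairs them with an attacker-chosen output, trains its local copy of the shared generative model, and uploads the resulting parameter update. The names poisoned_local_update, apply_trigger, and target_output are hypothetical placeholders, and the reconstruction loss stands in for whatever objective the actual GM uses.

# Sketch of a poisoning client in a federated generative-model round.
# Assumes a PyTorch model and callables apply_trigger / target_output
# supplied by the attacker; these are illustrative, not from the paper.
import copy
import random
import torch

def poisoned_local_update(global_model, local_batches, apply_trigger, target_output,
                          poison_rate=0.05, lr=1e-3, local_epochs=1):
    """Return the parameter delta a poisoning client would upload to the server."""
    model = copy.deepcopy(global_model)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()  # stand-in reconstruction loss for an autoencoder-style GM

    for _ in range(local_epochs):
        for x in local_batches:                   # x: one batch of training inputs
            y = x.clone()                         # benign objective: reconstruct the input
            if random.random() < poison_rate:     # poison roughly 5% of the batches
                x = apply_trigger(x)              # stamp the backdoor trigger on the input
                y = target_output(x)              # attacker-chosen output for triggered inputs
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()

    # Upload only the parameter difference, as in standard federated averaging;
    # from the server's perspective this looks like an ordinary benign update.
    return {name: p - q for (name, p), (_, q)
            in zip(model.state_dict().items(), global_model.state_dict().items())}

Because only a small fraction of batches carry the trigger, the uploaded update stays close to a benign one, which is what makes the backdoor hard to detect at the aggregation server.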



Information

Published In

ACM Transactions on Internet Technology, Volume 22, Issue 2
May 2022
582 pages
ISSN: 1533-5399
EISSN: 1557-6051
DOI: 10.1145/3490674
Editor: Ling Liu

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 22 October 2021
Accepted: 01 September 2020
Revised: 01 August 2020
Received: 01 July 2020
Published in TOIT Volume 22, Issue 2

Permissions

Request permissions for this article.


Author Tags

  1. Deep learning
  2. neural backdoor
  3. generative neural networks
  4. federated learning
  5. edge computing
  6. cloud computing

Qualifiers

  • Research-article
  • Refereed

Funding Sources

  • NSFC

Bibliometrics & Citations

Bibliometrics

Article Metrics

  • Downloads (last 12 months): 95
  • Downloads (last 6 weeks): 18
Reflects downloads up to 09 Nov 2024


Citations

Cited By

  • (2024) Federated Learning in Industrial IoT: A Privacy-Preserving Solution That Enables Sharing of Data in Hydrocarbon Explorations. IEEE Transactions on Industrial Informatics 20, 3 (March 2024), 4337–4346. https://doi.org/10.1109/TII.2023.3306931
  • (2024) AgrAmplifier: Defending Federated Learning Against Poisoning Attacks Through Local Update Amplification. IEEE Transactions on Information Forensics and Security 19 (January 2024), 1241–1250. https://doi.org/10.1109/TIFS.2023.3333555
