
Model inversion attacks against collaborative inference

Published: 09 December 2019

Abstract

The prevalence of deep learning has drawn attention to the privacy protection of sensitive data. Various privacy threats have been presented, in which an adversary can steal model owners' private data. Meanwhile, countermeasures have also been introduced to achieve privacy-preserving deep learning. However, most studies have focused only on data privacy during training and have ignored privacy during inference.
In this paper, we devise a new set of attacks to compromise the inference data privacy in collaborative deep learning systems. Specifically, when a deep neural network and the corresponding inference task are split and distributed to different participants, one malicious participant can accurately recover an arbitrary input fed into this system, even if he has no access to other participants' data or computations, or to prediction APIs to query this system. We evaluate our attacks under different settings, models and datasets, to show their effectiveness and generalization. We also study the characteristics of deep learning models that make them susceptible to such inference privacy threats. This provides insights and guidelines to develop more privacy-preserving collaborative systems and algorithms.
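The core idea — that intermediate features sent between participants can reveal the original input — can be illustrated on a toy split model. The sketch below is a hypothetical simplification, not the paper's actual attack: it assumes a single linear edge layer with a leaky ReLU and a white-box adversary who knows the edge weights, so recovery reduces to inverting the activation elementwise and solving a least-squares system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy split: the "edge" participant computes one linear layer + leaky ReLU
# and forwards the activations to the next participant.
W = rng.normal(size=(64, 16))                    # edge weights (white-box assumption)
leaky = lambda a: np.where(a > 0, a, 0.1 * a)    # leaky ReLU, slope 0.1

x_true = rng.normal(size=16)                     # the private input
z = leaky(W @ x_true)                            # intermediate features the adversary observes

# Inversion: leaky ReLU is bijective, so undo it elementwise,
# then solve the overdetermined linear system for the input.
a = np.where(z > 0, z, z / 0.1)                  # invert the activation
x_rec, *_ = np.linalg.lstsq(W, a, rcond=None)    # recover the input

print(np.linalg.norm(x_rec - x_true))            # tiny: recovery up to float error
```

For deeper or non-invertible edge models, the same goal is typically cast as minimizing ||f_edge(x) − z||² over candidate inputs x, usually with an image prior such as total variation; this optimization-based formulation is the general style of model inversion the paper's setting belongs to.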



Published In

ACSAC '19: Proceedings of the 35th Annual Computer Security Applications Conference
December 2019
821 pages
ISBN:9781450376280
DOI:10.1145/3359789

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. deep neural network
  2. distributed computation
  3. model inversion attack

Qualifiers

  • Research-article

Conference

ACSAC '19
ACSAC '19: 2019 Annual Computer Security Applications Conference
December 9 - 13, 2019
San Juan, Puerto Rico, USA

Acceptance Rates

ACSAC '19 paper acceptance rate: 60 of 266 submissions (23%)
Overall acceptance rate: 104 of 497 submissions (21%)

Article Metrics


  • Downloads (Last 12 months)483
  • Downloads (Last 6 weeks)52
Reflects downloads up to 10 Feb 2025


Cited By

  • (2025) Large Language Models for Electronic Health Record De-Identification in English and German. Information 16(2), 112. DOI: 10.3390/info16020112. Online publication date: 6-Feb-2025.
  • (2025) Provable Privacy Advantages of Decentralized Federated Learning via Distributed Optimization. IEEE Transactions on Information Forensics and Security 20, 822-838. DOI: 10.1109/TIFS.2024.3516564. Online publication date: 2025.
  • (2025) Enhancing Accuracy-Privacy Trade-Off in Differentially Private Split Learning. IEEE Transactions on Emerging Topics in Computational Intelligence 9(1), 988-1000. DOI: 10.1109/TETCI.2024.3485723. Online publication date: Feb-2025.
  • (2025) Functionality and Data Stealing by Pseudo-Client Attack and Target Defenses in Split Learning. IEEE Transactions on Dependable and Secure Computing 22(1), 84-100. DOI: 10.1109/TDSC.2024.3387396. Online publication date: Jan-2025.
  • (2025) Real world federated learning with a knowledge distilled transformer for cardiac CT imaging. npj Digital Medicine 8(1). DOI: 10.1038/s41746-025-01434-3. Online publication date: 6-Feb-2025.
  • (2025) GAN-based data reconstruction attacks in split learning. Neural Networks 185, 107150. DOI: 10.1016/j.neunet.2025.107150. Online publication date: May-2025.
  • (2025) A survey on Deep Learning in Edge-Cloud Collaboration: Model partitioning, privacy preservation, and prospects. Knowledge-Based Systems, 112965. DOI: 10.1016/j.knosys.2025.112965. Online publication date: Jan-2025.
  • (2025) Mind your indices! Index hijacking attacks on collaborative unpooling autoencoder systems. Internet of Things 29, 101462. DOI: 10.1016/j.iot.2024.101462. Online publication date: Jan-2025.
  • (2025) Privacy and security vulnerabilities in edge intelligence: An analysis and countermeasures. Computers and Electrical Engineering 123, 110146. DOI: 10.1016/j.compeleceng.2025.110146. Online publication date: Apr-2025.
  • (2025) Approximate homomorphic encryption based privacy-preserving machine learning: a survey. Artificial Intelligence Review 58(3). DOI: 10.1007/s10462-024-11076-8. Online publication date: 6-Jan-2025.
