Research article · Open access
DOI: 10.1145/3665314.3670821

Securing Deep Neural Networks on Edge from Membership Inference Attacks Using Trusted Execution Environments

Published: 09 September 2024

Abstract

Privacy concerns arise when Deep Neural Network (DNN) applications perform inference on sensitive data on edge devices, where they are exposed to malicious attacks. A Membership Inference Attack (MIA) allows an adversary to determine whether particular sensitive data were used to train a DNN. Prior work uses Trusted Execution Environments (TEEs) to hide DNN model inference from adversaries on edge devices. Unfortunately, existing methods have two major problems. First, because of the restricted memory of TEEs, prior work cannot protect large DNNs from gradient-based MIAs. Second, prior work is ineffective against output-based MIAs. To mitigate these problems, we present a depth-wise layer partitioning method that runs large sensitive layers inside TEEs. We further propose a model quantization strategy that improves the defense capability of DNNs against output-based MIAs and accelerates computation. We also automate the process of securing PyTorch-based DNN models inside TEEs. Experiments on a Raspberry Pi 3B+ show that our method reduces the accuracy of gradient-based MIAs on AlexNet, VGG-16, and ResNet-20, evaluated on the CIFAR-100 dataset, by 28.8%, 11%, and 35.3%, respectively. The accuracy of output-based MIAs on the three models is also reduced by 18.5%, 13.4%, and 29.6%, respectively.
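The abstract does not include code, but the partitioning idea can be illustrated. The following is a minimal PyTorch sketch, under stated assumptions, of how a large convolution might be split along its output-channel (depth-wise) axis so that each slice of weights fits a limited TEE memory budget; the names partition_conv_depthwise and tee_budget_bytes are illustrative, not the authors' API, and the sketch assumes a standard (non-grouped) Conv2d layer.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def partition_conv_depthwise(conv: nn.Conv2d, x: torch.Tensor,
                                 tee_budget_bytes: int) -> torch.Tensor:
        # Sketch only: split the convolution's filters along the output-channel
        # axis so each slice's weights fit a hypothetical TEE memory budget,
        # run the slices sequentially, and reassemble the full output.
        assert conv.groups == 1, "sketch assumes a standard convolution"
        bytes_per_filter = conv.weight[0].numel() * conv.weight.element_size()
        channels_per_slice = max(1, tee_budget_bytes // bytes_per_filter)
        outputs = []
        for start in range(0, conv.out_channels, channels_per_slice):
            end = min(start + channels_per_slice, conv.out_channels)
            w = conv.weight[start:end]                                # filter slice
            b = conv.bias[start:end] if conv.bias is not None else None
            # In the paper's setting, this slice would execute inside the TEE.
            outputs.append(F.conv2d(x, w, b, conv.stride, conv.padding,
                                    conv.dilation, conv.groups))
        return torch.cat(outputs, dim=1)  # concatenate along the channel axis

    conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)
    x = torch.randn(1, 3, 32, 32)
    y = partition_conv_depthwise(conv, x, tee_budget_bytes=16 * 1024)
    assert torch.allclose(y, conv(x), atol=1e-6)  # matches the unpartitioned layer

Each output channel depends only on its own filter, so slicing along the output-channel axis reproduces the unpartitioned result exactly. For the output-side defense, PyTorch's built-in dynamic quantization (torch.quantization.quantize_dynamic) could serve as a rough stand-in for the paper's quantization strategy, e.g., quantizing a model's linear layers to int8; the authors' actual scheme may differ.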


Published In

ISLPED '24: Proceedings of the 29th ACM/IEEE International Symposium on Low Power Electronics and Design
August 2024, 384 pages
ISBN: 9798400706882
DOI: 10.1145/3665314
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. membership inference attack
  2. ARM TrustZone
  3. trusted execution environment
  4. model partitioning
  5. model quantization

Conference

ISLPED '24

Acceptance Rates

Overall Acceptance Rate: 398 of 1,159 submissions, 34%
