DOI: 10.1145/3627106.3627107

DeepContract: Controllable Authorization of Deep Learning Models

Published: 04 December 2023

Abstract

Well-trained deep learning (DL) models are widely used across many fields and are recognized as valuable intellectual property. However, most existing efforts to exploit their value either require users to upload input data to a machine learning service, raising serious privacy concerns, or deploy DL models on the user side, sacrificing control over the models. While a few active model authorization methods protect a model from unauthorized users, they cannot prevent it from being redistributed or abused by authorized users. To address the urgent need to efficiently protect both model confidentiality and input data privacy while retaining uninterrupted control over the model, we propose DeepContract, a contract-based model authorization framework. DeepContract enables model owners to deploy their models on the user side for local inference without revealing the original model weights, and allows them to grant and revoke the right to use those models at any time. Specifically, we propose a generic model encryption method that significantly outperforms the state of the art in both efficiency and security. Leveraging integrity verification in a Trusted Execution Environment, contract-based, verifiable enclave code generated by DeepContract performs controlled inference over the encrypted model distributed on the user side. Our extensive evaluations show that DeepContract achieves efficient and secure controllable model authorization under pre-signed contracts.
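The abstract names a "generic model encryption method" but does not describe it. As a loose illustration of the general idea only (a hypothetical sketch, not DeepContract's actual scheme), key-derived permutations can scramble each weight tensor so the model distributed to the user side is unusable on its own, with the inverse permutation applied conceptually inside an attested enclave:

```python
import numpy as np

def encrypt_weights(weights, key):
    """Shuffle each weight tensor with a key-derived permutation.

    Hypothetical illustration: the distributed model keeps its shapes,
    but its values are useless without the permutation key.
    """
    rng = np.random.default_rng(key)
    encrypted, perms = [], []
    for w in weights:
        perm = rng.permutation(w.size)
        encrypted.append(w.flatten()[perm].reshape(w.shape))
        perms.append(perm)
    return encrypted, perms

def decrypt_weights(encrypted, perms):
    """Invert the permutations (conceptually: only inside the enclave)."""
    restored = []
    for w, perm in zip(encrypted, perms):
        flat = np.empty(w.size)
        flat[perm] = w.flatten()
        restored.append(flat.reshape(w.shape))
    return restored
```

A real scheme would need to resist statistical analysis of the shuffled values and tie key release to remote attestation; this sketch only shows the encrypt-distribute-decrypt-in-enclave shape of the workflow.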
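The contract mechanism is likewise only named in the abstract. A minimal sketch of the grant/revoke idea (all names hypothetical; HMAC stands in for the owner's signature, and an expiry field models time-limited authorization) could look like:

```python
import hmac
import hashlib
import json
import time

OWNER_KEY = b"model-owner-secret"  # stand-in for the owner's signing key

def sign_contract(user_id, model_id, expires_at, key=OWNER_KEY):
    """Owner issues a contract granting user_id use of model_id until expires_at."""
    body = json.dumps({"user": user_id, "model": model_id, "exp": expires_at},
                      sort_keys=True).encode()
    tag = hmac.new(key, body, hashlib.sha256).hexdigest()
    return {"body": body.decode(), "tag": tag}

def verify_contract(contract, now=None, key=OWNER_KEY):
    """Enclave-side check: valid signature and not expired -> inference allowed."""
    body = contract["body"].encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, contract["tag"]):
        return False
    return json.loads(body)["exp"] > (now if now is not None else time.time())
```

Revocation "at any time", as the abstract claims, would additionally require an online check against an owner-maintained revocation list rather than expiry alone; this sketch only shows the signed-grant-checked-before-inference pattern.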


      Published In

      ACSAC '23: Proceedings of the 39th Annual Computer Security Applications Conference
      December 2023
      836 pages
      ISBN:9798400708862
      DOI:10.1145/3627106

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

1. Model authorization
2. Trusted Execution Environment
3. Model encryption

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Funding Sources

      • the National Key R&D Program of China
      • the University Synergy Innovation Program of Anhui Province
      • the Fundamental Research Funds for the Central Universities
      • China National Natural Science Foundation

      Conference

      ACSAC '23

      Acceptance Rates

      Overall Acceptance Rate 104 of 497 submissions, 21%
