DOI: 10.1145/3297858.3304051
Research article (open access)

DeepSigns: An End-to-End Watermarking Framework for Ownership Protection of Deep Neural Networks

Published: 04 April 2019

Abstract

Deep Learning (DL) models have created a paradigm shift in our ability to comprehend raw data in various important fields, ranging from intelligence warfare and healthcare to autonomous transportation and automated manufacturing. A practical concern, in the rush to adopt DL models as a service, is protecting the models against Intellectual Property (IP) infringement. DL models are commonly built by allocating substantial computational resources that process vast amounts of proprietary training data. The resulting models are therefore considered the IP of the model builder and need to be protected to preserve the owner's competitive advantage. We propose DeepSigns, the first end-to-end IP protection framework that enables developers to systematically insert digital watermarks in the target DL model before distributing the model. DeepSigns is encapsulated as a high-level wrapper that can be leveraged within common deep learning frameworks, including TensorFlow and PyTorch. The libraries in DeepSigns work by dynamically learning the Probability Density Function (pdf) of activation maps obtained in different layers of a DL model. DeepSigns uses the low-probability regions within the model to gradually embed the owner's signature (watermark) during DL training while minimally affecting the overall accuracy and training overhead. DeepSigns can demonstrably withstand various removal and transformation attacks, including model pruning, model fine-tuning, and watermark overwriting. We evaluate DeepSigns' performance on a wide variety of DL architectures, including wide residual convolutional neural networks, multi-layer perceptrons, and long short-term memory models. Our extensive evaluations corroborate DeepSigns' effectiveness and applicability. We further provide a highly optimized accompanying API to facilitate training watermarked neural networks with a training overhead as low as 2.2%.
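
As a rough illustration of the mechanism sketched in this abstract, the following minimal PyTorch snippet shows one way a binary signature could be embedded into the statistics of a hidden layer's activations during training and later extracted for ownership verification. This is an assumption-laden sketch, not the DeepSigns API: the names and sizes (watermark_loss, extract_signature, proj_matrix, lambda_wm, a 64-bit signature) are illustrative only.

    import torch
    import torch.nn.functional as F

    # Illustrative sizes; a real framework would choose the carrier layer and key length.
    hidden_dim, wm_bits = 512, 64
    signature = torch.randint(0, 2, (wm_bits,)).float()  # owner's binary watermark (hypothetical)
    proj_matrix = torch.randn(hidden_dim, wm_bits)        # secret projection key held by the owner

    def watermark_loss(hidden_acts):
        # hidden_acts: (batch, hidden_dim) activations of the chosen carrier layer.
        # The batch mean summarizes the layer's activation distribution; nudging its
        # projection toward the signature embeds the watermark alongside the task loss.
        mu = hidden_acts.mean(dim=0)
        logits = mu @ proj_matrix
        return F.binary_cross_entropy_with_logits(logits, signature)

    def extract_signature(hidden_acts):
        # Verification: recover the bits and compare them with `signature` (bit-error rate).
        mu = hidden_acts.mean(dim=0)
        return (torch.sigmoid(mu @ proj_matrix) > 0.5).float()

    # During training, the regularizer is simply added to the usual objective, e.g.:
    #   loss = task_loss + lambda_wm * watermark_loss(hidden_acts)

Under this sketch, an attack such as pruning, fine-tuning, or overwriting would need to shift the carrier layer's activation statistics enough to flip the recovered bits, which is the intuition behind the robustness claims above.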

References

[1]
Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” nature, vol. 521, no. 7553, 2015.
[2]
L. Deng and D. Yu, “Deep learning: methods and applications,” Foundations and Trends® in Signal Processing, vol. 7, no. 3--4, 2014.
[3]
I. Goodfellow, Y. Bengio, and A. Courville, Deep learning, vol. 1. MIT press Cambridge, 2016.
[4]
M. Ribeiro, K. Grolinger, and M. A. Capretz, “Mlaas: Machine learning as a service,” in IEEE 14th International Conference on Machine Learning and Applications (ICMLA), 2015.
[5]
E. Chung, J. Fowers, K. Ovtcharov, M. Papamichael, A. Caulfield, T. Massengill, M. Liu, D. Lo, S. Alkalay, M. Haselman, et al., “Serving dnns in real time at datacenter scale with project brainwave,” IEEE Micro, vol. 38, no. 2, pp. 8--20, 2018.
[6]
B. Furht and D. Kirovski, Multimedia security handbook. CRC press, 2004.
[7]
F. Hartung and M. Kutter, “Multimedia watermarking techniques,” Proceedings of the IEEE, vol. 87, no. 7, 1999.
[8]
G. Qu and M. Potkonjak, Intellectual property protection in VLSI designs: theory and practice. Springer Science & Business Media, 2007.
[9]
I. J. Cox, J. Kilian, F. T. Leighton, and T. Shamoon, “Secure spread spectrum watermarking for multimedia,” IEEE transactions on image processing, vol. 6, no. 12, 1997.
[10]
C.-S. Lu, Multimedia Security: Steganography and Digital Watermarking Techniques for Protection of Intellectual Property: Steganography and Digital Watermarking Techniques for Protection of Intellectual Property. Igi Global, 2004.
[11]
Y. Uchida, Y. Nagai, S. Sakazawa, and S. Satoh, “Embedding watermarks into deep neural networks,” in Proceedings of the ACM on International Conference on Multimedia Retrieval, 2017.
[12]
Y. Nagai, Y. Uchida, S. Sakazawa, and S. Satoh, “Digital watermarking for deep neural networks,” International Journal of Multimedia Information Retrieval, vol. 7, no. 1, 2018.
[13]
E. L. Merrer, P. Perez, and G. Trédan, “Adversarial frontier stitching for remote neural network watermarking,” arXiv preprint arXiv:1711.01894, 2017.
[14]
Y. Adi, C. Baum, M. Cisse, B. Pinkas, and J. Keshet, “Turning your weakness into a strength: Watermarking deep neural networks by backdooring,” Usenix Security Symposium, 2018.
[15]
K. Grosse, P. Manoharan, N. Papernot, M. Backes, and P. McDaniel, “On the (statistical) detection of adversarial examples,” arXiv preprint arXiv:1702.06280, 2017.
[16]
B. D. Rouhani, M. Samragh, T. Javidi, and F. Koushanfar, “Safe machine learning and defeat-ing adversarial attacks,” IEEE Security and Privacy (S&P) Magazine, 2018.
[17]
B. D. Rouhani, M. Samragh, M. Javaheripi, T. Javidi, and F. Koushanfar, “Deepfense: Online accelerated defense against adversarial deep learning,” in 2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), pp. 1--8, IEEE, 2018.
[18]
I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” arXiv preprint arXiv:1412.6572, 2014.
[19]
N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, “Practical black-box attacks against machine learning,” in Proceedings of ACM on Asia Conference on Computer and Communications Security, 2017.
[20]
S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” arXiv preprint arXiv:1502.03167, 2015.
[21]
A. B. Patel, T. Nguyen, and R. G. Baraniuk, “A probabilistic theory of deep learning,” arXiv preprint arXiv:1504.00641, 2015.
[22]
D. Lin, S. Talathi, and S. Annapureddy, “Fixed point quantization of deep convolutional networks,” in International Conference on Machine Learning (ICML), 2016.
[23]
S. Zagoruyko and N. Komodakis, “Wide residual networks,” arXiv preprint arXiv:1605.07146, 2016.
[24]
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A Large-Scale Hierarchical Image Database,” in CVPR09, 2009.
[25]
Y. LeCun, C. Cortes, and C. J. Burges, “The mnist database of handwritten digits,” 1998.
[26]
A. Krizhevsky and G. Hinton, “Learning multiple layers of features from tiny images,” 2009.
[27]
S. Han, J. Pool, J. Tran, and W. Dally, “Learning both weights and connections for efficient neural network,” in Advances in Neural Information Processing Systems (NIPS), 2015.
[28]
S. Han, H. Mao, and W. J. Dally, “Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding,” arXiv preprint arXiv:1510.00149, 2015.
[29]
K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
[30]
B. D. Rouhani, A. Mirhoseini, and F. Koushanfar, “Deep3: Leveraging three levels of parallelism for efficient deep learning,” in Proceedings of ACM 54th Annual Design Automation Conference (DAC), 2017.
[31]
B. D. Rouhani, A. Mirhoseini, and F. Koushanfar, “Delight: Adding energy dimension to deep neural networks,” in Proceedings of the International Symposium on Low Power Electronics and Design (ISLPED), ACM, 2016.
[32]
N. Tajbakhsh, J. Y. Shin, S. R. Gurudu, R. T. Hurst, C. B. Kendall, M. B. Gotway, and J. Liang, “Convolutional neural networks for medical image analysis: Full training or fine tuning?,” IEEE transactions on medical imaging, vol. 35, no. 5, 2016.
[33]
N. F. Johnson, Z. Duric, and S. Jajodia, Information Hiding: Steganography and Watermarking-Attacks and Countermeasures: Steganography and Watermarking: Attacks and Countermeasures, vol. 1. Springer Science & Business Media, 2001.
[34]
J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller, “Striving for simplicity: The all convolutional net,” arXiv preprint arXiv:1412.6806, 2014.
[35]
M. Liang and X. Hu, “Recurrent convolutional neural network for object recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[36]
H. Chen, B. D. Rouhani, X. Fan, O. C. Kilinc, and F. Koushanfar, “Performance comparison of contemporary dnn watermarking techniques,” arXiv preprint arXiv:1811.03713, 2018.
[37]
A. Choromanska, M. Henaff, M. Mathieu, G. B. Arous, and Y. LeCun, “The loss surfaces of multilayer networks,” in Artificial Intelligence and Statistics, 2015.

Published In

ASPLOS '19: Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems
April 2019
1126 pages
ISBN: 9781450362405
DOI: 10.1145/3297858
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. deep neural networks
  2. digital watermark
  3. intellectual property protection

Qualifiers

  • Research-article

Conference

ASPLOS '19

Acceptance Rates

ASPLOS '19 paper acceptance rate: 74 of 351 submissions (21%)
Overall acceptance rate: 535 of 2,713 submissions (20%)

Article Metrics

  • Downloads (Last 12 months): 856
  • Downloads (Last 6 weeks): 93
Reflects downloads up to 09 Nov 2024

Cited By

  • (2024) High-Frequency Artifacts-Resistant Image Watermarking Applicable to Image Processing Models. Applied Sciences 14(4):1494. DOI: 10.3390/app14041494. Online publication date: 12-Feb-2024.
  • (2024) An Imperceptible and Owner-unique Watermarking Method for Graph Neural Networks. Proceedings of the ACM Turing Award Celebration Conference - China 2024, 108-113. DOI: 10.1145/3674399.3674443. Online publication date: 5-Jul-2024.
  • (2024) Reliable Model Watermarking: Defending against Theft without Compromising on Evasion. Proceedings of the 32nd ACM International Conference on Multimedia, 10124-10133. DOI: 10.1145/3664647.3681610. Online publication date: 28-Oct-2024.
  • (2024) Safe-SD: Safe and Traceable Stable Diffusion with Text Prompt Trigger for Invisible Generative Watermarking. Proceedings of the 32nd ACM International Conference on Multimedia, 7113-7122. DOI: 10.1145/3664647.3681418. Online publication date: 28-Oct-2024.
  • (2024) ProActive DeepFake Detection using GAN-based Visible Watermarking. ACM Transactions on Multimedia Computing, Communications, and Applications 20(11):1-27. DOI: 10.1145/3625547. Online publication date: 12-Sep-2024.
  • (2024) Identifying Appropriate Intellectual Property Protection Mechanisms for Machine Learning Models: A Systematization of Watermarking, Fingerprinting, Model Access, and Attacks. IEEE Transactions on Neural Networks and Learning Systems 35(10):13082-13100. DOI: 10.1109/TNNLS.2023.3270135. Online publication date: Oct-2024.
  • (2024) Unambiguous and High-Fidelity Backdoor Watermarking for Deep Neural Networks. IEEE Transactions on Neural Networks and Learning Systems 35(8):11204-11217. DOI: 10.1109/TNNLS.2023.3250210. Online publication date: Aug-2024.
  • (2024) Wide Flat Minimum Watermarking for Robust Ownership Verification of GANs. IEEE Transactions on Information Forensics and Security 19:8322-8337. DOI: 10.1109/TIFS.2024.3443650. Online publication date: 2024.
  • (2024) RemovalNet: DNN Fingerprint Removal Attacks. IEEE Transactions on Dependable and Secure Computing 21(4):2645-2658. DOI: 10.1109/TDSC.2023.3315064. Online publication date: Jul-2024.
  • (2024) Protecting Intellectual Property With Reliable Availability of Learning Models in AI-Based Cybersecurity Services. IEEE Transactions on Dependable and Secure Computing 21(2):600-617. DOI: 10.1109/TDSC.2022.3222972. Online publication date: Mar-2024.
