DOI: 10.1145/2976749.2978318

Deep Learning with Differential Privacy

Published: 24 October 2016

Abstract

Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains. Often, the training of models requires large, representative datasets, which may be crowdsourced and contain sensitive information. The models should not expose private information in these datasets. Addressing this goal, we develop new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy. Our implementation and experiments demonstrate that we can train deep neural networks with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.
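The core technique of this paper is differentially private SGD: compute per-example gradients, clip each one to a fixed L2 norm, and add Gaussian noise calibrated to that clipping bound before applying the update. The sketch below illustrates the idea on a toy logistic-regression task; the task, hyperparameters, and function names are illustrative choices, not taken from the paper, and the paper's refined privacy accounting (the moments accountant) is not reproduced here.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, sigma=1.0, rng=None):
    """One noisy-SGD step on logistic loss: clip each per-example
    gradient to L2 norm `clip`, sum, add Gaussian noise scaled to
    the clipping bound, then average and descend."""
    rng = np.random.default_rng() if rng is None else rng
    preds = 1.0 / (1.0 + np.exp(-(X @ w)))           # sigmoid predictions
    grads = (preds - y)[:, None] * X                 # per-example gradients
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    clipped = grads / np.maximum(1.0, norms / clip)  # L2 norm clipping
    noise = rng.normal(0.0, sigma * clip, size=w.shape)
    return w - lr * (clipped.sum(axis=0) + noise) / len(X)

# Toy task: the label is the sign of the first feature.
rng = np.random.default_rng(42)
X = rng.normal(size=(64, 3))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(3)
for _ in range(200):
    w = dp_sgd_step(w, X, y, rng=rng)
accuracy = float(np.mean(((X @ w) > 0) == (y > 0.5)))
```

Because each example's influence on the noisy update is bounded by `clip`, each step is a Gaussian-mechanism release; the paper's contribution is in tracking how the privacy cost of many such steps composes.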



      Published In

      CCS '16: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security
      October 2016
      1924 pages
      ISBN:9781450341394
      DOI:10.1145/2976749
      This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives International 4.0 License.


      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. deep learning
      2. differential privacy

      Qualifiers

      • Research-article

      Conference

      CCS'16

      Acceptance Rates

CCS '16 Paper Acceptance Rate: 137 of 831 submissions, 16%
Overall Acceptance Rate: 1,261 of 6,999 submissions, 18%
