DOI: 10.1109/GLOBECOM38437.2019.9014272

PEFL: A Privacy-Enhanced Federated Learning Scheme for Big Data Analytics

Published: 01 December 2019

Abstract

Federated learning has emerged as a promising solution for big data analytics, in which a global model is trained jointly across multiple mobile devices. However, participants' sensitive data may be leaked to an untrusted server through the uploaded gradient vectors. To address this problem, we propose a privacy-enhanced federated learning (PEFL) scheme that protects the gradients against an untrusted server, mainly by encrypting participants' local gradients with the Paillier homomorphic cryptosystem. To reduce the computation costs of the cryptosystem, we utilize the distributed selective stochastic gradient descent (DSSGD) method in the local training phase to achieve distributed encryption. Moreover, the encrypted gradients can be further used for secure sum aggregation on the server side. In this way, the untrusted server learns only the aggregated statistics over all participants' updates, while each individual's private information remains well protected. For the security analysis, we prove that our scheme is secure under several cryptographic hardness assumptions. Extensive experimental results demonstrate that PEFL achieves low computation costs while reaching high accuracy in federated learning settings.
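
Paillier's additive homomorphism is what enables the secure sum aggregation described above: multiplying two ciphertexts modulo n^2 yields an encryption of the sum of the underlying plaintexts. The Python sketch below is an illustrative reconstruction of that mechanism, not the authors' implementation: the textbook Paillier routines, the toy 17-bit primes, the fixed-point SCALE factor, the 50% selective-sharing fraction, and the assumption that a single key holder decrypts the final aggregate are all simplifications introduced here for demonstration.

```python
import math
import random

# --- Textbook Paillier with toy parameters (never use such small keys) -------
p, q = 104729, 104723                 # small demonstration primes
n = p * q
n_sq = n * n
g = n + 1                             # common simplification: g = n + 1
lam = math.lcm(p - 1, q - 1)          # lambda = lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                  # mu = lambda^{-1} mod n

def encrypt(m: int) -> int:
    """E(m) = g^m * r^n mod n^2 for a random r."""
    r = random.randrange(1, n)
    return (pow(g, m % n, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    """D(c) = L(c^lambda mod n^2) * mu mod n, with L(u) = (u - 1) / n."""
    m = ((pow(c, lam, n_sq) - 1) // n) * mu % n
    return m - n if m > n // 2 else m  # map back to a signed value

SCALE = 10**6                         # fixed-point encoding of float gradients

def encrypt_selected_gradient(grad, fraction=0.5):
    """DSSGD-style selective sharing: encrypt only the largest-magnitude
    entries (a toy 50% of the coordinates here)."""
    k = max(1, int(len(grad) * fraction))
    top = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    return {i: encrypt(round(grad[i] * SCALE)) for i in top}

# Three clients with toy local gradients.
client_grads = [
    [0.12, -0.05, 0.30, -0.01],
    [0.10,  0.02, 0.25,  0.00],
    [0.15, -0.07, 0.28, -0.02],
]
uploads = [encrypt_selected_gradient(grad) for grad in client_grads]

# Untrusted server: multiply ciphertexts coordinate-wise, which corresponds to
# adding the plaintexts. The server never sees any individual gradient.
aggregate = {}
for upload in uploads:
    for i, c in upload.items():
        aggregate[i] = (aggregate.get(i, 1) * c) % n_sq

# The holder of the secret key decrypts only the aggregated update.
for i in sorted(aggregate):
    print(f"coordinate {i}: aggregated update = {decrypt(aggregate[i]) / SCALE:+.6f}")
```

Running the sketch prints only the per-coordinate sums of the three clients' selected gradient entries, which mirrors the paper's claim that the server can learn the aggregate but not any individual update.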



Index Terms

  1. PEFL: A Privacy-Enhanced Federated Learning Scheme for Big Data Analytics
          Index terms have been assigned to the content through auto-classification.

          Recommendations

          Comments

          Information & Contributors

          Information

Published In

2019 IEEE Global Communications Conference (GLOBECOM), IEEE Press, December 2019, 6544 pages.
