
Privacy-Preserving Distributed Multi-Task Learning against Inference Attack in Cloud Computing

Published: 22 October 2021

Abstract

Owing to the powerful computing and storage capabilities of cloud computing, machine learning as a service (MLaaS) has recently attracted attention from organizations for machine learning training over related, representative datasets. When these datasets are collected from different organizations and follow different distributions, multi-task learning (MTL) is often used to improve generalization performance by scheduling the related training tasks onto virtual machines in MLaaS and transferring knowledge between those tasks. However, because of concerns about privacy breaches (e.g., property inference attacks and model inversion attacks), organizations cannot directly outsource their training data to MLaaS or share their extracted knowledge in plaintext, especially organizations in sensitive domains. In this article, we propose NOInfer, a novel privacy-preserving mechanism for distributed MTL that allows several task nodes to train models locally and transfer their shared knowledge privately. Specifically, we construct a single-server architecture for private MTL that protects each task node's local data even if \(n-1\) out of \(n\) nodes collude. We then design a new protocol for the Alternating Direction Method of Multipliers (ADMM) to perform privacy-preserving model training; it resists inference attacks on intermediate results and keeps training efficiency independent of the number of training samples. When releasing the trained model, we also design a differentially private model-releasing mechanism that resists membership inference attacks. Furthermore, we analyze the privacy preservation and efficiency of NOInfer in theory. Finally, we evaluate NOInfer over two testing datasets; the evaluation results demonstrate that NOInfer achieves distributed MTL efficiently and effectively.
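To make the overall pipeline concrete, the following is a minimal sketch of the two ingredients the abstract combines: consensus ADMM across several task nodes, followed by a differentially private model release. This is not the paper's protocol (NOInfer additionally uses cryptographic protection of the intermediate ADMM results); all function names, the ridge-regression local objective, and the Gaussian-mechanism parameters below are illustrative assumptions.

```python
# Sketch: consensus ADMM for distributed ridge regression across task
# nodes, then a differentially private release of the consensus model
# via the Gaussian mechanism. Illustrative only, not the NOInfer protocol.
import numpy as np

rng = np.random.default_rng(0)

def local_update(A, b, z, u, rho, lam):
    """x-update at one node: argmin_x ||Ax-b||^2 + lam||x||^2 + (rho/2)||x - z + u||^2."""
    d = A.shape[1]
    H = 2 * A.T @ A + (2 * lam + rho) * np.eye(d)
    g = 2 * A.T @ b + rho * (z - u)
    return np.linalg.solve(H, g)

def admm_train(datasets, rho=1.0, lam=0.1, iters=50):
    """Consensus ADMM over a list of (A, b) datasets, one per task node."""
    d = datasets[0][0].shape[1]
    n = len(datasets)
    z = np.zeros(d)                       # shared (consensus) model
    us = [np.zeros(d) for _ in range(n)]  # scaled dual variables
    for _ in range(iters):
        xs = [local_update(A, b, z, us[i], rho, lam)
              for i, (A, b) in enumerate(datasets)]
        z = np.mean([x + u for x, u in zip(xs, us)], axis=0)  # consensus step
        us = [u + x - z for u, x in zip(us, xs)]              # dual update
    return z

def dp_release(z, sensitivity, epsilon, delta):
    """Gaussian mechanism: add noise calibrated for (epsilon, delta)-DP."""
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return z + rng.normal(0.0, sigma, size=z.shape)

# Toy data: two task nodes whose data share one underlying linear model.
w_true = np.array([1.0, -2.0, 0.5])
data = []
for _ in range(2):
    A = rng.normal(size=(100, 3))
    b = A @ w_true + 0.01 * rng.normal(size=100)
    data.append((A, b))

z = admm_train(data)
z_private = dp_release(z, sensitivity=0.1, epsilon=1.0, delta=1e-5)
```

In this plain version the nodes exchange their local iterates `xs` in the clear, which is exactly what enables the inference attacks the paper targets; NOInfer's contribution is performing this exchange under cryptographic protection and releasing only the noised final model.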




Published In

ACM Transactions on Internet Technology  Volume 22, Issue 2
May 2022
582 pages
ISSN:1533-5399
EISSN:1557-6051
DOI:10.1145/3490674
Editor: Ling Liu

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 22 October 2021
Accepted: 01 September 2020
Revised: 01 September 2020
Received: 01 June 2020
Published in TOIT Volume 22, Issue 2


Author Tags

  1. Multi-task learning
  2. cloud computing
  3. privacy preservation
  4. homomorphic cryptography
  5. differential privacy

Qualifiers

  • Research-article
  • Refereed

Funding Sources

  • National Natural Science Foundation of China
  • China Postdoctoral Science Foundation
  • Key Research and Development Program of Shaanxi
  • Natural Science Foundation of Shaanxi Province
  • Shaanxi Provincial Education Department
  • Fundamental Research Funds for the Central Universities
  • European Commission


Article Metrics

  • Downloads (Last 12 months)137
  • Downloads (Last 6 weeks)13
Reflects downloads up to 03 Oct 2024


Cited By

  • (2023) Governance and sustainability of distributed continuum systems: a big data approach. Journal of Big Data 10, 1. DOI: 10.1186/s40537-023-00737-0. Online publication date: 28-Apr-2023.
  • (2023) Learning in Your “Pocket”: Secure Collaborative Deep Learning With Membership Privacy. IEEE Transactions on Dependable and Secure Computing 20, 3, 2641–2656. DOI: 10.1109/TDSC.2022.3192326. Online publication date: 1-May-2023.
  • (2023) A review of security issues and solutions for precision health in Internet-of-Medical-Things systems. Security and Safety 2, 2022010. DOI: 10.1051/sands/2022010. Online publication date: 31-Jan-2023.
  • (2023) MP-CLF. Knowledge-Based Systems 270, C. DOI: 10.1016/j.knosys.2023.110527. Online publication date: 21-Jun-2023.
  • (2022) DisBezant: Secure and Robust Federated Learning Against Byzantine Attack in IoT-Enabled MTS. IEEE Transactions on Intelligent Transportation Systems, 1–11. DOI: 10.1109/TITS.2022.3152156. Online publication date: 2022.
  • (2022) The Promising Role of Representation Learning for Distributed Computing Continuum Systems. 2022 IEEE International Conference on Service-Oriented System Engineering (SOSE), 126–132. DOI: 10.1109/SOSE55356.2022.00021. Online publication date: Aug-2022.
