PPFL: Enhancing Privacy in Federated Learning with Confidential Computing

Published: 30 March 2022

    Abstract

    Mobile networks and devices provide users with ubiquitous connectivity, while many of their functions and business models rely on data analysis and processing. In this context, Machine Learning (ML) plays a key role and has been successfully leveraged by the different actors in the mobile ecosystem (e.g., application and operating-system developers, vendors, network operators, etc.). Traditional ML designs assume that (user) data are collected and models are trained in a centralized location. However, this approach has privacy consequences related to data collection and processing. Such concerns have incentivized the scientific community to design and develop privacy-preserving ML methods, including Federated Learning (FL), where the ML model is trained or personalized on user devices, close to the data; Differential Privacy, where data are manipulated to limit the disclosure of private information; Trusted Execution Environments (TEE), where most of the computation runs in a secure/private environment; and Multi-Party Computation, a cryptographic technique that allows various parties to run joint computations without revealing their private data to each other.
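    The FL technique described above can be sketched as Federated Averaging (FedAvg): each client updates a model on its own data, and the server only aggregates the resulting weights, so raw data never leave the device. The toy linear model, client datasets, and single local gradient step below are illustrative assumptions, not part of this article.

    ```python
    def local_update(w, data, lr=0.1):
        """One gradient-descent step for y ~ w*x on this client's private data."""
        grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
        return w - lr * grad

    def fedavg(w, clients, rounds=50):
        """Server averages client weights, weighted by local dataset size."""
        total = sum(len(d) for d in clients)
        for _ in range(rounds):
            updates = [local_update(w, d) for d in clients]
            w = sum(len(d) * u for d, u in zip(clients, updates)) / total
        return w

    # Two clients whose private (x, y) pairs follow y = 3x; the server
    # sees only the aggregated weight, which converges to w = 3.
    clients = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0)]]
    w = fedavg(0.0, clients)
    ```

    In a real deployment each client would run many local epochs on a deep model, and the TEE- and MPC-based techniques above would protect the aggregation step itself.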



    Published In

    GetMobile: Mobile Computing and Communications  Volume 25, Issue 4
    December 2021
    34 pages
    ISSN:2375-0529
    EISSN:2375-0537
    DOI:10.1145/3529706
    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

Publication History

    Published in SIGMOBILE-GETMOBILE Volume 25, Issue 4

    Qualifiers

    • Research-article

Article Metrics

    • Downloads (Last 12 months)84
    • Downloads (Last 6 weeks)7
    Reflects downloads up to 09 Aug 2024

Cited By

    • (2024) Trustworthy AI using Confidential Federated Learning. Queue 22:2, 87-107. https://doi.org/10.1145/3665220. Online publication date: 24-May-2024
    • (2024) A Multifaceted Survey on Federated Learning: Fundamentals, Paradigm Shifts, Practical Issues, Recent Developments, Partnerships, Trade-Offs, Trustworthiness, and Ways Forward. IEEE Access 12, 84643-84679. https://doi.org/10.1109/ACCESS.2024.3413069
    • (2022) Group Privacy: An Underrated but Worth Studying Research Problem in the Era of Artificial Intelligence and Big Data. Electronics 11:9, 1449. https://doi.org/10.3390/electronics11091449. Online publication date: 30-Apr-2022
    • (2022) Shielding federated learning systems against inference attacks with ARM TrustZone. Proceedings of the 23rd ACM/IFIP International Middleware Conference, 335-348. https://doi.org/10.1145/3528535.3565255. Online publication date: 7-Nov-2022