No Free Lunch Theorem for Security and Utility in Federated Learning

Published: 09 November 2022

Abstract

    In a federated learning scenario, where multiple parties jointly learn a model from their respective data, the choice of algorithm faces two conflicting goals. On one hand, private and sensitive training data must be kept as secure as possible in the presence of semi-honest partners; on the other hand, a certain amount of information has to be exchanged among the parties for the sake of learning utility. This challenge calls for privacy-preserving federated learning solutions that maximize the utility of the learned model while maintaining a provable privacy guarantee for the participating parties' private data.
    This article presents a general framework that (1) formulates the trade-off between privacy loss and utility loss from a unified information-theoretic point of view, and (2) delineates quantitative bounds on the privacy-utility trade-off when different protection mechanisms, including randomization, sparsity, and homomorphic encryption, are used. We show that, in general, there is no free lunch in the privacy-utility trade-off: preserving privacy must be paid for with a certain degree of degraded utility. The quantitative analysis illustrated in this article may serve as guidance for the design of practical federated learning algorithms.
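    The trade-off described above can be made concrete with a toy randomization mechanism: a client perturbs its model update with Gaussian noise before exchanging it, which hides the true update from a semi-honest observer but distorts what the other parties learn. The sketch below is a minimal illustration of that tension, not the paper's formal framework; the `randomize` helper, the toy update vector, and the noise scales are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomize(update, sigma, n_samples=20000):
    # Gaussian randomization: release the update plus N(0, sigma^2 I) noise.
    # Many samples are drawn so the empirical distortion is close to its mean.
    noise = rng.normal(0.0, sigma, size=(n_samples,) + update.shape)
    return update + noise

# Hypothetical client update (illustrative values only).
true_update = np.array([0.5, -1.2, 0.8])

losses = {}
for sigma in [0.0, 0.5, 1.0]:
    released = randomize(true_update, sigma)
    # Empirical utility loss: mean squared distortion of the released update.
    # Analytically this is d * sigma^2 (here d = 3), so it grows with the
    # noise scale -- the "price" paid for stronger protection.
    losses[sigma] = float(np.mean(np.sum((released - true_update) ** 2, axis=1)))

print(losses)
```

    Larger `sigma` makes the released update harder to invert but increases the distortion monotonically, which is the no-free-lunch behavior the article quantifies for randomization (and, with different loss notions, for sparsity and homomorphic encryption).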



    Published In

    ACM Transactions on Intelligent Systems and Technology, Volume 14, Issue 1
    February 2023, 487 pages
    ISSN: 2157-6904
    EISSN: 2157-6912
    DOI: 10.1145/3570136
    Editor: Huan Liu

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 09 November 2022
    Online AM: 20 September 2022
    Accepted: 24 August 2022
    Revised: 09 July 2022
    Received: 10 March 2022
    Published in TIST Volume 14, Issue 1


    Author Tags

    1. Federated learning
    2. privacy-preserving computing
    3. security
    4. utility
    5. trade-off
    6. divergence
    7. optimization

    Qualifiers

    • Research-article
    • Refereed

    Funding Sources

    • National Key Research and Development Program of China
    • Hong Kong RGC


    Cited By

    • (2024) DESIGN: Online Device Selection and Edge Association for Federated Synergy Learning-enabled AIoT. ACM Transactions on Intelligent Systems and Technology. DOI: 10.1145/3673237. Online publication date: 15 June 2024.
    • (2024) Honest Fraction Differential Privacy. Proceedings of the 2024 ACM Workshop on Information Hiding and Multimedia Security, 247–251. DOI: 10.1145/3658664.3659655. Online publication date: 24 June 2024.
    • (2024) A Game-theoretic Framework for Privacy-preserving Federated Learning. ACM Transactions on Intelligent Systems and Technology 15, 3, 1–35. DOI: 10.1145/3656049. Online publication date: 10 April 2024.
    • (2024) A Meta-Learning Framework for Tuning Parameters of Protection Mechanisms in Trustworthy Federated Learning. ACM Transactions on Intelligent Systems and Technology 15, 3, 1–36. DOI: 10.1145/3652612. Online publication date: 18 March 2024.
    • (2024) A Privacy-preserving Auction Mechanism for Learning Model as an NFT in Blockchain-driven Metaverse. ACM Transactions on Multimedia Computing, Communications, and Applications 20, 7, 1–24. DOI: 10.1145/3599971. Online publication date: 27 March 2024.
    • (2024) PRIMϵ: Novel Privacy-Preservation Model With Pattern Mining and Genetic Algorithm. IEEE Transactions on Information Forensics and Security 19, 571–585. DOI: 10.1109/TIFS.2023.3324769. Online publication date: 1 January 2024.
    • (2023) Practical privacy-preserving Gaussian process regression via secret sharing. Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, 1315–1325. DOI: 10.5555/3625834.3625958. Online publication date: 31 July 2023.
    • (2023) Trading Off Privacy, Utility, and Efficiency in Federated Learning. ACM Transactions on Intelligent Systems and Technology 14, 6, 1–32. DOI: 10.1145/3595185. Online publication date: 20 November 2023.
    • (2023) A research on intelligent recommendation algorithms based on federated learning. Third International Conference on Advanced Algorithms and Neural Networks (AANN 2023), 83. DOI: 10.1117/12.3004909. Online publication date: 9 October 2023.
    • (2023) FedIPR: Ownership Verification for Federated Deep Neural Network Models. IEEE Transactions on Pattern Analysis and Machine Intelligence 45, 4, 4521–4536. DOI: 10.1109/TPAMI.2022.3195956. Online publication date: 1 April 2023.
