
Efficient Federated Learning Using Dynamic Update and Adaptive Pruning with Momentum on Shared Server Data

Published: 20 November 2024

Abstract

Despite its remarkable performance, Federated Learning (FL) faces two important problems: low training efficiency and limited computational resources. In this article, we propose a new FL framework, FedDUMAP, with three original contributions, which leverages shared insensitive data on the server in addition to the distributed data on edge devices to train a global model efficiently. First, we propose a simple dynamic server update algorithm that exploits the shared insensitive data on the server while dynamically adjusting the number of server update steps, so as to accelerate convergence and improve accuracy. Second, we combine the dynamic server update algorithm with an adaptive optimization method that exploits global momentum on the server and on each local device for superior accuracy. Third, we develop a layer-adaptive model pruning method that applies pruning operations tailored to the diverse features of each layer, achieving an excellent tradeoff between effectiveness and efficiency. FedDUMAP combines these three techniques and significantly outperforms baseline approaches in terms of efficiency (up to 16.9 times faster), accuracy (up to 20.4% higher), and computational cost (up to 62.6% smaller).
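The abstract gives no algorithmic detail, but the round structure it describes (FedAvg aggregation followed by a dynamically adjusted number of momentum-SGD steps on shared server data) can be illustrated with a short sketch. The following Python/NumPy example is a minimal, hypothetical reading of the first two contributions only: the step-decay rule, all function and parameter names (local_train, server_round, base_server_steps, beta, and so on), and the synthetic quadratic loss are illustrative assumptions, not the paper's actual method, and the layer-adaptive pruning step is omitted.

```python
# Hypothetical sketch of one FedDUMAP-style round, based only on the abstract.
# Names and the decay rule are illustrative assumptions, not the paper's
# notation. Models are flat NumPy vectors; "training" uses a synthetic
# quadratic loss so the example stays self-contained.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8                        # toy model size
TARGET = rng.normal(size=DIM)  # optimum of the synthetic loss

def grad(w, data_seed):
    """Gradient of a synthetic quadratic loss, perturbed per 'dataset'."""
    noise = np.random.default_rng(data_seed).normal(scale=0.1, size=DIM)
    return (w - TARGET) + noise

def local_train(w, seed, steps=5, lr=0.1, beta=0.9):
    """Client-side SGD with momentum (the 'local momentum' of the abstract)."""
    m = np.zeros(DIM)
    for _ in range(steps):
        m = beta * m + grad(w, seed)
        w = w - lr * m
    return w

def server_round(w_global, round_idx, n_clients=4,
                 base_server_steps=8, lr=0.1, beta=0.9, m_global=None):
    """FedAvg aggregation plus a dynamic server update on shared data."""
    # 1) Clients train locally and are averaged (plain FedAvg).
    client_models = [local_train(w_global, seed=c) for c in range(n_clients)]
    w = np.mean(client_models, axis=0)

    # 2) Dynamic server update: the number of server steps shrinks over
    #    rounds (one plausible 'dynamic adjustment'; the paper's rule may differ).
    server_steps = max(1, base_server_steps // (round_idx + 1))

    # 3) Global momentum carried across rounds, applied on the shared
    #    (insensitive) server dataset, modeled here by data_seed=999.
    m = m_global if m_global is not None else np.zeros(DIM)
    for _ in range(server_steps):
        m = beta * m + grad(w, data_seed=999)
        w = w - lr * m
    return w, m

w, m = rng.normal(size=DIM), None
for r in range(10):
    w, m = server_round(w, r, m_global=m)
    print(f"round {r}: loss={0.5 * np.sum((w - TARGET) ** 2):.4f}")
```

Run for a few rounds, the printed loss on the synthetic objective decreases, with the server-side refinement contributing larger corrections early (when server_steps is large) and fading out later, which matches the intuition of a dynamically adjusted server update.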


Cited By

  • (2024) Trustworthy federated learning: privacy, security, and beyond. Knowledge and Information Systems. DOI: 10.1007/s10115-024-02285-2. Online publication date: 26-Nov-2024.



Published In

ACM Transactions on Intelligent Systems and Technology, Volume 15, Issue 6
December 2024
727 pages
EISSN:2157-6912
DOI:10.1145/3613712
  • Editor: Huan Liu

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 20 November 2024
Online AM: 02 September 2024
Accepted: 31 July 2024
Revised: 02 January 2024
Received: 29 December 2022
Published in TIST Volume 15, Issue 6


Author Tags

  1. Federated learning
  2. Distributed machine learning
  3. Model pruning
  4. Heterogeneity
  5. Momentum

Qualifiers

  • Research-article


Article Metrics

  • Downloads (last 12 months): 241
  • Downloads (last 6 weeks): 28
Reflects downloads up to 25 Jan 2025

