
Efficiently Distributed Federated Learning

  • Conference paper
  • Euro-Par 2023: Parallel Processing Workshops (Euro-Par 2023)

Abstract

Federated Learning (FL) is attracting substantial research interest, with many frameworks being developed to let practitioners build federations easily and quickly. Most of these efforts overlook two aspects that are key to Machine Learning (ML) software: customizability and performance. This research addresses both issues with FastFederatedLearning (FFL), an open-source FL framework. FFL is implemented in C/C++ with a focus on code performance, and it allows the user to specify any communication graph between the clients and servers involved in the federation, ensuring customizability. Tested against Intel OpenFL, FFL achieves consistent speedups across different computational platforms (x86-64, ARM-v8, RISC-V), ranging from 2.5x to 3.69x. As future work, we aim to wrap FFL in a Python interface to ease its use and to implement a middleware layer supporting different communication backends. We also aim to support dynamic federations, in which the relations between clients and servers are not static, so that federations can be treated as structures that evolve over time and be exploited as services.
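
To make the idea concrete, here is a minimal sketch, in plain C++, of one synchronous federated-averaging round over a simple star topology (one server, several clients). This is not the FFL API: every name below (Model, aggregate, run_round) and the stand-in local-training step are hypothetical, for exposition only. FFL generalizes this pattern to arbitrary communication graphs between clients and servers.

    // Hypothetical sketch, NOT the FFL API: one synchronous FedAvg-style
    // round over a star topology, with local training replaced by a stand-in.
    #include <cstddef>
    #include <iostream>
    #include <vector>

    using Model = std::vector<double>;  // a model as a flat parameter vector

    // Parameter-wise average of the clients' updated models (FedAvg-style).
    Model aggregate(const std::vector<Model>& updates) {
        Model avg(updates.front().size(), 0.0);
        for (const Model& m : updates)
            for (std::size_t i = 0; i < m.size(); ++i) avg[i] += m[i];
        for (double& w : avg) w /= static_cast<double>(updates.size());
        return avg;
    }

    // One round: broadcast the global model, collect local updates, aggregate.
    // A real framework runs the clients remotely and in parallel, and may
    // route updates through an arbitrary graph of intermediate servers.
    Model run_round(const Model& global, std::size_t n_clients) {
        std::vector<Model> updates;
        for (std::size_t c = 0; c < n_clients; ++c) {
            Model local = global;
            for (double& w : local)
                w += 0.1 * double(c + 1);  // stand-in for local training
            updates.push_back(local);
        }
        return aggregate(updates);
    }

    int main() {
        Model global(4, 0.0);  // 4-parameter toy model
        for (int r = 0; r < 3; ++r) global = run_round(global, 5);
        for (double w : global) std::cout << w << ' ';
        std::cout << '\n';
    }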

This work receives EuroHPC-JU funding under grant no. 101034126, with support from the Horizon2020 programme (the European PILOT) and from the Spoke “FutureHPC & BigData” of the ICSC - Centro Nazionale di Ricerca in “High-Performance Computing, Big Data, and Quantum Computing”, funded by European Union - NextGenerationEU.



Notes

  1. https://github.com/alpha-unito/FastFederatedLearning
  2. https://github.com/alpha-unito/OpenFL-extended


Author information

Correspondence to Gianluca Mittone.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Mittone, G., Birke, R., Aldinucci, M. (2024). Efficiently Distributed Federated Learning. In: Zeinalipour, D., et al. Euro-Par 2023: Parallel Processing Workshops. Euro-Par 2023. Lecture Notes in Computer Science, vol 14352. Springer, Cham. https://doi.org/10.1007/978-3-031-48803-0_40


  • DOI: https://doi.org/10.1007/978-3-031-48803-0_40

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-48802-3

  • Online ISBN: 978-3-031-48803-0

  • eBook Packages: Computer Science, Computer Science (R0)
