Efficiently Distributed Federated Learning

Abstract
Federated Learning (FL) is experiencing substantial research interest, with many frameworks being developed to let practitioners build federations quickly and easily. Most of these efforts neglect two aspects that are key to Machine Learning (ML) software: customizability and performance. This work addresses both issues through an open-source FL framework named FastFederatedLearning (FFL). FFL is implemented in C/C++ with a focus on runtime performance, and it allows the user to specify any communication graph between the clients and servers involved in the federation, ensuring customizability. Tested against Intel OpenFL, FFL achieves consistent speedups ranging from 2.5x to 3.69x across different computational platforms (x86-64, ARM-v8, RISC-V). As future work, we aim to wrap FFL in a Python interface to ease its use, and to implement a middleware layer so that different communication backends can be used. We also plan to build dynamic federations, in which the relations between clients and servers are not static, giving rise to an environment where federations can be seen as long-lived, evolving structures and exploited as services.
This work receives EuroHPC-JU funding under grant no. 101034126, with support from the Horizon2020 programme (the European PILOT) and from the Spoke “FutureHPC & BigData” of the ICSC - Centro Nazionale di Ricerca in “High-Performance Computing, Big Data, and Quantum Computing”, funded by European Union - NextGenerationEU.
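To make the communication-graph claim concrete, the sketch below illustrates how an arbitrary topology between clients and servers might be declared in C++. It is a minimal, illustrative sketch only: the Federation, Node, and Role types, as well as the endpoint strings, are hypothetical and do not reflect FFL's actual API.

// Illustrative sketch only: Federation, Node, and Role are hypothetical
// types for exposition and are NOT FFL's actual API.
#include <cstdio>
#include <string>
#include <utility>
#include <vector>

enum class Role { Client, Server };

struct Node {
    int id;
    Role role;
    std::string endpoint;  // e.g. "host:port" reachable by its peers
};

// The federation is modelled as an arbitrary directed graph:
// an edge (a, b) means node a sends its model updates to node b.
struct Federation {
    std::vector<Node> nodes;
    std::vector<std::pair<int, int>> edges;

    void addNode(int id, Role role, std::string endpoint) {
        nodes.push_back({id, role, std::move(endpoint)});
    }
    void connect(int from, int to) { edges.emplace_back(from, to); }
};

int main() {
    Federation fed;

    // Two-level tree: one global server, two edge aggregators, four clients.
    fed.addNode(0, Role::Server, "global:5000");
    fed.addNode(1, Role::Server, "edge-a:5001");
    fed.addNode(2, Role::Server, "edge-b:5002");
    for (int c = 3; c < 7; ++c)
        fed.addNode(c, Role::Client, "client-" + std::to_string(c) + ":6000");

    // Clients report to their edge aggregator, aggregators to the global server.
    fed.connect(3, 1); fed.connect(4, 1);
    fed.connect(5, 2); fed.connect(6, 2);
    fed.connect(1, 0); fed.connect(2, 0);

    for (auto [from, to] : fed.edges)
        std::printf("updates flow: node %d -> node %d\n", from, to);
    return 0;
}

The same edge-list representation can express the classic single-server star topology of FedAvg-style training, the hierarchical tree shown here, or fully peer-to-peer graphs, which is the kind of topological flexibility the abstract refers to.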