
Cost-Efficient Federated Learning for Edge Intelligence in Multi-Cell Networks

Published: 07 August 2024

Abstract

The proliferation of mobile devices with massive data and growing computing capacity has prompted the rise of edge artificial intelligence (Edge AI). Federated learning (FL), which trains models without revealing the raw data, is a promising distributed learning paradigm that caters to this trend. Nevertheless, the periodic communication required for model aggregation incurs unavoidable costs in training latency and energy consumption, especially in multi-cell edge networks. Thus motivated, we study the joint edge aggregation and association problem to achieve cost-efficient FL, where model aggregation across multiple cells takes place only at the network edge. After establishing the NP-hardness of the problem with its complex coupled variables, we transform it into a set function optimization problem and prove that the objective function is neither submodular nor supermodular. By decomposing the complex objective function, we construct a substitute function that is supermodular and has a bounded gap from the original. On this basis, we design a two-stage search-based algorithm with a theoretical performance guarantee. We further extend the solution to the case of flexible bandwidth allocation and design a decoupled resource allocation algorithm with reduced computational complexity. Finally, extensive simulations and field experiments on a testbed validate both the effectiveness and near-optimality of our proposed solution.
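The reduction to a set function optimization problem turns on whether the objective exhibits submodularity (diminishing returns) or supermodularity (increasing returns); the paper proves its objective has neither property, motivating the supermodular substitute function. As a brute-force illustration of what these properties mean (a toy coverage function over a hypothetical three-cell ground set, not the paper's actual objective), the following Python sketch checks both:

```python
from itertools import chain, combinations

def powerset(ground):
    """All subsets of the ground set, as frozensets."""
    items = list(ground)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))]

def marginal(f, S, e):
    """Marginal gain f(S + e) - f(S) of adding element e to set S."""
    return f(S | {e}) - f(S)

def is_submodular(f, ground):
    """Brute-force check of diminishing returns:
    f(S + e) - f(S) >= f(T + e) - f(T) for all S subset of T, e not in T."""
    for S in powerset(ground):
        for T in powerset(ground):
            if S <= T:
                for e in set(ground) - set(T):
                    if marginal(f, S, e) < marginal(f, T, e) - 1e-9:
                        return False
    return True

def is_supermodular(f, ground):
    """Supermodularity of f is submodularity of -f."""
    return is_submodular(lambda S: -f(S), ground)

# Toy example: each hypothetical "cell" covers a set of devices;
# f(S) counts the devices covered by the cells in S (a coverage function).
coverage_of = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}}
cover = lambda S: len(set().union(*(coverage_of[i] for i in S))) if S else 0

print(is_submodular(cover, {1, 2, 3}))    # True: coverage has diminishing returns
print(is_supermodular(cover, {1, 2, 3}))  # False
```

The exhaustive check is exponential in the ground set size, which is fine for toy verification but is exactly why the paper instead proves structural properties analytically and exploits supermodularity of a substitute function to obtain a search algorithm with a performance guarantee.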


Published In

IEEE/ACM Transactions on Networking  Volume 32, Issue 5
Oct. 2024
897 pages

Publisher

IEEE Press


Qualifiers

  • Research-article
