Research article (Open access)

FedConv: A Learning-on-Model Paradigm for Heterogeneous Federated Clients

Published: 04 June 2024

Abstract

Federated Learning (FL) facilitates collaborative training of a shared global model without exposing clients' private data. In practical FL systems, clients (e.g., edge servers, smartphones, and wearables) typically have disparate system resources. Conventional FL, however, adopts a one-size-fits-all solution, where a homogeneous large global model is transmitted to and trained on each client, resulting in an overwhelming workload for less capable clients and starvation for other clients. To address this issue, we propose FedConv, a client-friendly FL framework, which minimizes the computation and memory burden on resource-constrained clients by providing heterogeneous customized sub-models. FedConv features a novel learning-on-model paradigm that learns the parameters of the heterogeneous sub-models via convolutional compression. Unlike traditional compression methods, the compressed models in FedConv can be directly trained on clients without decompression. To aggregate the heterogeneous sub-models, we propose transposed convolutional dilation to convert them back to large models with a unified size while retaining personalized information from clients. The compression and dilation processes, transparent to clients, are optimized on the server leveraging a small public dataset. Extensive experiments on six datasets demonstrate that FedConv outperforms state-of-the-art FL systems in terms of model accuracy (by more than 35% on average), computation and communication overhead (with 33% and 25% reduction, respectively).
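The learning-on-model idea described above, shrinking a large weight tensor with a convolution and restoring it to a unified size with a transposed convolution, can be illustrated with a simplified 1-D sketch in plain NumPy. The function names, fixed kernel, stride, and sizes here are illustrative assumptions, not the paper's implementation (FedConv learns its compression filters on the server with a public dataset):

```python
import numpy as np

def conv_compress(params, kernel, stride):
    # Slide the filter over the flattened weights:
    # output length = (N - k) // stride + 1, so larger strides
    # yield smaller sub-models for weaker clients.
    k = len(kernel)
    n_out = (len(params) - k) // stride + 1
    return np.array([np.dot(params[i * stride:i * stride + k], kernel)
                     for i in range(n_out)])

def transposed_dilate(compressed, kernel, stride, out_len):
    # Transposed convolution: scatter each compressed weight back
    # across the filter's footprint, restoring the unified size
    # expected by server-side aggregation.
    out = np.zeros(out_len)
    k = len(kernel)
    for i, v in enumerate(compressed):
        out[i * stride:i * stride + k] += v * kernel
    return out

rng = np.random.default_rng(0)
full = rng.standard_normal(8)     # flattened "large global model" weights
kernel = np.array([0.6, 0.4])     # compression filter (hypothetical values)

small = conv_compress(full, kernel, stride=2)              # client sub-model
restored = transposed_dilate(small, kernel, stride=2, out_len=8)
print(small.shape, restored.shape)  # (4,) (8,)
```

The key property the sketch preserves is that the compressed vector is itself a trainable weight vector (no client-side decompression step), while the transposed pass maps heterogeneous sub-models back to one common shape for aggregation.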


Cited By

  • (2025) Energy-Efficient Federated Learning Through UAV Edge Under Location Uncertainties. IEEE Transactions on Network Science and Engineering 12(1), 223-236. DOI: 10.1109/TNSE.2024.3489554 (Jan 2025)
  • (2025) Adapter-guided knowledge transfer for heterogeneous federated learning. Journal of Systems Architecture, 103338. DOI: 10.1016/j.sysarc.2025.103338 (Jan 2025)
  • (2024) FDLoRa: Tackling Downlink-Uplink Asymmetry with Full-duplex LoRa Gateways. In Proceedings of the 22nd ACM Conference on Embedded Networked Sensor Systems, 281-294. DOI: 10.1145/3666025.3699338 (4 Nov 2024)
  • (2024) Effective Heterogeneous Federated Learning via Efficient Hypernetwork-based Weight Generation. In Proceedings of the 22nd ACM Conference on Embedded Networked Sensor Systems, 112-125. DOI: 10.1145/3666025.3699326 (4 Nov 2024)
  • (2024) LATTE: Layer Algorithm-aware Training Time Estimation for Heterogeneous Federated Learning. In Proceedings of the 30th Annual International Conference on Mobile Computing and Networking, 1470-1484. DOI: 10.1145/3636534.3690705 (4 Dec 2024)
  • (2024) Towards ISAC-Empowered mmWave Radars by Capturing Modulated Vibrations. IEEE Transactions on Mobile Computing 23(12), 13787-13803. DOI: 10.1109/TMC.2024.3443404 (Dec 2024)


      Published In

      MOBISYS '24: Proceedings of the 22nd Annual International Conference on Mobile Systems, Applications and Services
      June 2024
      778 pages
      ISBN:9798400705816
      DOI:10.1145/3643832


      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Author Tags

      1. federated learning
      2. model heterogeneity
      3. model compression


      Acceptance Rates

      Overall Acceptance Rate 274 of 1,679 submissions, 16%

Article Metrics

      • Downloads (last 12 months): 644
      • Downloads (last 6 weeks): 67
      Reflects downloads up to 01 Feb 2025.
