Article

Personalized Federated Domain-Incremental Learning Based on Adaptive Knowledge Matching

Published: 01 October 2024

Abstract

This paper focuses on Federated Domain-Incremental Learning (FDIL), in which each client continually learns incremental tasks whose domains shift from those of other clients. We propose pFedDIL, a novel personalized FDIL approach based on adaptive knowledge matching, which allows each client to select an appropriate incremental-task learning strategy according to the correlation between the new task and the knowledge accumulated from previous tasks. More specifically, when a new task arrives, each client first measures the task's local correlations with previous tasks. Based on these correlations, the client then either adopts a new initial model or warm-starts from a previous model with similar knowledge to train the new task, while simultaneously migrating knowledge from previous tasks. Furthermore, to identify the correlations between the new task and previous tasks at each client, we attach an auxiliary classifier to each target classification model and share partial parameters between the target classification model and its auxiliary classifier to condense the model parameters. We conduct extensive experiments on several datasets, and the results demonstrate that pFedDIL outperforms state-of-the-art methods by up to 14.35% in average accuracy over all tasks.
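To make the per-client decision described above concrete, the following is a minimal, self-contained Python sketch, not the authors' implementation. The confidence-based correlation measure, the threshold TAU, and the toy LinearModel stand-in are illustrative assumptions, and the correlation-weighted knowledge-migration step during training is only indicated by a comment.

    import numpy as np

    TAU = 0.5  # assumed threshold: warm-start from a previous model only above this correlation
    rng = np.random.default_rng(0)

    class LinearModel:
        """Toy stand-in for a client's target classifier. In the paper, an
        auxiliary classifier sharing partial parameters with the target model
        supplies the correlation signal; here one confidence score stands in."""
        def __init__(self, dim, classes, weights=None):
            self.W = weights if weights is not None else rng.normal(size=(dim, classes))

        def confidence(self, X):
            # Mean max-softmax probability on new-task samples, used as a
            # proxy for how well this model's knowledge matches the new task.
            logits = X @ self.W
            p = np.exp(logits - logits.max(axis=1, keepdims=True))
            p /= p.sum(axis=1, keepdims=True)
            return p.max(axis=1).mean()

    def handle_new_task(models, X_new, dim=8, classes=3):
        """models: dict task_id -> model learned on an earlier domain."""
        # 1. Correlation of the new task with each previous task.
        corrs = {tid: m.confidence(X_new) for tid, m in models.items()}
        # 2. Warm-start from the most similar previous model, or start fresh
        #    when the new domain matches no previous knowledge well enough.
        if corrs and max(corrs.values()) >= TAU:
            best = max(corrs, key=corrs.get)
            model = LinearModel(dim, classes, weights=models[best].W.copy())
        else:
            model = LinearModel(dim, classes)
        # 3. Training would then migrate knowledge from previous tasks,
        #    weighted by `corrs` (omitted in this sketch).
        return model, corrs

    # Toy usage: two previously learned tasks, one batch of new-domain features.
    prev = {0: LinearModel(8, 3), 1: LinearModel(8, 3)}
    model, corrs = handle_new_task(prev, rng.normal(size=(32, 8)))
    print(corrs)

Under these assumptions, the same correlation scores serve both decisions the abstract describes: choosing which model to start from and weighting how much knowledge to transfer from each previous task.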



Published In

Computer Vision – ECCV 2024: 18th European Conference, Milan, Italy, September 29–October 4, 2024, Proceedings, Part XLVI
Sep 2024, 560 pages
ISBN: 978-3-031-72951-5
DOI: 10.1007/978-3-031-72952-2
Editors: Aleš Leonardis, Elisa Ricci, Stefan Roth, Olga Russakovsky, Torsten Sattler, Gül Varol

Publisher

Springer-Verlag, Berlin, Heidelberg
