
Strike a Balance in Continual Panoptic Segmentation

Published: 17 November 2024

Abstract

This study explores the emerging area of continual panoptic segmentation, highlighting three key balances. First, we introduce past-class backtrace distillation to balance the stability of existing knowledge with the adaptability to new information. This technique retraces the features associated with past classes based on the final label assignment results, performing knowledge distillation targeting these specific features from the previous model while allowing other features to flexibly adapt to new information. Additionally, we introduce a class-proportional memory strategy, which aligns the class distribution in the replay sample set with that of the historical training data. This strategy maintains a balanced class representation during replay, enhancing the utility of the limited-capacity replay sample set in recalling prior classes. Moreover, recognizing that replay samples are annotated only for the classes of their original step, we devise balanced anti-misguidance losses, which combat the impact of incomplete annotations without incurring classification bias. Building upon these innovations, we present a new method named Balanced Continual Panoptic Segmentation (BalConpas). Our evaluation on the challenging ADE20K dataset demonstrates its superior performance compared to existing state-of-the-art methods. The official code is available at https://github.com/jinpeng0528/BalConpas.
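As a rough illustration of the past-class backtrace distillation described above, the sketch below assumes a query-based segmenter (e.g., a Mask2Former-style model) that yields per-query features and a final label assignment; the function name, tensor shapes, and the MSE distance are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def backtrace_distillation_loss(curr_feats, prev_feats, assigned_labels, past_classes):
    """Hypothetical sketch: distill only the query features that the final label
    assignment traces back to previously learned classes.

    curr_feats, prev_feats: (num_queries, dim) features from the current model
        and the frozen previous-step model.
    assigned_labels: (num_queries,) class id given to each query by the final
        label assignment.
    past_classes: set of class ids learned in earlier steps.
    """
    mask = torch.tensor([int(c) in past_classes for c in assigned_labels],
                        dtype=torch.bool, device=curr_feats.device)
    if not mask.any():
        # No query was traced back to a past class for this image.
        return curr_feats.new_zeros(())
    # Anchor past-class features to the previous model; all other queries are
    # left untouched so they can adapt to the classes of the current step.
    return F.mse_loss(curr_feats[mask], prev_feats[mask].detach())
```

In a continual step, such a term would be added to the regular segmentation losses with a weighting factor, so that stability is enforced only on the features tied to old classes while the rest of the representation remains free to learn new ones.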


Published In

Computer Vision – ECCV 2024: 18th European Conference, Milan, Italy, September 29–October 4, 2024, Proceedings, Part XLI
September 2024, 585 pages
ISBN: 978-3-031-72939-3
DOI: 10.1007/978-3-031-72940-9
Editors: Aleš Leonardis, Elisa Ricci, Stefan Roth, Olga Russakovsky, Torsten Sattler, Gül Varol

Publisher

Springer-Verlag, Berlin, Heidelberg

Author Tags

  1. Continual panoptic segmentation
  2. Continual semantic segmentation
  3. Continual learning
