
Dual Supervised Contrastive Learning Based on Perturbation Uncertainty for Online Class Incremental Learning

Published: 11 December 2024

Abstract

To keep learning from a data stream with a changing distribution, continual learning has attracted considerable interest recently. Among its various settings, online class-incremental learning (OCIL) is more realistic and challenging because each sample can be used only once. By employing a buffer to store a few old samples, replay-based methods have achieved great success and currently dominate this area. Owing to the single-pass property of OCIL, retrieving high-value samples from memory is crucial. In most current works, the logits from the last fully connected (FC) layer are used to estimate the value of samples. However, the imbalance between the number of samples of old and new classes severely biases the FC layer, which results in inaccurate estimates. Moreover, this bias also brings about abrupt feature changes. To address this problem, we propose a dual supervised contrastive learning method based on perturbation uncertainty. Specifically, we retrieve samples that have not been learned adequately, as measured by perturbation uncertainty. Retraining such samples helps the model learn robust features. We then combine two types of supervised contrastive loss to replace the cross-entropy loss, which further enhances feature robustness and alleviates abrupt feature changes. Extensive experiments on three popular datasets demonstrate that our method surpasses several recently published works.
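The abstract does not spell out implementation details, so the following is only a rough PyTorch sketch of the two ideas it names: scoring buffered samples by how much their predictions change under small perturbations, and training with a supervised contrastive (SupCon) loss instead of cross-entropy. The Gaussian input noise, the variance-based uncertainty score, and the single SupCon term are illustrative assumptions, not the paper's exact formulation (the paper combines two supervised contrastive losses).

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def perturbation_uncertainty(model, x, n_views=4, sigma=0.05):
    """Score each buffered sample by how much its predictions vary under
    small random perturbations (higher score = less adequately learned).
    Gaussian noise is an assumed, illustrative perturbation."""
    probs = []
    for _ in range(n_views):
        x_pert = x + sigma * torch.randn_like(x)       # perturbed view of the input
        probs.append(F.softmax(model(x_pert), dim=1))  # class probabilities per view
    probs = torch.stack(probs)                         # (n_views, B, num_classes)
    return probs.var(dim=0).mean(dim=1)                # (B,) mean variance across classes


def retrieve_uncertain(model, buf_x, buf_y, k):
    """Pick the k most uncertain buffered samples for rehearsal."""
    scores = perturbation_uncertainty(model, buf_x)
    idx = torch.topk(scores, k).indices
    return buf_x[idx], buf_y[idx]


def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss (Khosla et al., 2020) over L2-normalized
    embeddings: pull same-class samples together, push other classes apart."""
    features = F.normalize(features, dim=1)
    sim = features @ features.T / temperature           # (N, N) cosine similarities
    n = sim.size(0)
    not_self = ~torch.eye(n, dtype=torch.bool, device=sim.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & not_self
    sim = sim.masked_fill(~not_self, -1e9)               # exclude self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(1)
    valid = pos_counts > 0                               # anchors with at least one positive
    mean_log_prob_pos = (log_prob * pos_mask.float()).sum(1)[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()
```

In an OCIL loop, one would retrieve the most uncertain buffer samples, concatenate them with the incoming mini-batch, and apply the contrastive loss to the projected embeddings of that joint batch; the buffer itself is typically maintained with reservoir sampling.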


Published In

Pattern Recognition: 27th International Conference, ICPR 2024, Kolkata, India, December 1–5, 2024, Proceedings, Part IX
Dec 2024
508 pages
ISBN: 978-3-031-78188-9
DOI: 10.1007/978-3-031-78189-6

Publisher

Springer-Verlag

Berlin, Heidelberg


Author Tags

  1. Online class-incremental learning
  2. Perturbation uncertainty retrieval
  3. Supervised contrastive learning
