Continual Learning for Time Series Forecasting: A First Survey
Abstract
1. Introduction
2. Continual Learning Principles
- Task Incremental Learning, where the model “sequentially learns to solve a number of distinct tasks” (a task is associated with achieving a goal within a context; if either or both change, we switch tasks);
- Class Incremental Learning, where the model must “discriminate between incrementally observed classes”;
- Domain Incremental Learning, where the model must “learn to solve the same problem in different contexts”. A minimal sketch contrasting these three scenarios follows this list.
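To make the distinction concrete, here is a minimal, hypothetical Python sketch of what differs between the three scenarios at test time. The names (`Scenario`, `Experience`, `label_space`) and the fixed `classes_per_task` are illustrative assumptions, not constructs from the cited works.

```python
# Illustrative sketch of the three incremental learning scenarios.
from dataclasses import dataclass
from enum import Enum, auto

class Scenario(Enum):
    TASK_IL = auto()    # task identity is given at test time
    DOMAIN_IL = auto()  # same label space, shifting input distribution
    CLASS_IL = auto()   # new classes appear; task identity is not given

@dataclass
class Experience:
    x: list        # inputs for this step of the stream
    y: list        # targets
    task_id: int   # only visible to the model under TASK_IL

def label_space(scenario: Scenario, task_id: int, classes_per_task: int = 2) -> range:
    """Which outputs the model must discriminate at test time."""
    if scenario is Scenario.TASK_IL:
        # A separate output head per task: only this task's classes.
        return range(task_id * classes_per_task, (task_id + 1) * classes_per_task)
    if scenario is Scenario.DOMAIN_IL:
        # The same classes in every context; only the inputs change.
        return range(classes_per_task)
    # CLASS_IL: the model must discriminate all classes observed so far.
    return range((task_id + 1) * classes_per_task)
```

For example, `label_space(Scenario.CLASS_IL, task_id=2)` returns `range(0, 6)`: under class-incremental learning the model must discriminate all six classes seen so far, whereas task-incremental learning would restrict the decision to `range(4, 6)`.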
3. Continual Learning for Time Series
3.1. Main Principles and Overview
- Incremental learning of the data domain, which refers to the situation where the underlying data generation process changes over time due to the nonstationarity of the data stream. In other words, the distribution of the data for a fixed objective varies over time.
- Incremental learning of the target domain, which refers to the situation where the output of the model varies over time. This is the case when the number or the properties of the prediction targets change (prediction of new variables, as in multi-output networks; a prediction horizon that changes over time; etc.). A sketch illustrating both settings follows this list.
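As a concrete illustration, here is a minimal PyTorch sketch of both settings for forecasting: a crude input-drift check for the data domain, and an output layer that grows when the target domain changes. All names (`ExpandableForecaster`, `expand_targets`, `input_drift_score`) are illustrative assumptions, not methods from the surveyed papers.

```python
# Hypothetical sketch of the two incremental settings for forecasting.
import torch
import torch.nn as nn

class ExpandableForecaster(nn.Module):
    """Linear forecaster whose output layer can grow when the target
    domain changes (new variables or a longer prediction horizon)."""
    def __init__(self, n_inputs: int, n_outputs: int):
        super().__init__()
        self.head = nn.Linear(n_inputs, n_outputs)

    def forward(self, x):
        return self.head(x)

    def expand_targets(self, n_new_outputs: int):
        # Target-domain increment: keep learned weights, add new output rows.
        # (The optimizer must be rebuilt afterwards: the parameter set changed.)
        old = self.head
        new = nn.Linear(old.in_features, old.out_features + n_new_outputs)
        with torch.no_grad():
            new.weight[: old.out_features] = old.weight
            new.bias[: old.out_features] = old.bias
        self.head = new

def input_drift_score(recent: torch.Tensor, reference: torch.Tensor) -> float:
    # Data-domain increment: a crude mean-shift statistic on the inputs;
    # real systems would use a dedicated concept drift detector.
    return (recent.mean(0) - reference.mean(0)).abs().max().item()
```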
3.2. Continual Learning Analysis
3.3. Trends and Challenges
- The first focuses on model evolution and on the crossover of different common continual learning strategies, combining them to mitigate catastrophic forgetting more effectively [35] (a sketch of such a hybrid training step is given after this list).
- The second distinguishes itself on a structural level by designing models in a biologically inspired way, aiming to mimic the behavior of living organisms [36].
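As an illustration of such a crossover, here is a minimal, hypothetical training step that combines two classic strategies surveyed above: experience replay (after Rolnick et al.) and an EWC-style quadratic penalty (after Kirkpatrick et al.). The names (`ReplayBuffer`, `ewc_penalty`, `hybrid_step`) and the specific combination are a sketch, not the method of [35]; `fisher` and `anchor` are assumed to be precomputed parameter-importance estimates and a snapshot of the parameters after earlier tasks.

```python
# Sketch of a hybrid continual learning step: replay + EWC-style penalty.
import random
import torch
import torch.nn.functional as F

class ReplayBuffer:
    def __init__(self, capacity: int = 1000):
        self.data, self.capacity = [], capacity

    def add(self, x, y):
        # Reservoir-like storage: evict a random example when full.
        if len(self.data) >= self.capacity:
            self.data.pop(random.randrange(len(self.data)))
        self.data.append((x, y))

    def sample(self, k: int):
        batch = random.sample(self.data, min(k, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def ewc_penalty(model, fisher, anchor):
    # Quadratic penalty keeping parameters near their post-task values,
    # weighted by an importance estimate (diagonal Fisher information).
    loss = 0.0
    for name, p in model.named_parameters():
        loss = loss + (fisher[name] * (p - anchor[name]) ** 2).sum()
    return loss

def hybrid_step(model, optimizer, x, y, buffer, fisher, anchor, lam=10.0):
    optimizer.zero_grad()
    loss = F.mse_loss(model(x), y)            # loss on the current batch
    if buffer.data:                           # plus loss on replayed data
        rx, ry = buffer.sample(len(x))
        loss = loss + F.mse_loss(model(rx), ry)
    loss = loss + lam * ewc_penalty(model, fisher, anchor)
    loss.backward()
    optimizer.step()
    buffer.add(x[0].detach(), y[0].detach())  # store one example for replay
    return loss.item()
```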
4. Concluding Remarks
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Rusu, A.A.; Rabinowitz, N.C.; Desjardins, G.; Soyer, H.; Kirkpatrick, J.; Kavukcuoglu, K.; Pascanu, R.; Hadsell, R. Progressive Neural Networks. arXiv 2016, arXiv:1606.04671. [Google Scholar]
- van de Ven, G.M.; Tuytelaars, T.; Tolias, A.S. Three types of incremental learning. Nat. Mach. Intell. 2022, 4, 1185–1197. [Google Scholar] [CrossRef] [PubMed]
- Parisi, G.I.; Kemker, R.; Part, J.L.; Kanan, C.; Wermter, S. Continual lifelong learning with neural networks: A review. Neural Netw. 2019, 113, 54–71. [Google Scholar] [CrossRef] [PubMed]
- Gama, J.; Žliobaitė, I.; Bifet, A.; Pechenizkiy, M.; Bouchachia, A. A survey on concept drift adaptation. ACM Comput. Surv. 2014, 46, 1–37. [Google Scholar] [CrossRef]
- Mundt, M.; Hong, Y.; Pliushch, I.; Ramesh, V. A wholistic view of continual learning with deep neural networks: Forgotten lessons and the bridge to active and open world learning. Neural Netw. 2023, 160, 306–336. [Google Scholar] [CrossRef] [PubMed]
- Baker, M.M.; New, A.; Aguilar-Simon, M.; Al-Halah, Z.; Arnold, S.M.; Ben-Iwhiwhu, E.; Brna, A.P.; Brooks, E.; Brown, R.C.; Daniels, Z.; et al. A domain-agnostic approach for characterization of lifelong learning systems. Neural Netw. 2023, 160, 274–296. [Google Scholar] [CrossRef] [PubMed]
- Ao, S.I.; Fayek, H. Continual Deep Learning for Time Series Modeling. Sensors 2023, 23, 7167. [Google Scholar] [CrossRef]
- Gunasekara, N.; Pfahringer, B.; Gomes, H.M.; Bifet, A. Survey on Online Streaming Continual Learning. IJCAI Int. Jt. Conf. Artif. Intell. 2023, 2023, 6628–6637. [Google Scholar]
- Lange, M.D.; Aljundi, R.; Masana, M.; Parisot, S.; Jia, X.; Leonardis, A.; Slabaugh, G.; Tuytelaars, T. A Continual Learning Survey: Defying Forgetting in Classification Tasks. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 3366–3385. [Google Scholar] [PubMed]
- Hurtado, J.; Salvati, D.; Semola, R.; Bosio, M.; Lomonaco, V. Continual learning for predictive maintenance: Overview and challenges. Intell. Syst. Appl. 2023, 19, 200251. [Google Scholar] [CrossRef]
- French, R.M. Catastrophic forgetting in connectionist networks. Trends Cogn. Sci. 1999, 3, 128–135. [Google Scholar] [CrossRef] [PubMed]
- Kirkpatrick, J.; Pascanu, R.; Rabinowitz, N.; Veness, J.; Desjardins, G.; Rusu, A.A.; Milan, K.; Quan, J.; Ramalho, T.; Grabska-Barwinska, A.; et al. Overcoming catastrophic forgetting in neural networks. Proc. Natl. Acad. Sci. USA 2017, 114, 3521–3526. [Google Scholar] [CrossRef] [PubMed]
- Shin, H.; Lee, J.K.; Kim, J.; Kim, J. Continual Learning with Deep Generative Replay. In Proceedings of the NIPS 2017, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
- Li, Z.; Hoiem, D. Learning without Forgetting. In Proceedings of the ECCV 2016, Amsterdam, The Netherlands, 11–14 October 2016. [Google Scholar]
- Zenke, F.; Poole, B.; Ganguli, S. Continual Learning Through Synaptic Intelligence. PMLR 2017, 70, 3987–3995. [Google Scholar]
- Buzzega, P.; Boschini, M.; Porrello, A.; Abati, D.; Calderara, S. Dark Experience for General Continual Learning: A Strong, Simple Baseline. NeurIPS 2020, 33, 15920–15930. [Google Scholar]
- Lopez-Paz, D.; Ranzato, M. Gradient Episodic Memory for Continual Learning. In Proceedings of the NIPS 2017, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
- Aljundi, R.; Babiloni, F.; Elhoseiny, M.; Rohrbach, M.; Tuytelaars, T. Memory Aware Synapses: Learning what (not) to forget. In Proceedings of the ECCV 2018, Munich, Germany, 8–14 September 2018. [Google Scholar]
- Chaudhry, A.; Ranzato, M.; Rohrbach, M.; Elhoseiny, M. Efficient Lifelong Learning with A-GEM. In Proceedings of the ICLR 2019, New Orleans, LA, USA, 6–9 May 2019. [Google Scholar]
- Febrinanto, F.G.; Xia, F.; Moore, K.; Thapa, C.; Aggarwal, C. Graph Lifelong Learning: A Survey. IEEE Comput. Intell. Mag. 2023, 18, 32–51. [Google Scholar] [CrossRef]
- Schwarz, J.; Luketina, J.; Czarnecki, W.M.; Grabska-Barwinska, A.; Teh, Y.W.; Pascanu, R.; Hadsell, R. Progress & Compress: A scalable framework for continual learning. In Proceedings of the International Conference on Machine Learning 2018, Stockholm, Sweden, 10–15 July 2018. [Google Scholar]
- Chen, X.; Wang, J.; Xie, K. TrafficStream: A Streaming Traffic Flow Forecasting Framework Based on Graph Neural Networks and Continual Learning. In Proceedings of the IJCAI 2021, Montreal, QC, Canada, 19–26 August 2021. [Google Scholar]
- Sokar, G.; Mocanu, D.C.; Pechenizkiy, M. Self-Attention Meta-Learner for Continual Learning. In Proceedings of the AAMAS 2021, Virtual Event, 3–7 May 2021. [Google Scholar]
- Chen, S.; Ge, W.; Liang, X.; Jin, X.; Du, Z. Lifelong learning with deep conditional generative replay for dynamic and adaptive modeling towards net zero emissions target in building energy system. Appl. Energy 2024, 353, 122189. [Google Scholar] [CrossRef]
- Rolnick, D.; Ahuja, A.; Schwarz, J.; Lillicrap, T.P.; Wayne, G. Experience Replay for Continual Learning. In Proceedings of the NeurIPS 2019, Vancouver, BC, Canada, 8–14 December 2019. [Google Scholar]
- Smith, J.S.; Tian, J.; Halbe, S.; Hsu, Y.C.; Kira, Z. A Closer Look at Rehearsal-Free Continual Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), New Orleans, LA, USA, 19–20 June 2022; pp. 2410–2420. [Google Scholar]
- Hao, H.; Chu, Z.; Zhu, S.; Jiang, G.; Wang, Y.; Jiang, C.; Zhang, J.Y.; Jiang, W.; Xue, S.; Zhou, J. Continual Learning in Predictive Autoscaling. In Proceedings of the CIKM 2023, Birmingham, UK, 21–25 October 2023; pp. 4616–4622. [Google Scholar]
- Bagus, B.; Gepperth, A. An Investigation of Replay-based Approaches for Continual Learning. In Proceedings of the IJCNN 2021, Shenzhen, China, 18–22 July 2021. [Google Scholar]
- Grote-Ramm, W.; Lanuschny, D.; Lorenzen, F.; Brito, M.O.; Schönig, F. Continual learning for neural regression networks to cope with concept drift in industrial processes using convex optimisation. Eng. Appl. Artif. Intell. 2023, 120, 105927. [Google Scholar] [CrossRef]
- Yoon, J.; Yang, E.; Lee, J.; Hwang, S.J. Lifelong Learning with Dynamically Expandable Networks. In Proceedings of the ICLR 2018, Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
- Mirzadeh, S.I.; Chaudhry, A.; Yin, D.; Nguyen, T.; Pascanu, R.; Gorur, D.; Farajtabar, M. Architecture Matters in Continual Learning. arXiv 2022, arXiv:2202.00275. [Google Scholar]
- Hung, S.C.Y.; Tu, C.H.; Wu, C.E.; Chen, C.H.; Chan, Y.M.; Chen, C.S. Compacting, Picking and Growing for Unforgetting Continual Learning. In Proceedings of the NeurIPS 2019, Vancouver, BC, Canada, 8–14 December 2019. [Google Scholar]
- Aich, A. Elastic Weight Consolidation (EWC): Nuts and Bolts. arXiv 2021, arXiv:2105.04093. [Google Scholar]
- Maschler, B.; Pham, T.T.H.; Weyrich, M. Regularization-based Continual Learning for Anomaly Detection in Discrete Manufacturing. Procedia CIRP 2021, 104, 452–457. [Google Scholar] [CrossRef]
- He, Y.; Sick, B. CLeaR: An Adaptive Continual Learning Framework for Regression Tasks. AI Perspect. 2021, 3, 2. [Google Scholar] [CrossRef]
- Pham, Q.; Liu, C.; Sahoo, D.; Hoi, S.C.H. Learning Fast and Slow for Online Time Series Forecasting. In Proceedings of the ICLR 2023, Kigali, Rwanda, 1–5 May 2023. [Google Scholar]
- Aljundi, R.; Lin, M.; Goujaud, B.; Bengio, Y. Gradient based sample selection for online continual learning. In Proceedings of the NeurIPS 2019, Vancouver, BC, Canada, 8–14 December 2019. [Google Scholar]
- He, Y. Adaptive Explainable Continual Learning Framework for Regression Problems with Focus on Power Forecasts. arXiv 2021, arXiv:2108.10781. [Google Scholar]
- Li, A.; Zhang, C.; Xiao, F.; Fan, C.; Deng, Y.; Wang, D. Large-scale comparison and demonstration of continual learning for adaptive data-driven building energy prediction. Appl. Energy 2023, 347, 121481. [Google Scholar] [CrossRef]
- Zhou, Y.; Tian, X.; Zhang, C.; Zhao, Y.; Li, T. Elastic weight consolidation-based adaptive neural networks for dynamic building energy load prediction modeling. Energy Build. 2022, 265, 112098. [Google Scholar] [CrossRef]
- He, Y.; Henze, J.; Sick, B. Continuous learning of deep neural networks to improve forecasts for regional energy markets. IFAC-PapersOnLine 2020, 53, 12175–12182. [Google Scholar] [CrossRef]
- Schillaci, G.; Schmidt, U.; Miranda, L. Prediction Error-Driven Memory Consolidation for Continual Learning: On the Case of Adaptive Greenhouse Models. KI-Kunstl. Intell. 2021, 35, 71–80. [Google Scholar] [CrossRef]
- Gupta, V.; Narwariya, J.; Malhotra, P.; Vig, L.; Shroff, G. Continual Learning for Multivariate Time Series Tasks with Variable Input Dimensions. In Proceedings of the ICDM 2022, Orlando, FL, USA, 28 November–1 December 2022. [Google Scholar]
- Farooq, J.; Bazaz, M.A. A deep learning algorithm for modeling and forecasting of COVID-19 in five worst affected states of India. Alex. Eng. J. 2021, 60, 587–596. [Google Scholar] [CrossRef]
- Wang, H.; Li, M.; Yue, X. IncLSTM: Incremental Ensemble LSTM Model towards Time Series Data. Comput. Electr. Eng. 2021, 92, 107156. [Google Scholar] [CrossRef]
- Fekri, M.N.; Patel, H.; Grolinger, K.; Sharma, V. Deep learning for load forecasting with smart meter data: Online Adaptive Recurrent Neural Network. Appl. Energy 2021, 282, 116177. [Google Scholar] [CrossRef]
- Aljundi, R.; Kelchtermans, K.; Tuytelaars, T. Task-Free Continual Learning. In Proceedings of the NeurIPS 2018, Montreal, QC, Canada, 3–8 December 2018. [Google Scholar]
- Read, J.; Žliobaitė, I. Learning from Data Streams: An Overview and Update. SSRN Electron. J. 2022. [Google Scholar] [CrossRef]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention Is All You Need. In Proceedings of the NIPS 2017, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
- Gao, S.; Lei, Y. A new approach for crude oil price prediction based on stream learning. Geosci. Front. 2017, 8, 183–187. [Google Scholar] [CrossRef]
- Zhang, Y.F.; Wen, Q.; Wang, X.; Chen, W.; Sun, L.; Zhang, Z.; Wang, L.; Jin, R.; Tan, T. OneNet: Enhancing Time Series Forecasting Models under Concept Drift by Online Ensembling. In Proceedings of the NeurIPS 2023, New Orleans, LA, USA, 10–16 December 2023. [Google Scholar]
- Melgar-García, L.; Gutiérrez-Avilés, D.; Rubio-Escudero, C.; Troncoso, A. Identifying novelties and anomalies for incremental learning in streaming time series forecasting. Eng. Appl. Artif. Intell. 2023, 123, 106326. [Google Scholar] [CrossRef]
- Zhao, L.; Kong, S.; Shen, Y. DoubleAdapt: A Meta-learning Approach to Incremental Learning for Stock Trend Forecasting. In Proceedings of the KDD 2023, Long Beach, CA, USA, 6–10 August 2023; pp. 3492–3503. [Google Scholar]
- Sarmas, E.; Strompolas, S.; Marinakis, V.; Santori, F.; Bucarelli, M.A.; Doukas, H. An Incremental Learning Framework for Photovoltaic Production and Load Forecasting in Energy Microgrids. Electronics 2022, 11, 3962. [Google Scholar] [CrossRef]
- Puah, B.K.; Chong, L.W.; Wong, Y.W.; Begam, K.M.; Khan, N.; Juman, M.A.; Rajkumar, R.K. A regression unsupervised incremental learning algorithm for solar irradiance prediction. Renew. Energy 2021, 164, 908–925. [Google Scholar] [CrossRef]
- Prabhu, A.; Hammoud, H.A.A.K.; Dokania, P.; Torr, P.H.S.; Lim, S.N.; Ghanem, B.; Bibi, A. Computationally Budgeted Continual Learning: What Does Matter? In Proceedings of the CVPR 2023, Vancouver, BC, Canada, 18–22 June 2023. [Google Scholar]
- Rojat, T.; Puget, R.; Filliat, D.; Ser, J.D.; Gelin, R.; Díaz-Rodríguez, N. Explainable Artificial Intelligence (XAI) on TimeSeries Data: A Survey. arXiv 2021, arXiv:2104.00950. [Google Scholar]
- Haque, S.; Eberhart, Z.; Bansal, A.; McMillan, C. Semantic Similarity Metrics for Evaluating Source Code Summarization. IEEE Comput. Soc. 2022, 2022, 36–47. [Google Scholar] [CrossRef]
- Liu, B.; Xiao, X.; Stone, P. A Lifelong Learning Approach to Mobile Robot Navigation. IEEE Robot. Autom. Lett. 2020, 6, 1090–1096. [Google Scholar] [CrossRef]
- Pal, G.; Hong, X.; Wang, Z.; Wu, H.; Li, G.; Atkinson, K. Lifelong Machine Learning and root cause analysis for large-scale cancer patient data. J. Big Data 2019, 6, 108. [Google Scholar] [CrossRef]
Fitting Error MSE (×10⁻²)

| | Instance A | Instance B | Instance C | Baseline |
|---|---|---|---|---|
| Mean | 5.138 | 5.442 | 2.829 | 3.190 |
Forgetting Ratio

| | Instance B (AE) | Instance C (AE) | Instance B | Instance C |
|---|---|---|---|---|
| Mean | 1.402 | 1.171 | 3.550 | 1.161 |
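For readers who want to compute such metrics themselves, here is a minimal Python sketch. The exact formula behind the forgetting ratio above is not restated here, so the version below reflects one common reading (error on earlier data after further training, divided by the error before); treat it as an assumption, not the authors' definition.

```python
# Hypothetical sketch of the two reported metrics; the forgetting-ratio
# definition is an assumed common reading, not the authors' formula.
import numpy as np

def fitting_error_mse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean squared fitting error (the table reports values in units of 1e-2)."""
    return float(np.mean((y_true - y_pred) ** 2))

def forgetting_ratio(mse_old_after: float, mse_old_before: float) -> float:
    """Values above 1 mean performance on earlier data degraded
    after the model continued training on new data."""
    return mse_old_after / mse_old_before
```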