
Deep Learning for Time Series Forecasting: Tutorial and Literature Survey

Published: 07 December 2022

Abstract

Deep learning-based forecasting methods have become the methods of choice in many time series prediction or forecasting applications, often outperforming other approaches. Consequently, over the last years these methods have become ubiquitous in large-scale industrial forecasting applications and have consistently ranked among the best entries in forecasting competitions (e.g., M4 and M5). This practical success has further increased academic interest in understanding and improving deep forecasting methods. In this article we provide an introduction and overview of the field: we present important building blocks for deep forecasting in some depth; using these building blocks, we then survey the breadth of the recent deep forecasting literature.
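To make the setting concrete, the following is a minimal sketch (not taken from the article) of the kind of model the survey covers: a single global autoregressive neural network, in the spirit of DeepAR-style recurrent forecasters, trained across a set of related series to predict the next value from past values. The model name, hyperparameters, and synthetic data below are invented for illustration; the sketch assumes only PyTorch.

    import math
    import torch
    import torch.nn as nn

    # A single "global" model shared across all series: one LSTM that reads
    # past values and predicts the next value at every time step.
    class TinyForecaster(nn.Module):
        def __init__(self, hidden_size: int = 32):
            super().__init__()
            self.rnn = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
            self.proj = nn.Linear(hidden_size, 1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, time, 1) past values -> (batch, time, 1) one-step-ahead predictions
            out, _ = self.rnn(x)
            return self.proj(out)

    # Synthetic stand-in for a dataset of related series: noisy sine waves.
    torch.manual_seed(0)
    t = torch.linspace(0, 8 * math.pi, 200)
    series = torch.sin(t).unsqueeze(0).repeat(16, 1).unsqueeze(-1)  # (16, 200, 1)
    series = series + 0.05 * torch.randn_like(series)

    # Teacher-forced training: predict x[t+1] from x[..t] with squared-error loss
    # (a probabilistic variant would instead emit distribution parameters).
    model = TinyForecaster()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(200):
        pred = model(series[:, :-1])
        loss = nn.functional.mse_loss(pred, series[:, 1:])
        opt.zero_grad()
        loss.backward()
        opt.step()

At prediction time such a model is unrolled autoregressively, feeding each prediction back in as the next input; the building blocks the survey discusses (output distributions, attention, state space layers, and so on) slot into this same skeleton.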




Published In

ACM Computing Surveys, Volume 55, Issue 6
June 2023, 781 pages
ISSN: 0360-0300
EISSN: 1557-7341
DOI: 10.1145/3567471

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 07 December 2022
Online AM: 19 May 2022
Accepted: 23 April 2022
Revised: 06 March 2022
Received: 23 July 2021
Published in CSUR Volume 55, Issue 6


Author Tags

  1. Time series
  2. forecasting
  3. neural networks

Qualifiers

  • Survey
  • Refereed

Article Metrics

  • Downloads (last 12 months): 4,140
  • Downloads (last 6 weeks): 480
Reflects downloads up to 16 Nov 2024

Cited By
  • (2025) An Advisor Neural Network framework using LSTM-based Informative Stock Analysis. Expert Systems with Applications 259 (125299). DOI: 10.1016/j.eswa.2024.125299. Online publication date: Jan-2025
  • (2025) Multi-swarm multi-tasking ensemble learning for multi-energy demand prediction. Applied Energy 377 (124553). DOI: 10.1016/j.apenergy.2024.124553. Online publication date: Jan-2025
  • (2024) Experimental Comparison of Two Main Paradigms for Day-Ahead Average Carbon Intensity Forecasting in Power Grids: A Case Study in Australia. Sustainability 16:19 (8580). DOI: 10.3390/su16198580. Online publication date: 2-Oct-2024
  • (2024) Predicting Industrial Electricity Consumption Using Industry–Geography Relationships: A Graph-Based Machine Learning Approach. Energies 17:17 (4296). DOI: 10.3390/en17174296. Online publication date: 28-Aug-2024
  • (2024) Traffic-Aware Intelligent Association and Task Offloading for Multi-Access Edge Computing. Electronics 13:16 (3130). DOI: 10.3390/electronics13163130. Online publication date: 7-Aug-2024
  • (2024) Machine Learning Models Informed by Connected Mixture Components for Short- and Medium-Term Time Series Forecasting. AI 5:4 (1955-1976). DOI: 10.3390/ai5040097. Online publication date: 22-Oct-2024
  • (2024) Evaluating Time-Series Prediction of Temperature, Relative Humidity, and CO2 in the Greenhouse with Transformer-Based and RNN-Based Models. Agronomy 14:3 (417). DOI: 10.3390/agronomy14030417. Online publication date: 21-Feb-2024
  • (2024) A Flexible Forecasting Stack. Proceedings of the VLDB Endowment 17:12 (3883-3892). DOI: 10.14778/3685800.3685813. Online publication date: 1-Aug-2024
  • (2024) Forecasting Algorithms for Intelligent Resource Scaling: An Experimental Analysis. Proceedings of the 2024 ACM Symposium on Cloud Computing (126-143). DOI: 10.1145/3698038.3698564. Online publication date: 20-Nov-2024
  • (2024) HiMTM: Hierarchical Multi-Scale Masked Time Series Modeling with Self-Distillation for Long-Term Forecasting. Proceedings of the 33rd ACM International Conference on Information and Knowledge Management (3352-3362). DOI: 10.1145/3627673.3679741. Online publication date: 21-Oct-2024
