Authors
Rui Wang, Danielle Maddix, Christos Faloutsos, Yuyang Wang, Rose Yu
Publication date
2020
Conference
NeurIPS Workshop on Interpretable Inductive Biases and Physically Structured Learning
URL
https://inductive-biases.github.io/papers/27.pdf
Description
The ability to generalize to unseen data is at the core of machine learning. A traditional view of generalization refers to unseen data from the same distribution. Dynamical systems challenge the conventional wisdom of generalization in learning systems due to distribution shifts from non-stationarity and chaos. In this paper, we investigate the generalization ability of dynamical systems in the forecasting setting. Through systematic experiments, we show deep learning models fail to generalize to shifted distributions in the data and parameter domains of dynamical systems. We find a sharp contrast between the performance of deep learning models on interpolation (same distribution) and extrapolation (shifted distribution). Our findings can help explain the inferior performance of deep learning models compared to physics-based models on the COVID-19 forecasting task.