Time Series Forecasting Final Report
6. Experiments and Results

The range of our data, from 2010 to 2020, consists of 2828 total data points. Prior to input to the models, all data points are normalized between -1 and 1. The final output prediction results on the test set are shown in Figure 9, with a testing set RMSE of 137.24.
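As a rough illustration of this preprocessing and evaluation step, the sketch below min-max scales a price series into [-1, 1] and computes test-set RMSE. It is a minimal NumPy sketch under assumed names (scale_to_unit_range, rmse, a placeholder 80/20 split); the synthetic prices array merely stands in for the 2828-point series and is not our actual pipeline.

import numpy as np

def scale_to_unit_range(x, lo, hi):
    # Min-max scale values into [-1, 1] using a fixed data range.
    return 2.0 * (x - lo) / (hi - lo) - 1.0

def rmse(y_true, y_pred):
    # Root-mean-square error.
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Placeholder series standing in for the 2828-point adjusted close prices.
prices = np.random.default_rng(0).uniform(1000.0, 3500.0, size=2828)
train, test = prices[:2262], prices[2262:]        # placeholder 80/20 split
lo, hi = train.min(), train.max()                 # fit the scaling on the training split only
train_scaled = scale_to_unit_range(train, lo, hi)
test_scaled = scale_to_unit_range(test, lo, hi)

# A model would emit predictions in the scaled space; invert the scaling before scoring.
pred_scaled = test_scaled                         # stand-in for model output
pred = (pred_scaled + 1.0) / 2.0 * (hi - lo) + lo
print("test RMSE:", rmse(test, pred))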
Figure 8. Training MSE loss plot over 100 epochs.

Figure 11. Output prediction (blue) and ground truth (red) of the S&P 500 adjusted close price test set using the classical transformer model.

Figure 12. Trained with 3 qubits and 20 epochs.
7. Limitations and Future Works

With this project we faced a few main challenges. One of our primary constraints was time. Initially, we expected training to take a trivial amount of time; however, we realized this was far from the case. Epochs took a long time to train, especially in simulation. Our Quantum LSTM, even with its relatively small number of qubits, took on the order of minutes. This only got worse as the number of qubits increased. With our transformer architecture, epochs were on the order of hours.
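One way to see why simulation slowed as we added qubits: a statevector simulator stores 2^n complex amplitudes, so memory and per-gate work grow exponentially with the qubit count. The sketch below only prints this growth for a few illustrative sizes; the numbers are not from our experiments.

# Illustrative only: statevector size for n qubits, assuming complex128
# amplitudes (16 bytes each). Per-gate cost also scales with this size.
for n_qubits in (3, 6, 10, 20, 26):
    amplitudes = 2 ** n_qubits
    memory_mb = amplitudes * 16 / 1e6
    print(f"{n_qubits:2d} qubits -> {amplitudes:>10,} amplitudes (~{memory_mb:,.2f} MB)")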
We initially thought that switching from local simulators to AWS simulators or actual QPUs would help our training times, but we were met with a different challenge here: our code wouldn't compile on AWS! For this project we intended to use PennyLane's AWS plug-in, which promised