Vector Autoregression (VAR) - Comprehensive Guide With Examples in Python
8. Split the Series into Training and Testing Data
9. Check for Stationarity and Make the Time Series Stationary
10. How to Select the Order (P) of VAR model
11. Train the VAR Model of Selected Order(p)
12. Check for Serial Correlation of Residuals (Errors) using Durbin Watson Statistic
13. How to Forecast VAR model using statsmodels
14. Invert the transformation to get the real forecast
15. Plot of Forecast vs Actuals
16. Evaluate the Forecasts
17. Conclusion
It is considered an autoregressive model because each variable (time series) is modeled as a function of its past values; that is, the predictors are nothing but the lags (time-delayed values) of the series.

Ok, so how is VAR different from other autoregressive models like AR, ARMA or ARIMA?

The primary difference is that those models are uni-directional: the predictors influence the Y and not vice-versa. Vector Autoregression (VAR), in contrast, is bi-directional. That is, the variables influence each other.

We will go into more detail in the next section.
In this article you will gain a clear understanding of:

Intuition behind the VAR model formula
How to check the bi-directional relationship using Granger causality
Procedure to build a VAR model in Python
How to determine the right order of the VAR model
Interpreting the results of a VAR model
How to generate forecasts on the original scale of the time series
2. Intuition behind VAR Model Formula

A typical AR(p) model, which uses only the series' own lags, looks like this:

Y_{t} = α + β_1*Y_{t-1} + β_2*Y_{t-2} + ... + β_p*Y_{t-p} + ε_{t}
where α is the intercept (a constant) and β_1, β_2 up to β_p are the coefficients of the lags of Y up to order p.

Order 'p' means that up to p lags of Y are used as the predictors in the equation. The ε_{t} is the error term, which is considered white noise.
Alright. So, what does a VAR model's formula look like?
In the VAR model, each variable is modeled as a linear combination of past values of
itself and the past values of other variables in the system. Since you have multiple time
series that influence each other, it is modeled as a system of equations with one
equation per variable (time series).
That is, if you have 5 time series that influence each other, you will have a system of 5 equations.
Let's suppose you have two variables (time series) Y1 and Y2, and you need to forecast the values of these variables at time t.
To calculate Y1(t), VAR will use the past values of both Y1 and Y2. Likewise, to compute Y2(t), the past values of both Y1 and Y2 will be used.
For example, the system of equations for a VAR(1) model with two time series (variables
`Y1` and `Y2`) is as follows:
Y_{1,t} = α_1 + β_{11,1}*Y_{1,t-1} + β_{12,1}*Y_{2,t-1} + ε_{1,t}
Y_{2,t} = α_2 + β_{21,1}*Y_{1,t-1} + β_{22,1}*Y_{2,t-1} + ε_{2,t}
where Y_{1,t-1} and Y_{2,t-1} are the first lags of time series Y1 and Y2 respectively.
Likewise, the second order VAR(2) model for two variables would include up to two
lags for each variable (Y1 and Y2).
Y_{1,t} = α_1 + β_{11,1}*Y_{1,t-1} + β_{12,1}*Y_{2,t-1} + β_{11,2}*Y_{1,t-2} + β_{12,2}*Y_{2,t-2} + ε_{1,t}
Y_{2,t} = α_2 + β_{21,1}*Y_{1,t-1} + β_{22,1}*Y_{2,t-1} + β_{21,2}*Y_{1,t-2} + β_{22,2}*Y_{2,t-2} + ε_{2,t}
Can you imagine what a second order VAR(2) model with three variables (Y1, Y2 and Y3)
would look like?
Y_{1,t} = α_1 + β_{11,1}*Y_{1,t-1} + β_{12,1}*Y_{2,t-1} + β_{13,1}*Y_{3,t-1} + β_{11,2}*Y_{1,t-2} + β_{12,2}*Y_{2,t-2} + β_{13,2}*Y_{3,t-2} + ε_{1,t}
Y_{2,t} = α_2 + β_{21,1}*Y_{1,t-1} + β_{22,1}*Y_{2,t-1} + β_{23,1}*Y_{3,t-1} + β_{21,2}*Y_{1,t-2} + β_{22,2}*Y_{2,t-2} + β_{23,2}*Y_{3,t-2} + ε_{2,t}
Y_{3,t} = α_3 + β_{31,1}*Y_{1,t-1} + β_{32,1}*Y_{2,t-1} + β_{33,1}*Y_{3,t-1} + β_{31,2}*Y_{1,t-2} + β_{32,2}*Y_{2,t-2} + β_{33,2}*Y_{3,t-2} + ε_{3,t}
As you increase the number of time series (variables) in the model, the system of equations becomes larger.
# Import the basic libraries and Statsmodels
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import adfuller
from statsmodels.tools.eval_measures import rmse, aic
[Figure: multi-dimensional time series illustration]
[Figure: plot of the eight macroeconomic series (rgnp, pgnp, ulc, gdfco, gdf, gdfim, gdfcf, gdfce), one panel per series]
Each of the series has a fairly similar trend pattern over the years, except for gdfce and gdfim, where a different pattern is noticed starting in 1980. A sketch of how such a panel of plots can be produced is shown below.
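A minimal sketch, assuming the eight series are read into a pandas DataFrame df from a date-indexed CSV (the file name macro_series.csv here is hypothetical; any CSV holding these eight columns will do):

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical file: a date-indexed CSV holding the eight macro series
# (rgnp, pgnp, ulc, gdfco, gdf, gdfim, gdfcf, gdfce)
df = pd.read_csv('macro_series.csv', parse_dates=['date'], index_col='date')

# Plot each series in its own panel
fig, axes = plt.subplots(nrows=4, ncols=2, dpi=120, figsize=(10, 6))
for col, ax in zip(df.columns, axes.flatten()):
    ax.plot(df[col], linewidth=1)
    ax.set_title(col, fontsize=9)
plt.tight_layout()
plt.show()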
Alright, the next step in the analysis is to check for causality amongst these series. Granger's causality test and the cointegration test can help us with that.
Using Granger’s Causality Test, it’s possible to test this relationship before even building
the model.
Granger's causality tests the null hypothesis that the coefficients of the past values in the regression equation are zero. In simpler terms: the past values of one time series (X) do not cause the other series (Y). So, if the p-value obtained from the test is less than the significance level of 0.05, you can safely reject the null hypothesis.
The below code implements the Granger’s Causality test for all possible combinations
of the time series in a given dataframe and stores the p-values of each combination in
the output matrix.
from statsmodels.tsa.stattools import grangercausalitytests
maxlag = 12
test = 'ssr_chi2test'
def grangers_causation_matrix(data, variables, test='ssr_chi2test', verbose=False):
    """Check Granger Causality of all possible combinations of the Time series.
    The rows are the response variable (Y), columns are predictors (X). The values
    in the table are the p-values; p-values below the significance level (0.05)
    imply the null hypothesis (X does not cause Y) can be rejected."""
    df = pd.DataFrame(np.zeros((len(variables), len(variables))), columns=variables, index=variables)
    for c in df.columns:
        for r in df.index:
            test_result = grangercausalitytests(data[[r, c]], maxlag=maxlag, verbose=False)
            p_values = [round(test_result[i+1][0][test][1], 4) for i in range(maxlag)]
            if verbose: print(f'Y = {r}, X = {c}, P Values = {p_values}')
            df.loc[r, c] = np.min(p_values)
    df.columns = [var + '_x' for var in variables]
    df.index = [var + '_y' for var in variables]
    return df
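For example, assuming df holds the eight series, the matrix of p-values can be computed with:

grangers_causation_matrix(df, variables=df.columns)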
[Table: Granger's causality test results matrix of p-values; rows are the response series (suffix _y), columns are the predictor series (suffix _x)]
The rows are the response (Y) series and the columns are the predictor (X) series.
For example, if you take the value 0.0003 in (row 1, column 2), it refers to the p-value of
pgnp_x causing rgnp_y . Whereas, the 0.000 in (row 2, column 1) refers to the p-value of
rgnp_y causing pgnp_x .
If a given p-value is < significance level (0.05), then, the corresponding X series
(column) causes the Y (row).
For example, the p-value of 0.0003 at (row 1, column 2) represents the p-value of the Granger's causality test for pgnp_x causing rgnp_y, which is less than the significance level of 0.05. So, you can reject the null hypothesis and conclude that pgnp_x causes rgnp_y.
Looking at the p-values in the above table, you can observe that nearly all the variables (time series) in the system cause each other in both directions.
This makes this system of multiple time series a good candidate for forecasting with VAR models.
7. Cointegration Test
Cointegration test helps to establish the presence of a statistically significant connection
between two or more time series.
To understand that, you first need to know what the 'order of integration' (d) is: the number of times a non-stationary series needs to be differenced to make it stationary.

Now, when you have two or more time series, and there exists a linear combination of them that has an order of integration (d) less than that of the individual series, then the collection of series is said to be cointegrated.
Ok?
When two or more time series are cointegrated, it means they have a long run,
statistically significant relationship.
This is a basic premise on which Vector Autoregression (VAR) models are based. So, it's fairly common to run a cointegration test before starting to build VAR models.
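The Johansen cointegration test is available in statsmodels as coint_johansen. A sketch of a small wrapper (assuming df holds the eight series) that compares each trace statistic against its 95% critical value:

from statsmodels.tsa.vector_ar.vecm import coint_johansen

def cointegration_test(df, alpha=0.05):
    """Perform Johansen's cointegration test and print a summary."""
    out = coint_johansen(df, -1, 5)          # no deterministic term, 5 lagged differences
    d = {'0.90': 0, '0.95': 1, '0.99': 2}
    traces = out.lr1                         # trace statistics
    cvts = out.cvt[:, d[str(1 - alpha)]]     # critical values at the chosen level
    def adjust(val, length=6): return str(val).ljust(length)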
    # Summary
    print('Name   ::  Test Stat > C(95%)    =>   Signif  \n', '--'*20)
    for col, trace, cvt in zip(df.columns, traces, cvts):
        print(adjust(col), ':: ', adjust(round(trace, 2), 9), ">", adjust(cvt, 8), ' =>  ', trace > cvt)

cointegration_test(df)
Results:
8. Split the Series into Training and Testing Data

The VAR model will be fitted on df_train and then used to forecast the next 4 observations. These forecasts will be compared against the actuals present in the test data. To do the comparison, we will use multiple forecast accuracy metrics, as seen later in this article.
nobs = 4
df_train, df_test = df[0:-nobs], df[-nobs:]
# Check size
print(df_train.shape) # (119, 8)
print(df_test.shape) # (4, 8)
9. Check for Stationarity and Make the Time Series Stationary

Just to refresh, a stationary time series is one whose characteristics, like mean and variance, do not change over time.

There is a suite of tests, called unit-root tests, to check this; the Augmented Dickey-Fuller (ADF) test used below is the most popular.
Since differencing reduces the length of the series by 1, and since all the time series have to be of the same length, you need to difference all the series in the system if you choose to difference at all.
Got it?
First, we implement a function, adfuller_test(), that writes out the results of the ADF test for any given time series, and then apply this function to each series one by one.
def adfuller_test(series, signif=0.05, name='', verbose=False):
    """Perform the ADF test for stationarity of the given series and print a report."""
    r = adfuller(series, autolag='AIC')
    output = {'test_statistic': round(r[0], 4), 'pvalue': round(r[1], 4), 'n_lags': round(r[2], 4), 'n_obs': r[3]}
    p_value = output['pvalue']
    def adjust(val, length=6): return str(val).ljust(length)

    # Print Summary
    print(f' Augmented Dickey-Fuller Test on "{name}"', "\n ", '-'*47)
    print(f' Null Hypothesis: Data has unit root. Non-Stationary.')
    print(f' Significance Level = {signif}')
    print(f' Test Statistic = {output["test_statistic"]}')
    print(f' No. Lags Chosen = {output["n_lags"]}')

    for key, val in r[4].items():
        print(f' Critical value {adjust(key)} = {round(val, 3)}')

    if p_value <= signif:
        print(f" => P-Value = {p_value}. Rejecting Null Hypothesis.")
        print(f" => Series is Stationary.")
    else:
        print(f" => P-Value = {p_value}. Weak evidence to reject the Null Hypothesis.")
        print(f" => Series is Non-Stationary.")
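The test can then be applied to each column of the training data, for example:

# Run the ADF test on each series in the (undifferenced) training data
for name, column in df_train.items():
    adfuller_test(column, name=name)
    print('\n')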
Results:
Augmented Dickey-Fuller Test on "rgnp"
-----------------------------------------------
Null Hypothesis: Data has unit root. Non-Stationary.
Significance Level = 0.05
Test Statistic = 0.5428
No. Lags Chosen = 2
Critical value 1% = -3.488
Critical value 5% = -2.887
Critical value 10% = -2.58
=> P-Value = 0.9861. Weak evidence to reject the Null Hypothesis.
=> Series is Non-Stationary.
The ADF test confirms none of the time series is stationary. Let’s difference all of them
once and check again.
# 1st difference
df_differenced = df_train.diff().dropna()
After the first difference, Real Wages (Manufacturing) is still not stationary: its test statistic falls between the 5% and 10% critical values. Besides, all of the series in the VAR model should have the same number of observations. That is, either proceed with the 1st differenced series, or difference all the series one more time.
# Second Differencing
df_differenced = df_differenced.diff().dropna()
Results:
Augmented Dickey-Fuller Test on "rgnp"
-----------------------------------------------
Null Hypothesis: Data has unit root. Non-Stationary.
Significance Level = 0.05
Test Statistic = -9.0123
No. Lags Chosen = 2
Critical value 1% = -3.489
Critical value 5% = -2.887
Critical value 10% = -2.58
=> P-Value = 0.0. Rejecting Null Hypothesis.
=> Series is Stationary.
10. How to Select the Order (P) of VAR model

To select the right order of the VAR model, we iteratively fit increasing orders of the VAR model and pick the order that gives the model with the lowest AIC.
Though the usual practice is to look at the AIC, you can also check other best fit
comparison estimates of BIC , FPE and HQIC .
model = VAR(df_differenced)
for i in [1,2,3,4,5,6,7,8,9]:
    result = model.fit(i)
    print('Lag Order =', i)
    print('AIC : ', result.aic)
    print('BIC : ', result.bic)
    print('FPE : ', result.fpe)
    print('HQIC: ', result.hqic, '\n')
Results:
Lag Order = 1
AIC : -1.3679402315450664
BIC : 0.3411847146588838
FPE : 0.2552682517347198
HQIC: -0.6741331335699554
Lag Order = 2
AIC : -1.621237394447824
BIC : 1.6249432095295848
FPE : 0.2011349437137139
HQIC: -0.3036288826795923
Lag Order = 3
AIC : -1.7658008387012791
BIC : 3.0345473163767833
FPE : 0.18125103746164364
HQIC: 0.18239143783963296
Lag Order = 4
AIC : -2.000735164470318
BIC : 4.3712151376540875
FPE : 0.15556966521481097
HQIC: 0.5849359332771069
Lag Order = 5
AIC : -1.9619535608363954
BIC : 5.9993645622420955
FPE : 0.18692794389114886
HQIC: 1.268206331178333
Lag Order = 6
AIC : -2.3303386524829053
BIC : 7.2384526890885805
FPE : 0.16380374017443664
HQIC: 1.5514371669548073
Lag Order = 7
AIC : -2.592331352347129
BIC : 8.602387254937796
FPE : 0.1823868583715414
HQIC: 1.9483069621146551
Lag Order = 8
AIC : -3.317261976458205
BIC : 9.52219581032303
FPE : 0.15573163248209088
HQIC: 1.8896071386220985
Lag Order = 9
AIC : -4.804763125958631
BIC : 9.698613139231597
FPE : 0.08421466682671915
HQIC: 1.0758291640834052
In the above output, the AIC drops to a local minimum at lag 4, increases at lag 5, and then drops continuously again at higher lags.

Let's go with the lag 4 model.
An alternate method to choose the order(p) of the VAR models is to use the
model.select_order(maxlags) method.
The selected order(p) is the order that gives the lowest ‘AIC’, ‘BIC’, ‘FPE’ and ‘HQIC’
scores.
x = model.select_order(maxlags=12)
x.summary()
[Table: VAR order selection summary showing AIC, BIC, FPE and HQIC for each lag, with the selected minimum marked]
According to FPE and HQIC, the optimal lag is observed at a lag order of 3.

I, however, don't have an explanation for why the AIC and BIC values computed explicitly with result.aic differ from those reported by model.select_order(). Since the explicitly computed AIC is lowest at lag 4, I choose the selected order as 4.

11. Train the VAR Model of Selected Order(p)
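The chosen order can now be fitted; a minimal sketch (the fitted model is kept in model_fitted, which the forecasting step later in the article relies on):

# Fit the VAR model of order 4 on the twice-differenced training data
model_fitted = model.fit(4)
model_fitted.summary()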
Results:
Summary of Regression Results
==================================
Model: VAR
Method: OLS
Date: Sat, 18, May, 2019
Time: 11:35:15
--------------------------------------------------------------------
No. of Equations: 8.00000 BIC: 4.37122
Nobs: 113.000 HQIC: 0.584936
Log likelihood: -905.679 FPE: 0.155570
AIC: -2.00074 Det(Omega_mle): 0.0200322
--------------------------------------------------------------------
Results for equation rgnp
===========================================================================
coefficient std. error t-stat prob
---------------------------------------------------------------------------
const 2.430021 2.677505 0.908 0.364
L1.rgnp -0.750066 0.159023 -4.717 0.000
L1.pgnp -0.095621 4.938865 -0.019 0.985
L1.ulc -6.213996 4.637452 -1.340 0.180
L1.gdfco -7.414768 10.184884 -0.728 0.467
L1.gdf -24.864063 20.071245 -1.239 0.215
L1.gdfim 1.082913 4.309034 0.251 0.802
L1.gdfcf 16.327252 5.892522 2.771 0.006
L1.gdfce 0.910522 2.476361 0.368 0.713
L2.rgnp -0.568178 0.163971 -3.465 0.001
L2.pgnp -1.156201 4.931931 -0.234 0.815
L2.ulc -11.157111 5.381825 -2.073 0.038
L2.gdfco 3.012518 12.928317 0.233 0.816
L2.gdf -18.143523 24.090598 -0.753 0.451
L2.gdfim -4.438115 4.410654 -1.006 0.314
L2.gdfcf 13.468228 7.279772 1.850 0.064
L2.gdfce 5.130419 2.805310 1.829 0.067
L3.rgnp -0.514985 0.152724 -3.372 0.001
L3.pgnp -11.483607 5.392037 -2.130 0.033
L3.ulc -14.195308 5.188718 -2.736 0.006
L3.gdfco -10.154967 13.105508 -0.775 0.438
L3.gdf -15.438858 21.610822 -0.714 0.475
L3.gdfim -6.405290 4.292790 -1.492 0.136
L3.gdfcf 9.217402 7.081652 1.302 0.193
L3.gdfce 5.279941 2.833925 1.863 0.062
L4.rgnp -0.166878 0.138786 -1.202 0.229
L4.pgnp 5.329900 5.795837 0.920 0.358
L4.ulc -4.834548 5.259608 -0.919 0.358
L4.gdfco 10.841602 10.526530 1.030 0.303
L4.gdf -17.651510 18.746673 -0.942 0.346
L4.gdfim -1.971233 4.029415 -0.489 0.625
L4.gdfcf 0.617824 5.842684 0.106 0.916
L4.gdfce -2.977187 2.594251 -1.148 0.251
===========================================================================
12. Check for Serial Correlation of Residuals (Errors) using Durbin Watson Statistic

If there is any correlation left in the residuals, then there is some pattern in the time series that is still left to be explained by the model. In that case, the typical course of action is to either increase the order of the model, induce more predictors into the system, or look for a different algorithm to model the time series.

So, checking for serial correlation is to ensure that the model sufficiently explains the variances and patterns in the time series.

A common way of checking for serial correlation of errors is the Durbin-Watson statistic:
DW = Σ_{t=2..T} (ε_t − ε_{t-1})² / Σ_{t=1..T} ε_t²
The value of this statistic can vary between 0 and 4. The closer it is to 2, the less the serial correlation; values closer to 0 imply positive serial correlation, and values closer to 4 imply negative serial correlation.
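A minimal sketch of computing the statistic on the residuals of each equation, using the durbin_watson helper from statsmodels and the model_fitted object from above:

from statsmodels.stats.stattools import durbin_watson

# One Durbin-Watson value per equation (i.e. per series) in the VAR
out = durbin_watson(model_fitted.resid)
for col, val in zip(df.columns, out):
    print(col, ':', round(val, 2))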
Results:
rgnp : 2.09
pgnp : 2.02
ulc : 2.17
gdfco : 2.05
gdf : 2.25
gdfim : 1.99
gdfcf : 2.2
gdfce : 2.17
The serial correlation seems quite alright. Let’s proceed with the forecast.
13. How to Forecast VAR model using statsmodels

In order to forecast, the VAR model expects up to the lag-order number of observations from the past data. This is because the terms in the VAR model are essentially the lags of the various time series in the dataset, so you need to provide it as many of the previous values as indicated by the lag order used by the model.

Let's forecast.
# Get the lag order and use the last lag_order observations (on the differenced scale) as forecast input
lag_order = model_fitted.k_ar
forecast_input = df_differenced.values[-lag_order:]

# Forecast the next nobs steps
fc = model_fitted.forecast(y=forecast_input, steps=nobs)
df_forecast = pd.DataFrame(fc, index=df.index[-nobs:], columns=df.columns + '_2d')
df_forecast
[Table: raw forecasts on the twice-differenced scale, columns suffixed with '_2d']
14. Invert the transformation to get the real forecast

The forecasts are generated, but they are on the scale of the (differenced) training data used by the model. To bring them back to the original scale, you need to de-difference them as many times as you differenced the original input data, which in this case is twice.
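A sketch of a helper that undoes the double differencing (the '_2d' suffix matches the forecast columns created above; the helper name invert_transformation and the '_1d'/'_forecast' suffixes are conventions assumed here): the twice-differenced forecasts are cumulatively summed and anchored to the last observed first difference, then that result is cumulatively summed and anchored to the last observed level.

def invert_transformation(df_train, df_forecast, second_diff=False):
    """Revert the differencing to bring the forecast back to the original scale."""
    df_fc = df_forecast.copy()
    for col in df_train.columns:
        # Roll back the 2nd difference
        if second_diff:
            df_fc[str(col)+'_1d'] = (df_train[col].iloc[-1] - df_train[col].iloc[-2]) + df_fc[str(col)+'_2d'].cumsum()
        # Roll back the 1st difference
        df_fc[str(col)+'_forecast'] = df_train[col].iloc[-1] + df_fc[str(col)+'_1d'].cumsum()
    return df_fc

df_results = invert_transformation(df_train, df_forecast, second_diff=True)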
[Table: forecasts inverted back to the original scale, columns suffixed with '_forecast']
15. Plot of Forecast vs Actuals

The forecasts are back to the original scale. Let's plot the forecasts against the actuals from the test data.
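A minimal sketch of the comparison plot, assuming df_results from the inversion step above and df_test from the train/test split:

# Plot the forecast of each series against the actuals from the test set
fig, axes = plt.subplots(nrows=4, ncols=2, dpi=120, figsize=(10, 10))
for col, ax in zip(df.columns, axes.flatten()):
    df_results[col + '_forecast'].plot(legend=True, ax=ax)
    df_test[col].plot(legend=True, ax=ax)
    ax.set_title(col + ': Forecast vs Actuals', fontsize=9)
plt.tight_layout()
plt.show()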
[Figure: forecast vs actuals for each of the eight series]
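To put numbers on how close the forecasts are, the accuracy metrics mentioned earlier can now be computed; a minimal sketch using the rmse helper imported at the top of the article:

# RMSE of each forecast column against the corresponding actuals in the test set
for col in df.columns:
    error = rmse(df_results[col + '_forecast'].values, df_test[col].values)
    print(col, 'forecast RMSE:', round(error, 4))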
17. Conclusion
In this article we covered VAR from scratch: the intuition behind it, interpreting the formula, causality tests, finding the optimal order of the VAR model, preparing the data for forecasting, building the model, checking for serial autocorrelation, inverting the transformation to get the actual forecasts, plotting the results and computing the accuracy metrics.
Hope you enjoyed reading this as much as I did writing it. I will see you in the next
one.