
Difference Equations

Weijie Chen
Department of Political and Economic Studies
University of Helsinki

16 Aug, 2011

Contents

1 First Order Difference Equation
  1.1 Iterative Method
  1.2 General Method
    1.2.1 One Example

2 Second-Order Difference Equation
  2.1 Complementary Solution
  2.2 Particular Solutions
  2.3 One Example

3 pth-Order Difference Equation
  3.1 Iterative Method
  3.2 Analytical Solution
    3.2.1 Distinct Real Eigenvalues
    3.2.2 Distinct Complex Eigenvalues
    3.2.3 Repeated Eigenvalues

4 Lag Operator
  4.1 pth-order difference equation with lag operator

Abstract

Difference equations are close cousins of differential equations; as you will soon see, the two share a remarkable similarity, so if you have already studied differential equations you have a nice head start. Conventionally we study differential equations first and difference equations second. This is not merely a matter of chronology: difference equations have a naturally stronger bond with computational science, and we often need to turn a differential equation into its discrete version, a difference equation, before it can be simulated, for instance in MATLAB. Difference equations are also the first lesson of any advanced time series course, where they largely overshadow their econometric sibling, the lag operator. That is because a difference equation can be expressed in matrix form, which tremendously increases its power; you will later see its strikingly powerful applications in state-space models and the Kalman filter.

1 First Order Difference Equation


Difference equations emerge because we need to deal with discrete-time models, which are more realistic in econometrics: time series datasets are inherently discrete. First we discuss the iterative method, which is the topic of the first chapter of almost every time series textbook. In the preface of Enders (2004)[5]: 'In my experience, this material (difference equations) and a knowledge of regression analysis is sufficient to bring students to the point where they are able to read the professional journals and embark on a serious applied study.' Although I do not fully agree with his optimism, I do concur that difference equations are the key to all further study of time series and advanced macroeconomic theory.
A simple difference equation is a dynamic model describing a time path of evolution, and it highly resembles a differential equation. Its solution should be a function of t, completely free of terms such as y_{t+1} − y_t, so that a time instant t exactly locates the value of the variable.

1.1 Iterative Method


This method is also called recursive substitution: if you know y_0, you know y_1, and the rest of the y_t can be expressed by a recursive relation. We start with a simple example which appears in every time series textbook,

y_t = a y_{t−1} + w_t   (1)

where

w = [w_0, w_1, ..., w_t]'

w is a deterministic vector; later we will drop this assumption when we study stochastic difference equations, but for the time being, for simplicity, we assume w is nonstochastic.
Notice that

y_0 = a y_{−1} + w_0
y_1 = a y_0 + w_1
y_2 = a y_1 + w_2
...
y_t = a y_{t−1} + w_t

Assume that we know y_{−1} and w. Thus, substitute y_0 into y_1,

y_1 = a(a y_{−1} + w_0) + w_1 = a^2 y_{−1} + a w_0 + w_1

Now we have y_1; following this procedure, substitute y_1 into y_2,

y_2 = a(a^2 y_{−1} + a w_0 + w_1) + w_2 = a^3 y_{−1} + a^2 w_0 + a w_1 + w_2

Again, substitute y_2 into y_3,

y_3 = a(a^3 y_{−1} + a^2 w_0 + a w_1 + w_2) + w_3
    = a^4 y_{−1} + a^3 w_0 + a^2 w_1 + a w_2 + w_3

I trust you are able to perceive the pattern of the dynamics,

y_t = a^{t+1} y_{−1} + Σ_{i=0}^{t} a^i w_{t−i}
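As a quick numerical sanity check of this closed form, here is a small Python sketch; the values a = 0.5, y_{−1} = 2 and the w sequence are made up for illustration, not taken from the text.

```python
# Iterate y_t = a*y_{t-1} + w_t from a known y_{-1}, then compare each
# value with the closed form y_t = a^(t+1)*y_{-1} + sum_{i=0}^t a^i*w_{t-i}.

def iterate(a, y_init, w):
    """Run the recursion for t = 0, 1, ..., len(w)-1; y_init plays y_{-1}."""
    y, path = y_init, []
    for wt in w:
        y = a * y + wt
        path.append(y)
    return path

def closed_form(a, y_init, w, t):
    return a ** (t + 1) * y_init + sum(a ** i * w[t - i] for i in range(t + 1))

a, y_init = 0.5, 2.0
w = [1.0, 0.0, 3.0, -1.0, 2.0]       # deterministic forcing terms w_0..w_4
path = iterate(a, y_init, w)
for t in range(len(w)):
    assert abs(path[t] - closed_form(a, y_init, w, t)) < 1e-12
```

Both routes give the same path, which is all the closed form claims.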

Actually what we have done is not the simplest version of a difference equation; to show you the truly simplest version, let us run through the next example at lightning speed. A homogeneous difference equation (if this is the first time you have heard the term, refer to my notes on differential equations[2]),

m y_t − n y_{t−1} = 0

To rearrange it in a familiar manner,

y_t = (n/m) y_{t−1}

As the recursive method implies, if we have an initial value y_0 (for mathematical convenience we assume y_0 rather than y_{−1} this time, but the essence is identical), then

y_1 = (n/m) y_0
y_2 = (n/m) y_1
...
y_t = (n/m) y_{t−1}

Substitute y_1 into y_2,

y_2 = (n/m)(n/m) y_0 = (n/m)^2 y_0

Then substitute y_2 into y_3,

y_3 = (n/m)(n/m)^2 y_0 = (n/m)^3 y_0

Well, the pattern is clear, and the solution is

y_t = (n/m)^t y_0

If we denote (n/m)^t as b^t and y_0 as A, the solution becomes A b^t. This is the counterpart of the solution A e^{rt} of a first-order differential equation; both play the fundamental role in solving their respective equations. Notice that the solution is a function of t: here only t is a variable, while y_0 is an initial value, a known constant.

1.2 General Method


As you have no doubt guessed, the general solution of a difference equation is also the counterpart of its differential version,

y = y_p + y_c

where y_p is the particular solution and y_c is the complementary solution (all this terminology is fully explained in my notes on differential equations[2]).

The process will become clear with the help of an example,

y_{t+1} + a y_t = c

We follow the standard procedure: first find the complementary solution, which is the solution of the corresponding homogeneous difference equation,

y_{t+1} + a y_t = 0

With the knowledge of the last section we can try a solution of the form A b^t. When I say 'try' it does not mean we guess randomly; we do not need to perform the crude iterative method every time, since we can reuse the solution of the previously solved equation. So

y_{t+1} + a y_t = A b^{t+1} + a A b^t = 0

Cancel the common factor A b^t,

b + a = 0
b = −a

If b = −a, this solution works, so the complementary solution is

y_c = A b^t = A(−a)^t

Next, find the particular solution of

y_{t+1} + a y_t = c.

The most intriguing part comes now: to make the equation above hold, we may choose any y_t that satisfies it. Perhaps to your surprise, we can even choose y_t = k for −∞ < t < ∞, a constant time series taking the same value k every period. So

k + ak = c
k = y_p = c/(1 + a)

You can easily see that for this solution to work we need a ≠ −1. The natural question is then: what if a = −1? The expression above is not defined, so we have to change the form of the trial solution. Here we use y_t = kt, similar to the trick used in differential equations,

k(t + 1) + akt = c
k = c/(t + 1 + at)

Because a = −1, the denominator is t + 1 − t = 1, so

k = c

But this time y_p = kt, so y_p = ct; it is still a function of t.

Adding y_p and y_c together,

y_t = A(−a)^t + c/(1 + a)   if a ≠ −1   (2)
y_t = A(−a)^t + ct          if a = −1   (3)

Last, you can of course solve for A if you have an initial condition, say y_0. If a ≠ −1,

y_0 = A + c/(1 + a),   so   A = y_0 − c/(1 + a)

If a = −1,

y_0 = A(−a)^0 + 0 = A

Then just plug A back into the corresponding solution (2) or (3).

1.2.1 One Example


Solve
y_{t+1} − 2y_t = 2

First solve the complementary equation,

y_{t+1} − 2y_t = 0

Using A b^t,

A b^{t+1} − 2A b^t = 0
b − 2 = 0
b = 2

So the complementary solution is

y_c = A(2)^t

To find a particular solution, let y_t = k for −∞ < t < ∞,

k − 2k = 2
k = y_p = −2

So the general solution is

y_t = y_p + y_c = −2 + 2^t A

If we are given an initial value, y_0 = 4,

4 = −2 + 2^0 A
A = 6

then the definite solution is

y_t = −2 + 6 · 2^t
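A one-line Python check that the definite solution really satisfies both the equation and the initial condition (a sketch using only the numbers above):

```python
# Definite solution of y_{t+1} - 2*y_t = 2 with y_0 = 4.
def y(t):
    return -2 + 6 * 2 ** t

assert y(0) == 4                        # initial condition
for t in range(10):
    assert y(t + 1) - 2 * y(t) == 2     # the difference equation itself
```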

2 Second-Order Difference Equation


A second-order difference equation is an equation involving Δ²y_t, where

Δ(Δy_t) = Δ(y_{t+1} − y_t)
Δ²y_t = Δy_{t+1} − Δy_t
Δ²y_t = (y_{t+2} − y_{t+1}) − (y_{t+1} − y_t)
Δ²y_t = y_{t+2} − 2y_{t+1} + y_t

We define a linear second-order difference equation as

y_{t+2} + a_1 y_{t+1} + a_2 y_t = c

We had better not attack it with the iterative method; trust me, that is more confusing than illuminating, so we come straight to the general solution. As we have studied so far, the general solution is the sum of the complementary solution and a particular solution; we discuss each in turn.

2.1 Complementary Solution


As with differential equations, we have several situations to discuss. First we solve the complementary equation (some textbooks call it the reduced equation), which is simply the homogeneous version of the original equation,

y_{t+2} + a_1 y_{t+1} + a_2 y_t = 0

We learned from first-order difference equations that a homogeneous equation has a solution of the form y_t = A b^t, so we try it,

A b^{t+2} + a_1 A b^{t+1} + a_2 A b^t = 0

(b^2 + a_1 b + a_2) A b^t = 0

We assume that A b^t is nonzero, so

b^2 + a_1 b + a_2 = 0

This is our characteristic equation, the same as in differential equations; high school math can sometimes bring us a little fun,

b = (−a_1 ± √(a_1^2 − 4a_2)) / 2

You are probably more familiar with it in textbook form: for a quadratic equation ax^2 + bx + c = 0,

x = (−b ± √(b^2 − 4ac)) / (2a)

As with second-order differential equations, we now have three cases to discuss.
Case I: a_1^2 − 4a_2 > 0. We have two distinct real roots, and the complementary solution is

y_c = A_1 b_1^t + A_2 b_2^t

Note that A_1 b_1^t and A_2 b_2^t are linearly independent; we cannot use only one of them to represent the complementary solution, because we need two constants A_1 and A_2.

Case II: a_1^2 − 4a_2 = 0. Only one real root is available to us,

b = b_1 = b_2 = −a_1/2

Then the complementary solution collapses,

y_c = A_1 b^t + A_2 b^t = (A_1 + A_2) b^t

We need another term to fill the position, say A_4 t b^t,

y_c = A_3 b^t + A_4 t b^t

where A_3 = A_1 + A_2, and t b^t is the same old trick we used in differential equations, where we used t e^{rt}.

Case III: a_1^2 − 4a_2 < 0. Complex numbers are our old friends; we need their help again here.

b_1 = α + iβ
b_2 = α − iβ

where α = −a_1/2, β = √(4a_2 − a_1^2)/2. Thus,

y_c = A_1 (α + iβ)^t + A_2 (α − iβ)^t

Here we simply make use of De Moivre's theorem,

y_c = A_1 ‖b_1‖^t [cos(tθ) + i sin(tθ)] + A_2 ‖b_2‖^t [cos(tθ) − i sin(tθ)]

where ‖b_1‖ = ‖b_2‖, because

‖b_1‖ = ‖b_2‖ = √(α^2 + β^2)

Thus,

y_c = A_1 ‖b‖^t [cos(tθ) + i sin(tθ)] + A_2 ‖b‖^t [cos(tθ) − i sin(tθ)]
    = ‖b‖^t {A_1 [cos(tθ) + i sin(tθ)] + A_2 [cos(tθ) − i sin(tθ)]}
    = ‖b‖^t [(A_1 + A_2) cos(tθ) + (A_1 − A_2) i sin(tθ)]
    = ‖b‖^t [A_5 cos(tθ) + A_6 sin(tθ)]

2.2 Particular Solutions


We pick any y_p satisfying

y_{t+2} + a_1 y_{t+1} + a_2 y_t = c

The simplest choice is y_{t+2} = y_{t+1} = y_t = k, thus

k + a_1 k + a_2 k = c,   so   k = c/(1 + a_1 + a_2)

But we must make sure that a_1 + a_2 ≠ −1. If a_1 + a_2 = −1 we choose y_p = kt instead, and following this pattern there are kt^2, kt^3, etc. to choose from, depending on the situation.

2.3 One Example


Solve
y_{t+2} − 4y_{t+1} + 4y_t = 7

First, the complementary solution. The characteristic equation is

b^2 − 4b + 4 = 0
(b − 2)^2 = 0

a repeated root b = 2, so by Case II the complementary solution is

y_c = A_3 2^t + A_4 t 2^t

For the particular solution, we try y_{t+2} = y_{t+1} = y_t = k, then

k − 4k + 4k = 7,   which gives   k = 7

Then the general solution is

y = y_c + y_p = A_3 2^t + A_4 t 2^t + 7

And with two initial conditions, y_0 = 1 and y_1 = 3,

y_0 = A_3 + 7 = 1
y_1 = 2A_3 + 2A_4 + 7 = 3

Solving,
A_3 = −6,   A_4 = 4

The definite solution is

y = −6 · 2^t + 4t · 2^t + 7
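Since the arithmetic here is easy to get wrong, a short Python verification of the definite solution against the recursion and both initial conditions:

```python
# Definite solution of y_{t+2} - 4*y_{t+1} + 4*y_t = 7 with y_0 = 1, y_1 = 3.
def y(t):
    return -6 * 2 ** t + 4 * t * 2 ** t + 7

assert y(0) == 1 and y(1) == 3              # initial conditions
for t in range(10):
    assert y(t + 2) - 4 * y(t + 1) + 4 * y(t) == 7
```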

3 pth-Order Difference Equation

The previous sections simply taught you to solve low-order difference equations, without much insight or difficulty. From this section on the difficulty increases significantly; for those of you whose linear algebra is not well prepared, it might not be a good idea to rush through this section. Mathematics is built layer upon layer, and it is hardly possible to progress in jumps. One warning: as the mathematics gets deeper, the notation also gets more complicated. But do not be dismayed; this is not rocket science.
We generalize our difference equation to pth order,

y_t = a_1 y_{t−1} + a_2 y_{t−2} + · · · + a_p y_{t−p} + ω_t   (4)

This is a generalization of (1); we keep the ω term here in order to prepare for stochastic difference equations, though for now we still take it as deterministic. It is conventional to difference backwards when dealing with high orders. We cannot handle the equation in this form, since it leaves little room for useful algebraic operations. Even if you try the old method, its characteristic equation will be

b^t − a_1 b^{t−1} − a_2 b^{t−2} − · · · − a_p b^{t−p} = 0

and factoring out b^{t−p},

b^p − a_1 b^{p−1} − a_2 b^{p−2} − · · · − a_p = 0

If p is high enough to give you a headache, that is a sign this is not the right way to advance.
Everything will be reshaped into matrix form. Define

ψ_t = [y_t, y_{t−1}, y_{t−2}, ..., y_{t−p+1}]'

which is a p × 1 vector. And define the p × p matrix

F = [ a_1   a_2   a_3   · · ·   a_{p−1}   a_p ]
    [ 1     0     0     · · ·   0         0   ]
    [ 0     1     0     · · ·   0         0   ]
    [ ...                                 ... ]
    [ 0     0     0     · · ·   1         0   ]

Finally, define

v_t = [ω_t, 0, 0, ..., 0]'

Put them together, and we have a first-order difference equation in vector form,

ψ_t = F ψ_{t−1} + v_t

Writing it out explicitly, with ψ_{t−1} = [y_{t−1}, y_{t−2}, ..., y_{t−p}]',

[ y_t       ]   [ a_1   a_2   · · ·   a_{p−1}   a_p ] [ y_{t−1} ]   [ ω_t ]
[ y_{t−1}   ]   [ 1     0     · · ·   0         0   ] [ y_{t−2} ]   [ 0   ]
[ ...       ] = [ ...                           ... ] [ ...     ] + [ ... ]
[ y_{t−p+1} ]   [ 0     0     · · ·   1         0   ] [ y_{t−p} ]   [ 0   ]

The first equation of the system is

y_t = a_1 y_{t−1} + a_2 y_{t−2} + · · · + a_p y_{t−p} + ω_t

which is exactly (4). It is quite obvious that the second through pth equations are simply identities of the form y_i = y_i. The reason for writing the system this way is not obvious yet, but one payoff is that we have reduced it to a first-order difference equation: although it is in matrix form, we can analyse it with our old methods.
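A minimal Python sketch of this companion form for p = 3, using plain lists; the coefficients a1, a2, a3 and the starting history are illustrative numbers, not from the text.

```python
# One step of psi_t = F*psi_{t-1} + v_t: the first component reproduces
# the scalar recursion, the other components just shift the history down.

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

a1, a2, a3 = 0.5, -0.2, 0.1
F = [[a1,  a2,  a3],
     [1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]

psi_prev = [2.0, 1.0, -1.0]          # (y_{t-1}, y_{t-2}, y_{t-3})
wt = 0.3
vt = [wt, 0.0, 0.0]

psi = [f + v for f, v in zip(matvec(F, psi_prev), vt)]
# first component is y_t = a1*y_{t-1} + a2*y_{t-2} + a3*y_{t-3} + w_t
assert abs(psi[0] - (a1 * 2.0 + a2 * 1.0 + a3 * (-1.0) + wt)) < 1e-12
assert psi[1:] == [2.0, 1.0]         # identity rows shift the history
```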

3.1 Iterative Method


We list the evolution of the difference equation as follows,

ψ_0 = F ψ_{−1} + v_0
ψ_1 = F ψ_0 + v_1
ψ_2 = F ψ_1 + v_2
...
ψ_t = F ψ_{t−1} + v_t

We assume that ψ_{−1} and the v_i are known. Then the old trick of recursive substitution,

ψ_1 = F(F ψ_{−1} + v_0) + v_1
    = F^2 ψ_{−1} + F v_0 + v_1

Again,

ψ_2 = F ψ_1 + v_2
    = F(F^2 ψ_{−1} + F v_0 + v_1) + v_2
    = F^3 ψ_{−1} + F^2 v_0 + F v_1 + v_2

At step t,

ψ_t = F^{t+1} ψ_{−1} + F^t v_0 + F^{t−1} v_1 + · · · + F v_{t−1} + v_t
    = F^{t+1} ψ_{−1} + Σ_{i=0}^{t} F^i v_{t−i}

which has the explicit form

[y_t, y_{t−1}, ..., y_{t−p+1}]' = F^{t+1} [y_{−1}, y_{−2}, ..., y_{−p}]'
    + F^t [ω_0, 0, ..., 0]' + F^{t−1} [ω_1, 0, ..., 0]' + · · · + F [ω_{t−1}, 0, ..., 0]' + [ω_t, 0, ..., 0]'

Note how the v_i enter here. Unfortunately, more notation is needed: we denote the (1,1) element of F^t by f_{11}^{(t)}, and so on. This notation lets us extract the first equation from the unwieldy system above,

y_t = f_{11}^{(t+1)} y_{−1} + f_{12}^{(t+1)} y_{−2} + f_{13}^{(t+1)} y_{−3} + · · · + f_{1p}^{(t+1)} y_{−p}
      + f_{11}^{(t)} ω_0 + f_{11}^{(t−1)} ω_1 + · · · + f_{11}^{(1)} ω_{t−1} + ω_t

I have to admit, most of the time mathematics looks more difficult than it really is; notation is evil, and trust me, you have not yet seen the most evil of it. However, keep telling yourself that this is just the first equation of the system, nothing more. The idea is the same as in the matrix form: we are told that we know ψ_{−1} and the v_i as initial data; here we have simply written them out explicitly. y_t is a function of the initial values y_{−1} through y_{−p} and of the sequence of ω_i. A scalar first-order difference equation needs one initial value, and a pth-order equation needs p initial values; but once we turn it into vector form, it needs only one initial value again, namely ψ_{−1}.

If you want, you can even take a partial derivative to find the dynamic multiplier,

∂y_t / ∂ω_0 = f_{11}^{(t)}

which measures the effect on y_t of a one-unit increase in ω_0.
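This is easy to confirm numerically. The Python sketch below (illustrative coefficients a1 = 0.6, a2 = 0.2, not from the text) computes f11^{(t)} by raising the companion matrix F to the t-th power, and compares it with the effect of bumping ω_0 by one unit in the scalar recursion:

```python
# Dynamic multiplier dy_t/dw_0 = f11^{(t)} for a second-order system.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matpow(M, t):
    R = [[float(i == j) for j in range(len(M))] for i in range(len(M))]
    for _ in range(t):
        R = matmul(R, M)
    return R

def simulate(a1, a2, w):
    y1 = y2 = 0.0            # start the system at rest
    path = []
    for wt in w:
        y = a1 * y1 + a2 * y2 + wt
        path.append(y)
        y1, y2 = y, y1
    return path

a1, a2, t = 0.6, 0.2, 8
F = [[a1, a2], [1.0, 0.0]]
base = simulate(a1, a2, [0.0] * (t + 1))
bumped = simulate(a1, a2, [1.0] + [0.0] * t)   # one-unit shock to w_0
multiplier = bumped[t] - base[t]
assert abs(multiplier - matpow(F, t)[0][0]) < 1e-12
```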

3.2 Analytical Solution


We need to study the matrix F further, because it is the core of the system; all the information is hidden in this matrix. We had better use a small-scale F to study the general pattern. Let us set

F = [ a_1   a_2 ]
    [ 1     0   ]

From the first equation of the system we can write

y_t = a_1 y_{t−1} + a_2 y_{t−2} + ω_t,   or if you prefer,   y_t − a_1 y_{t−1} − a_2 y_{t−2} = ω_t

This is your familiar form, a nonhomogeneous second-order difference equation. Now calculate the eigenvalues of F,

| a_1 − λ   a_2 |
| 1         −λ  | = 0

Writing the determinant out in algebraic form,

λ^2 − a_1 λ − a_2 = 0

We finally reveal the mystery of why we always solve a characteristic equation first: we are finding eigenvalues. In general the characteristic equation of a pth-order difference equation is

λ^p − a_1 λ^{p−1} − a_2 λ^{p−2} − · · · − a_p = 0   (5)

We could of course prove this from |F − λI| = 0, but the process looks rather messy; the basic idea is to perform row operations to reach an upper triangular matrix, whose determinant is just the product of the diagonal entries. Not much insight is gained from it, so we omit the process here.
For a second-order differential or difference equation we usually discuss three cases of solution: two distinct roots (now you know they are eigenvalues), one repeated root, and complex roots. We have the discriminant b^2 − 4ac to categorize them, but that only works in second order; for higher orders there is no such formula. This actually makes high orders more interesting than low ones, because we can use linear algebra.

3.2.1 Distinct Real Eigenvalues
The distinct-eigenvalue case here corresponds to two distinct real roots in second order. The following content is becoming hardcore; be sure you are well prepared in linear algebra.
If F has p distinct eigenvalues, you should be happy, because diagonalization is waving at you. Recall that

A = P D P^{−1}

where P is a nonsingular matrix, because distinct eigenvalues guarantee linearly independent eigenvectors. We make a cosmetic change to suit our needs,

F = P Λ P^{−1}

We use a capital Λ to indicate that all the eigenvalues sit on the principal diagonal. You should naturally respond that

F^t = P Λ^t P^{−1},

simply because

F^2 = P Λ P^{−1} P Λ P^{−1} = P Λ Λ P^{−1} = P Λ^2 P^{−1}

The diagonal matrix shows its convenience,

Λ^t = diag(λ_1^t, λ_2^t, ..., λ_p^t)

And unfortunately, we need yet more notation: denote by t_{ij} the (i, j) entry of P, and by t^{ij} the (i, j) entry of P^{−1}.
We try to write F^t = P Λ^t P^{−1} in explicit matrix form,

F^t = [ t_{11} λ_1^t   t_{12} λ_2^t   · · ·   t_{1p} λ_p^t ] [ t^{11}   t^{12}   · · ·   t^{1p} ]
      [ t_{21} λ_1^t   t_{22} λ_2^t   · · ·   t_{2p} λ_p^t ] [ t^{21}   t^{22}   · · ·   t^{2p} ]
      [ ...                                            ... ] [ ...                          ... ]
      [ t_{p1} λ_1^t   t_{p2} λ_2^t   · · ·   t_{pp} λ_p^t ] [ t^{p1}   t^{p2}   · · ·   t^{pp} ]

Then f_{11}^{(t)} is

f_{11}^{(t)} = t_{11} t^{11} λ_1^t + t_{12} t^{21} λ_2^t + · · · + t_{1p} t^{p1} λ_p^t

If we denote c_i = t_{1i} t^{i1}, then

f_{11}^{(t)} = c_1 λ_1^t + c_2 λ_2^t + · · · + c_p λ_p^t

Obviously, if you pay attention to

c_1 + c_2 + · · · + c_p = t_{11} t^{11} + t_{12} t^{21} + · · · + t_{1p} t^{p1},

you will realize this is a scalar product of the first row of P with the first column of P^{−1}; in other words, it is the (1,1) element of P P^{−1}. Magically, P P^{−1} is the identity matrix. Thus

c_1 + c_2 + · · · + c_p = 1

This time, if you want to calculate the dynamic multiplier,

∂y_t / ∂ω_0 = f_{11}^{(t)} = c_1 λ_1^t + c_2 λ_2^t + · · · + c_p λ_p^t

the dynamic multiplier is a weighted average of all the eigenvalues raised to the tth power.
Here one problem must be solved before we move on: how can we find the c_i? Or is it just an expression without closed form, so that we have to stop here?
What we do next might look very strange to you, but do not stop; finish it and you will get a sense of what we are preparing. Set t_i to be the vector

t_i = [λ_i^{p−1}, λ_i^{p−2}, ..., λ_i, 1]'

Then

F t_i = [ a_1 λ_i^{p−1} + a_2 λ_i^{p−2} + · · · + a_{p−1} λ_i + a_p,   λ_i^{p−1},   λ_i^{p−2}, ...,   λ_i ]'
Recall the characteristic equation of pth order (5),

λ^p − a_1 λ^{p−1} − a_2 λ^{p−2} − · · · − a_{p−1} λ − a_p = 0

Rearranging,

λ^p = a_1 λ^{p−1} + a_2 λ^{p−2} + · · · + a_{p−1} λ + a_p   (6)

Interestingly, this is exactly what we want: the right-hand side is just the first element of F t_i, so

F t_i = [λ_i^p, λ_i^{p−1}, λ_i^{p−2}, ..., λ_i]'

Factoring out λ_i,

F t_i = λ_i [λ_i^{p−1}, λ_i^{p−2}, ..., λ_i, 1]' = λ_i t_i

We have successfully shown that t_i is an eigenvector of F. Now we can set up P, whose every column is an eigenvector,

P = [ λ_1^{p−1}   λ_2^{p−1}   · · ·   λ_p^{p−1} ]
    [ λ_1^{p−2}   λ_2^{p−2}   · · ·   λ_p^{p−2} ]
    [ ...                                   ... ]
    [ λ_1         λ_2         · · ·   λ_p       ]
    [ 1           1           · · ·   1         ]

Actually this is a transposed Vandermonde matrix, of the kind mainly used in signal processing and polynomial interpolation. We do not yet know P^{−1} (and don't ever try to compute an inverse by the adjugate matrix or Gauss-Jordan elimination by hand; it is inaccurate and time-consuming), so we use the notation t^{ij} and postmultiply P by the first column of P^{−1},

[ λ_1^{p−1}   λ_2^{p−1}   · · ·   λ_p^{p−1} ] [ t^{11} ]   [ 1  ]
[ λ_1^{p−2}   λ_2^{p−2}   · · ·   λ_p^{p−2} ] [ t^{21} ]   [ 0  ]
[ ...                                   ... ] [ ...    ] = [ ...]
[ 1           1           · · ·   1         ] [ t^{p1} ]   [ 0  ]

This is a linear equation system, whose solution looks like

t^{11} = 1 / [(λ_1 − λ_2)(λ_1 − λ_3) · · · (λ_1 − λ_p)]
t^{21} = 1 / [(λ_2 − λ_1)(λ_2 − λ_3) · · · (λ_2 − λ_p)]
...
t^{p1} = 1 / [(λ_p − λ_1)(λ_p − λ_2) · · · (λ_p − λ_{p−1})]

Thus,

c_1 = t_{11} t^{11} = λ_1^{p−1} / [(λ_1 − λ_2)(λ_1 − λ_3) · · · (λ_1 − λ_p)]
c_2 = t_{12} t^{21} = λ_2^{p−1} / [(λ_2 − λ_1)(λ_2 − λ_3) · · · (λ_2 − λ_p)]
...
c_p = t_{1p} t^{p1} = λ_p^{p−1} / [(λ_p − λ_1)(λ_p − λ_2) · · · (λ_p − λ_{p−1})]
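A numerical sanity check of this closed form, with made-up distinct eigenvalues: the weights c_i should sum to one, and Σ c_i λ_i^t should equal the (1,1) entry of F^t for the companion matrix built from the same eigenvalues.

```python
# c_i = lambda_i^(p-1) / prod_{j != i} (lambda_i - lambda_j)
def c_weight(lams, i):
    p = len(lams)
    denom = 1.0
    for j, lj in enumerate(lams):
        if j != i:
            denom *= lams[i] - lj
    return lams[i] ** (p - 1) / denom

lams = [0.9, 0.5, -0.3]              # illustrative distinct real eigenvalues
cs = [c_weight(lams, i) for i in range(len(lams))]
assert abs(sum(cs) - 1.0) < 1e-12    # weights sum to one

# companion matrix whose characteristic polynomial is
# (l - 0.9)(l - 0.5)(l + 0.3) = l^3 - 1.1 l^2 + 0.03 l + 0.135,
# i.e. a1 = 1.1, a2 = -0.03, a3 = -0.135
a1, a2, a3 = 1.1, -0.03, -0.135
F = [[a1, a2, a3], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

t = 7
Ft = [[float(i == j) for j in range(3)] for i in range(3)]
for _ in range(t):
    Ft = matmul(Ft, F)
f11 = sum(c * lam ** t for c, lam in zip(cs, lams))
assert abs(Ft[0][0] - f11) < 1e-9    # f11^{(t)} = sum_i c_i * lambda_i^t
```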

I know all this must be quite uncomfortable if you do not fancy mathematics too much. We had better go through a small example to get familiar with this artillery.
Let's look at a second-order difference equation,

y_t = 0.4y_{t−1} + 0.7y_{t−2} + ω_t.

But I guess you might prefer it written as

y_{t+2} − 0.4y_{t+1} − 0.7y_t = ω_{t+2}.

Calculate its characteristic equation from |F − λI| = 0, where

F − λI = [ 0.4   0.7 ] − [ λ   0 ] = [ 0.4 − λ   0.7 ]
         [ 1     0   ]   [ 0   λ ]   [ 1         −λ  ]

Now calculate the determinant,

|F − λI| = λ^2 − 0.4λ − 0.7

Figure 1: Dynamic multiplier as a function of t.

Using the root formula,

λ_1 = (0.4 + √((−0.4)^2 − 4(−0.7))) / 2 = 1.0602
λ_2 = (0.4 − √((−0.4)^2 − 4(−0.7))) / 2 = −0.6602

For the c_i,

c_1 = λ_1 / (λ_1 − λ_2) = 1.0602 / (1.0602 + 0.6602) = 0.6163
c_2 = λ_2 / (λ_2 − λ_1) = −0.6602 / (−0.6602 − 1.0602) = 0.3837

And note that 0.6163 + 0.3837 = 1. The dynamic multiplier is

∂y_t / ∂ω_0 = c_1 λ_1^t + c_2 λ_2^t = 0.6163 · 1.0602^t + 0.3837 · (−0.6602)^t

Figure 1 shows the dynamic multiplier as a function of t. It explodes as t → ∞, because we have an eigenvalue λ_1 > 1.
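These numbers are easy to reproduce; a short Python check of the eigenvalues and weights above (standard library only):

```python
# Eigenvalues of l^2 - 0.4 l - 0.7 = 0 and the weights c1, c2.
from math import sqrt

a1, a2 = 0.4, 0.7
disc = a1 ** 2 + 4 * a2              # (-0.4)^2 - 4*(-0.7) = 2.96
lam1 = (a1 + sqrt(disc)) / 2
lam2 = (a1 - sqrt(disc)) / 2
c1 = lam1 / (lam1 - lam2)
c2 = lam2 / (lam2 - lam1)

assert abs(lam1 - 1.0602) < 1e-4
assert abs(lam2 + 0.6602) < 1e-4
assert abs(c1 - 0.6163) < 1e-4
assert abs(c2 - 0.3837) < 1e-4
assert abs(c1 + c2 - 1.0) < 1e-12    # the weights sum to one
```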

3.2.2 Distinct Complex Eigenvalues
We also need to discuss distinct complex eigenvalues; the example will again be a second-order difference equation, because we have a handy root formula. The pace may be a little fast, but it is easy to follow.
Suppose we have two complex eigenvalues,

λ_1 = α + iβ
λ_2 = α − iβ

The modulus is the same for the conjugate pair,

‖λ‖ = √(α^2 + β^2)

Rewrite the conjugate pair as

λ_1 = ‖λ‖ (cos θ + i sin θ)
λ_2 = ‖λ‖ (cos θ − i sin θ)

By De Moivre's theorem, raising a complex number to a power gives

λ_1^t = ‖λ‖^t (cos tθ + i sin tθ)
λ_2^t = ‖λ‖^t (cos tθ − i sin tθ)

Back to the dynamic multiplier,

∂y_t / ∂ω_0 = c_1 λ_1^t + c_2 λ_2^t
            = c_1 ‖λ‖^t (cos tθ + i sin tθ) + c_2 ‖λ‖^t (cos tθ − i sin tθ)
            = (c_1 + c_2) ‖λ‖^t cos tθ + i(c_1 − c_2) ‖λ‖^t sin tθ

However, the c_i are calculated from the λ_i, so since the eigenvalues are complex, so are the c_i. Denote the conjugate pair as

c_1 = γ + iδ
c_2 = γ − iδ

Thus,

c_1 λ_1^t + c_2 λ_2^t = [(γ + iδ) + (γ − iδ)] ‖λ‖^t cos tθ + i[(γ + iδ) − (γ − iδ)] ‖λ‖^t sin tθ
                      = 2γ ‖λ‖^t cos tθ + i(2iδ) ‖λ‖^t sin tθ
                      = 2γ ‖λ‖^t cos tθ − 2δ ‖λ‖^t sin tθ

which turns out to be a real number again. The time path is determined by the modulus ‖λ‖: if ‖λ‖ > 1 the dynamic multiplier explodes, and if ‖λ‖ < 1 it follows either an oscillating or a non-oscillating decaying pattern.
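A Python sketch with made-up coefficients that produce complex eigenvalues (here a1 = 0.6, a2 = −0.9, so λ^2 − 0.6λ + 0.9 = 0): the direct sum c1 λ1^t + c2 λ2^t is numerically real and matches the trigonometric form derived above.

```python
import cmath, math

a1, a2 = 0.6, -0.9                   # l^2 - 0.6 l + 0.9 = 0, complex roots
disc = a1 ** 2 + 4 * a2              # negative discriminant
lam1 = (a1 + cmath.sqrt(disc)) / 2
lam2 = (a1 - cmath.sqrt(disc)) / 2
c1 = lam1 / (lam1 - lam2)            # conjugate pair of weights
c2 = lam2 / (lam2 - lam1)

gamma, delta = c1.real, c1.imag      # c1 = gamma + i*delta
mod = abs(lam1)                      # ||lambda||
theta = cmath.phase(lam1)

for t in range(1, 8):
    direct = c1 * lam1 ** t + c2 * lam2 ** t
    trig = 2 * gamma * mod ** t * math.cos(t * theta) \
         - 2 * delta * mod ** t * math.sin(t * theta)
    assert abs(direct.imag) < 1e-12          # the multiplier is real
    assert abs(direct.real - trig) < 1e-12   # matches the trig form
```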

3.2.3 Repeated Eigenvalues
If we have repeated eigenvalues, diagonalization may not be available, since we might encounter a singular P. But we have enough mathematical artillery to switch to a more general decomposition, the Jordan decomposition. First a definition: an h × h Jordan block matrix is

J_h(λ) = [ λ   1   0   · · ·   0 ]
         [ 0   λ   1   · · ·   0 ]
         [ 0   0   λ   · · ·   0 ]
         [ ...               ... ]
         [ 0   0   0   · · ·   λ ]

This is the Jordan canonical form (study my notes on Linear Algebra and Matrix Analysis II [1]).
The Jordan decomposition says: if A is an m × m matrix, then there must exist a nonsingular matrix B such that

B^{−1} A B = J = diag( J_{h1}(λ_1), J_{h2}(λ_2), ..., J_{hr}(λ_r) )

a block-diagonal matrix. Cosmetically changed to our notation,

F = M J M^{−1}

As you can imagine,

F^t = M J^t M^{−1}

and

J^t = diag( J_{h1}^t(λ_1), J_{h2}^t(λ_2), ..., J_{hr}^t(λ_r) )

What remains is to calculate each J_{hi}^t(λ_i) one by one; of course we should leave this to the computer. Then you can calculate the dynamic multiplier as usual: once you have F^t, pick out the (1,1) entry f_{11}^{(t)} and the rest follows.
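For a single Jordan block the power has a simple closed form; for h = 2, induction gives J_2(λ)^t = [[λ^t, t λ^{t−1}], [0, λ^t]], and the t λ^{t−1} entry is where the extra t b^t term of the repeated-root case comes from. A quick Python check by explicit multiplication, with an illustrative λ = 0.8:

```python
# Verify J^t = [[lam^t, t*lam^(t-1)], [0, lam^t]] for a 2x2 Jordan block.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

lam = 0.8
J = [[lam, 1.0], [0.0, lam]]
Jt = [[1.0, 0.0], [0.0, 1.0]]        # J^0 = identity
for t in range(1, 10):
    Jt = matmul(Jt, J)
    expected = [[lam ** t, t * lam ** (t - 1)], [0.0, lam ** t]]
    for i in range(2):
        for j in range(2):
            assert abs(Jt[i][j] - expected[i][j]) < 1e-12
```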

4 Lag Operator
In econometrics, the convention is to use the lag operator to do what we have been doing with difference equations. Essentially they are identical, but there is a subtle conceptual discrepancy: a difference equation is a kind of equation, a balanced expression on both sides, while the lag operator represents a kind of operation, no different from addition, subtraction or multiplication. The lag operator turns a time series into a difference equation.
We conventionally use L to denote the lag operator; for instance,

L x_t = x_{t−1},   L^2 x_t = x_{t−2}

4.1 pth-order difference equation with lag operator

Basically this section reproduces the crucial results of what we did with difference equations, to show that we can reach the same goal with either tool.
Turn the pth-order difference equation

y_t − a_1 y_{t−1} − a_2 y_{t−2} − · · · − a_p y_{t−p} = ω_t

into lag operator form,

(1 − a_1 L − a_2 L^2 − · · · − a_p L^p) y_t = ω_t

Because L is an operator in the equation above, some of the algebraic steps below are not strictly appropriate for it; but if we switch to an ordinary variable z,

(1 − a_1 z − a_2 z^2 − · · · − a_p z^p) y_t = ω_t,

we can analyse the polynomial algebraically. Multiply both sides by z^{−p},

z^{−p} (1 − a_1 z − a_2 z^2 − · · · − a_p z^p) y_t = z^{−p} ω_t
(z^{−p} − a_1 z^{1−p} − a_2 z^{2−p} − · · · − a_p) y_t = z^{−p} ω_t

Now define λ = z^{−1} and set the polynomial on the left to zero,

λ^p − a_1 λ^{p−1} − a_2 λ^{p−2} − · · · − a_{p−1} λ − a_p = 0

This is the characteristic equation, a reproduction of equation (5).
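For p = 2 this reciprocal relation between the roots z of the lag polynomial and the eigenvalues λ can be checked directly in Python (coefficients borrowed from the earlier example, a1 = 0.4, a2 = 0.7):

```python
from math import sqrt

a1, a2 = 0.4, 0.7

# eigenvalues: roots of l^2 - a1*l - a2 = 0
d = a1 ** 2 + 4 * a2
lams = [(a1 + sqrt(d)) / 2, (a1 - sqrt(d)) / 2]

# lag-polynomial roots: 1 - a1*z - a2*z^2 = 0, i.e. a2*z^2 + a1*z - 1 = 0
# (same discriminant d = a1^2 + 4*a2)
zs = [(-a1 + sqrt(d)) / (2 * a2), (-a1 - sqrt(d)) / (2 * a2)]

# each eigenvalue is the reciprocal of some root z of the lag polynomial
for lam in lams:
    assert any(abs(1 / lam - z) < 1e-9 for z in zs)
```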

References

[1] Chen, W. (2011): Linear Algebra and Matrix Analysis II, study notes.

[2] Chen, W. (2011): Differential Equations, study notes.

[3] Chiang, A.C. and Wainwright, K. (2005): Fundamental Methods of Mathematical Economics, McGraw-Hill.

[4] Simon, C. and Blume, L. (1994): Mathematics for Economists, W.W. Norton & Company.

[5] Enders, W. (2004): Applied Econometric Time Series, John Wiley & Sons.
