QuantEcon.lectures-python3 PDF
Release 2018-Sep-29
This pdf presents a series of lectures on quantitative economic modeling, designed and written
by Thomas J. Sargent and John Stachurski. The primary programming languages are Python and
Julia. You can send feedback to the authors via contact@quantecon.org.
Note: You are currently viewing an automatically generated pdf version of our online
lectures, which are located at
https://lectures.quantecon.org
Please visit the website for more information on the aims and scope of the lectures and the two
language options (Julia or Python).
Due to automatic generation of this pdf, presentation quality is likely to be lower than that
of the website.
CONTENTS
1 Introduction to Python
1.1 About Python
1.2 Setting up Your Python Environment
1.3 An Introductory Example
1.4 Python Essentials
1.5 OOP I: Introduction to Object Oriented Programming
6 Dynamic Programming
6.1 Shortest Paths
6.2 Job Search I: The McCall Search Model
6.3 Job Search II: Search and Separation
6.4 A Problem that Stumped Milton Friedman
6.5 Job Search III: Search with Learning
6.6 Job Search IV: Modeling Career Choice
6.7 Job Search V: On-the-Job Search
6.8 Optimal Growth I: The Stochastic Optimal Growth Model
6.9 Optimal Growth II: Time Iteration
6.10 Optimal Growth III: The Endogenous Grid Method
6.11 LQ Dynamic Programming Problems
6.12 Optimal Savings I: The Permanent Income Model
6.13 Optimal Savings II: LQ Techniques
6.14 Consumption and Tax Smoothing with Complete and Incomplete Markets
6.15 Optimal Savings III: Occasionally Binding Constraints
6.16 Robustness
6.17 Discrete State Dynamic Programming
9.7 Fiscal Risk and Government Debt
9.8 Competitive Equilibria of Chang Model
9.9 Credible Government Policies in Chang Model
Bibliography
CHAPTER ONE
INTRODUCTION TO PYTHON
This first part of the course provides a relatively fast-paced introduction to the Python programming language
Contents
• About Python
– Overview
– What's Python?
– Scientific Programming
– Learn More
1.1.1 Overview
Python is a general-purpose programming language conceived in 1989 by Dutch programmer Guido van
Rossum
Python is free and open source, with development coordinated through the Python Software Foundation
Python has experienced rapid adoption in the last decade, and is now one of the most popular programming
languages
Common Uses
Relative Popularity
The following chart, produced using Stack Overflow Trends, shows one measure of the relative popularity
of Python
The figure indicates not only that Python is widely used but also that adoption of Python has accelerated
significantly since 2012
We suspect this is driven at least in part by uptake in the scientific domain, particularly in rapidly growing
fields like data science
For example, the popularity of pandas, a library for data analysis with Python, has exploded, as seen here
(The corresponding time path for MATLAB is shown for comparison)
Note that pandas takes off in 2012, which is the same year that we see Python's popularity begin to spike in
the first figure
Overall, it's clear that
• Python is one of the most popular programming languages worldwide
• Python is a major tool for scientific computing, accounting for a rapidly rising share of scientific work
around the globe
Features
One nice feature of Python is its elegant syntax; we'll see many examples later on
Elegant code might sound superfluous, but in fact it's highly beneficial because it makes the syntax easy to
read and easy to remember
Remembering how to read from files, sort dictionaries and other such routine tasks means that you don't need
to break your flow in order to hunt down correct syntax
Closely related to elegant syntax is elegant design
Features like iterators, generators, decorators, list comprehensions, etc. make Python highly expressive,
allowing you to get more done with less code
Namespaces improve productivity by cutting down on bugs and syntax errors
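To make this expressiveness concrete, here is a small sketch of our own (not from the lecture) combining a list comprehension and a generator expression:

```python
# List comprehension: build the squares of 0..9 in one readable line
squares = [n * n for n in range(10)]

# Generator expression: compute a sum lazily, without storing a list
total = sum(n * n for n in range(10))

print(squares[:3])  # [0, 1, 4]
print(total)        # 285
```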
Numerical programming
Fundamental matrix and array processing capabilities are provided by the excellent NumPy library
NumPy provides the basic array data type plus some simple processing operations
For example, let's build some arrays

import numpy as np                     # Load the library

a = np.linspace(-np.pi, np.pi, 100)    # Create even grid from -π to π
b = np.cos(a)                          # Apply cosine to each element of a
c = np.sin(a)                          # Apply sin to each element of a

Now let's take the inner product

b @ c

1.5265566588595902e-16
The number you see here might vary slightly but its essentially zero
(For older versions of Python and NumPy you need to use the np.dot function)
The SciPy library is built on top of NumPy and provides additional functionality
For example, let's calculate the integral of ϕ(z) over the interval [-2, 2], where ϕ is the standard normal density

from scipy.stats import norm
from scipy.integrate import quad

ϕ = norm()
value, error = quad(ϕ.pdf, -2, 2)  # Integrate using Gaussian quadrature
value

0.9544997361036417
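SciPy also bundles routines for optimization, interpolation and root finding; for instance, here's a short sketch of ours using scipy.optimize.brentq to locate the square root of 2:

```python
from scipy.optimize import brentq

# Find the root of f(x) = x**2 - 2 on the interval [0, 2], i.e. sqrt(2)
root = brentq(lambda x: x**2 - 2, 0, 2)
print(root)  # approximately 1.414
```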
Graphics
The most popular and comprehensive Python library for creating figures and graphs is Matplotlib
• Plots, histograms, contour images, 3D, bar charts, etc., etc.
• Output in many formats (PDF, PNG, EPS, etc.)
• LaTeX integration
Example 2D plot with embedded LaTeX annotations
Example 3D plot
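The example figures referenced above are not reproduced in this extraction; as a minimal sketch of our own (the function and file name are our choices, not from the lecture), here's a 2D plot with a LaTeX label:

```python
import matplotlib
matplotlib.use('Agg')             # non-interactive backend so this runs anywhere
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(-3, 3, 100)
fig, ax = plt.subplots()
ax.plot(x, np.exp(-x**2 / 2), label=r'$e^{-x^2/2}$')  # LaTeX in the label
ax.legend()
fig.savefig('bell.png')           # write the figure to disk instead of showing it
```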
Symbolic Algebra

The SymPy library provides symbolic algebra from within Python

from sympy import Symbol

x, y = Symbol('x'), Symbol('y')  # Treat 'x' and 'y' as algebraic symbols
x + x + x + y

3*x + y

We can manipulate expressions

expression = (x + y)**2
expression.expand()

x**2 + 2*x*y + y**2

and solve polynomials

from sympy import solve

solve(x**2 + x + 2)

[-1/2 - sqrt(7)*I/2, -1/2 + sqrt(7)*I/2]

We can also calculate limits and derivatives

from sympy import limit, sin, diff

limit(1 / x, x, 0)

oo

limit(sin(x) / x, x, 0)

1

diff(sin(x), x)

cos(x)
The beauty of importing this functionality into Python is that we are working within a fully fledged programming language
Can easily create tables of derivatives, generate LaTeX output, add it to figures, etc., etc.
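As a small illustration of the LaTeX point (our own sketch, not from the lecture), SymPy's latex() function turns a symbolic result into LaTeX source:

```python
from sympy import Symbol, sin, diff, latex

x = Symbol('x')
derivative = diff(x * sin(x), x)   # differentiate x*sin(x): gives x*cos(x) + sin(x)
print(latex(derivative))           # LaTeX source for the result
```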
Statistics
Python's data manipulation and statistics libraries have improved rapidly over the last few years
Pandas
One of the most popular libraries for working with data is pandas
Pandas is fast, efficient, flexible and well designed
Here's a simple example, using some fake data

import pandas as pd
np.random.seed(1234)

data = np.random.randn(5, 2)  # 5x2 matrix of N(0, 1) random draws
dates = pd.date_range('28/12/2010', periods=5)

df = pd.DataFrame(data, columns=('price', 'weight'), index=dates)
print(df)

               price    weight
2010-12-28  0.471435 -1.190976
2010-12-29  1.432707 -0.312652
2010-12-30 -0.720589  0.887163
2010-12-31  0.859588 -0.636524
2011-01-01  0.015696 -2.242685

df.mean()

price     0.411768
weight   -0.699135
dtype: float64
Networks and Graphs

Python has many libraries for studying graphs; one well-known example is NetworkX

import networkx as nx
import matplotlib.pyplot as plt
np.random.seed(1234)
Cloud Computing
Running your Python code on massive servers in the cloud is becoming easier and easier
A nice example is Anaconda Enterprise
See also
• Amazon Elastic Compute Cloud
• The Google App Engine (Python, Java, PHP or Go)
• Pythonanywhere
• Sagemath Cloud
Parallel Processing
Apart from the cloud computing options listed above, you might like to consider
• Parallel computing through IPython clusters
• The Starcluster interface to Amazon's EC2
• GPU programming through PyCuda, PyOpenCL, Theano or similar
Other Developments
There are many other interesting developments with scientific programming in Python
Some representative examples include
• Jupyter: Python in your browser with code cells, embedded images, etc.
• Numba: make Python run at the same speed as native machine code!
• Blaze: a generalization of NumPy
• PyTables: manage large data sets
• CVXPY: convex optimization in Python
Contents
1.2.1 Overview
1.2.2 Anaconda
The core Python package is easy to install but not what you should choose for these lectures
These lectures require the entire scientific programming ecosystem, which
• the core installation doesn't provide
• is painful to install one piece at a time
Hence the best approach for our purposes is to install a free Python distribution that contains
1. the core Python language and
2. the most popular scientific libraries
The best such distribution is Anaconda
Anaconda is
• very popular
• cross platform
• comprehensive
• completely unrelated to the Nicki Minaj song of the same name
Anaconda also comes with a great package management system to organize your code libraries
All of what follows assumes that you adopt this recommendation!
Installing Anaconda
Installing Anaconda is straightforward: download the binary and follow the instructions
Important points:
• Install the latest version
• If you are asked during the installation process whether you'd like to make Anaconda your default
Python installation, say yes
• Otherwise you can accept all of the defaults
We'll be using your browser to interact with Python, so now might be a good time to
1. update your browser, or
2. install a free modern browser such as Chrome or Firefox
Jupyter notebooks are one of the many possible ways to interact with Python and the scientific libraries
They use a browser-based interface to Python with
• The ability to write and execute Python commands
• Formatted output in the browser, including tables, figures, animation, etc.
• The option to mix in formatted text and mathematical expressions
Because of these possibilities, Jupyter is fast turning into a major player in the scientific computing ecosystem
Here's an image showing the execution of some code (borrowed from here) in a Jupyter notebook
You can find a nice example of the kinds of things you can do in a Jupyter notebook (such as include maths
and text) here
Further examples can be found at QuantEcon's notebook archive or the NB viewer site
While Jupyter isn't the only way to code in Python, it's great for getting started and experimenting with small pieces of code
Once you have installed Anaconda, you can start the Jupyter notebook
Either
• search for Jupyter in your applications menu, or
• open up a terminal and type jupyter notebook
– Windows users should substitute Anaconda command prompt for terminal in the previous line
If you use the second option, you will see something like this (click to enlarge)
Hopefully your default browser has also opened up with a web page that looks something like this (click to
enlarge)
The notebook displays an active cell, into which you can type Python commands
Notebook Basics
Lets start with how to edit code and run simple programs
Running Cells
Notice that in the previous figure the cell is surrounded by a green border
This means that the cell is in edit mode
As a result, you can type in Python code and it will appear in the cell
When you're ready to execute the code in a cell, hit Shift-Enter instead of the usual Enter
(Note: There are also menu and button options for running code in a cell that you can find by exploring)
Modal Editing
The next thing to understand about the Jupyter notebook is that it uses a modal editing system
This means that the effect of typing at the keyboard depends on which mode you are in
The two modes are
1. Edit mode
2. Command mode
Python 3 introduced support for unicode characters, allowing the use of characters such as α and β in your
code
Unicode characters can be typed quickly in Jupyter using the tab key
Try creating a new code cell and typing \alpha, then hitting the tab key on your keyboard
A Test Program
import numpy as np
import matplotlib.pyplot as plt

N = 20
θ = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
radii = 10 * np.random.rand(N)
width = np.pi / 4 * np.random.rand(N)
ax = plt.subplot(111, polar=True)
bars = ax.bar(θ, radii, width=width, bottom=0.0)
plt.show()
Don't worry about the details for now; let's just run it and see what happens
The easiest way to run this code is to copy and paste into a cell in the notebook
You should see something like this
(In older versions of Jupyter you might need to add the command %matplotlib inline before you
generate the figure)
Tab Completion
On-Line Help
Clicking in the top right of the lower split closes the on-line help
Other Content
In addition to executing code, the Jupyter notebook allows you to embed text, equations, figures and even
videos in the page
For example, here we enter a mixture of plain text and LaTeX instead of code
Next we press Esc to enter command mode and then type m to indicate that we are writing Markdown, a mark-up
language similar to (but simpler than) LaTeX
(You can also use your mouse to select Markdown from the Code drop-down box just below the list of
menu items)
Now we press Shift+Enter to produce this
Sharing Notebooks
Notebook files are just text files structured in JSON and typically ending with .ipynb
You can share them in the usual way that you share files or by using web services such as nbviewer
The notebooks you see on that site are static html representations
To run one, download it as an ipynb file by clicking on the download icon at the top right
Save it somewhere, navigate to it from the Jupyter dashboard and then run as discussed above
1.2.4 QuantEcon.py
In these lectures well make extensive use of code from the QuantEcon organization
On the Python side well be using the QuantEcon.py version
This code has been organized into a Python package
• A Python package is a software library that has been bundled for distribution
• Hosted Python packages can be found through channels like Anaconda and PyPI
You can install QuantEcon.py by starting Jupyter and typing
!pip install quantecon
into a cell
Alternatively, you can type the following into a terminal
pip install quantecon
More instructions can be found on the library page
Note: The QuantEcon.py package can also be installed using conda
For these lectures to run without error you need to keep your software up to date
Updating Anaconda
Anaconda supplies a tool called conda to manage and upgrade your Anaconda packages
One conda command you should execute regularly is the one that updates the whole Anaconda distribution
As a practice run, please execute the following
1. Open up a terminal
2. Type conda update anaconda
For more information on conda, type conda help in a terminal
Updating QuantEcon.py
To upgrade QuantEcon.py to the latest version, open up a terminal and type pip install --upgrade quantecon
Or open up Jupyter and type the same thing in a notebook cell with ! in front of it
Method 2: Run
Using the run command is often easier than copy and paste
• For example, %run test.py will run the file test.py
(You might find that the % is unnecessary; use %automagic to toggle the need for %)
Note that Jupyter only looks for test.py in the present working directory (PWD)
If test.py isn't in that directory, you will get an error
Let's look at a successful example, where we run a file test.py with contents:
for i in range(5):
    print('foobar')
Here
• pwd asks Jupyter to show the PWD (or %pwd; see the comment about automagic above)
– This is where Jupyter is going to look for files to run
– Your output will look a bit different depending on your OS
• ls asks Jupyter to list files in the PWD (or %ls)
– Note that test.py is there (on our computer, because we saved it there earlier)
• cat test.py asks Jupyter to print the contents of test.py (or !type test.py on Windows)
• run test.py runs the file and prints any output
If you're trying to run a file not in the present working directory, you'll get an error
To fix this error you need to either
1. Shift the file into the PWD, or
2. Change the PWD to where the file lives
One way to achieve the first option is to use the Upload button
• The button is on the top level dashboard, where Jupyter first opened to
• Look where the pointer is in this picture
Loading Files
It's often convenient to be able to see your code before you run it
Saving Files
The preceding discussion covers most of what you need to know to interact with this website
However, as you start to write longer programs, you might want to experiment with your workflow
There are many different options and we mention them only in passing
JupyterLab
Text Editors
A text editor is an application that is specifically designed to work with text files such as Python programs
Nothing beats the power and efficiency of a good text editor for working with program text
A good text editor will provide
• efficient text editing commands (e.g., copy, paste, search and replace)
• syntax highlighting, etc.
Among the most popular are Sublime Text and Atom
For a top quality open source text editor with a steeper learning curve, try Emacs
If you want an outstanding free text editor and don't mind a seemingly vertical learning curve plus long days
of pain and suffering while all your neural pathways are rewired, try Vim
The IPython shell has many of the features of the notebook: tab completion, color syntax, etc.
It also has command history through the arrow keys
The up arrow key brings previously typed commands to the prompt
This saves a lot of typing
Here's one setup, on a Linux box, with
• a file being edited in Vim
• An IPython shell next to it, to run the file
IDEs
IDEs are Integrated Development Environments, which allow you to edit, execute and interact with code
from an integrated environment
One of the most popular in recent times is VS Code, which is now available via Anaconda
We hear good things about VS Code; please tell us about your experiences on the forum
1.2.8 Exercises
Exercise 1
If Jupyter is still running, quit by using Ctrl-C at the terminal where you started it
Now launch again, but this time using jupyter notebook --no-browser
This should start the kernel without launching the browser
Note also the startup message: It should give you a URL such as http://localhost:8888 where the
notebook is running
Now
1. Start your browser or open a new tab if it's already running
2. Enter the URL from above (e.g. http://localhost:8888) in the address bar at the top
You should now be able to run a standard Jupyter notebook session
This is an alternative way to start the notebook that can also be handy
Exercise 2
Contents
• An Introductory Example
– Overview
– The Task: Plotting a White Noise Process
– Version 1
– Alternative Versions
– Exercises
– Solutions
1.3.1 Overview
In this lecture we will write and then pick apart small Python programs
The objective is to introduce you to basic Python syntax and data structures
Deeper concepts will be covered in later lectures
Prerequisites
Suppose we want to simulate and plot the white noise process ϵ_0, ϵ_1, ..., ϵ_T, where each draw ϵ_t is independent standard normal
In other words, we want to generate figures that look something like this:
1.3.3 Version 1
import numpy as np
import matplotlib.pyplot as plt
x = np.random.randn(100)
plt.plot(x)
plt.show()
Import Statements
import numpy as np
np.sqrt(4)
2.0
import numpy
numpy.sqrt(4)
2.0
Packages
In fact you can find and explore the directory for NumPy on your computer easily enough if you look around
On this machine it's located in
anaconda3/lib/python3.6/site-packages/numpy
Subpackages
import numpy as np
np.sqrt(4)
2.0
from numpy import sqrt

sqrt(4)

2.0
import numpy as np
import matplotlib.pyplot as plt

ts_length = 100
ϵ_values = []  # Empty list
for i in range(ts_length):
    e = np.random.randn()
    ϵ_values.append(e)
plt.plot(ϵ_values)
plt.show()
In brief,
• The first pair of lines import functionality as before
• The next line sets the desired length of the time series
• The next line creates an empty list called ϵ_values that will store the ϵ_t values as we generate them
• The next three lines are the for loop, which repeatedly draws a new random number ϵt and appends it
to the end of the list ϵ_values
• The last two lines generate the plot and display it to the user
Lets study some parts of this program in more detail
Lists

Consider the following example

x = [10, 'foo', False]  # We can include heterogeneous data inside a list
type(x)

list

The first element of x is an integer, the next is a string and the third is a Boolean value
When adding a value to a list, we can use the syntax list_name.append(some_value)
x.append(2.5)
x
Here append() is what's called a method, which is a function attached to an object; in this case, the list x
We'll learn all about methods later on, but just to give you some idea,
• Python objects such as lists, strings, etc. all have methods that are used to manipulate the data con-
tained in the object
• String objects have string methods, list objects have list methods, etc.
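To make the idea concrete, here is a small sketch of our own (not from the lecture) using a few standard string and list methods:

```python
# String methods manipulate string data
s = 'quantitative economics'
print(s.upper())     # 'QUANTITATIVE ECONOMICS'
print(s.split())     # ['quantitative', 'economics']

# List methods manipulate list data
x = [4, 1, 3]
x.sort()             # sorts the list in place
print(x)             # [1, 3, 4]
```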
Another useful list method is pop()
x.pop()
2.5
x[0]
10
x[1]
'foo'
Now lets consider the for loop from the program above, which was
for i in range(ts_length):
    e = np.random.randn()
    ϵ_values.append(e)
Python executes the two indented lines ts_length times before moving on
These two lines are called a code block, since they comprise the block of code that we are looping over
Unlike most other languages, Python knows the extent of the code block only from indentation
In our program, indentation decreases after the line ϵ_values.append(e), telling Python that this line marks
the lower limit of the code block
More on indentation below; for now let's look at another example of a for loop
animals = ['dog', 'cat', 'bird']
for animal in animals:
    print("The plural of " + animal + " is " + animal + "s")
If you put this in a text file or Jupyter cell and run it you will see
The plural of dog is dogs
The plural of cat is cats
The plural of bird is birds
This example helps to clarify how the for loop works: When we execute a loop of the form

for variable_name in sequence:
    <code block>

the interpreter steps through the elements of sequence one at a time, binding variable_name to each element in turn and executing the code block
In discussing the for loop, we explained that the code blocks being looped over are delimited by indentation
In fact, in Python all code blocks (i.e., those occurring inside loops, if clauses, function definitions, etc.) are
delimited by indentation
Thus, unlike most other languages, whitespace in Python code affects the output of the program
Once you get used to it, this is a good thing: It
• forces clean, consistent indentation, improving readability
• removes clutter, such as the brackets or end statements used in other languages
On the other hand, it takes a bit of care to get right, so please remember:
• The line before the start of a code block always ends in a colon
– for i in range(10):
– if x > y:
– while x < 100:
– etc., etc.
• All lines in a code block must have the same amount of indentation
• The Python standard is 4 spaces, and that's what you should use
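Here is a minimal example of ours that follows these rules (colons ending the block openers, four spaces per indentation level):

```python
# Each block opener ends in a colon; every line inside the block is
# indented exactly 4 spaces, with nested blocks indented one more level
total = 0
for i in range(10):
    if i % 2 == 0:
        total += i
print(total)  # 0 + 2 + 4 + 6 + 8 = 20
```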
Tabs vs Spaces
One small gotcha here is the mixing of tabs and spaces, which often leads to errors
(Important: Within text files, the internal representation of tabs and spaces is not the same)
You can use your Tab key to insert 4 spaces, but you need to make sure it's configured to do so
If you are using a Jupyter notebook you will have no problems here
Also, good text editors will allow you to configure the Tab key to insert spaces instead of tabs; try searching online
While Loops
The for loop is the most common technique for iteration in Python
But, for the purpose of illustration, let's modify the program above to use a while loop instead
ts_length = 100
ϵ_values = []
i = 0
while i < ts_length:
    e = np.random.randn()
    ϵ_values.append(e)
    i = i + 1
plt.plot(ϵ_values)
plt.show()
Note that
• the code block for the while loop is again delimited only by indentation
• the statement i = i + 1 can be replaced by i += 1
User-Defined Functions
Now let's go back to the for loop, but restructure our program to make the logic clearer
To this end, we will break our program into two parts:
1. A user-defined function that generates a list of random variables
2. The main part of the program that
(a) calls this function to get data
(b) plots the data
This is accomplished in the next program
def generate_data(n):
    ϵ_values = []
    for i in range(n):
        e = np.random.randn()
        ϵ_values.append(e)
    return ϵ_values
data = generate_data(100)
plt.plot(data)
plt.show()
Let's go over this carefully, in case you're not familiar with functions and how they work
We have defined a function called generate_data() as follows
• def is a Python keyword used to start function definitions
• def generate_data(n): indicates that the function is called generate_data, and that it
has a single argument n
• The indented code is a code block called the function body; in this case it creates an iid list of random
draws using the same logic as before
• The return keyword indicates that ϵ_values is the object that should be returned to the calling
code
This whole function definition is read by the Python interpreter and stored in memory
When the interpreter gets to the expression generate_data(100), it executes the function body with n
set equal to 100
The net result is that the name data is bound to the list ϵ_values returned by the function
Conditions
Hopefully the syntax of the if/else clause is self-explanatory, with indentation again delimiting the extent of
the code blocks
Notes
• We are passing the argument U as a string, which is why we write it as 'U'
• Notice that equality is tested with the == syntax, not =
– For example, the statement a = 10 assigns the name a to the value 10
– The expression a == 10 evaluates to either True or False, depending on the value of a
Now, there are several ways that we can simplify the code above
For example, we can get rid of the conditionals altogether by just passing the desired generator type as a
function
To understand this, consider the following version
Now, when we call the function generate_data(), we pass np.random.uniform as the second
argument
This object is a function
When the function call generate_data(100, np.random.uniform) is executed, Python runs the
function code block with n equal to 100 and the name generator_type bound to the function
np.random.uniform
• While these lines are executed, the names generator_type and np.random.uniform are
synonyms, and can be used in identical ways
This principle works more generally; for example, consider the following piece of code
m = max
m(7, 2, 4)

7
Here we created another name for the built-in function max(), which could then be used in identical ways
In the context of our program, the ability to bind new names to functions means that there is no problem
passing a function as an argument to another function, as we did above
List Comprehensions
We can also simplify the code for generating the list of random draws considerably by using something
called a list comprehension
List comprehensions are an elegant Python tool for creating lists
Consider the following example, where the list comprehension is on the right-hand side of the second line
animals = ['dog', 'cat', 'bird']
plurals = [animal + 's' for animal in animals]
plurals

['dogs', 'cats', 'birds']

Here's another example

list(range(8))

[0, 1, 2, 3, 4, 5, 6, 7]
With this notation, we can simplify the lines in our earlier function that generate the list of random draws into

ϵ_values = [generator_type() for i in range(n)]
1.3.5 Exercises
Exercise 1
There are functions to compute this in various modules, but let's write our own version as an exercise
In particular, write a function factorial such that factorial(n) returns n! for any positive integer n
Exercise 2
The binomial random variable Y ∼ Bin(n, p) represents the number of successes in n binary trials, where
each trial succeeds with probability p
Without any import besides from numpy.random import uniform, write a function
binomial_rv such that binomial_rv(n, p) generates one draw of Y
Hint: If U is uniform on (0, 1) and p ∈ (0, 1), then the expression U < p evaluates to True with probability
p
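As a quick numerical sanity check of this hint (our own sketch; it uses NumPy directly rather than the single import the exercise allows), the fraction of uniform draws satisfying U < p should be close to p:

```python
from numpy.random import uniform
import numpy as np

np.random.seed(0)                # fix the seed so the check is reproducible
p = 0.3
draws = uniform(size=100_000)
frac = np.mean(draws < p)        # fraction of draws with U < p
print(frac)                      # close to 0.3
```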
Exercise 3
Exercise 4
Write a program that prints one realization of the following random device:
• Flip an unbiased coin 10 times
• If 3 consecutive heads occur one or more times within this sequence, pay one dollar
• If not, pay nothing
Use no import besides from numpy.random import uniform
Exercise 5
Your next task is to simulate and plot the correlated time series
x_{t+1} = α x_t + ϵ_{t+1} where x_0 = 0 and t = 0, ..., T
The sequence of shocks {ϵt } is assumed to be iid and standard normal
In your solution, restrict your import statements to
import numpy as np
import matplotlib.pyplot as plt
Exercise 6
To do the next exercise, you will need to know how to produce a plot legend
The following example should be sufficient to convey the idea
import numpy as np
import matplotlib.pyplot as plt
Now, starting with your solution to exercise 5, plot three simulated time series, one for each of the cases
α = 0, α = 0.8 and α = 0.98
In particular, you should produce (modulo randomness) a figure that looks as follows
(The figure nicely illustrates how time series with the same one-step-ahead conditional volatilities, as these
three processes have, can have very different unconditional volatilities.)
Use a for loop to step through the α values
Important hints:
• If you call the plot() function multiple times before calling show(), all of the lines you produce
will end up on the same figure
– And if you omit the argument 'b-' to the plot function, Matplotlib will automatically select
different colors for each line
• The expression 'foo' + str(42) evaluates to 'foo42'
1.3.6 Solutions
Exercise 1
def factorial(n):
    k = 1
    for i in range(n):
        k = k * (i + 1)
    return k
factorial(4)
24
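As an extra check (ours, not part of the exercise), we can compare this version against math.factorial from the standard library:

```python
import math

def factorial(n):
    k = 1
    for i in range(n):
        k = k * (i + 1)
    return k

# our version should agree with the standard library on small inputs
for n in range(1, 10):
    assert factorial(n) == math.factorial(n)
print("all checks passed")
```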
Exercise 2
from numpy.random import uniform

def binomial_rv(n, p):
    count = 0
    for i in range(n):
        U = uniform()
        if U < p:
            count = count + 1    # Or count += 1
    return count

binomial_rv(10, 0.5)
Exercise 3
n = 100000
count = 0
for i in range(n):
    u, v = np.random.uniform(), np.random.uniform()
    d = np.sqrt((u - 0.5)**2 + (v - 0.5)**2)
    if d < 0.5:
        count += 1

area_estimate = count / n

print(area_estimate * 4)  # dividing by radius**2

3.1496
Exercise 4
from numpy.random import uniform

payoff = 0
count = 0

for i in range(10):
    U = uniform()
    count = count + 1 if U < 0.5 else 0
    if count == 3:
        payoff = 1

print(payoff)
Exercise 5
The next line embeds all subsequent figures in the browser itself

%matplotlib inline
α = 0.9
ts_length = 200
current_x = 0
x_values = []
for i in range(ts_length + 1):
    x_values.append(current_x)
    current_x = α * current_x + np.random.randn()
plt.plot(x_values)
plt.show()
Exercise 6
αs = [0.0, 0.8, 0.98]
ts_length = 200

for α in αs:
    x_values = []
    current_x = 0
    for i in range(ts_length):
        x_values.append(current_x)
        current_x = α * current_x + np.random.randn()
    plt.plot(x_values, label=f'α = {α}')
plt.legend()
plt.show()
Contents
• Python Essentials
– Data Types
– Input and Output
– Iterating
– Comparisons and Logical Operators
– More Functions
– Coding Style and PEP8
– Exercises
– Solutions
In this lecture we'll cover features of the language that are essential to reading and writing Python code
We've already met several built-in Python data types, such as strings, integers, floats and lists
Let's learn a bit more about them
One simple data type is Boolean values, which can be either True or False
x = True
x
True
In the next line of code, the interpreter evaluates the expression on the right of = and binds y to this value
y = 100 < 10
y
False
type(y)
bool
In arithmetic expressions, True is converted to 1 and False is converted to 0

x + y

1

x * y

0

True + True

2

bools = [True, True, False, True]
sum(bools)

3
The two most common data types used to represent numbers are integers and floats
a, b = 1, 2
c, d = 2.5, 10.0
type(a)
int
type(c)
float
Computers distinguish between the two because, while floats are more informative, arithmetic operations on
integers are faster and more accurate
As long as you're using Python 3.x, division of integers yields floats
1 / 2
0.5
But be careful! If you're still using Python 2.x, division of two integers returns only the integer part
For integer division in Python 3.x use this syntax:
1 // 2

0
x = complex(1, 2)
y = complex(2, 1)
x * y
5j
Containers
Python has several basic types for storing collections of (possibly heterogeneous) data
We've already discussed lists

A related data type is tuples, which are "immutable" lists

x = ('a', 'b')  # Parentheses instead of the square brackets
x

('a', 'b')

type(x)

tuple
In Python, an object is called immutable if, once created, the object cannot be changed
Conversely, an object is mutable if it can still be altered after creation
Python lists are mutable
x = [1, 2]
x[0] = 10
x
[10, 2]
x = (1, 2)
x[0] = 10
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<python-input-21-6cb4d74ca096> in <module>()
----> 1 x[0] = 10
TypeError: 'tuple' object does not support item assignment
We'll say more about the role of mutable and immutable data a bit later
Tuples (and lists) can be unpacked as follows
x = 10, 20    # Parentheses are optional when creating a tuple
a, b = x      # Tuple unpacking
a
10
b
20
Slice Notation
To access multiple elements of a list or tuple, you can use Python's slice notation
For example,
a = [2, 4, 6, 8]
a[1:]
[4, 6, 8]
a[1:3]    # Elements from index 1 up to (but not including) index 3
[4, 6]
a[-2:]    # Select the last two elements
[6, 8]
s = 'foobar'
s[-3:] # Select the last three elements
'bar'
Two other container types we should mention before moving on are sets and dictionaries
Dictionaries are much like lists, except that the items are named instead of numbered
d = {'name': 'Frodo', 'age': 33}
type(d)
dict
d['age']
33
s1 = {'a', 'b'}
type(s1)
set
s2 = {'b', 'c'}
s1.issubset(s2)
False
s1.intersection(s2)
{'b'}
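Sets also support the usual operations of union, intersection and difference via operators; a quick sketch:

```python
s1 = {'a', 'b'}
s2 = {'b', 'c'}

print(s1 | s2)    # Union: equals {'a', 'b', 'c'}
print(s1 & s2)    # Intersection: equals {'b'}
print(s1 - s2)    # Difference: equals {'a'}
```

Note that sets are unordered, so the order in which elements print may vary.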
Let's briefly review reading and writing to text files, starting with writing

f = open('newfile.txt', 'w')    # Open 'newfile.txt' for writing
f.write('Testing\n')            # Here '\n' means new line
f.write('Testing again')
f.close()

Here
• The built-in function open() creates a file object for writing to
• Both write() and close() are methods of file objects
Where is this file that we've created?
Recall that Python maintains a concept of the present working directory (pwd) that can be located from within
Jupyter or IPython via
%pwd
f = open('newfile.txt', 'r')
out = f.read()
out
'Testing\nTesting again'
print(out)
Testing
Testing again
Paths
Note that if newfile.txt is not in the present working directory then this call to open() fails
In this case you can shift the file to the pwd or specify the full path to the file
f = open('insert_full_path_to_file/newfile.txt', 'r')
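If you are unsure where the interpreter is looking, the os module can report the working directory and build full paths (the file name below is just for illustration):

```python
import os

print(os.getcwd())    # The present working directory

# Build a platform-independent full path to a (hypothetical) file
full_path = os.path.join(os.getcwd(), 'newfile.txt')
print(full_path)
```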
1.4.3 Iterating
One of the most important tasks in computing is stepping through a sequence of data and performing a given
action
One of Python's strengths is its simple, flexible interface to this kind of iteration via the for loop
Many Python objects are iterable, in the sense that they can be looped over
To give an example, let's write the file us_cities.txt, which lists US cities and their population, to the present
working directory
%%file us_cities.txt
new york: 8244910
los angeles: 3819702
chicago: 2707120
houston: 2145146
philadelphia: 1536471
phoenix: 1469471
san antonio: 1359758
san diego: 1326179
dallas: 1223229
Suppose that we want to make the information more readable, by capitalizing names and adding commas to
mark thousands
The program us_cities.py reads the data in and makes the conversion:

f = open('us_cities.txt', 'r')
for line in f:
    city, population = line.split(':')            # Tuple unpacking
    city = city.title()                           # Capitalize city names
    population = '{0:,}'.format(int(population))  # Add commas to numbers
    print(city.ljust(15) + population)
f.close()

Here format() is a string method used for inserting variables into strings
The output is as follows

New York       8,244,910
Los Angeles    3,819,702
Chicago        2,707,120
Houston        2,145,146
Philadelphia   1,536,471
Phoenix        1,469,471
San Antonio    1,359,758
San Diego      1,326,179
Dallas         1,223,229
The reformatting of each line is the result of three different string methods, the details of which can be left
till later
The interesting part of this program for us is line 2, which shows that
1. The file object f is iterable, in the sense that it can be placed to the right of in within a for loop
2. Iteration steps through each line in the file
This leads to the clean, convenient syntax shown in our program
Many other kinds of objects are iterable, and we'll discuss some of them later on
One thing you might have noticed is that Python tends to favor looping without explicit indexing
For example,

for x in x_values:
    print(x * x)

is preferred to

for i in range(len(x_values)):
    print(x_values[i] * x_values[i])
When you compare these two alternatives, you can see why the first one is preferred
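Python also provides facilities for stepping through two sequences in parallel without indices; one is zip():

```python
countries = ('Japan', 'Korea', 'China')
cities = ('Tokyo', 'Seoul', 'Beijing')

# zip() pairs off corresponding elements
for country, city in zip(countries, cities):
    print(f'The capital of {country} is {city}')
```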
The zip() function is also useful for creating dictionaries; for example

names = ['Tom', 'John']
marks = ['E', 'F']
dict(zip(names, marks))
{'Tom': 'E', 'John': 'F'}
If we actually need the index from a list, one option is to use enumerate()
To understand what enumerate() does, consider the following example
letter_list = ['a', 'b', 'c']
for index, letter in enumerate(letter_list):
    print(f"letter_list[{index}] = '{letter}'")
letter_list[0] = 'a'
letter_list[1] = 'b'
letter_list[2] = 'c'
Comparisons
Many different kinds of expressions evaluate to one of the Boolean values (i.e., True or False)
A common type is comparisons, such as
x, y = 1, 2
x < y
True
x > y
False
1 < 2 < 3
True
1 <= 2 <= 3
True
x = 1 # Assignment
x == 2 # Comparison
False
1 != 2
True
Note that when testing conditions, we can use any valid Python expression
x = 'yes' if 42 else 'no'
x
'yes'
x = 'yes' if [] else 'no'
x
'no'
Combining Expressions
We can combine expressions using and, or and not
These are the standard logical connectives (conjunction, disjunction and negation)
1 < 2 and 'f' in 'foo'
True
1 < 2 and 'g' in 'foo'
False
1 < 2 or 'g' in 'foo'
True
not True
False
not not True
True
Remember
• P and Q is True if both are True, else False
• P or Q is False if both are False, else True
Lets talk a bit more about functions, which are all-important for good programming style
Python has a number of built-in functions that are available without import
We have already met some
max(19, 20)
20
list(range(4))    # In Python 3, range() returns a range object, so we convert it to a list
[0, 1, 2, 3]
str(22)
'22'
type(22)
int
Two more useful built-in functions are any() and all()
bools = False, True, True
all(bools)    # True if all are True and False otherwise
False
any(bools)    # False if all are False and True otherwise
True
User defined functions are important for improving the clarity of your code by
• separating different strands of logic
• facilitating code reuse
(Writing the same thing twice is almost always a bad idea)
The basics of user defined functions were discussed here
def f(x):
    if x < 0:
        return 'negative'
    return 'nonnegative'
Functions without a return statement automatically return the special Python object None
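For example, the following function prints its argument but has no return statement, so its return value is None:

```python
def show(x):
    # No return statement, so the call returns None
    print(x)

result = show(10)
print(result is None)    # True
```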
Docstrings
Python has a system for adding comments to functions, modules, etc. called docstrings
The nice thing about docstrings is that they are available at run-time
Try running this
def f(x):
    """
    This function squares its argument
    """
    return x**2
f?
Type: function
String Form:<function f at 0x2223320>
File: /home/john/temp/temp.py
Definition: f(x)
Docstring: This function squares its argument
f??
Type: function
String Form:<function f at 0x2223320>
File: /home/john/temp/temp.py
Definition: f(x)
Source:
def f(x):
    """
    This function squares its argument
    """
    return x**2
With one question mark we bring up the docstring, and with two we get the source code as well
One-Line Functions: lambda
The lambda keyword is used to create simple functions on one line
For example, the definitions

def f(x):
    return x**3

and

f = lambda x: x**3

are entirely equivalent
To see why lambda is useful, suppose we want to integrate x**3 over [0, 2] numerically; SciPy's quad function takes a function as its first argument, so we can pass an anonymous one

from scipy.integrate import quad

quad(lambda x: x**3, 0, 2)
(4.0, 4.440892098500626e-14)
Here the function created by lambda is said to be anonymous, because it was never given a name
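Anonymous functions are also handy as throwaway arguments to other higher-order functions; for instance, as a sort key (an illustrative example):

```python
words = ['bbb', 'a', 'cc']

# Sort by length rather than alphabetically
by_length = sorted(words, key=lambda w: len(w))
print(by_length)    # ['a', 'cc', 'bbb']
```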
Keyword Arguments
If you did the exercises in the previous lecture, you would have come across a statement like

plt.plot(x, 'b-', label="white noise")

In this call to Matplotlib's plot function, notice that the last argument is passed in name=argument
syntax
This is called a keyword argument, with label being the keyword
Non-keyword arguments are called positional arguments, since their meaning is determined by order
• plot(x, 'b-', label="white noise") is different from plot('b-', x,
label="white noise")
Keyword arguments are particularly useful when a function has a lot of arguments, in which case its hard to
remember the right order
You can adopt keyword arguments in user defined functions with no difficulty
The next example illustrates the syntax
The keyword argument values we supplied in the definition of f become the default values
f(2)
14
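A definition consistent with the call above might look as follows (the parameter names and default values here are illustrative assumptions, since the original listing is not reproduced):

```python
def f(x, a=4, b=5):
    # a and b are keyword arguments with default values
    return a + b * x

print(f(2))             # 4 + 5 * 2 = 14, using the defaults
print(f(2, a=1, b=1))   # 1 + 1 * 2 = 3, overriding both by keyword
```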
To learn more about the Python programming philosophy type import this at the prompt
Among other things, Python strongly favors consistency in programming style
We've all heard the saying about consistency and little minds
In programming, as in mathematics, the opposite is true
• A mathematical paper where the symbols ∪ and ∩ were reversed would be very hard to read, even if
the author told you so on the first page
In Python, the standard style is set out in PEP8
(Occasionally well deviate from PEP8 in these lectures to better match mathematical notation)
1.4.7 Exercises
Exercise 1
Part 1: Given two numeric lists or tuples x_vals and y_vals of equal length, compute their inner product
using zip()
Part 2: In one line, count the number of even numbers in 0, ..., 99
• Hint: x % 2 returns 0 if x is even, 1 otherwise
Part 3: Given pairs = ((2, 5), (4, 2), (9, 8), (12, 10)), count the number of pairs
(a, b) such that both a and b are even
Exercise 2
p(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n = \sum_{i=0}^{n} a_i x^i    (1.1)
Write a function p such that p(x, coeff) computes the value in (1.1) given a point x and a list of
coefficients coeff
Try to use enumerate() in your loop
Exercise 3
Write a function that takes a string as an argument and returns the number of capital letters in the string
Hint: 'foo'.upper() returns 'FOO'
Exercise 4
Write a function that takes two sequences seq_a and seq_b as arguments and returns True if every
element in seq_a is also an element of seq_b, else False
• By sequence we mean a list, a tuple or a string
• Do the exercise without using sets and set methods
Exercise 5
When we cover the numerical libraries, we will see they include many alternatives for interpolation and
function approximation
Nevertheless, lets write our own function approximation routine as an exercise
In particular, without using any imports, write a function linapprox that takes as arguments
• A function f mapping some interval [a, b] into R
• two scalars a and b providing the limits of this interval
• An integer n determining the number of grid points
• A number x satisfying a <= x <= b
and returns the piecewise linear interpolation of f at x, based on n evenly spaced grid points a =
point[0] < point[1] < ... < point[n-1] = b
Aim for clarity, not efficiency
1.4.8 Solutions
Exercise 1
Part 1 solution:
x_vals = [1, 2, 3]
y_vals = [1, 1, 1]
sum([x * y for x, y in zip(x_vals, y_vals)])
6
Part 2 solution:
One solution is

sum(x % 2 == 0 for x in range(100))
50

Some less natural alternatives that nonetheless help to illustrate the flexibility of list comprehensions are

len([x for x in range(100) if x % 2 == 0])
50

and

sum([1 for x in range(100) if x % 2 == 0])
50
Part 3 solution

pairs = ((2, 5), (4, 2), (9, 8), (12, 10))
sum([x % 2 == 0 and y % 2 == 0 for x, y in pairs])
2

Exercise 2

def p(x, coeff):
    return sum(a * x**i for i, a in enumerate(coeff))

p(1, (2, 4))
6
Exercise 3
def f(string):
    count = 0
    for letter in string:
        if letter == letter.upper() and letter.isalpha():
            count += 1
    return count

f('The Rain in Spain')
3
Exercise 4
Here's a solution:

def f(seq_a, seq_b):
    is_subset = True
    for a in seq_a:
        if a not in seq_b:
            is_subset = False
    return is_subset

# == test == #
print(f([1, 2], [1, 2, 3]))
print(f([1, 2, 3], [1, 2]))

True
False

Of course, if we use the sets data type then the solution is easier

def f(seq_a, seq_b):
    return set(seq_a).issubset(set(seq_b))
Exercise 5

def linapprox(f, a, b, n, x):
    """
    Evaluates the piecewise linear interpolant of f at x on the interval
    [a, b], with n evenly spaced grid points.

    Parameters
    ===========
        f : function
            The function to approximate

        x, a, b : scalars (floats or integers)
            Evaluation point and endpoints, with a <= x <= b

        n : integer
            Number of grid points

    Returns
    =========
        A float. The interpolant evaluated at x
    """
    length_of_interval = b - a
    num_subintervals = n - 1
    step = length_of_interval / num_subintervals

    # === find the first grid point larger than x === #
    point = a
    while point <= x:
        point += step

    # === x must lie between the gridpoints (point - step) and point === #
    u, v = point - step, point

    return f(u) + (x - u) * (f(v) - f(u)) / (v - u)
Contents
1.5.1 Overview
Python is a pragmatic language that blends object-oriented and procedural styles, rather than taking a purist
approach
However, at a foundational level, Python is object oriented
In particular, in Python, everything is an object
In this lecture we explain what that statement means and why it matters
1.5.2 Objects
In Python, an object is a collection of data and instructions held in computer memory that consists of
1. a type
2. a unique identity
3. data (i.e., content)
4. methods
These concepts are defined and discussed sequentially below
Type
Python provides for different types of objects, to accommodate different categories of data
For example
s = 'This is a string'
type(s)
str
x = 42
type(x)
int
'300' + 'cc'
'300cc'
300 + 400
700
'300' + 400
Here we are mixing types, and it's unclear to Python whether the user wants to
• convert '300' to an integer and then add it to 400, or
• convert 400 to string and then concatenate it with '300'
Some languages might try to guess but Python is strongly typed
• Type is important, and implicit type conversion is rare
• Python will respond instead by raising a TypeError
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-1-9b7dffd27f2d> in <module>()
----> 1 '300' + 400
TypeError: must be str, not int
To avoid the error, you need to clarify by changing the relevant type
For example,

int('300') + 400    # To add as numbers, change the string to an integer
700
Identity
In Python, each object has a unique identifier, which helps Python (and us) keep track of the object
The identity of an object can be obtained via the id() function
y = 2.5
z = 2.5
id(y)
166719660
id(z)
166719740
In this example, y and z happen to have the same value (i.e., 2.5), but they are not the same object
The identity of an object is in fact just the address of the object in memory
If we set x = 42 then we create an object of type int that contains the data 42
In fact it contains more, as the following example shows
x = 42
x
42
x.imag
0
x.__class__
int
When Python creates this integer object, it stores with it various auxiliary information, such as the imaginary
part, and the type
Any name following a dot is called an attribute of the object to the left of the dot
• e.g., imag and __class__ are attributes of x
We see from this example that objects have attributes that contain auxiliary information
They also have attributes that act like functions, called methods
These attributes are important, so lets discuss them in depth
Methods
Methods are attributes of objects that are callable (i.e., attributes that can be called as functions)
x = ['foo', 'bar']
callable(x.append)
True
callable(x.__doc__)
False
Methods typically act on the data contained in the object they belong to, or combine that data with other
data
x = ['a', 'b']
x.append('c')
s = 'This is a string'
s.upper()
'THIS IS A STRING'
s.lower()
'this is a string'
s.replace('This', 'That')
'That is a string'
x = ['a', 'b']
x[0] = 'aa' # Item assignment using square bracket notation
x
['aa', 'b']
It doesn't look like there are any methods used here, but in fact the square bracket assignment notation is just
a convenient interface to a method call
What actually happens is that Python calls the __setitem__ method, as follows
x = ['a', 'b']
x.__setitem__(0, 'aa') # Equivalent to x[0] = 'aa'
x
['aa', 'b']
(If you wanted to you could modify the __setitem__ method, so that square bracket assignment does
something totally different)
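For instance, here is a deliberately silly (purely illustrative) list subclass whose square bracket assignment ignores the supplied value:

```python
class SillyList(list):
    def __setitem__(self, index, value):
        # Ignore the requested value and always store 'haha'
        super().__setitem__(index, 'haha')

x = SillyList(['a', 'b'])
x[0] = 'aa'       # Triggers SillyList.__setitem__
print(x)          # ['haha', 'b']
```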
1.5.3 Summary
In Python, everything in memory is treated as an object, and this includes functions
When Python reads a function definition, it creates a function object and stores it in memory
The following code illustrates

def f(x): return x**2
f
<function __main__.f>
type(f)
function
id(f)
3074342220
f.__name__
'f'
We can see that f has type, identity, attributes and so on, just like any other object
It also has methods
One example is the __call__ method, which just evaluates the function
f.__call__(3)
9
import math
id(math)
3074329380
This uniform treatment of data in Python (everything is an object) helps keep the language simple and
consistent
2 The Scientific Libraries
Next we cover the third party libraries most useful for scientific work in Python
2.1 NumPy
Contents
• NumPy
– Overview
– Introduction to NumPy
– NumPy Arrays
– Operations on Arrays
– Additional Functionality
– Exercises
– Solutions
Lets be clear: the work of science has nothing whatever to do with consensus. Consensus is the
business of politics. Science, on the contrary, requires only one investigator who happens to be
right, which means that he or she has results that are verifiable by reference to the real world.
In science consensus is irrelevant. What is relevant is reproducible results. – Michael Crichton
2.1.1 Overview
85
QuantEcon.lectures-python3 PDF, Release 2018-Sep-29
References
import numpy as np
x = np.random.uniform(0, 1, size=1000000)
x.mean()
0.49990566939719772
The operations of creating the array and computing its mean are both passed out to carefully optimized
machine code compiled from C
More generally, NumPy sends operations in batches to optimized C and Fortran code
This is similar in spirit to Matlab, which provides an interface to fast Fortran routines
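To get a feel for the difference, compare a pure Python loop with the equivalent batched NumPy call (timings will vary by machine; this is just a sketch):

```python
import time
import numpy as np

x = np.random.uniform(0, 1, size=1_000_000)

# Pure Python loop: each element is boxed and dispatched individually
t0 = time.perf_counter()
total = 0.0
for value in x:
    total += value
loop_time = time.perf_counter() - t0

# Batched NumPy call: the whole sum runs in compiled C code
t0 = time.perf_counter()
total_np = x.sum()
np_time = time.perf_counter() - t0

print(f'loop: {loop_time:.4f}s  numpy: {np_time:.4f}s')
```

Both approaches give the same answer (up to floating point error), but the batched version is typically orders of magnitude faster.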
A Comment on Vectorization
The most important thing that NumPy defines is an array data type formally called a numpy.ndarray
NumPy arrays power a large proportion of the scientific Python ecosystem
To create a NumPy array containing only zeros we use np.zeros
a = np.zeros(3)
a
array([0., 0., 0.])
type(a)
numpy.ndarray
NumPy arrays are somewhat like native Python lists, except that
• Data must be homogeneous (all elements of the same type)
• These types must be one of the data types (dtypes) provided by NumPy
The most important of these dtypes are:
• float64: 64 bit floating point number
• int64: 64 bit integer
• bool: 8 bit True or False
There are also dtypes to represent complex numbers, unsigned integers, etc
On modern machines, the default dtype for arrays is float64
a = np.zeros(3)
type(a[0])
numpy.float64
a = np.zeros(3, dtype=int)
type(a[0])
numpy.int64
z = np.zeros(10)
Here z is a flat array with no dimension: it is neither a row vector nor a column vector
The dimension is recorded in the shape attribute, which is a tuple
z.shape
(10,)
Here the shape tuple has only one element, which is the length of the array (tuples with one element end
with a comma)
To give it dimension, we can change the shape attribute
z.shape = (10, 1)
z
array([[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.],
[0.]])
z = np.zeros(4)
z.shape = (2, 2)
z
array([[0., 0.],
[0., 0.]])
In the last case, to make the 2 by 2 array, we could also pass a tuple to the zeros() function, as in z =
np.zeros((2, 2))
Creating Arrays
z = np.empty(3)    # The contents are arbitrary values left over in memory
z
z = np.identity(2)
z
array([[1., 0.],
[0., 1.]])
In addition, NumPy arrays can be created from Python lists, tuples, etc. using np.array

z = np.array([10, 20])    # ndarray from Python list
z
array([10, 20])
type(z)
numpy.ndarray
z = np.array([[1, 2], [3, 4]])    # 2D array from a list of lists
z
array([[1, 2],
       [3, 4]])
See also np.asarray, which performs a similar function, but does not make a distinct copy of data already
in a NumPy array
na = np.linspace(10, 20, 2)
na is np.asarray(na) # Does not copy NumPy arrays
True
na is np.array(na)    # Does make a new copy
False
To read in the array data from a text file containing numeric data use np.loadtxt or np.genfromtxt;
see the documentation for details
Array Indexing
z = np.linspace(1, 2, 5)
z
array([1.  , 1.25, 1.5 , 1.75, 2.  ])
z[0]
1.0
z[0:2]    # Two elements, starting at element 0
array([1.  , 1.25])
z[-1]
2.0
For 2D arrays the index syntax is as follows:

z = np.array([[1, 2], [3, 4]])
z
array([[1, 2],
       [3, 4]])
z[0, 0]
1
z[0, 1]
2
And so on
Note that indices are still zero-based, to maintain compatibility with Python sequences
Columns and rows can be extracted as follows
z[0, :]
array([1, 2])
z[:, 1]
array([2, 4])
z = np.linspace(2, 4, 5)
z
array([2. , 2.5, 3. , 3.5, 4. ])

Arrays of integers can also be used to extract elements

indices = np.array((0, 2, 3))
z[indices]
array([2. , 3. , 3.5])

An array of Booleans extracts the elements at the True positions

d = np.array([0, 1, 1, 0, 0], dtype=bool)
d
array([False,  True,  True, False, False])
z[d]
array([2.5, 3. ])
An aside: all elements of an array can be set equal to one number using slice notation

z = np.empty(3)
z
array([2. , 3. , 3.5])    # Arbitrary values left in memory
z[:] = 42
z
array([42., 42., 42.])
Array Methods
a = np.array((4, 3, 2, 1))
a
array([4, 3, 2, 1])
a.sort()    # Sorts a in place
a
array([1, 2, 3, 4])
a.sum() # Sum
10
a.mean() # Mean
2.5
a.max()    # Max
4
a.cumsum()    # Cumulative sum of the elements of a
array([ 1,  3,  6, 10])
a.cumprod()    # Cumulative product of the elements of a
array([ 1,  2,  6, 24])
a.var() # Variance
1.25
a.std()    # Standard deviation
1.1180339887498949
a.shape = (2, 2)
a.T # Equivalent to a.transpose()
array([[1, 3],
[2, 4]])
z = np.linspace(2, 4, 5)
z
array([2. , 2.5, 3. , 3.5, 4. ])
z.searchsorted(2.2)    # Index of the first element >= 2.2
1
Many of the methods discussed above have equivalent functions in the NumPy namespace
a = np.array((4, 3, 2, 1))
np.sum(a)
10
np.mean(a)
2.5
Arithmetic Operations
a = np.array([1, 2, 3, 4])
b = np.array([5, 6, 7, 8])
a + b
array([ 6,  8, 10, 12])
a * b
array([ 5, 12, 21, 32])
a + 10    # Scalar operations are applied element-wise
array([11, 12, 13, 14])
a * 10
array([10, 20, 30, 40])
A = np.ones((2, 2))
B = np.ones((2, 2))
A + B
array([[2., 2.],
[2., 2.]])
A + 10
array([[11., 11.],
[11., 11.]])
A * B
array([[1., 1.],
[1., 1.]])
Matrix Multiplication
With Anaconda's scientific Python package based around Python 3.5 and above, one can use the @ symbol
for matrix multiplication, as follows:
A = np.ones((2, 2))
B = np.ones((2, 2))
A @ B
array([[2., 2.],
[2., 2.]])
(For older versions of Python and NumPy you need to use the np.dot function)
We can also use @ to take the inner product of two flat arrays
A = np.array((1, 2))
B = np.array((10, 20))
A @ B
50
In fact, we can use @ when one element is a Python list or tuple

A = np.array(((1, 2), (3, 4)))
A
array([[1, 2],
       [3, 4]])
A @ (0, 1)
array([2, 4])
a = np.array([42, 44])
a
array([42, 44])
a[-1] = 0    # Change the last element of a
a
array([42,  0])
Mutability leads to the following behavior (which can be shocking to MATLAB programmers)
a = np.random.randn(3)
a
b = a        # b is a new name for the same array, not a copy
b[0] = 0.0
a            # a has changed too, since a and b refer to the same data
Making Copies
a = np.random.randn(3)
a
b = np.copy(a)
b
b[:] = 1     # Overwrite the data in b
b
a            # a is unaffected, since b holds an independent copy
Vectorized Functions
NumPy provides versions of the standard functions log, exp, sin, etc. that act element-wise on arrays
z = np.array([1, 2, 3])
np.sin(z)
array([0.84147098, 0.90929743, 0.14112001])
This eliminates the need for explicit element-by-element loops such as

n = len(z)
y = np.empty(n)
for i in range(n):
    y[i] = np.sin(z[i])
Because they act element-wise on arrays, these functions are called vectorized functions
In NumPy-speak, they are also called ufuncs, which stands for universal functions
As we saw above, the usual arithmetic operations (+, *, etc.) also work element-wise, and combining these
with the ufuncs gives a very large set of fast element-wise functions
z
array([1, 2, 3])
(1 / np.sqrt(2 * np.pi)) * np.exp(- 0.5 * z**2)    # The standard normal density at each element of z
array([0.24197072, 0.05399097, 0.00443185])
Not all user defined functions will act element-wise
For example, the following function expects a scalar argument

def f(x):
    return 1 if x > 0 else 0

The NumPy function np.where provides a vectorized alternative:

x = np.random.randn(4)
x
np.where(x > 0, 1, 0)    # Insert 1 if x > 0 true, otherwise 0
array([0, 1, 0, 0])
Alternatively, we can vectorize the function itself with np.vectorize

f = np.vectorize(f)
f(x) # Passing the same vector x as in the previous example
array([0, 1, 0, 0])
However, this approach doesn't always obtain the same speed as a more carefully crafted vectorized function
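In this particular case, the same effect can be achieved directly with np.where, which is usually faster than np.vectorize (a sketch):

```python
import numpy as np

x = np.random.randn(4)

# 1 where the condition x > 0 holds, 0 elsewhere, computed element-wise
y = np.where(x > 0, 1, 0)
print(y)
```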
Comparisons
z = np.array([2, 3])
y = np.array([2, 3])
z == y
array([ True,  True])
y[0] = 5
z == y
array([False, True])
z != y
array([ True, False])
The situation is similar for >, <, >= and <=
We can also do comparisons against scalars

z = np.linspace(0, 10, 5)
z
array([ 0. ,  2.5,  5. ,  7.5, 10. ])
z > 3
array([False, False,  True,  True,  True])

This is particularly useful for conditional extraction

b = z > 3
b
array([False, False,  True,  True,  True])
z[b]
array([ 5. ,  7.5, 10. ])

Of course we can, and frequently do, perform this in one step

z[z > 3]
array([ 5. ,  7.5, 10. ])
Subpackages
NumPy provides some additional functionality related to scientific programming through its subpackages
We've already seen how we can generate random variables using np.random

z = np.random.randn(10000)    # Generate standard normals
y = np.random.binomial(10, 0.5, size=1000)    # 1,000 draws from Bin(10, 0.5)
y.mean()
5.096
Another commonly used subpackage is np.linalg

A = np.array([[1, 2], [3, 4]])

np.linalg.det(A)    # Compute the determinant
-2.0000000000000004

np.linalg.inv(A)    # Compute the inverse
array([[-2. ,  1. ],
       [ 1.5, -0.5]])
Much of this functionality is also available in SciPy, a collection of modules that are built on top of NumPy
Well cover the SciPy versions in more detail soon
For a comprehensive list of whats available in NumPy see this documentation
2.1.6 Exercises
Exercise 1
p(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_N x^N = \sum_{n=0}^{N} a_n x^n    (2.1)
Earlier, you wrote a simple function p(x, coeff) to evaluate (2.1) without considering efficiency
Now write a new function that does the same job, but uses NumPy arrays and array operations for its
computations, rather than any form of Python loop
(Such functionality is already implemented as np.poly1d, but for the sake of the exercise dont use this
class)
• Hint: Use np.cumprod()
Exercise 2
Let q be a NumPy array of length n with q.sum() == 1, interpreted as a probability mass function
The function below returns a draw i from range(len(q)) with probability q[i], using the inverse transform method: it partitions [0, 1] into subintervals with lengths q[0], q[1], ..., draws a uniform U, and returns the index of the subinterval containing U

from numpy.random import uniform

def sample(q):
    a = 0.0
    U = uniform(0, 1)
    for i in range(len(q)):
        if a < U <= a + q[i]:
            return i
        a = a + q[i]

If you can't see how this works, try thinking through the flow for a simple example, such as q = [0.25, 0.75]
It helps to sketch the intervals on paper
Your exercise is to speed it up using NumPy, avoiding explicit loops
• Hint: Use np.searchsorted and np.cumsum
If you can, implement the functionality as a class called DiscreteRV, where
• the data for an instance of the class is the vector of probabilities q
• the class has a draw() method, which returns one draw according to the algorithm described above
If you can, write the method so that draw(k) returns k draws from q
Exercise 3
Recall the empirical cumulative distribution function (ecdf) introduced earlier
Your task is to modify the ECDF class from QuantEcon's ecdf.py so that it also has a plot method for plotting the ecdf over an interval [a, b]
2.1.7 Solutions
Exercise 1

def p(x, coef):
    X = np.ones_like(coef)
    X[1:] = x
    y = np.cumprod(X)    # y = [1, x, x**2, ...]
    return coef @ y

Let's test it
coef = np.ones(3)
print(coef)
print(p(1, coef))
# For comparison
q = np.poly1d(coef)
print(q(1))
[1. 1. 1.]
3.0
3.0
Exercise 2
class DiscreteRV:
    """
    Generates an array of draws from a discrete random variable with vector of
    probabilities given by q.
    """

    def __init__(self, q):
        self.q = q
        self.Q = np.cumsum(q)    # Q[i] = q[0] + ... + q[i]

    def draw(self, k=1):
        # Returns k draws; each value i occurs with probability q[i]
        return self.Q.searchsorted(np.random.uniform(0, 1, size=k))
The logic is not obvious, but if you take your time and read it slowly, you will understand
There is a problem here, however
Suppose that q is altered after an instance of DiscreteRV is created, for example by
q = (0.1, 0.9)
d = DiscreteRV(q)
d.q = (0.5, 0.5)
The problem is that Q does not change accordingly, and Q is the data used in the draw method
To deal with this, one option is to compute Q every time the draw method is called
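A sketch of that alternative, recomputing the cumulative sum inside draw so that later changes to q are respected:

```python
import numpy as np

class DiscreteRV:
    """
    Draws from a discrete distribution with probability vector q.
    The cumulative sum is recomputed on every call to draw(), so later
    changes to q are respected.
    """
    def __init__(self, q):
        self.q = q

    def draw(self, k=1):
        Q = np.cumsum(self.q)                  # Recomputed each call
        u = np.random.uniform(0, 1, size=k)
        return Q.searchsorted(u)

d = DiscreteRV((0.1, 0.9))
d.q = (1.0, 0.0)     # Change the distribution after construction
print(d.draw(5))     # Draws now reflect the new q (all 0 here)
```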
Exercise 3
"""
Modifies ecdf.py from QuantEcon to add in a plot method
"""
class ECDF:
"""
One-dimensional empirical distribution function given a vector of
observations.
Parameters
----------
observations : array_like
An array of observations
Attributes
----------
observations : array_like
An array of observations
"""
Parameters
----------
x : scalar(float)
The x at which the ecdf is evaluated
Returns
-------
scalar(float)
Fraction of the sample less than x
"""
return np.mean(self.observations <= x)
    def plot(self, a=None, b=None):
        """
        Plot the ecdf on the interval [a, b]

        Parameters
        ----------
        a : scalar(float), optional(default=None)
            Lower end point of the plot interval
        b : scalar(float), optional(default=None)
            Upper end point of the plot interval
        """
        # === choose a reasonable interval if [a, b] not specified === #
        if a is None:
            a = self.observations.min() - self.observations.std()
        if b is None:
            b = self.observations.max() + self.observations.std()

        # === generate plot === #
        x_vals = np.linspace(a, b, num=100)
        plt.plot(x_vals, [self(x) for x in x_vals])
        plt.show()
X = np.random.randn(1000)
F = ECDF(X)
F.plot()
2.2 Matplotlib
Contents
• Matplotlib
– Overview
– The APIs
– More Features
– Further Reading
– Exercises
– Solutions
2.2.1 Overview
We've already generated quite a few figures in these lectures using Matplotlib
Matplotlib is an outstanding graphics library, designed for scientific computing, with
• high quality 2D and 3D plots
• output in all the usual formats (PDF, PNG, etc.)
• LaTeX integration
• fine grained control over all aspects of presentation
• animation, etc.
Here's the kind of easy example you might find in introductory treatments

import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 10, 200)
y = np.sin(x)

plt.plot(x, y, 'b-', linewidth=2)
plt.show()
This is simple and convenient, but also somewhat limited and un-Pythonic
For example, in the function calls, a lot of objects get created and passed around without making themselves
known to the programmer
Python programmers tend to prefer a more explicit style of programming (run import this in a code
block and look at the second line)
This leads us to the alternative, object oriented Matplotlib API
Here's the code corresponding to the preceding figure using the object-oriented API
fig, ax = plt.subplots()
ax.plot(x, y, 'b-', linewidth=2)
plt.show()
Tweaks
fig, ax = plt.subplots()
ax.plot(x, y, 'r-', linewidth=2, label='sine function', alpha=0.6)
ax.legend()
plt.show()
We've also used alpha to make the line slightly transparent, which makes it look smoother
The location of the legend can be changed by replacing ax.legend() with ax.legend(loc='upper
center')
fig, ax = plt.subplots()
ax.plot(x, y, 'r-', linewidth=2, label='sine function', alpha=0.6)
ax.legend(loc='upper center')
plt.show()
fig, ax = plt.subplots()
ax.plot(x, y, 'r-', linewidth=2, label='$y=\sin(x)$', alpha=0.6)
ax.legend(loc='upper center')
plt.show()
fig, ax = plt.subplots()
ax.plot(x, y, 'r-', linewidth=2, label='$y=\sin(x)$', alpha=0.6)
ax.legend(loc='upper center')
ax.set_yticks([-1, 0, 1])
ax.set_title('Test plot')
plt.show()
Matplotlib has a huge array of functions and features, which you can discover over time as you have need
for them
We mention just a few
from scipy.stats import norm
from numpy.random import uniform

fig, ax = plt.subplots()
x = np.linspace(-4, 4, 150)
for i in range(3):
    m, s = uniform(-1, 1), uniform(1, 2)
    y = norm.pdf(x, loc=m, scale=s)
    current_label = f'$\mu = {m:.2}$'
    ax.plot(x, y, linewidth=2, alpha=0.6, label=current_label)
ax.legend()
plt.show()
Multiple Subplots
num_rows, num_cols = 3, 2
fig, axes = plt.subplots(num_rows, num_cols, figsize=(10, 12))
for i in range(num_rows):
    for j in range(num_cols):
        m, s = uniform(-1, 1), uniform(1, 2)
        x = norm.rvs(loc=m, scale=s, size=100)
        axes[i, j].hist(x, alpha=0.6, bins=20)
        t = f'$\mu = {m:.2}, \quad \sigma = {s:.2}$'
        axes[i, j].set(title=t, xticks=[-4, 0, 4], yticks=[])
plt.show()
3D Plots
A Customizing Function
Perhaps you will find a set of customizations that you regularly use
Suppose we usually prefer our axes to go through the origin, and to have a grid
Here's a nice example from Matthew Doty of how the object-oriented API can be used to build a custom
subplots function that implements these changes
Read carefully through the code and see if you can follow whats going on
def subplots():
    "Custom subplots with axes through the origin"
    fig, ax = plt.subplots()

    # Set the axes through the origin
    for spine in ['left', 'bottom']:
        ax.spines[spine].set_position('zero')
    for spine in ['right', 'top']:
        ax.spines[spine].set_color('none')

    ax.grid()
    return fig, ax
Here's the figure it produces (note axes through the origin and the grid)
2.2.5 Exercises
Exercise 1
2.2.6 Solutions
Exercise 1
θ_vals = np.linspace(0, 2, 10)
x = np.linspace(0, 5, 150)
fig, ax = plt.subplots(figsize=(10, 6))

for θ in θ_vals:
    ax.plot(x, np.cos(np.pi * θ * x) * np.exp(- x))

plt.show()
2.3 SciPy
Contents
• SciPy
– SciPy versus NumPy
– Statistics
– Roots and Fixed Points
– Optimization
– Integration
– Linear Algebra
– Exercises
– Solutions
SciPy builds on top of NumPy to provide common tools for scientific programming, such as
• linear algebra
• numerical integration
• interpolation
• optimization
• distributions and random number generation
• signal processing
• etc., etc
SciPy is a package that contains various tools that are built on top of NumPy, using its array data type and
related functionality
In fact, when we import SciPy we also get NumPy, as can be seen from the SciPy initialization file
__all__ = []
__all__ += _num.__all__
__all__ += ['randn', 'rand', 'fft', 'ifft']
del _num
# Remove the linalg imported from numpy so that the scipy.linalg package can
# be imported.
del linalg
__all__.remove('linalg')
However, it's more common and better practice to use NumPy functionality explicitly
import numpy as np
a = np.identity(3)
2.3.2 Statistics
Recall that np.random provides functions for generating random variables; for example, here are three draws from the beta distribution

np.random.beta(5, 5, size=3)
f(x; a, b) = \frac{x^{a-1} (1 - x)^{b-1}}{\int_0^1 u^{a-1} (1 - u)^{b-1} \, du}    (0 ≤ x ≤ 1)    (2.2)
Sometimes we need access to the density itself, or the cdf, the quantiles, etc.
For this we can use scipy.stats, which provides all of this functionality as well as random number
generation in a single consistent interface
Here's an example of usage

from scipy.stats import beta
import matplotlib.pyplot as plt

q = beta(5, 5)        # Beta(a, b), with a = b = 5
obs = q.rvs(2000)     # 2000 observations
grid = np.linspace(0.01, 0.99, 100)

fig, ax = plt.subplots()
ax.hist(obs, bins=40, normed=True)
ax.plot(grid, q.pdf(grid), 'k-', linewidth=2)
plt.show()
In this code we created a so-called rv_frozen object, via the call q = beta(5, 5)
The frozen part of the notation implies that q represents a particular distribution with a particular set of
parameters
Once we've done so, we can then generate random numbers, evaluate the density, etc., all from this fixed
distribution
q.cdf(0.4)    # Cumulative distribution function
0.2665676800000002
q.pdf(0.4)    # Density function
2.0901888000000004
q.ppf(0.8)    # Quantile (inverse cdf) function
0.63391348346427079
q.mean()
0.5
The general syntax for creating these objects is

identifier = scipy.stats.distribution_name(shape_parameters)
where distribution_name is one of the distribution names in scipy.stats
There are also two keyword arguments, loc and scale, which, following our example above, are called as
identifier = scipy.stats.distribution_name(shape_parameters,
loc=c, scale=d)
These transform the original random variable X into Y = c + dX
The methods rvs, pdf, cdf, etc. are transformed accordingly
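For example, starting from a standard normal X, loc=c and scale=d give Y = c + dX, so the mean and standard deviation shift accordingly:

```python
from scipy.stats import norm

# Y = 1 + 2 X, where X is standard normal
y = norm(loc=1.0, scale=2.0)

print(y.mean())   # 1.0
print(y.std())    # 2.0
```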
Before finishing this section, we note that there is an alternative way of calling the methods described above
For example, the previous code can be replaced by

from scipy.stats import beta

obs = beta.rvs(5, 5, size=2000)
grid = np.linspace(0.01, 0.99, 100)

fig, ax = plt.subplots()
ax.hist(obs, bins=40, normed=True)
ax.plot(grid, beta.pdf(grid, 5, 5), 'k-', linewidth=2)
plt.show()
scipy.stats also contains statistical routines such as linregress, which computes a simple linear regression

from scipy.stats import linregress

x = np.random.randn(200)
y = 2 * x + 0.1 * np.random.randn(200)
gradient, intercept, r_value, p_value, std_err = linregress(x, y)
gradient, intercept
(1.9962554379482236, 0.008172822032671799)
Consider finding the root of the function f(x) = sin(4(x - 1/4)) + x + x^20 - 1 on [0, 1], plotted below

f = lambda x: np.sin(4 * (x - 1/4)) + x + x**20 - 1
x = np.linspace(0, 1, 100)

plt.figure(figsize=(10, 8))
plt.plot(x, f(x))
plt.axhline(ls='--', c='k')
plt.show()
Bisection
One of the most common algorithms for numerical root finding is bisection
To understand the idea, recall the well known game where
• Player A thinks of a secret number between 1 and 100
• Player B asks if it's less than 50
  – If yes, B asks if it's less than 25
  – If no, B asks if it's less than 75
And so on: each guess halves the interval containing the secret number
Bisection applies the same idea to root finding, repeatedly halving an interval on which the function changes sign
In fact SciPy provides its own bisection function, which we now test using the function f defined in (2.3)

from scipy.optimize import bisect

bisect(f, 0, 1)
0.40829350427936706
Let's investigate this using the same function f, first looking at potential instability

from scipy.optimize import newton

newton(f, 0.2)    # Start the search at initial condition x = 0.2
0.40829350427935679
newton(f, 0.7)    # Start the search at x = 0.7 instead
0.70017000000002816

The second initial condition leads to a wrong answer: newton has failed to converge to the root
%timeit bisect(f, 0, 1)
Hybrid Methods
So far we have seen that the Newton-Raphson method is fast but not robust
The bisection algorithm, on the other hand, is robust but relatively slow
This illustrates a general principle
• If you have specific knowledge about your function, you might be able to exploit it to generate efficiency
• If not, then the algorithm choice involves a trade-off between speed of convergence and robustness
In practice, most default algorithms for root finding, optimization and fixed points use hybrid methods
These methods typically combine a fast method with a robust method in the following manner:
1. Attempt to use a fast method
2. Check diagnostics
3. If diagnostics are bad, then switch to a more robust algorithm
In scipy.optimize, the function brentq is such a hybrid method, and a good default

from scipy.optimize import brentq

brentq(f, 0, 1)
0.40829350427936706
%timeit brentq(f, 0, 1)
Here the correct solution is found and the speed is almost the same as newton
Fixed Points
SciPy has a function for finding (scalar) fixed points too, scipy.optimize.fixed_point

from scipy.optimize import fixed_point

fixed_point(lambda x: x**2, 10.0)    # 10.0 is an initial guess
array(1.0)
If you don't get good results, you can always switch back to the brentq root finder, since the fixed point of
a function f is the root of g(x) := x − f(x)
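For instance, to compute the fixed point of cos (an illustrative choice of f), we can find the root of g(x) = x - cos(x) with brentq:

```python
import numpy as np
from scipy.optimize import brentq

f = lambda x: np.cos(x)     # An illustrative map with a fixed point near 0.739
g = lambda x: x - f(x)      # A fixed point of f is a root of g

x_star = brentq(g, 0, 1)
print(x_star)               # Approximately 0.739085
```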
2.3.4 Optimization
A good default for univariate minimization over a bounded interval is fminbound

from scipy.optimize import fminbound

fminbound(lambda x: x**2, -1, 2)    # Search in [-1, 2]
0.0
Multivariate Optimization
Multivariate local optimizers include minimize, fmin, fmin_powell, fmin_cg, fmin_bfgs, and
fmin_ncg
Constrained multivariate local optimizers include fmin_l_bfgs_b, fmin_tnc, fmin_cobyla
See the documentation for details
2.3.5 Integration
Most numerical integration methods work by computing the integral of an approximating polynomial
The resulting error depends on how well the polynomial fits the integrand, which in turn depends on how
regular the integrand is
In SciPy, the relevant module for numerical integration is scipy.integrate
A good default for univariate integration is quad

from scipy.integrate import quad

integral, error = quad(lambda x: x**2, 0, 1)
integral
0.33333333333333337
In fact quad is an interface to a very standard numerical integration routine in the Fortran library QUADPACK
It uses adaptive Gauss-Kronrod quadrature, subdividing the interval until the estimated error is sufficiently small
There are other options for univariate integration; a useful one is fixed_quad, which is fast and hence
works well inside for loops
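A quick check of fixed_quad (fixed-order Gaussian quadrature), which is exact here because the integrand is a low-order polynomial:

```python
from scipy.integrate import fixed_quad

# Order-10 Gaussian quadrature of x**2 over [0, 1]
result, _ = fixed_quad(lambda x: x**2, 0, 1, n=10)
print(result)    # 1/3, up to floating point error
```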
There are also functions for multivariate integration
See the documentation for more details
2.3.6 Linear Algebra
We saw that NumPy provides a module for linear algebra called linalg
SciPy also provides a module for linear algebra with the same name
The latter is not an exact superset of the former, but overall it has more functionality
We leave you to investigate the set of available routines
2.3.7 Exercises
Exercise 1
2.3.8 Solutions
Exercise 1
2.4 Numba
Contents
• Numba
– Overview
– Where are the Bottlenecks?
– Vectorization
– Numba
2.4.1 Overview
In our lecture on NumPy we learned one method to improve speed and efficiency in numerical work
That method, called vectorization, involved sending array processing operations in batch to efficient low
level code
This clever idea dates back to Matlab, which uses it extensively
Unfortunately, vectorization is limited and has several weaknesses
One weakness is that it is highly memory intensive
To understand what Numba does and why, we need some background knowledge
Let's start by thinking about higher level languages, such as Python
These languages are optimized for humans
This means that the programmer can leave many details to the runtime environment
• specifying variable types
• memory allocation/deallocation, etc.
The upside is that, compared to low-level languages, Python is typically faster to write, less error prone and
easier to debug
The downside is that Python is harder to optimize (that is, turn into fast machine code) than languages like
C or Fortran
Indeed, the standard implementation of Python (called CPython) cannot match the speed of compiled
languages such as C or Fortran
Does that mean that we should just switch to C or Fortran for everything?
The answer is no, no and one hundred times no
High productivity languages should be chosen over high speed languages for the great majority of scientific
computing tasks
This is because
1. Of any given program, relatively few lines are ever going to be time-critical
2. For those lines of code that are time-critical, we can achieve C-like speed using a combination of
NumPy and Numba
This lecture provides a guide
Let's start by trying to understand why high level languages like Python are slower than compiled code
Dynamic Typing
a, b = 10, 10
a + b
20
Even for this simple operation, the Python interpreter has a fair bit of work to do
For example, in the statement a + b, the interpreter has to know which operation to invoke
If a and b are strings, then a + b requires string concatenation
a, b = 'foo', 'bar'
a + b
'foobar'
a, b = ['foo'], ['bar']
a + b
['foo', 'bar']
(We say that the operator + is overloaded: its action depends on the type of the objects on which it acts)
As a result, Python must check the type of the objects and then call the correct operation
This involves substantial overheads
Static Types
#include <stdio.h>

int main(void) {
    int i;
    int sum = 0;
    for (i = 1; i <= 10; i++) {
        sum = sum + i;
    }
    printf("sum = %d\n", sum);
    return 0;
}
Data Access
In C or Fortran, these integers would typically be stored in an array, which is a simple data structure for
storing homogeneous data
Such an array is stored in a single contiguous block of memory
• In modern computers, memory addresses are allocated to each byte (one byte = 8 bits)
• For example, a 64 bit integer is stored in 8 bytes of memory
• An array of n such integers occupies 8n consecutive memory slots
Moreover, the compiler is made aware of the data type by the programmer
• In this case 64 bit integers
Hence, each successive data point can be accessed by shifting forward in memory space by a known and
fixed amount
• In this case 8 bytes
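The standard library's array type makes this concrete: each element of a 'q' (signed 64-bit integer) array occupies exactly 8 adjacent bytes, so element i lives at a fixed offset of 8i from the start of the block.

```python
from array import array

a = array('q', range(10))   # 'q' = signed 64-bit integers, stored contiguously

a.itemsize                  # bytes per element: 8
a.buffer_info()             # (address of first byte, number of elements)
```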
2.4.3 Vectorization
Operations on Arrays
import random
import numpy as np
import quantecon as qe

# First code block: a pure Python loop
qe.util.tic()
n = 100_000
sum = 0
for i in range(n):
    x = random.uniform(0, 1)
    sum += x**2
qe.util.toc()

# Second code block: the same computation, vectorized
qe.util.tic()
n = 100_000
x = np.random.uniform(0, 1, n)
np.sum(x**2)
qe.util.toc()

The second code block, which achieves the same thing as the first, runs much faster
The reason is that in the second implementation we have broken the loop down into three basic operations
1. draw n uniforms
2. square them
3. sum them
These are sent as batch operations to optimized machine code
Apart from minor overheads associated with sending data back and forth, the result is C or Fortran-like
speed
When we run batch operations on arrays like this, we say that the code is vectorized
Universal Functions
Many functions provided by NumPy are so-called universal functions (also called ufuncs)
This means that they
• map scalars into scalars, as expected
• map arrays into arrays, acting element-wise
For example, np.cos is a ufunc:
np.cos(1.0)
0.54030230586813977
np.cos(np.linspace(0, 1, 3))
f(x, y) = cos(x^2 + y^2) / (1 + x^2 + y^2)    and    a = 3
Here's a plot of f

import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.axes3d import Axes3D
from matplotlib import cm

def f(x, y):
    return np.cos(x**2 + y**2) / (1 + x**2 + y**2)

xgrid = np.linspace(-3, 3, 50)
ygrid = xgrid
x, y = np.meshgrid(xgrid, ygrid)

fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(x, y,
                f(x, y),
                rstride=2, cstride=2,
                cmap=cm.jet,
                alpha=0.7,
                linewidth=0.25)
ax.set_zlim(-0.5, 1.0)
plt.show()
grid = np.linspace(-3, 3, 1000)

qe.tic()
m = -np.inf
for x in grid:
    for y in grid:
        z = f(x, y)
        if z > m:
            m = z
qe.toc()

qe.tic()
x, y = np.meshgrid(grid, grid)
np.max(f(x, y))
qe.toc()
In the vectorized version, all the looping takes place in compiled code
As you can see, the second version is much faster
(Well make it even faster again below, when we discuss Numba)
2.4.4 Numba
Prerequisites
An Example
xt+1 = 4xt (1 − xt )
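The generating function qm used in the plots and timings below was dropped in extraction; a plain-Python version consistent with the decorated definition given further down is:

```python
import numpy as np

def qm(x0, n):
    "Iterate the quadratic map x_{t+1} = 4 x_t (1 - x_t) for n steps."
    x = np.empty(n + 1)
    x[0] = x0
    for t in range(n):
        x[t+1] = 4 * x[t] * (1 - x[t])
    return x
```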
Here's the plot of a typical trajectory, starting from x0 = 0.1, with t on the x-axis
x = qm(0.1, 250)
fig, ax = plt.subplots(figsize=(10, 6))
ax.plot(x, 'b-', lw=2, alpha=0.8)
ax.set_xlabel('time', fontsize=16)
plt.show()
Let's time and compare identical function calls across these two versions:
qe.util.tic()
qm(0.1, int(10**5))
time1 = qe.util.toc()
qe.util.tic()
qm_numba(0.1, int(10**5))
time2 = qe.util.toc()
The first execution is relatively slow because of JIT compilation (see below)
The next time, and all subsequent times, it runs much faster:
qe.util.tic()
qm_numba(0.1, int(10**5))
time2 = qe.util.toc()
182.8322188449848
Decorator Notation
If you don't need a separate name for the numbafied version of qm, you can just put @jit before the function

@jit
def qm(x0, n):
    x = np.empty(n+1)
    x[0] = x0
    for t in range(n):
        x[t+1] = 4 * x[t] * (1 - x[t])
    return x
Numba attempts to generate fast machine code using the infrastructure provided by the LLVM Project
It does this by inferring type information on the fly
As you can imagine, this is easier for simple Python objects (simple scalar data types, such as floats, integers,
etc.)
Numba also plays well with NumPy arrays, which it treats as typed memory regions
In an ideal setting, Numba can infer all necessary type information
This allows it to generate native machine code, without having to call the Python runtime environment
In such a setting, Numba will be on par with machine code from low level languages
When Numba cannot infer all type information, some Python objects are given generic object status, and
some code is generated using the Python runtime
In this second setting, Numba typically provides only minor speed gains or none at all
Hence, it's prudent when using Numba to focus on speeding up small, time-critical snippets of code
This will give you much better performance than blanketing your Python programs with @jit statements
a = 1

@jit
def add_x(x):
    return a + x

print(add_x(10))
11
a = 2
print(add_x(10))
11
Notice that changing the global had no effect on the value returned by the function
When Numba compiles machine code for functions, it treats global variables as constants to ensure type
stability
Numba can also be used to create custom ufuncs with the @vectorize decorator
To illustrate the advantage of using Numba to vectorize a function, we return to a maximization problem
discussed above
@vectorize
def f_vec(x, y):
    return np.cos(x**2 + y**2) / (1 + x**2 + y**2)
qe.tic()
np.max(f_vec(x, y))
qe.toc()
qe.tic()
np.max(f_vec(x, y))
qe.toc()
2.5 Other Scientific Libraries

Contents
2.5.1 Overview
In this lecture we review some other scientific libraries that are useful for economic research and analysis
We have, however, already picked most of the low hanging fruit in terms of economic research
Hence you should feel free to skip this lecture on first pass
2.5.2 Cython
Like Numba, Cython provides an approach to generating fast compiled code that can be used from Python
As was the case with Numba, a key problem is the fact that Python is dynamically typed
As you'll recall, Numba solves this problem (where possible) by inferring type
Cython's approach is different: programmers add type definitions directly to their Python code
As such, the Cython language can be thought of as Python with type definitions
In addition to a language specification, Cython is also a language translator, transforming Cython code into
optimized C and C++ code
Cython also takes care of building language extensions (the wrapper code that interfaces between the
resulting compiled code and Python)
Important Note:
In what follows code is executed in a Jupyter notebook
This is to take advantage of a Cython cell magic that makes Cython particularly easy to use
Some modifications are required to run the code outside a notebook
• See the book Cython by Kurt Smith or the online documentation
A First Example
∑_{i=0}^{n} α^i = (1 − α^{n+1}) / (1 − α)
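The pure Python geo_prog benchmarked below was dropped in extraction; a sketch consistent with the Cython version that follows is:

```python
def geo_prog(alpha, n):
    "Sum the geometric series 1 + alpha + alpha**2 + ... + alpha**n by looping."
    current = 1.0
    total = current
    for i in range(n):
        current = current * alpha
        total += current
    return total

geo_prog(0.5, 10)   # matches the closed form (1 - 0.5**11) / (1 - 0.5)
```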
Python vs C
If you're not familiar with C, the main thing you should take notice of is the type definitions
• int means integer
• double means double precision floating point number
• the double in double geo_prog(... indicates that the function will return a double
Not surprisingly, the C code is faster than the Python code
A Cython Implementation
%load_ext Cython
%%cython
def geo_prog_cython(double alpha, int n):
    cdef double current = 1.0
    cdef double sum = current
    cdef int i
    for i in range(n):
        current = current * alpha
        sum = sum + current
    return sum
Here cdef is a Cython keyword indicating a variable declaration, and is followed by a type
The %%cython line at the top is not actually Cython code: it's a Jupyter cell magic indicating the start of
Cython code
After executing the cell, you can call the function geo_prog_cython from within Python
What you are in fact calling is compiled C code with a Python call interface
import quantecon as qe
qe.util.tic()
geo_prog(0.99, int(10**6))
qe.util.toc()
qe.util.tic()
geo_prog_cython(0.99, int(10**6))
qe.util.toc()
Let's go back to the first problem that we worked with: generating the iterates of the quadratic map
xt+1 = 4xt (1 − xt )
The problem of computing iterates and returning a time series requires us to work with arrays
The natural array type to work with is NumPy arrays
Here's a Cython implementation that initializes, populates and returns a NumPy array

%%cython
import numpy as np

def qm_cython_first_pass(double x0, int n):
    x = np.zeros(n+1, float)
    x[0] = x0
    for t in range(n):
        x[t+1] = 4.0 * x[t] * (1 - x[t])
    return np.asarray(x)
If you run this code and time it, you will see that its performance is disappointing: nothing like the speed
gain we got from Numba
qe.util.tic()
qm_cython_first_pass(0.1, int(10**5))
qe.util.toc()
This example was also computed in the Numba lecture, and you can see Numba is around 90 times faster
The reason is that working with NumPy arrays incurs substantial Python overheads
We can do better by using Cythons typed memoryviews, which provide more direct access to arrays in
memory
When using them, the first step is to create a NumPy array
Next, we declare a memoryview and bind it to the NumPy array
Here's an example:

%%cython
import numpy as np
from numpy cimport float_t

def qm_cython(double x0, int n):
    x_np_array = np.zeros(n+1, dtype=float)   # Create a NumPy array
    cdef float_t [:] x = x_np_array           # Memoryview on that array
    cdef int t
    x[0] = x0
    for t in range(n):
        x[t+1] = 4.0 * x[t] * (1 - x[t])
    return np.asarray(x)
Here
• cimport pulls in some compile-time information from NumPy
• cdef float_t [:] x = x_np_array creates a memoryview on the NumPy array
x_np_array
• the return statement uses np.asarray(x) to convert the memoryview back to a NumPy array
Let's time it:
qe.util.tic()
qm_cython(0.1, int(10**5))
qe.util.toc()
Summary
Cython requires more expertise than Numba, and is a little more fiddly in terms of getting good performance
In fact, it's surprising how difficult it is to beat the speed improvements provided by Numba
Nonetheless,
2.5.3 Joblib
Caching
Perhaps, like us, you sometimes run a long computation that simulates a model at a given set of parameters
to generate a figure, say, or a table
20 minutes later you realize that you want to tweak the figure and now you have to do it all again
What caching will do is automatically store results at each parameterization
With Joblib, results are compressed and stored on file, and automatically served back up to you when you
repeat the calculation
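The mechanism is the same memoization idea available in the standard library; here is a minimal in-memory sketch (Joblib adds on-disk persistence and hashing of array arguments, which this toy version lacks):

```python
from functools import lru_cache

calls = []   # records each time the function body actually runs

@lru_cache(maxsize=None)
def slow_mean(x0, n):
    # Stand-in for an expensive simulation
    calls.append((x0, n))
    total, x = 0.0, x0
    for _ in range(n):
        x = 4 * x * (1 - x)
        total += x
    return total / n

slow_mean(0.2, 1000)   # computed
slow_mean(0.2, 1000)   # served from the cache; the body does not run again
```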
An Example
Let's look at a toy example, related to the quadratic map model discussed above
Let's say we want to generate a long trajectory from a certain initial condition x0 and see what fraction of
the sample is below 0.1
(We'll omit JIT compilation or other speed ups for simplicity)
Here's our code
import numpy as np
from joblib import Memory

memory = Memory(cachedir='./joblib_cache')

@memory.cache
def qm(x0, n):
    x = np.empty(n+1)
    x[0] = x0
    for t in range(n):
        x[t+1] = 4 * x[t] * (1 - x[t])
    return np.mean(x < 0.1)
We are using joblib to cache the result of calling qm at a given set of parameters
With the argument cachedir='./joblib_cache', any call to this function results in both the input values and
output values being stored in a subdirectory joblib_cache of the present working directory
(In UNIX shells, . refers to the present working directory)
The first time we call the function with a given set of parameters we see some extra output that notes
information being cached
qe.util.tic()
n = int(1e7)
qm(0.2, n)
qe.util.toc()
The next time we call the function with the same set of parameters, the result is returned almost
instantaneously
qe.util.tic()
n = int(1e7)
qm(0.2, n)
qe.util.toc()
0.204758079524
TOC: Elapsed: 0.0009872913360595703 seconds.
2.5.4 Other Options

There are in fact many other approaches to speeding up your Python code
One is interfacing with Fortran
If you are comfortable writing Fortran you will find it very easy to create extension modules from Fortran
code using F2Py
F2Py is a Fortran-to-Python interface generator that is particularly simple to use
Robert Johansson provides a very nice introduction to F2Py, among other things
Recently, a Jupyter cell magic for Fortran has been developed; you might want to give it a try
2.5.5 Exercises
Exercise 1
For now, let's just concentrate on simulating a very simple example of such a chain
Suppose that the volatility of returns on an asset can be in one of two regimes: high or low
The transition probabilities across states are as follows
For example, let the period length be one month, and suppose the current state is high
We see from the graph that the state next month will be
• high with probability 0.8
• low with probability 0.2
Your task is to simulate a sequence of monthly volatility states according to this rule
Set the length of the sequence to n = 100000 and start in the high state
Implement a pure Python version, a Numba version and a Cython version, and compare speeds
To test your code, evaluate the fraction of time that the chain spends in the low state
If your code is correct, it should be about 2/3
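As a sanity check on that 2/3 figure: for a two-state chain, the stationary probability of the low state is q/(p + q), where q is the high-to-low probability (0.2 from the text) and p is the low-to-high probability. The transition graph is not reproduced above; p = 0.1 is the value consistent with the 2/3 answer.

```python
p = 0.1   # low → high (assumed; consistent with the 2/3 answer)
q = 0.2   # high → low (given in the exercise)

pi_low = q / (p + q)   # stationary probability of the low state, = 2/3
```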
2.5.6 Solutions
Exercise 1
We let
• 0 represent low
• 1 represent high
def compute_series(n):
    x = np.empty(n, dtype=int)
    x[0] = 1  # Start in state 1 (high)
    U = np.random.uniform(0, 1, size=n)
    for t in range(1, n):
        current_x = x[t-1]
        if current_x == 0:
            x[t] = U[t] < 0.1   # low → high w.p. 0.1 (implied by the 2/3 answer)
        else:
            x[t] = U[t] < 0.8   # high → high w.p. 0.8 (→ low w.p. 0.2)
    return x
Let's run this code and check that the fraction of time spent in the low state is about 0.666
n = 100000
x = compute_series(n)
print(np.mean(x == 0)) # Fraction of time x is in state 0
0.66951
qe.util.tic()
compute_series(n)
qe.util.toc()
compute_series_numba = jit(compute_series)
x = compute_series_numba(n)
print(np.mean(x == 0))
0.66764
qe.util.tic()
compute_series_numba(n)
qe.util.toc()
%load_ext Cython

%%cython
import numpy as np
from numpy cimport int_t, float_t

def compute_series_cy(int n):
    # Create NumPy arrays first, then memoryviews on them
    x_np = np.empty(n, dtype=int)
    U_np = np.random.uniform(0, 1, size=n)
    cdef int_t [:] x = x_np
    cdef float_t [:] U = U_np
    cdef int t, current_x
    x[0] = 1
    for t in range(1, n):
        current_x = x[t-1]
        if current_x == 0:
            x[t] = U[t] < 0.1   # low → high w.p. 0.1
        else:
            x[t] = U[t] < 0.8   # high → high w.p. 0.8
    return np.asarray(x)
compute_series_cy(10)
array([1, 0, 0, 0, 0, 0, 0, 0, 0, 0])
x = compute_series_cy(n)
print(np.mean(x == 0))
0.66927
qe.util.tic()
compute_series_cy(n)
qe.util.toc()
THREE
Contents
3.1.1 Overview
When computer programs are small, poorly written code is not overly costly
But more data, more sophisticated models, and more computer power are enabling us to take on more
challenging problems that involve writing longer programs
For such programs, investment in good coding practices will pay high returns
The main payoffs are higher productivity and faster code
In this lecture, we review some elements of good coding practice
We also touch on modern developments in scientific computing such as just in time compilation and how
they affect good program design
QuantEcon.lectures-python3 PDF, Release 2018-Sep-29
kt+1 = s kt^α + (1 − δ) kt    (3.1)

Here
• kt is capital at time t and
• s, α, δ are parameters (savings, a productivity parameter and depreciation)
For each parameterization, the code
1. sets k0 = 1
2. iterates using (3.1) to produce a sequence k0 , k1 , k2 . . . , kT
3. plots the sequence
The plots will be grouped into three subfigures
In each subfigure, two parameters are held fixed while another varies
import numpy as np
import matplotlib.pyplot as plt

# The subplot setup and first-panel parameter values below are reconstructed;
# treat the numbers as illustrative
fig, axes = plt.subplots(3, 1, figsize=(8, 14))
k = np.empty(50)
s, δ = 0.3, 0.1
α = (0.25, 0.33, 0.45)

for j in range(3):
    k[0] = 1
    for t in range(49):
        k[t+1] = s * k[t]**α[j] + (1 - δ) * k[t]
    axes[0].plot(k, 'o-', label=rf"$\alpha = {α[j]},\; s = {s},\; \delta={δ}$")

axes[0].grid(lw=0.2)
axes[0].set_ylim(0, 18)
axes[0].set_xlabel('time')
axes[0].set_ylabel('capital')
axes[0].legend(loc='upper left', frameon=True, fontsize=14)

α = 0.33
s = (0.3, 0.4, 0.5)

for j in range(3):
    k[0] = 1
    for t in range(49):
        k[t+1] = s[j] * k[t]**α + (1 - δ) * k[t]
    axes[1].plot(k, 'o-', label=rf"$\alpha = {α},\; s = {s[j]},\; \delta={δ}$")

axes[1].grid(lw=0.2)
axes[1].set_xlabel('time')
axes[1].set_ylabel('capital')
axes[1].set_ylim(0, 18)
axes[1].legend(loc='upper left', frameon=True, fontsize=14)

s = 0.3
δ = (0.05, 0.1, 0.15)   # illustrative values

for j in range(3):
    k[0] = 1
    for t in range(49):
        k[t+1] = s * k[t]**α + (1 - δ[j]) * k[t]
    axes[2].plot(k, 'o-', label=rf"$\alpha = {α},\; s = {s},\; \delta={δ[j]}$")

axes[2].set_ylim(0, 18)
axes[2].set_xlabel('time')
axes[2].set_ylabel('capital')
axes[2].grid(lw=0.2)
axes[2].legend(loc='upper left', frameon=True, fontsize=14)

plt.show()
There are usually many different ways to write a program that accomplishes a given task
For small programs, like the one above, the way you write code doesn't matter too much
But if you are ambitious and want to produce useful things, you'll write medium to large programs too
In those settings, coding style matters a great deal
Fortunately, lots of smart people have thought about the best way to write code
Here are some basic precepts
If you look at the code above, you'll see numbers like 50 and 49 and 3 scattered through the code
These kinds of numeric literals in the body of your code are sometimes called magic numbers
This is not a compliment
While numeric literals are not all evil, the numbers shown in the program above should certainly be replaced
by named constants
For example, the code above could declare the variable time_series_length = 50
Then in the loops, 49 should be replaced by time_series_length - 1
The advantages are:
• the meaning is much clearer throughout
• to alter the time series length, you only need to change one value
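Applied to the program above, the change is small. Here is a sketch of the first loop with the magic numbers named (the parameter values are placeholders):

```python
time_series_length = 50     # Named constant replacing the magic numbers 50 and 49

s, α, δ = 0.3, 0.33, 0.1    # placeholder parameter values

k = [0.0] * time_series_length
k[0] = 1
for t in range(time_series_length - 1):
    k[t+1] = s * k[t]**α + (1 - δ) * k[t]
```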
Sure, global variables (i.e., names assigned to values outside of any function or class) are convenient
Rookie programmers typically use global variables with abandon as we once did ourselves
But global variables are dangerous, especially in medium to large size programs, since
• they can affect what happens in any part of your program
• they can be changed by any function
This makes it much harder to be certain about what some small part of a given piece of code actually does
Here's a useful discussion on the topic
While the odd global in small scripts is no big deal, we recommend that you teach yourself to avoid them
(We'll discuss how just below)
JIT Compilation
Fortunately, we can easily avoid the evils of global variables and WET code
• WET stands for "we love typing" and is the opposite of DRY
We can do this by making frequent use of functions or classes
In fact, functions and classes are designed specifically to help us avoid shaming ourselves by repeating code
or excessive use of global variables
Both can be useful, and in fact they work well with each other
We'll learn more about these topics over time
(Personal preference is part of the story too)
What's really important is that you use one or the other or both
Here's some code that reproduces the plot above with better coding style
It uses a function to avoid repetition
Note also that
• global variables are quarantined by collecting them together at the end, not the start, of the program
• magic numbers are avoided
• the loop at the end where the actual work is done is short and relatively simple
ax.grid(lw=0.2)
ax.set_xlabel('time')
ax.set_ylabel('capital')
ax.set_ylim(0, 18)
ax.legend(loc='upper left', frameon=True, fontsize=14)
plt.show()
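The function at the heart of that program is only partially visible above; one way to structure it is sketched below (the name and signature are our own, not necessarily the original's):

```python
def plot_capital_series(ax, s, α, δ, time_series_length=50):
    "Compute k_{t+1} = s k_t**α + (1 - δ) k_t from k_0 = 1 and plot it on ax."
    k = [1.0] * time_series_length
    for t in range(time_series_length - 1):
        k[t+1] = s * k[t]**α + (1 - δ) * k[t]
    ax.plot(k, 'o-', label=rf"$\alpha = {α},\; s = {s},\; \delta = {δ}$")
    return k
```

The final loop then just varies one parameter at a time, calling this function once per curve, so the update rule appears in exactly one place.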
3.1.5 Summary
We recommend that you cultivate good habits and style even when you write relatively short programs
Contents
3.2.1 Overview
Key Concepts
As discussed in an earlier lecture, in the OOP paradigm, data and functions are bundled together into objects
An example is a Python list, which not only stores data, but also knows how to sort itself, etc.
x = [1, 5, 4]
x.sort()
x
[1, 4, 5]
As we now know, sort is a function that is part of the list object and hence called a method
If we want to make our own types of objects we need to use class definitions
A class definition is a blueprint for a particular class of objects (e.g., lists, strings or complex numbers)
It describes
• What kind of data the class stores
• What methods it has for acting on these data
An object or instance is a realization of the class, created from the blueprint
• Each instance has its own unique data
• Methods set out in the class definition act on this (and other) data
In Python, the data and methods of an object are collectively referred to as attributes
Attributes are accessed via dotted attribute notation
• object_name.data
• object_name.method_name()
In the example
x = [1, 5, 4]
x.sort()
x.__class__
list
• x is an object or instance, created from the definition for Python lists, but with its own particular data
• x.sort() and x.__class__ are two attributes of x
• dir(x) can be used to view all the attributes of x
OOP is useful for the same reason that abstraction is useful: for recognizing and exploiting common
structure
For example,
• a Markov chain consists of a set of states and a collection of transition probabilities for moving across
states
• a general equilibrium theory consists of a commodity space, preferences, technologies, and an
equilibrium definition
• a game consists of a list of players, lists of actions available to each player, player payoffs as functions
of all players actions, and a timing protocol
These are all abstractions that collect together objects of the same type
Recognizing common structure allows us to employ common tools
In economic theory, this might be a proposition that applies to all games of a certain type
In Python, this might be a method thats useful for all Markov chains (e.g., simulate)
When we use OOP, the simulate method is conveniently bundled together with the Markov chain object
class Consumer:

    def __init__(self, w):
        "Initialize consumer with w dollars of wealth"
        self.wealth = w

    def earn(self, y):
        "The consumer earns y dollars"
        self.wealth += y

    def spend(self, x):
        "The consumer spends x dollars if feasible"
        new_wealth = self.wealth - x
        if new_wealth < 0:
            print("Insufficent funds")
        else:
            self.wealth = new_wealth
Usage
c1 = Consumer(10)  # Create instance with initial wealth 10
c1.earn(15)
c1.spend(100)
Insufficent funds
We can of course create multiple instances each with its own data
c1 = Consumer(10)
c2 = Consumer(12)
c2.spend(4)
c2.wealth
8
c1.wealth
10
c1.__dict__
{'wealth': 10}
c2.__dict__
{'wealth': 8}
When we access or set attributes we're actually just modifying the dictionary maintained by the instance
Self
If you look at the Consumer class definition again you'll see the word self throughout the code
The rules with self are that
• Any instance data should be prepended with self
– e.g., the earn method references self.wealth rather than just wealth
• Any method defined within the class should have self as its first argument
– e.g., def earn(self, y) rather than just def earn(y)
• Any method referenced within the class should be called as self.method_name
There are no examples of the last rule in the preceding code but we will see some shortly
Details
In this section we look at some more formal details related to classes and self
• You might wish to skip to the next section on first pass of this lecture
• You can return to these details after you've familiarized yourself with more examples
Methods actually live inside a class object formed when the interpreter reads the class definition
Note how the three methods __init__, earn and spend are stored in the class object
Consider the following code
c1 = Consumer(10)
c1.earn(10)
c1.wealth
20
When you call earn via c1.earn(10) the interpreter passes the instance c1 and the argument 10 to
Consumer.earn
In fact the following are equivalent
• c1.earn(10)
• Consumer.earn(c1, 10)
In the function call Consumer.earn(c1, 10) note that c1 is the first argument
Recall that in the definition of the earn method, self is the first parameter
The end result is that self is bound to the instance c1 inside the function call
That's why the statement self.wealth += y inside earn ends up modifying c1.wealth
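This equivalence is easy to verify with any small class (here a stripped-down stand-in for Consumer):

```python
class Agent:

    def __init__(self, w):
        self.wealth = w

    def earn(self, y):
        self.wealth += y   # self is bound to the instance the method is called on

a = Agent(10)
a.earn(10)           # bound-method call
Agent.earn(a, 10)    # identical: pass the instance explicitly

a.wealth             # 30
```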
For our next example, let's write a simple class to implement the Solow growth model
The Solow growth model is a neoclassical growth model where the amount of capital stock per capita kt
evolves according to the rule
kt+1 = (s z kt^α + (1 − δ) kt) / (1 + n)    (3.2)
Here
• s is an exogenously given savings rate
• z is a productivity parameter
• α is capitals share of income
• n is the population growth rate
• δ is the depreciation rate
The steady state of the model is the k that solves (3.2) when kt+1 = kt = k
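Setting kt+1 = kt = k in (3.2) and solving gives k = ((s z) / (n + δ))^(1/(1 − α)). A quick numerical check, iterating the update rule from an arbitrary starting point (parameter values match the defaults in the class below):

```python
n, s, δ, α, z = 0.05, 0.25, 0.1, 0.3, 2.0

k = 1.0
for _ in range(1000):                          # iterate the update rule (3.2)
    k = (s * z * k**α + (1 - δ) * k) / (1 + n)

k_star = ((s * z) / (n + δ))**(1 / (1 - α))    # closed-form steady state
```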
Here's a class that implements this model
Some points of interest in the code are
• An instance maintains a record of its current capital stock in the variable self.k
• The h method implements the right hand side of (3.2)
• The update method uses h to update capital as per (3.2)
– Notice how inside update the reference to the local method h is self.h
The methods steady_state and generate_sequence are fairly self explanatory
class Solow:
    r"""
    Implements the Solow growth model with the update rule

        k_{t+1} = (s z k_t^α + (1 - δ) k_t) / (1 + n)

    """
    def __init__(self, n=0.05,  # population growth rate
                       s=0.25,  # savings rate
                       δ=0.1,   # depreciation rate
                       α=0.3,   # share of capital
                       z=2.0,   # productivity
                       k=1.0):  # current capital stock

        self.n, self.s, self.δ, self.α, self.z = n, s, δ, α, z
        self.k = k

    def h(self):
        "Evaluate the h function"
        # Unpack parameters (get rid of self to simplify notation)
        n, s, δ, α, z = self.n, self.s, self.δ, self.α, self.z
        # Apply the update rule
        return (s * z * self.k**α + (1 - δ) * self.k) / (1 + n)

    def update(self):
        "Update the current state (i.e., the capital stock)."
        self.k = self.h()

    def steady_state(self):
        "Compute the steady state value of capital."
        # Unpack parameters (get rid of self to simplify notation)
        n, s, δ, α, z = self.n, self.s, self.δ, self.α, self.z
        # Compute and return steady state
        return ((s * z) / (n + δ))**(1 / (1 - α))

    def generate_sequence(self, t):
        "Generate and return a time series of length t"
        path = []
        for i in range(t):
            path.append(self.k)
            self.update()
        return path
Here's a little program that uses the class to compute time series from two different initial conditions
The common steady state is also plotted for comparison
s1 = Solow()
s2 = Solow(k=8.0)

T = 60
fig, ax = plt.subplots(figsize=(9, 6))

# Plot the common steady state value of capital
ax.plot([s1.steady_state()] * T, 'k-', label='steady state')

# Plot time series for each economy
for s in s1, s2:
    lb = f'capital series from initial state {s.k}'
    ax.plot(s.generate_sequence(T), 'o-', lw=2, alpha=0.6, label=lb)

ax.legend()
plt.show()
Example: A Market
Next let's write a class for a simple one good market where agents are price takers
The market consists of the following objects:
• A linear demand curve Q = ad − bd p
• A linear supply curve Q = az + bz (p − t)
Here
• p is price paid by the consumer, Q is quantity, and t is a per unit tax
• Other symbols are demand and supply parameters
The class provides methods to compute various values of interest, including competitive equilibrium price
and quantity, tax revenue raised, consumer surplus and producer surplus
Here's our implementation
from scipy.integrate import quad

class Market:

    def __init__(self, ad, bd, az, bz, tax):
        """
        Set up market parameters.  All parameters are scalars.
        """
        self.ad, self.bd, self.az, self.bz, self.tax = ad, bd, az, bz, tax
        if ad < az:
            raise ValueError('Insufficient demand.')

    def price(self):
        "Return equilibrium price"
        return (self.ad - self.az + self.bz * self.tax) / (self.bd + self.bz)

    def quantity(self):
        "Compute equilibrium quantity"
        return self.ad - self.bd * self.price()

    def consumer_surp(self):
        "Compute consumer surplus"
        # == Compute area under inverse demand function == #
        integrand = lambda x: (self.ad / self.bd) - (1 / self.bd) * x
        area, error = quad(integrand, 0, self.quantity())
        return area - self.price() * self.quantity()

    def producer_surp(self):
        "Compute producer surplus"
        # == Compute area above inverse supply curve, excluding tax == #
        integrand = lambda x: -(self.az / self.bz) + (1 / self.bz) * x
        area, error = quad(integrand, 0, self.quantity())
        return (self.price() - self.tax) * self.quantity() - area

    def taxrev(self):
        "Compute tax revenue"
        return self.tax * self.quantity()

    def inverse_demand(self, x):
        "Compute inverse demand"
        return self.ad / self.bd - (1 / self.bd) * x

    def inverse_supply(self, x):
        "Compute inverse supply curve"
        return -(self.az / self.bz) + (1 / self.bz) * x + self.tax

    def inverse_supply_no_tax(self, x):
        "Compute inverse supply curve without tax"
        return -(self.az / self.bz) + (1 / self.bz) * x
Here's a short program that uses this class to plot an inverse demand curve together with inverse supply
curves with and without taxes
import numpy as np

# Baseline ad, bd, az, bz, tax (values consistent with the deadweight
# loss of 1.125 reported below)
baseline_params = 15, .5, -2, .5, 3
m = Market(*baseline_params)

q_max = m.quantity() * 2
q_grid = np.linspace(0.0, q_max, 100)
pd = m.inverse_demand(q_grid)
ps = m.inverse_supply(q_grid)
psno = m.inverse_supply_no_tax(q_grid)
fig, ax = plt.subplots()
ax.plot(q_grid, pd, lw=2, alpha=0.6, label='demand')
ax.plot(q_grid, ps, lw=2, alpha=0.6, label='supply')
ax.plot(q_grid, psno, '--k', lw=2, alpha=0.6, label='supply without tax')
ax.set_xlabel('quantity', fontsize=14)
ax.set_xlim(0, q_max)
ax.set_ylabel('price', fontsize=14)
ax.legend(loc='lower right', frameon=False, fontsize=14)
plt.show()
def deadw(m):
    "Computes deadweight loss for market m."
    # == Create analogous market with no tax == #
    m_no_tax = Market(m.ad, m.bd, m.az, m.bz, 0)
    # == Compare surplus, return difference == #
    surp1 = m_no_tax.consumer_surp() + m_no_tax.producer_surp()
    surp2 = m.consumer_surp() + m.producer_surp() + m.taxrev()
    return surp1 - surp2

deadw(m)  # Deadweight loss at the baseline parameterization

1.125
Example: Chaos
Let's look at one more example, related to chaotic dynamics in nonlinear systems
One simple transition rule that can generate complex dynamics is the logistic map
Let's write a class for generating time series from this model
Here's one implementation
class Chaos:
    """
    Models the dynamical system with :math:`x_{t+1} = r x_t (1 - x_t)`
    """
    def __init__(self, x0, r):
        """
        Initialize with state x0 and parameter r
        """
        self.x, self.r = x0, r

    def update(self):
        "Apply the map to update state."
        self.x = self.r * self.x * (1 - self.x)

    def generate_sequence(self, n):
        "Generate and return a sequence of length n."
        path = []
        for i in range(n):
            path.append(self.x)
            self.update()
        return path
ch = Chaos(0.1, 4.0)
ts_length = 250
fig, ax = plt.subplots()
ax.set_xlabel('$t$', fontsize=14)
ax.set_ylabel('$x_t$', fontsize=14)
x = ch.generate_sequence(ts_length)
ax.plot(range(ts_length), x, 'bo-', alpha=0.5, lw=2, label='$x_t$')
plt.show()
fig, ax = plt.subplots()
ch = Chaos(0.1, 4)
r = 2.5
while r < 4:
ch.r = r
t = ch.generate_sequence(1000)[950:]
ax.plot([r] * len(t), t, 'b.', ms=0.6)
r = r + 0.005
ax.set_xlabel('$r$', fontsize=16)
plt.show()
For r a little bit higher than 3.45, the time series settles down to oscillating among the four values plotted
on the vertical axis
There are also narrow windows of r, such as near r = 3.83, where the time series settles down to oscillating among three values
3.2.4 Special Methods

Python provides special methods with which some neat tricks can be performed
For example, recall that lists and tuples have a notion of length, and that this length can be queried via the
len function
x = (10, 20)
len(x)
If you want to provide a return value for the len function when applied to your user-defined object, use the
__len__ special method
class Foo:

    def __len__(self):
        return 42
Now we get
f = Foo()
len(f)
42
class Foo:

    def __call__(self, x):
        return x + 42

f = Foo()
f(8)  # Exactly equivalent to f.__call__(8)
50
3.2.5 Exercises
Exercise 1
The empirical cumulative distribution function (ecdf) corresponding to a sample {Xi }ni=1 is defined as
Fn(x) := (1/n) ∑_{i=1}^{n} 1{Xi ≤ x}    (x ∈ R)    (3.4)
Here 1{Xi ≤ x} is an indicator function (one if Xi ≤ x and zero otherwise) and hence Fn (x) is the fraction
of the sample that falls below x
The Glivenko–Cantelli Theorem states that, provided that the sample is iid, the ecdf Fn converges to the
true distribution function F
Implement Fn as a class called ECDF, where
• A given sample {Xi}_{i=1}^{n} is the instance data, stored as self.observations
• The class implements a __call__ method that returns Fn (x) for any x
Your code should work as follows (modulo randomness)
0.29
0.479
Exercise 2
p(x) = a0 + a1 x + a2 x^2 + · · · + aN x^N = ∑_{n=0}^{N} an x^n    (x ∈ R)    (3.5)
The instance data for the class Polynomial will be the coefficients (in the case of (3.5), the numbers
a0 , . . . , aN )
Provide methods that
1. Evaluate the polynomial (3.5), returning p(x) for any x
2. Differentiate the polynomial, replacing the original coefficients with those of its derivative p′
Avoid using any import statements
3.2.6 Solutions
Exercise 1
class ECDF:

    def __init__(self, observations):
        self.observations = observations

    def __call__(self, x):
        counter = 0.0
        for obs in self.observations:
            if obs <= x:
                counter += 1
        return counter / len(self.observations)

# == test == #
from random import uniform
samples = [uniform(0, 1) for i in range(10)]
F = ECDF(samples)
print(F(0.5))
0.5
0.486
Exercise 2
class Polynomial:

    def __init__(self, coefficients):
        """
        Creates an instance p of the Polynomial class, where
        p(x) = coefficients[0] + coefficients[1] * x + ...
        """
        self.coefficients = coefficients

    def __call__(self, x):
        "Evaluate the polynomial at x."
        y = 0
        for i, a in enumerate(self.coefficients):
            y += a * x**i
        return y

    def differentiate(self):
        "Reset self.coefficients to those of p' instead of p."
        new_coefficients = []
        for i, a in enumerate(self.coefficients):
            new_coefficients.append(i * a)
        # Remove the first element, which is zero
        del new_coefficients[0]
        # And reset coefficients data to new values
        self.coefficients = new_coefficients
        return new_coefficients
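As a self-contained check of the exercise (the class below is a minimal restatement for illustration, not the lecture's exact code), evaluation and differentiation behave as follows:

```python
class Polynomial:

    def __init__(self, coefficients):
        # p(x) = coefficients[0] + coefficients[1] * x + ...
        self.coefficients = coefficients

    def __call__(self, x):
        return sum(a * x**i for i, a in enumerate(self.coefficients))

    def differentiate(self):
        # Coefficients of p' are i * a_i for i = 1, ..., N
        self.coefficients = [i * a
                             for i, a in enumerate(self.coefficients)][1:]

p = Polynomial([2, 1, 3])   # p(x) = 2 + x + 3x²
print(p(2))                 # 2 + 2 + 12 = 16
p.differentiate()           # now p(x) = 1 + 6x
print(p(2))                 # 1 + 12 = 13
```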
3.3.1 Overview
This lecture creates nonstochastic and stochastic versions of Paul Samuelson's celebrated multiplier-accelerator model [Sam39]
In doing so, we extend the example of the Solow model class in our second OOP lecture
Our objectives are to
• provide a more detailed example of OOP and classes
• review a famous model
• review linear difference equations, both deterministic and stochastic
Samuelson's Model
Samuelson used a second-order linear difference equation to represent a model of national output based on
three components:
• a national output identity asserting that national output is the sum of consumption plus investment
plus government purchases
• a Keynesian consumption function asserting that consumption at time t is equal to a constant times
national output at time t − 1
• an investment accelerator asserting that investment at time t equals a constant called the accelerator
coefficient times the difference in output between period t − 1 and t − 2
• the idea that consumption plus investment plus government purchases constitute aggregate demand,
which automatically calls forth an equal amount of aggregate supply
(To read about linear difference equations see here or chapter IX of [Sar87])
Samuelson used the model to analyze how particular values of the marginal propensity to consume and the
accelerator coefficient might give rise to transient business cycles in national output
Possible dynamic properties include
• smooth convergence to a constant level of output
• damped business cycles that eventually converge to a constant level of output
• persistent business cycles that neither dampen nor explode
Later we present an extension that adds a random shock to the right side of the national income identity
representing random fluctuations in aggregate demand
This modification makes national output become governed by a second-order stochastic linear difference
equation that, with appropriate parameter values, gives rise to recurrent irregular business cycles.
(To read about stochastic linear difference equations see chapter XI of [Sar87])
3.3.2 Details
Ct = aYt−1 + γ    (3.6)

It = b(Yt−1 − Yt−2)    (3.7)

Yt = Ct + It + Gt    (3.8)

• The parameter a is people's marginal propensity to consume out of income - equation (3.6) asserts that
people consume a fraction a ∈ (0, 1) of each additional dollar of income
• The parameter b > 0 is the investment accelerator coefficient - equation (3.7) asserts that people
invest in physical capital when income is increasing and disinvest when it is decreasing
Equations (3.6), (3.7), and (3.8) imply the following second-order linear difference equation for national
income:

Yt = (a + b)Yt−1 − bYt−2 + (γ + Gt)

or

Yt = ρ1 Yt−1 + ρ2 Yt−2 + (γ + Gt)    (3.9)

where ρ1 = (a + b) and ρ2 = −b
We'll ordinarily set the parameters (a, b) so that, starting from an arbitrary pair of initial conditions
(Ȳ−1, Ȳ−2), national income Yt converges to a constant value as t becomes large
We are interested in studying
• the transient fluctuations in Yt as it converges to its steady state level
• the rate at which it converges to a steady state level
The deterministic version of the model described so far (meaning that no random shocks hit aggregate
demand) has only transient fluctuations
We can convert the model to one that has persistent irregular fluctuations by adding a random shock to
aggregate demand
We create a random or stochastic version of the model by adding a random process of shocks or disturbances
{σϵt} to the right side of equation (3.9), leading to the second-order scalar linear stochastic
difference equation:

Yt = (a + b)Yt−1 − bYt−2 + (γ + Gt) + σϵt    (3.10)

or

Yt = ρ1 Yt−1 + ρ2 Yt−2 + (γ + Gt) + σϵt    (3.11)
To discover the properties of the solution of (3.11), it is useful first to form the characteristic polynomial
for (3.11):
z² − ρ1 z − ρ2    (3.12)
We want to find the two zeros (a.k.a. roots) – namely λ1 , λ2 – of the characteristic polynomial
These are two special values of z, say z = λ1 and z = λ2 , such that if we set z equal to one of these values
in expression (3.12), the characteristic polynomial (3.12) equals zero:
z² − ρ1 z − ρ2 = (z − λ1)(z − λ2) = 0    (3.13)
λ1 = r e^{iω},    λ2 = r e^{−iω}
where r is the amplitude of the complex number and ω is its angle or phase
These can also be represented as
λ1 = r(cos(ω) + i sin(ω))
λ2 = r(cos(ω) − i sin(ω))
(To read about the polar form, see here)
Given initial conditions Y−1 , Y−2 , we want to generate a solution of the difference equation (3.11)
It can be represented as
Yt = λ1^t c1 + λ2^t c2
where c1 and c2 are constants that depend on the two initial conditions and on ρ1 , ρ2
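We can verify numerically that any sequence of this form satisfies the homogeneous part of the difference equation; the parameter values and constants below are purely illustrative.

```python
import cmath

ρ1, ρ2 = 1.7, -0.9                 # illustrative values that give complex roots
d = cmath.sqrt(ρ1**2 + 4 * ρ2)
λ1, λ2 = (ρ1 + d) / 2, (ρ1 - d) / 2
c1, c2 = 2 + 1j, 2 - 1j            # conjugate constants keep Y_t real

def Y(t):
    # General solution: Y_t = λ1^t c1 + λ2^t c2 (real part)
    return (c1 * λ1**t + c2 * λ2**t).real

# Each λ satisfies λ² = ρ1 λ + ρ2, so Y_t = ρ1 Y_{t-1} + ρ2 Y_{t-2}
for t in range(2, 10):
    assert abs(Y(t) - (ρ1 * Y(t - 1) + ρ2 * Y(t - 2))) < 1e-9
print("recursion satisfied")
```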
When the roots are complex, it is useful to pursue the following calculations
Notice that
The only way that Yt can be a real number for each t is if c1 + c2 is a real number and c1 − c2 is an imaginary
number. This happens only when c1 and c2 are complex conjugates, in which case they can be written in the
polar forms
c1 = v e^{iθ},    c2 = v e^{−iθ}
So we can write
where v and θ are constants that must be chosen to satisfy initial conditions for Y−1 , Y−2
where c̃1 , c̃2 is a pair of constants chosen to satisfy the given initial conditions for Y−1 , Y−2
This formula shows that when the roots are complex, Yt displays oscillations with period p̌ = 2π/ω and
damping factor r. We say that p̌ is the period because in that amount of time the cosine wave cos(ωt + θ)
goes through exactly one complete cycle. (Draw a cosine function to convince yourself of this)
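A quick numerical check of the period and damping formulas (with illustrative parameter values): for conjugate roots r e^{±iω}, the modulus r is the damping factor and 2π/ω is the period.

```python
import cmath
import math

ρ1, ρ2 = 1.7, -0.9                     # illustrative values with complex roots
λ1 = (ρ1 + cmath.sqrt(ρ1**2 + 4 * ρ2)) / 2
r, ω = cmath.polar(λ1)                 # modulus (damping) and angle (frequency)
period = 2 * math.pi / ω
# For conjugate roots the product of roots is −ρ2, so r² = −ρ2
assert abs(r - math.sqrt(-ρ2)) < 1e-12
print(r, period)
```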
Remark: Following [Sam39], we want to choose the parameters a, b of the model so that the absolute values
(of the possibly complex) roots λ1 , λ2 of the characteristic polynomial are both strictly less than one:
Remark: When both roots λ1 , λ2 of the characteristic polynomial have absolute values strictly less than
one, the absolute value of the larger one governs the rate of convergence to the steady state of the
nonstochastic version of the model
We can use a LinearStateSpace instance to do various things that we did above with our homemade function
and class
Among other things, we show by example that the eigenvalues of the matrix A that we use to form the
instance of the LinearStateSpace class for the Samuelson model equal the roots of the characteristic
polynomial (3.12) for the Samuelson multiplier accelerator model
Here is the formula for the matrix A in the linear state space system in the case that government expenditures
are a constant G:
    [ 1      0    0  ]
A = [ γ + G  ρ1   ρ2 ]
    [ 0      1    0  ]
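A minimal sketch (with made-up parameter values) confirming that iterating x_{t+1} = A x_t on the state x_t = [1, Y_t, Y_{t−1}]′ reproduces the second-order recursion with constant government spending:

```python
import numpy as np

γ, G, a, b = 10.0, 10.0, 0.8, 0.2      # illustrative parameters
ρ1, ρ2 = a + b, -b
A = np.array([[1.0,   0.0, 0.0],
              [γ + G, ρ1,  ρ2],
              [0.0,   1.0, 0.0]])

ys = [50.0, 100.0]                     # illustrative Y_{-1}, Y_0
x = np.array([1.0, ys[-1], ys[-2]])    # state [1, Y_t, Y_{t-1}]
for _ in range(25):
    x = A @ x                          # state-space step
    ys.append(ρ1 * ys[-1] + ρ2 * ys[-2] + γ + G)   # direct recursion
    assert abs(x[1] - ys[-1]) < 1e-8
print("state-space iteration matches the recursion")
```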
3.3.3 Implementation
import matplotlib.pyplot as plt

def param_plot():
    """Plot regions of the (ρ1, ρ2) parameter plane"""
    fig, ax = plt.subplots(figsize=(10, 6))
    # Set axis
    xmin, ymin = -3, -2
    xmax, ymax = -xmin, -ymin
    plt.axis([xmin, xmax, ymin, ymax])
    return fig
param_plot()
plt.show()
The graph portrays regions in which the (λ1 , λ2 ) root pairs implied by the (ρ1 = (a+b), ρ2 = −b) difference
equation parameter pairs in the Samuelson model are such that:
• (λ1 , λ2 ) are complex with modulus less than 1 - in this case, the {Yt } sequence displays damped
oscillations
• (λ1 , λ2 ) are both real, but one is strictly greater than 1 - this leads to explosive growth
• (λ1 , λ2 ) are both real, but one is strictly less than −1 - this leads to explosive oscillations
• (λ1 , λ2 ) are both real and both are less than 1 in absolute value - in this case, there is smooth
convergence to the steady state without damped cycles
Later we'll present the graph with a red mark showing the particular point implied by the setting of (a, b)
def categorize_solution(ρ1, ρ2):
    """This function takes values of ρ1 and ρ2 and uses them
    to classify the type of solution
    """
    discriminant = ρ1 ** 2 + 4 * ρ2
    if ρ2 > 1 + ρ1 or ρ2 < -1:
        print('Explosive oscillations')
    elif ρ1 + ρ2 > 1:
        print('Explosive growth')
    elif discriminant < 0:
        print('Roots are complex with modulus less than one; '
              'therefore damped oscillations')
    else:
        print('Roots are real and absolute values are less than one; '
              'therefore get smooth convergence to a steady state')
categorize_solution(1.3, -.4)
Roots are real and absolute values are less than one; therefore get smooth
convergence to a steady state
def plot_y(function=None):
"""function plots path of Y_t"""
plt.subplots(figsize=(12, 8))
plt.plot(function)
plt.xlabel('Time $t$')
plt.ylabel('$Y_t$', rotation=0)
plt.grid()
plt.show()
The following function calculates roots of the characteristic polynomial using high school algebra
(We'll calculate the roots in other ways later)
The function also plots a Yt starting from initial conditions that we set
from cmath import sqrt

def y_nonstochastic(y_0=100, y_1=80, α=0.92, β=0.5, γ=10, n=80):
    """Computes the roots of the characteristic polynomial and
    generates a path for national income Y_t
    """
    roots = []

    ρ1 = α + β
    ρ2 = -β

    print(f'ρ_1 is {ρ1}')
    print(f'ρ_2 is {ρ2}')

    discriminant = ρ1 ** 2 + 4 * ρ2

    if discriminant == 0:
        roots.append(ρ1 / 2)
        print('Single real root: ')
        print(''.join(str(roots)))
    elif discriminant > 0:
        roots.append((ρ1 + sqrt(discriminant).real) / 2)
        roots.append((ρ1 - sqrt(discriminant).real) / 2)
        print('Two real roots: ')
        print(''.join(str(roots)))
    else:
        roots.append((ρ1 + sqrt(discriminant)) / 2)
        roots.append((ρ1 - sqrt(discriminant)) / 2)
        print('Two complex roots: ')
        print(''.join(str(roots)))

    if all(abs(root) < 1 for root in roots):
        print('Absolute values of roots are less than one')
    else:
        print('Absolute values of roots are not less than one')

    # Generate the path of Y_t from the difference equation
    y_t = [y_0, y_1]
    for t in range(2, n):
        y_t.append(ρ1 * y_t[t - 1] + ρ2 * y_t[t - 2] + γ)

    return y_t
plot_y(y_nonstochastic())
ρ_1 is 1.42
ρ_2 is -0.5
Two real roots:
[0.7740312423743284, 0.6459687576256715]
Absolute values of roots are less than one
The next cell writes code that takes as inputs the modulus r and phase ϕ of a conjugate pair of complex
numbers in polar form
λ1 = r exp(iϕ), λ2 = r exp(−iϕ)
• The code assumes that these two complex numbers are the roots of the characteristic polynomial
• It then reverse engineers (a, b) and (ρ1 , ρ2 ), pairs that would generate those roots
import cmath
import math

def f(r, ϕ):
    """
    Takes modulus r and angle ϕ of complex number r exp(jϕ)
    and creates ρ1 and ρ2 of characteristic polynomial for which
    r exp(jϕ) and r exp(-jϕ) are complex roots.
    """
    g1 = cmath.rect(r, ϕ)   # Generate two complex roots
    g2 = cmath.rect(r, -ϕ)
    ρ1 = g1 + g2            # Implied ρ1, ρ2
    ρ2 = -g1 * g2
    b = -ρ2                 # Reverse engineer a and b
    a = ρ1 - b
    return ρ1, ρ2, a, b

r = .95
period = 10   # Length of cycle in units of time
ϕ = 2 * math.pi / period

ρ1, ρ2, a, b = f(r, ϕ)
print(f'a, b = {a} {b}')
print(f'ρ1, ρ2 = {ρ1} {ρ2}')

a, b = (0.6346322893124001+0j) (0.9024999999999999-0j)
ρ1, ρ2 = (1.5371322893124+0j) (-0.9024999999999999+0j)
ρ1 = ρ1.real
ρ2 = ρ2.real
ρ1, ρ2
(1.5371322893124, -0.9024999999999999)
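The reverse-engineering logic rests on two identities: for conjugate roots r e^{±iϕ}, the sum of the roots is ρ1 = 2r cos ϕ and the negated product is ρ2 = −r². A quick check with cmath:

```python
import cmath
import math

r, period = 0.95, 10
ϕ = 2 * math.pi / period
λ1, λ2 = cmath.rect(r, ϕ), cmath.rect(r, -ϕ)   # conjugate roots
ρ1, ρ2 = (λ1 + λ2).real, (-λ1 * λ2).real
assert abs(ρ1 - 2 * r * math.cos(ϕ)) < 1e-12   # sum of roots
assert abs(ρ2 - (-r**2)) < 1e-12               # negated product of roots
print(ρ1, ρ2)
```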
Here we'll use numpy to compute the roots of the characteristic polynomial

r1, r2 = np.roots([1, -ρ1, -ρ2])
p1 = cmath.polar(r1)
p2 = cmath.polar(r2)
print(f'r, ϕ = {r} {ϕ}')
print(f'p1, p2 = {p1} {p2}')
print(f'a, b = {a} {b}')
print(f'ρ1, ρ2 = {ρ1} {ρ2}')

r, ϕ = 0.95 0.6283185307179586
p1, p2 = (0.95, 0.6283185307179586) (0.95, -0.6283185307179586)
a, b = (0.6346322893124001+0j) (0.9024999999999999-0j)
ρ1, ρ2 = 1.5371322893124 -0.9024999999999999
# Useful constants
ρ1 = α + β
ρ2 = -β
categorize_solution(ρ1, ρ2)
return y_t
plot_y(y_nonstochastic())
Roots are complex with modulus less than one; therefore damped oscillations
Roots are [ 0.85+0.27838822j 0.85-0.27838822j]
Roots are complex
Roots are less than one
The next cell studies the implications of reverse engineered complex roots
We'll generate an undamped cycle of period 10
b = b.real
a, b = 0.6180339887498949 1.0
Roots are complex with modulus less than one; therefore damped oscillations
Roots are [ 0.80901699+0.58778525j 0.80901699-0.58778525j]
Roots are complex
Roots are less than one
We can also use sympy to compute analytic formulas for the roots
import sympy
from sympy import Symbol, init_printing
init_printing()
r1 = Symbol("ρ_1")
r2 = Symbol("ρ_2")
z = Symbol("z")
[ρ1/2 − (1/2)√(ρ1² + 4ρ2),   ρ1/2 + (1/2)√(ρ1² + 4ρ2)]
a = Symbol("α")
b = Symbol("β")
r1 = a + b
r2 = -b
Now we'll construct some code to simulate the stochastic version of the model that emerges when we add a
random shock process to aggregate demand
# Useful constants
ρ1 = α + β
ρ2 = -β
# Categorize solution
categorize_solution(ρ1, ρ2)
# Generate shocks
ϵ = np.random.normal(0, 1, n)
return y_t
plot_y(y_stochastic())
Roots are real and absolute values are less than one; therefore get smooth
convergence to a steady state
[ 0.7236068 0.2763932]
Roots are real
Roots are less than one
Let's do a simulation in which there are shocks and the characteristic polynomial has complex roots
r = .97
b = b.real
a, b = 0.6285929690873979 0.9409000000000001
Roots are complex with modulus less than one; therefore damped oscillations
[ 0.78474648+0.57015169j 0.78474648-0.57015169j]
Roots are complex
Roots are less than one
This function computes a response to either a permanent or one-off increase in government expenditures
def y_stochastic_g(y_0=20,
y_1=20,
α=0.8,
β=0.2,
γ=10,
n=100,
σ=2,
g=0,
g_t=0,
duration='permanent'):
# Useful constants
ρ1 = α + β
ρ2 = -β
# Categorize solution
categorize_solution(ρ1, ρ2)
    # Generate shocks
    ϵ = np.random.normal(0, 1, n)

    # Stochastic
    else:
        ϵ = np.random.normal(0, 1, n)

        def transition(x, t):
            return ρ1 * x[t - 1] + ρ2 * x[t - 2] + γ + g + σ * ϵ[t]
# No government spending
if g == 0:
y_t.append(transition(y_t, t))
Roots are real and absolute values are less than one; therefore get smooth
convergence to a steady state
[ 0.7236068 0.2763932]
Roots are real
Roots are less than one
We can also see the response to a one time jump in government expenditures
Roots are real and absolute values are less than one; therefore get smooth
convergence to a steady state
[ 0.7236068 0.2763932]
Roots are real
Roots are less than one
class Samuelson():

    """This class represents the Samuelson model

    .. math::

        Y_t = \gamma + g + \rho_1 Y_{t-1} + \rho_2 Y_{t-2} + \sigma \epsilon_t

    Parameters
    ----------
y_0 : scalar
Initial condition for Y_0
y_1 : scalar
Initial condition for Y_1
α : scalar
Marginal propensity to consume
    β : scalar
        Accelerator coefficient
    γ : scalar
        Constant term in the consumption function
    n : int
Number of iterations
σ : scalar
Volatility parameter. Must be greater than or equal to 0. Set
equal to 0 for non-stochastic model.
g : scalar
Government spending shock
g_t : int
Time at which government spending shock occurs. Must be specified
when duration != None.
    duration : {None, 'permanent', 'one-off'}
        Specifies type of government spending shock. If None, government
        spending is equal to g for all t.
"""
def __init__(self,
y_0=100,
y_1=50,
α=1.3,
β=0.2,
γ=10,
n=100,
σ=0,
g=0,
g_t=0,
duration=None):
def root_type(self):
if all(isinstance(root, complex) for root in self.roots):
return 'Complex conjugate'
elif len(self.roots) > 1:
return 'Double real'
else:
return 'Single real'
def root_less_than_one(self):
if all(abs(root) < 1 for root in self.roots):
return True
def solution_type(self):
ρ1, ρ2 = self.ρ1, self.ρ2
discriminant = ρ1 ** 2 + 4 * ρ2
if ρ2 >= 1 + ρ1 or ρ2 <= -1:
return 'Explosive oscillations'
elif ρ1 + ρ2 >= 1:
return 'Explosive growth'
elif discriminant < 0:
return 'Damped oscillations'
else:
return 'Steady state'
        # Stochastic
        else:
            ϵ = np.random.normal(0, 1, self.n)
            return self.ρ1 * x[t - 1] + self.ρ2 * x[t - 2] + self.γ + g + \
                self.σ * ϵ[t]
def generate_series(self):
# No government spending
if self.g == 0:
y_t.append(self._transition(y_t, t))
def summary(self):
print('Summary\n' + '-' * 50)
print(f'Root type: {self.root_type()}')
print(f'Solution type: {self.solution_type()}')
print(f'Roots: {str(self.roots)}')
        if self.root_less_than_one():
print('Absolute value of roots is less than one')
else:
print('Absolute value of roots is not less than one')
if self.σ > 0:
print('Stochastic series with σ = ' + str(self.σ))
else:
print('Non-stochastic series')
if self.g != 0:
print('Government spending equal to ' + str(self.g))
        if self.duration is not None:
print(self.duration.capitalize() +
' government spending shock at t = ' + str(self.g_t))
def plot(self):
fig, ax = plt.subplots(figsize=(12, 8))
ax.plot(self.generate_series())
ax.set(xlabel='Iteration', xlim=(0, self.n))
ax.set_ylabel('$Y_t$', rotation=0)
ax.grid()
return fig
def param_plot(self):
fig = param_plot()
ax = fig.gca()
plt.legend(fontsize=12, loc=3)
return fig
Summary
---------------------------------------------------------
Root type: Complex conjugate
Solution type: Damped oscillations
Roots: [ 0.65+0.27838822j 0.65-0.27838822j]
Absolute value of roots is less than one
Stochastic series with σ = 2
Government spending equal to 10
Permanent government spending shock at t = 20
sam.plot()
plt.show()
We'll use our graph to show where the roots lie and how their location is consistent with the behavior of the
path just graphed
The red $+$ sign shows the location of the roots
sam.param_plot()
plt.show()
It turns out that we can use the QuantEcon.py LinearStateSpace class to do much of the work that we have
done from scratch above
Here is how we map the Samuelson model into an instance of a LinearStateSpace class
from quantecon import LinearStateSpace
""" This script maps the Samuelson model in the the LinearStateSpace class"""
α = 0.8
β = 0.9
ρ1 = α + β
ρ2 = -β
γ = 10
σ = 1
g = 10
n = 100
A = [[1, 0, 0],
[γ + g, ρ1, ρ2],
[0, 1, 0]]
x, y = sam_t.simulate(ts_length=n)
axes[-1].set_xlabel('Iteration')
plt.show()
Let's plot impulse response functions for the instance of the Samuelson model using a method in the
LinearStateSpace class
imres = sam_t.impulse_response()
imres = np.asarray(imres)
y1 = imres[:, :, 0]
y2 = imres[:, :, 1]
y1.shape
(2, 6, 1)
Now let's compute the zeros of the characteristic polynomial by simply calculating the eigenvalues of A
A = np.asarray(A)
w, v = np.linalg.eig(A)
print(w)
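A sketch (with illustrative parameter values) of why this works: since det(A − zI) = (1 − z)(z² − ρ1 z − ρ2), A's eigenvalues are exactly 1 (for the constant in the state) together with the two characteristic roots.

```python
import numpy as np

γ, g, α, β = 10.0, 10.0, 0.8, 0.9      # illustrative parameters
ρ1, ρ2 = α + β, -β
A = np.array([[1.0,   0.0, 0.0],
              [γ + g, ρ1,  ρ2],
              [0.0,   1.0, 0.0]])
eigs = np.linalg.eigvals(A)
roots = np.roots([1, -ρ1, -ρ2])        # roots of z² − ρ1 z − ρ2
assert min(abs(eigs - 1.0)) < 1e-10    # one eigenvalue equals 1
for root in roots:                     # the others are the characteristic roots
    assert min(abs(eigs - root)) < 1e-8
print(eigs)
```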
We could also create a subclass of LinearStateSpace (inheriting all its methods and attributes) to add more
functions to use
class SamuelsonLSS(LinearStateSpace):
"""
this subclass creates a Samuelson multiplier-accelerator model
as a linear state space system
"""
def __init__(self,
y_0=100,
y_1=100,
α=0.8,
β=0.9,
γ=10,
σ=1,
g=10):
self.α, self.β = α, β
self.y_0, self.y_1, self.g = y_0, y_1, g
self.γ, self.σ = γ, σ
self.ρ1 = α + β
self.ρ2 = -β
self.µ_0 = self.µ_y
self.Σ_0 = self.σ_y
        # Exception where no convergence achieved when calculating
        # stationary distributions
        except ValueError:
            print('Stationary distribution does not exist')
x, y = self.simulate(ts_length)
axes[-1].set_xlabel('Iteration')
return fig
x, y = self.impulse_response(j)
return fig
Illustrations
samlss = SamuelsonLSS()
samlss.plot_simulation(100, stationary=False)
plt.show()
samlss.plot_simulation(100, stationary=True)
plt.show()
samlss.plot_irf(100)
plt.show()
samlss.multipliers()
Let's shut down the accelerator by setting b = 0 to get a pure multiplier model
• the absence of cycles gives an idea about why Samuelson included the accelerator
pure_multiplier.plot_simulation()
pure_multiplier.plot_simulation()
pure_multiplier.plot_irf(100)
3.3.9 Summary
In this lecture, we wrote functions and classes to represent non-stochastic and stochastic versions of the
Samuelson (1939) multiplier-accelerator model, described in [Sam39]
We saw that different parameter values led to different output paths, which could be stationary, explosive,
or oscillating
We also were able to represent the model using the QuantEcon.py LinearStateSpace class
Contents
– Generators
– Recursive Function Calls
– Exercises
– Solutions
3.4.1 Overview
Our advice with this last lecture is to skip it on first pass, unless you have a burning desire to read it
It's here
1. as a reference, so we can link back to it when required, and
2. for those who have worked through a number of applications, and now want to learn more about the
Python language
A variety of topics are treated in the lecture, including generators, exceptions and descriptors
Iterators
f = open('us_cities.txt')
f.__next__()
f.__next__()
We see that file objects do indeed have a __next__ method, and that calling this method returns the next
line in the file
The __next__ method can also be accessed via the built-in function next(), which directly calls this method
next(f)
e = enumerate(['foo', 'bar'])
next(e)
(0, 'foo')
next(e)
(1, 'bar')
f = open('test_table.csv', 'r')
nikkei_data = reader(f)
next(nikkei_data)
next(nikkei_data)
All iterators can be placed to the right of the in keyword in for loop statements
In fact this is how the for loop works: If we write
for x in iterator:
<code block>
f = open('somefile.txt', 'r')
for line in f:
# do something
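The desugaring described above can be sketched explicitly; here io.StringIO stands in for an open file so the example is self-contained:

```python
import io

f = io.StringIO("spam\neggs\n")        # behaves like an open text file
lines = []
it = iter(f)                           # the for loop obtains an iterator...
while True:
    try:
        line = next(it)                # ...and calls next() repeatedly
    except StopIteration:              # the loop ends when the iterator is empty
        break
    lines.append(line.strip())
print(lines)                           # → ['spam', 'eggs']
```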
Iterables
You already know that we can put a Python list to the right of in in a for loop

for i in ['spam', 'eggs']:
    print(i)

spam
eggs
x = ['foo', 'bar']
type(x)
list
next(x)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-17-5e4e57af3a97> in <module>()
----> 1 next(x)
x = ['foo', 'bar']
type(x)
list
y = iter(x)
type(y)
list_iterator
next(y)
'foo'
next(y)
'bar'
next(y)
---------------------------------------------------------------------------
StopIteration Traceback (most recent call last)
<ipython-input-62-75a92ee8313a> in <module>()
----> 1 y.next()
StopIteration:
iter(42)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-63-826bbd6e91fc> in <module>()
----> 1 iter(42)
Some built-in functions that act on sequences also work with iterables
• max(), min(), sum(), all(), any()
For example
x = [10, -10]
max(x)
10
y = iter(x)
type(y)
list_iterator
max(y)
10
One thing to remember about iterators is that they are depleted by use
x = [10, -10]
y = iter(x)
max(y)
10
max(y)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-72-1d3b6314f310> in <module>()
----> 1 max(y)
x = 42
We now know that when this statement is executed, Python creates an object of type int in your computer's
memory, containing
• the value 42
• some associated attributes
def f(x): print(x)

g = f
id(g) == id(f)

True

g('test')

test
In the first step, a function object is created, and the name f is bound to it
After binding the name g to the same object, we can use it anywhere we would use f
What happens when the number of names bound to an object goes to zero?
Here's an example of this situation, where the name x is first bound to one object and then rebound to another

x = 'foo'
id(x)

164994764

x = 'bar'
What happens here is that the first object, with identity 164994764 is garbage collected
In other words, the memory slot that stores that object is deallocated, and returned to the operating system
Namespaces
x = 42
# Filename: math2.py
pi = 'foobar'
import math2
Next lets import the math module from the standard library
import math
math.pi
3.1415926535897931
math2.pi
'foobar'
These two different bindings of pi exist in different namespaces, each one implemented as a dictionary
We can look at the dictionary directly, using module_name.__dict__
import math
math.__dict__
import math2
math2.__dict__
As you know, we access elements of the namespace using the dotted attribute notation
math.pi
3.1415926535897931
math.__dict__['pi'] == math.pi
True
Viewing Namespaces
vars(math)
dir(math)
print(math.__doc__)
math.__name__
'math'
Interactive Sessions
To check this, we can look at the current module name via the value of __name__ given at the prompt
print(__name__)
__main__
When we run a script using IPython's run command, the contents of the file are executed as part of
__main__ too
To see this, lets create a file mod.py that prints its own __name__ attribute
# Filename: mod.py
print(__name__)
mod
__main__
In the second case, the code is executed as part of __main__, so __name__ is equal to __main__
To see the contents of the namespace of __main__ we use vars() rather than vars(__main__)
If you do this in IPython, you will see a whole lot of variables that IPython needs, and has initialized when
you started up your session
If you prefer to see only the variables you have initialized, use whos
x = 2
y = 3
import numpy as np
%whos
For example, suppose that we start the interpreter and begin making assignments
We are now working in the module __main__, and hence the namespace for __main__ is the global
namespace
Next, we import a module called amodule
import amodule
At this point, the interpreter creates a namespace for the module amodule and starts executing commands
in the module
While this occurs, the namespace amodule.__dict__ is the global namespace
Once execution of the module finishes, the interpreter returns to the module from where the import statement
was made
In this case its __main__, so the namespace of __main__ again becomes the global namespace
Local Namespaces
Important fact: When we call a function, the interpreter creates a local namespace for that function, and
registers the variables in that namespace
The reason for this will be explained in just a moment
Variables in the local namespace are called local variables
After the function returns, the namespace is deallocated and lost
While the function is executing, we can view the contents of the local namespace with locals()
For example, consider
def f(x):
a = 2
print(locals())
return a * x
f(1)
{'a': 2, 'x': 1}
We have been using various built-in functions, such as max(), dir(), str(), list(), len(),
range(), type(), etc.
How does access to these names work?
dir()
dir(__builtins__)
__builtins__.max
But __builtins__ is special, because we can always access them directly as well
max
__builtins__.max == max
True
Name Resolution
Here f is the enclosing function for g, and each function gets its own namespace
Now we can give the rule for how namespace resolution works:
The order in which the interpreter searches for names is
1. the local namespace (if it exists)
2. the hierarchy of enclosing namespaces (if they exist)
3. the global namespace
4. the builtin namespace
If the name is not in any of these namespaces, the interpreter raises a NameError
This is called the LEGB rule (local, enclosing, global, builtin)
Heres an example that helps to illustrate
Consider a script test.py that looks as follows
def g(x):
a = 1
x = x + a
return x
a = 0
y = g(10)
print("a = ", a, "y = ", y)
a = 0 y = 11
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-2-401b30e3b8b5> in <module>()
----> 1 x
First,
• The global namespace {} is created
• The function object is created, and g is bound to it within the global namespace
• The name a is bound to 0, again in the global namespace
Next g is called via y = g(10), leading to the following sequence of actions
• The local namespace for the function is created
• Local names x and a are bound, so that the local namespace becomes {'x': 10, 'a': 1}
• Statement x = x + a uses the local a and local x to compute x + a, and binds local name x to
the result
• This value is returned, and y is bound to it in the global namespace
• Local x and a are discarded (and the local namespace is deallocated)
Note that the global a was not affected by the local a
This is a good time to say a little more about mutable vs immutable objects
Consider the code segment
def f(x):
x = x + 1
return x
x = 1
print(f(x), x)
We now understand what will happen here: The code prints 2 as the value of f(x) and 1 as the value of x
First f and x are registered in the global namespace
The call f(x) creates a local namespace and adds x to it, bound to 1
Next, this local x is rebound to the new integer object 2, and this value is returned
None of this affects the global x
However, it's a different story when we use a mutable data type such as a list
def f(x):
    x[0] = x[0] + 1
    return x

x = [1]
print(f(x), x)

[2] [2]

This prints [2] as the value of f(x) and also [2] as the value of x: the global x is affected because lists
are mutable, and the function modifies in place the same list object that the global name x is bound to
s² := (1/(n − 1)) ∑_{i=1}^{n} (yi − ȳ)²,    where ȳ is the sample mean
Assertions
def var(y):
n = len(y)
assert n > 1, 'Sample size must be greater than one.'
return np.sum((y - y.mean())**2) / float(n-1)
If we run this with an array of length one, the program will terminate and print our error message
var([1])
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-20-0032ff8a150f> in <module>()
----> 1 var([1])
<ipython-input-19-cefafaec3555> in var(y)
1 def var(y):
2 n = len(y)
----> 3 assert n > 1, 'Sample size must be greater than one.'
4 return np.sum((y - y.mean())**2) / float(n-1)
The approach used above is a bit limited, because it always leads to termination
Sometimes we can handle errors more gracefully, by treating special cases
Lets look at how this is done
Exceptions
def f:
Since illegal syntax cannot be executed, a syntax error terminates execution of the program
Heres a different kind of error, unrelated to syntax
1 / 0
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)
<ipython-input-17-05c9758a9c21> in <module>()
----> 1 1/0
Heres another
x1 = y1
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-23-142e0509fbd6> in <module>()
----> 1 x1 = y1
And another
'foo' + 6
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-20-44bbe7e963e7> in <module>()
----> 1 'foo' + 6
And another
X = []
x = X[0]
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-22-018da6d9fc14> in <module>()
----> 1 x = X[0]
Catching Exceptions
We can catch and deal with exceptions using try – except blocks
Heres a simple example
def f(x):
try:
return 1.0 / x
except ZeroDivisionError:
print('Error: division by zero. Returned None')
return None
f(2)
0.5
f(0)
f(0.0)
def f(x):
try:
return 1.0 / x
except ZeroDivisionError:
print('Error: Division by zero. Returned None')
except TypeError:
print('Error: Unsupported operation. Returned None')
return None
f(2)
0.5
f(0)
f('foo')
def f(x):
try:
return 1.0 / x
except (TypeError, ZeroDivisionError):
print('Error: Unsupported operation. Returned None')
return None
f(2)
0.5
f(0)
f('foo')
def f(x):
try:
return 1.0 / x
except:
print('Error. Returned None')
return None
Lets look at some special syntax elements that are routinely used by Python developers
You might not need the following concepts immediately, but you will see them in other peoples code
Hence you need to understand them at some stage of your Python education
Decorators
Decorators are a bit of syntactic sugar that, while easily avoided, have turned out to be popular
It's very easy to say what decorators do
On the other hand it takes a bit of effort to explain why you might use them
An Example
import numpy as np
def f(x):
return np.log(np.log(x))
def g(x):
return np.sqrt(42 * x)
Now suppose there's a problem: occasionally negative numbers get fed to f and g in the calculations that
follow
If you try it, you'll see that when these functions are called with negative numbers they return a NumPy
object called nan
This stands for not a number (and indicates that you are trying to evaluate a mathematical function at a point
where it is not defined)
Perhaps this isn't what we want, because it causes other problems that are hard to pick up later on
Suppose that instead we want the program to terminate whenever this happens, with a sensible error message
This change is easy enough to implement
import numpy as np
def f(x):
assert x >= 0, "Argument must be nonnegative"
return np.log(np.log(x))
def g(x):
assert x >= 0, "Argument must be nonnegative"
return np.sqrt(42 * x)
Notice however that there is some repetition here, in the form of two identical lines of code
Repetition makes our code longer and harder to maintain, and hence is something we try hard to avoid
Here it's not a big deal, but imagine now that instead of just f and g, we have 20 such functions that we need
to modify in exactly the same way
This means we need to repeat the test logic (i.e., the assert line testing nonnegativity) 20 times
The situation is still worse if the test logic is longer and more complicated
In this kind of scenario the following approach would be neater
import numpy as np
def check_nonneg(func):
def safe_function(x):
assert x >= 0, "Argument must be nonnegative"
return func(x)
return safe_function
def f(x):
return np.log(np.log(x))
def g(x):
return np.sqrt(42 * x)
f = check_nonneg(f)
g = check_nonneg(g)
# Program continues with various calculations using f and g
Enter Decorators
Using decorator notation, we can replace the lines
def f(x):
return np.log(np.log(x))
def g(x):
return np.sqrt(42 * x)
f = check_nonneg(f)
g = check_nonneg(g)
with
@check_nonneg
def f(x):
return np.log(np.log(x))
@check_nonneg
def g(x):
return np.sqrt(42 * x)
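To confirm that the decorated versions behave as intended, here is a self-contained variant (using math.sqrt instead of NumPy so it runs without third-party packages):

```python
import math

def check_nonneg(func):
    def safe_function(x):
        assert x >= 0, "Argument must be nonnegative"
        return func(x)
    return safe_function

@check_nonneg
def g(x):
    return math.sqrt(42 * x)

print(g(0))            # a valid call passes straight through: 0.0
try:
    g(-1)              # triggers the assertion added by the decorator
except AssertionError as e:
    print(e)           # → Argument must be nonnegative
```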
Descriptors
class Car:

    def __init__(self, miles=1000):
        self.miles = miles
        self.kms = miles * 1.61

    # Some other functionality, details omitted
One potential problem we might have here is that a user alters one of these variables but not the other
car = Car()
car.miles
1000
car.kms
1610.0
car.miles = 6000
car.kms
1610.0
In the last two lines we see that miles and kms are out of sync
What we really want is some mechanism whereby each time a user sets one of these variables, the other is
automatically updated
A Solution
class Car:

    def __init__(self, miles=1000):
        self._miles = miles
        self._kms = miles * 1.61

    def set_miles(self, value):
        self._miles = value
        self._kms = value * 1.61

    def set_kms(self, value):
        self._kms = value
        self._miles = value / 1.61

    def get_miles(self):
        return self._miles

    def get_kms(self):
        return self._kms

    miles = property(get_miles, set_miles)
    kms = property(get_kms, set_kms)
car = Car()
car.miles
1000
car.miles = 6000
car.kms
9660.0
How it Works
The names _miles and _kms are arbitrary names we are using to store the values of the variables
The objects miles and kms are properties, a common kind of descriptor
The methods get_miles, set_miles, get_kms and set_kms define what happens when you get
(i.e. access) or set (bind) these variables
• So-called getter and setter methods
The builtin Python function property takes getter and setter methods and creates a property
For example, after car is created as an instance of Car, the object car.miles is a property
Being a property, when we set its value via car.miles = 6000, its setter method is triggered, in this
case set_miles
These days it's very common to see the property function used via a decorator

Here's another version of our Car class that works as before but now uses decorators to set up the properties
class Car:

    def __init__(self, miles=1000):
        self._miles = miles
        self._kms = miles * 1.61

    @property
    def miles(self):
        return self._miles

    @property
    def kms(self):
        return self._kms

    @miles.setter
    def miles(self, value):
        self._miles = value
        self._kms = value * 1.61

    @kms.setter
    def kms(self, value):
        self._kms = value
        self._miles = value / 1.61
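Here is a self-contained check that the two attributes now stay in sync (the minimal __init__ below is assumed from the earlier output, where a new car has miles equal to 1000):

```python
class Car:

    def __init__(self, miles=1000):
        self._miles = miles
        self._kms = miles * 1.61

    @property
    def miles(self):
        return self._miles

    @property
    def kms(self):
        return self._kms

    @miles.setter
    def miles(self, value):
        self._miles = value
        self._kms = value * 1.61

    @kms.setter
    def kms(self, value):
        self._kms = value
        self._miles = value / 1.61

car = Car()
car.miles = 6000
print(car.kms)   # updated automatically to 6000 * 1.61
```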
3.4.6 Generators
Generator Expressions

The easiest way to build generators is using generator expressions

These look just like list comprehensions, but with round brackets

Here is the list comprehension:

singular = ('dog', 'cat', 'bird')

type(singular)

tuple

plural = [string + 's' for string in singular]

plural

['dogs', 'cats', 'birds']

type(plural)

list

And here is the generator expression

singular = ('dog', 'cat', 'bird')
plural = (string + 's' for string in singular)

type(plural)

generator

next(plural)

'dogs'

next(plural)

'cats'

next(plural)

'birds'

Since sum() can be called on iterators, we can do this

sum((x * x for x in range(10)))

285

The function sum() calls next() to get the items, adds successive terms

In fact, we can omit the outer brackets in this case

sum(x * x for x in range(10))

285
Generator Functions
The most flexible way to create generator objects is to use generator functions
Let's look at some examples
Example 1
def f():
    yield 'start'
    yield 'middle'
    yield 'end'
It looks like a function, but uses the keyword yield, which we haven't met before

Let's see how it works after running this code
type(f)
function
gen = f()
gen
next(gen)
'start'
next(gen)
'middle'
next(gen)
'end'
next(gen)
---------------------------------------------------------------------------
StopIteration Traceback (most recent call last)
<ipython-input-21-b2c61ce5e131> in <module>()
----> 1 next(gen)
StopIteration:
The generator function f() is used to create generator objects (in this case gen)
Generators are iterators, because they support a __next__ method
The first call to next(gen)
• Executes code in the body of f() until it meets a yield statement
• Returns that value to the caller of next(gen)
The second call to next(gen) starts executing from the next line
def f():
    yield 'start'
    yield 'middle' # This line!
    yield 'end'
Example 2
def g(x):
    while x < 100:
        yield x
        x = x * x

g

<function __main__.g>
gen = g(2)
type(gen)
generator
next(gen)

2

next(gen)

4

next(gen)

16
next(gen)
---------------------------------------------------------------------------
StopIteration Traceback (most recent call last)
<ipython-input-32-b2c61ce5e131> in <module>()
----> 1 next(gen)
StopIteration:
def g(x):
    while x < 100:
        yield x
        x = x * x # execution continues from here

If we change the while condition so that it always holds, the generator never terminates on its own and yields values forever

def g(x):
    while 1:
        yield x
        x = x * x
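Because the while 1 variant never stops by itself, it must be consumed lazily; itertools.islice is one standard way to take just a finite number of items from it:

```python
from itertools import islice

def g(x):
    while 1:            # never terminates on its own
        yield x
        x = x * x

# take only the first five values of the infinite sequence
print(list(islice(g(2), 5)))   # [2, 4, 16, 256, 65536]
```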
Advantages of Iterators
import random
n = 10000000
draws = [random.uniform(0, 1) < 0.5 for i in range(n)]
sum(draws)
But we are creating a huge list here, draws

This uses lots of memory and is very slow
If we make n even bigger then this happens
n = 1000000000
draws = [random.uniform(0, 1) < 0.5 for i in range(n)]
---------------------------------------------------------------------------
MemoryError Traceback (most recent call last)
<ipython-input-9-20d1ec1dae24> in <module>()
----> 1 draws = [random.uniform(0, 1) < 0.5 for i in range(n)]
We can avoid these problems by using a generator function instead

def f(n):
    i = 1
    while i <= n:
        yield random.uniform(0, 1) < 0.5
        i += 1
n = 10000000
draws = f(n)
draws
sum(draws)
4999141
In summary, iterables
• avoid the need to create big lists/tuples, and
• provide a uniform interface to iteration that can be used transparently in for loops
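For instance, the generator function above can be consumed directly by sum() or a for loop, holding only one draw in memory at a time (a smaller n is used here for illustration):

```python
import random

def f(n):
    i = 1
    while i <= n:
        yield random.uniform(0, 1) < 0.5
        i += 1

random.seed(1234)       # fix the seed so repeated runs agree
count = sum(f(1000))    # consumes the generator one draw at a time
print(count)            # roughly 500
```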
3.4.7 Recursive Function Calls

This is not something that you will use every day, but it is still useful, and you should learn it at some stage
Basically, a recursive function is a function that calls itself
For example, consider the problem of computing x_t for some t when

x_{t+1} = 2 x_t,    x_0 = 1

Obviously the answer is 2^t

We can compute this easily enough with a loop

def x_loop(t):
    x = 1
    for i in range(t):
        x = 2 * x
    return x

We can also use a recursive solution, as follows

def x(t):
    if t == 0:
        return 1
    else:
        return 2 * x(t-1)
What happens here is that each successive call uses its own frame in the stack
• a frame is where the local variables of a given function call are held
• stack is memory used to process function calls
– a First In Last Out (FILO) queue
This example is somewhat contrived, since the first (iterative) solution would usually be preferred to the
recursive solution
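As a quick sanity check, the recursive and iterative versions agree, and both compute 2**t:

```python
def x_loop(t):
    x = 1
    for i in range(t):
        x = 2 * x
    return x

def x(t):
    if t == 0:
        return 1
    else:
        return 2 * x(t-1)

# both implementations give powers of two
assert all(x(t) == x_loop(t) == 2**t for t in range(15))
print(x(10))   # 1024
```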
We'll meet less contrived applications of recursion later on
3.4.8 Exercises
Exercise 1
The Fibonacci numbers are defined recursively by x_{t+1} = x_t + x_{t-1}, with x_0 = 0 and x_1 = 1

The first few numbers in the sequence are: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55
Write a function to recursively compute the t-th Fibonacci number for any t
Exercise 2
Complete the following code, and test it using this csv file, which we assume that you've put in your current
working directory.
dates = column_iterator('test_table.csv', 1)
Exercise 3

Suppose we have a text file numbers.txt with the following contents

prices
3
8
7
21
Using try – except, write a program to read in the contents of the file and sum the numbers, ignoring
lines without numbers
3.4.9 Solutions
Exercise 1
def x(t):
    if t == 0:
        return 0
    if t == 1:
        return 1
    else:
        return x(t-1) + x(t-2)
Let's test it
Exercise 2
A small sample from test_table.csv is included (and saved) in the code below for convenience
%%file test_table.csv
Date,Open,High,Low,Close,Volume,Adj Close
2009-05-21,9280.35,9286.35,9189.92,9264.15,133200,9264.15
2009-05-20,9372.72,9399.40,9311.61,9344.64,143200,9344.64
2009-05-19,9172.56,9326.75,9166.97,9290.29,167000,9290.29
2009-05-18,9167.05,9167.82,8997.74,9038.69,147800,9038.69
2009-05-15,9150.21,9272.08,9140.90,9265.02,172000,9265.02
2009-05-14,9212.30,9223.77,9052.41,9093.73,169400,9093.73
2009-05-13,9305.79,9379.47,9278.89,9340.49,176000,9340.49
2009-05-12,9358.25,9389.61,9298.61,9298.61,188400,9298.61
2009-05-11,9460.72,9503.91,9342.75,9451.98,230800,9451.98
2009-05-08,9351.40,9464.43,9349.57,9432.83,220200,9432.83
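The column_iterator generator function itself did not survive extraction; here is one possible implementation (the signature is taken from the usage below; the body is a sketch, and a two-line sample file is written so the snippet runs on its own):

```python
def column_iterator(target_file, column_number):
    """Yield the column_number-th (1-based) field of each line of a csv file."""
    with open(target_file, 'r') as f:
        for line in f:
            yield line.rstrip('\n').split(',')[column_number - 1]

# write a small sample file so the example is self-contained
with open('test_table.csv', 'w') as f:
    f.write("Date,Open\n2009-05-21,9280.35\n")

print(list(column_iterator('test_table.csv', 1)))   # ['Date', '2009-05-21']
```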
dates = column_iterator('test_table.csv', 1)

i = 1
for date in dates:
    print(date)
    if i == 10:
        break
    i += 1
Date
2009-05-21
2009-05-20
2009-05-19
2009-05-18
2009-05-15
2009-05-14
2009-05-13
2009-05-12
2009-05-11
Exercise 3
%%file numbers.txt
prices
3
8
7
21
Writing numbers.txt
f = open('numbers.txt')

total = 0.0
for line in f:
    try:
        total += float(line)
    except ValueError:
        pass

f.close()

print(total)
39.0
3.5 Debugging
Contents
• Debugging
– Overview
– Debugging
– Other Useful Magics
Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code
as cleverly as possible, you are, by definition, not smart enough to debug it. – Brian Kernighan
3.5.1 Overview
Are you one of those programmers who fills their code with print statements when trying to debug their
programs?
Hey, we all used to do that
(OK, sometimes we still do that)
But once you start writing larger programs you'll need a better system
Debugging tools for Python vary across platforms, IDEs and editors
Here we'll focus on Jupyter and leave you to explore other settings
3.5.2 Debugging
import numpy as np
import matplotlib.pyplot as plt

def plot_log():
    fig, ax = plt.subplots(2, 1)
    x = np.linspace(1, 2, 10)
    ax.plot(x, np.log(x))
    plt.show()

plot_log() # Call the function, generate plot
This code is intended to plot the log function over the interval [1, 2]

But there's an error here: plt.subplots(2, 1) should be just plt.subplots()
(The call plt.subplots(2, 1) returns a NumPy array containing two axes objects, suitable for having
two subplots on the same figure)
Heres what happens when we run the code:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-1-ef5c75a58138> in <module>()
8 plt.show()
9
---> 10 plot_log() # Call the function, generate plot
<ipython-input-1-ef5c75a58138> in plot_log()
5 fig, ax = plt.subplots(2, 1)
6 x = np.linspace(1, 2, 10)
----> 7 ax.plot(x, np.log(x))
8 plt.show()
9
The traceback shows that the error occurs at the method call ax.plot(x, np.log(x))
The error occurs because we have mistakenly made ax a NumPy array, and a NumPy array has no plot
method
But let's pretend that we don't understand this for the moment

We might suspect there's something wrong with ax but when we try to investigate this object, we get the
following exception:
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-2-645aedc8a285> in <module>()
----> 1 ax
The problem is that ax was defined inside plot_log(), and the name is lost once that function terminates
Let's try doing it a different way
We run the first cell block again, generating the same error
import numpy as np
import matplotlib.pyplot as plt

def plot_log():
    fig, ax = plt.subplots(2, 1)
    x = np.linspace(1, 2, 10)
    ax.plot(x, np.log(x))
    plt.show()

plot_log() # Call the function, generate plot
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-1-ef5c75a58138> in <module>()
8 plt.show()
9
---> 10 plot_log() # Call the function, generate plot
<ipython-input-1-ef5c75a58138> in plot_log()
5 fig, ax = plt.subplots(2, 1)
6 x = np.linspace(1, 2, 10)
----> 7 ax.plot(x, np.log(x))
8 plt.show()
9
But this time, before doing anything else, we type the magic %debug in a new cell

%debug
You should be dropped into a new prompt that looks something like this
ipdb>
ipdb> ax
array([<matplotlib.axes.AxesSubplot object at 0x290f5d0>,
<matplotlib.axes.AxesSubplot object at 0x2930810>], dtype=object)
It's now very clear that ax is an array, which clarifies the source of the problem
To find out what else you can do from inside ipdb (or pdb), use the online help
ipdb> h
Undocumented commands:
======================
retval rv
ipdb> h c
c(ont(inue))
Continue execution, only stop when a breakpoint is encountered.
import numpy as np
import matplotlib.pyplot as plt

def plot_log():
    fig, ax = plt.subplots()
    x = np.logspace(1, 2, 10)
    ax.plot(x, np.log(x))
    plt.show()

plot_log()
Here the original problem is fixed, but we've accidentally written np.logspace(1, 2, 10) instead of
np.linspace(1, 2, 10)

Now there won't be any exception, but the plot won't look right
To investigate, it would be helpful if we could inspect variables like x during execution of the function
To this end, we add a break point by inserting Pdb().set_trace() from IPython.core.debugger
inside the function code block
import numpy as np
import matplotlib.pyplot as plt
from IPython.core.debugger import Pdb

def plot_log():
    Pdb().set_trace()
    fig, ax = plt.subplots()
    x = np.logspace(1, 2, 10)
    ax.plot(x, np.log(x))
    plt.show()

plot_log()
Now let's run the script, and investigate via the debugger
> <ipython-input-5-c5864f6d184b>(6)plot_log()
4 def plot_log():
5 Pdb().set_trace()
----> 6 fig, ax = plt.subplots()
7 x = np.logspace(1, 2, 10)
8 ax.plot(x, np.log(x))
ipdb> n
> <ipython-input-5-c5864f6d184b>(7)plot_log()
5 Pdb().set_trace()
6 fig, ax = plt.subplots()
----> 7 x = np.logspace(1, 2, 10)
8 ax.plot(x, np.log(x))
9 plt.show()
ipdb> n
> <ipython-input-5-c5864f6d184b>(8)plot_log()
6 fig, ax = plt.subplots()
7 x = np.logspace(1, 2, 10)
----> 8 ax.plot(x, np.log(x))
9 plt.show()
10
ipdb> x
array([ 10. , 12.91549665, 16.68100537, 21.5443469 ,
27.82559402, 35.93813664, 46.41588834, 59.94842503,
77.42636827, 100. ])
We used n twice to step forward through the code (one line at a time)
Then we printed the value of x to see what was happening with that variable
To exit from the debugger, use q
FOUR
This part of the course provides a set of lectures focused on Data and Empirics using Python
4.1 Pandas
Contents
• Pandas
– Overview
– Series
– DataFrames
– On-Line Data Sources
– Exercises
– Solutions
4.1.1 Overview
Just as NumPy provides the basic array data type plus core array operations, pandas
1. defines fundamental structures for working with data and
2. endows them with methods that facilitate operations such as
• reading in data
• adjusting indices
• working with dates and time series
• sorting, grouping, re-ordering and general data munging1
• dealing with missing values, etc., etc.
More sophisticated statistical functionality is left to other packages, such as statsmodels and scikit-learn,
which are built on top of pandas
This lecture will provide a basic introduction to pandas
Throughout the lecture we will assume that the following imports have taken place
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
1. Wikipedia defines munging as cleaning data from one raw form into a structured, purged one.
4.1.2 Series
Two important data types defined by pandas are Series and DataFrame
You can think of a Series as a column of data, such as a collection of observations on a single variable
A DataFrame is an object for storing related columns of data
Let's start with Series

s = pd.Series(np.random.randn(4), name='daily returns')
s
0 0.430271
1 0.617328
2 -0.265421
3 -0.836113
Name: daily returns, dtype: float64
Here you can imagine the indices 0, 1, 2, 3 as indexing four listed companies, and the values being
daily returns on their shares
Pandas Series are built on top of NumPy arrays, and support many similar operations
s * 100
0 43.027108
1 61.732829
2 -26.542104
3 -83.611339
Name: daily returns, dtype: float64
np.abs(s)
0 0.430271
1 0.617328
2 0.265421
3 0.836113
Name: daily returns, dtype: float64
s.describe()
count 4.000000
mean -0.013484
std 0.667092
min -0.836113
25% -0.408094
50% 0.082425
75% 0.477035
max 0.617328
Name: daily returns, dtype: float64
To make the example more meaningful, we can give the observations a named index

s.index = ['AMZN', 'AAPL', 'MSFT', 'GOOG']
s
AMZN 0.430271
AAPL 0.617328
MSFT -0.265421
GOOG -0.836113
Name: daily returns, dtype: float64
Viewed in this way, Series are like fast, efficient Python dictionaries (with the restriction that the items in
the dictionary all have the same type, in this case floats)
In fact, you can use much of the same syntax as Python dictionaries
s['AMZN']
0.43027108469945924
s['AMZN'] = 0
s
AMZN 0.000000
AAPL 0.617328
MSFT -0.265421
GOOG -0.836113
Name: daily returns, dtype: float64
'AAPL' in s
True
4.1.3 DataFrames
While a Series is a single column of data, a DataFrame is several columns, one for each variable
In essence, a DataFrame in pandas is analogous to a (highly optimized) Excel spreadsheet
Thus, it is a powerful tool for representing and analyzing data that are naturally organized into rows and
columns, often with descriptive indexes for individual rows and individual columns
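As a minimal illustration, a DataFrame can be built directly from a dictionary of columns (the numbers here are taken from the table below):

```python
import pandas as pd

# one key per column, one list entry per row
df_small = pd.DataFrame({'country': ['Argentina', 'Australia'],
                         'POP': [37335.653, 19053.186]})

print(df_small.shape)          # (2, 2)
print(list(df_small.columns))  # ['country', 'POP']
```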
Let's look at an example that reads data from the CSV file pandas/data/test_pwt.csv, which can be
downloaded here
Heres the contents of test_pwt.csv
"country","country isocode","year","POP","XRAT","tcgdp","cc","cg"
"Argentina","ARG","2000","37335.653","0.9995","295072.21869","75.716805379","5.5788042896"
"Australia","AUS","2000","19053.186","1.72483","541804.6521","67.759025993","6.7200975332"
"India","IND","2000","1006300.297","44.9416","1728144.3748","64.575551328","14.072205773"
"Israel","ISR","2000","6114.57","4.07733","129253.89423","64.436450847","10.266688415"
"Malawi","MWI","2000","11801.505","59.543808333","5026.2217836","74.707624181","11.658954494"
"South Africa","ZAF","2000","45064.098","6.93983","227242.36949","72.718710427","5.7265463933"
"United States","USA","2000","282171.957","1","9898700","72.347054303","6.0324539789"
"Uruguay","URY","2000","3219.793","12.099591667","25255.961693","78.978740282","5.108067988"
Supposing you have this data saved as test_pwt.csv in the present working directory (type %pwd in Jupyter
to see what this is), it can be read in as follows:
df = pd.read_csv('https://github.com/QuantEcon/QuantEcon.lectures.code/raw/master/pandas/data/test_pwt.csv')
type(df)
pandas.core.frame.DataFrame
df
We can select particular rows using standard Python array slicing notation
df[2:5]
To select columns, we can pass a list containing the names of the desired columns represented as strings
df[['country', 'tcgdp']]
country tcgdp
0 Argentina 295072.218690
1 Australia 541804.652100
2 India 1728144.374800
3 Israel 129253.894230
4 Malawi 5026.221784
5 South Africa 227242.369490
6 United States 9898700.000000
7 Uruguay 25255.961693
To select both rows and columns using integers, the iloc attribute should be used with the format
.iloc[rows, columns]
df.iloc[2:5,0:4]
To select rows and columns using a mixture of integers and labels, the loc attribute can be used in a similar
way
df.loc[df.index[2:5], ['country', 'tcgdp']]
country tcgdp
2 India 1728144.374800
3 Israel 129253.894230
4 Malawi 5026.221784
Lets imagine that were only interested in population and total GDP (tcgdp)
One way to strip the data frame df down to only these variables is to overwrite the dataframe using the
selection method described above
df = df[['country','POP','tcgdp']]
df
Here the index 0, 1,..., 7 is redundant, because we can use the country names as an index
To do this, we set the index to be the country variable in the dataframe
df = df.set_index('country')
df
POP tcgdp
country
Argentina 37335.653 295072.218690
Australia 19053.186 541804.652100
India 1006300.297 1728144.374800
Israel 6114.570 129253.894230
Malawi 11801.505 5026.221784
South Africa 45064.098 227242.369490
United States 282171.957 9898700.000000
Uruguay 3219.793 25255.961693
Next we're going to add a column showing real GDP per capita, multiplying by 1,000,000 as we go because
total GDP is in millions
df['GDP percap'] = df['tcgdp'] * 1e6 / df['POP']
df
One of the nice things about pandas DataFrame and Series objects is that they have methods for plotting
and visualization that work through Matplotlib
For example, we can easily generate a bar plot of GDP per capita
df['GDP percap'].plot(kind='bar')
plt.show()
At the moment the data frame is ordered alphabetically on the countries. Let's change the ordering to GDP per capita

df = df.sort_values(by='GDP percap', ascending=False)

df['GDP percap'].plot(kind='bar')
plt.show()
4.1.4 On-Line Data Sources

For example, the US unemployment rate series UNRATE maintained by FRED can be downloaded directly as a csv from

https://research.stlouisfed.org/fred2/series/UNRATE/downloaddata/UNRATE.csv
One option is to use requests, a standard Python library for requesting data over the Internet
To begin, try the following code on your computer
import requests
r = requests.get('http://research.stlouisfed.org/fred2/series/UNRATE/downloaddata/UNRATE.csv')

url = 'http://research.stlouisfed.org/fred2/series/UNRATE/downloaddata/UNRATE.csv'
source = requests.get(url).content.decode().split("\n")
source[0]
'DATE,VALUE\r\n'
source[1]
'1948-01-01,3.4\r\n'
source[2]
'1948-02-01,3.8\r\n'
We could now write some additional code to parse this text and store it as an array
But this is unnecessary: pandas' read_csv function can handle the task for us

We use parse_dates=True so that pandas recognizes our dates column, allowing for simple date filtering

data = pd.read_csv(url, index_col=0, parse_dates=True)
The data has been read into a pandas DataFrame called data that we can now manipulate in the usual way
type(data)
pandas.core.frame.DataFrame
VALUE
DATE
1948-01-01 3.4
1948-02-01 3.8
1948-03-01 4.0
1948-04-01 3.9
1948-05-01 3.5
pd.set_option('precision', 1)
data.describe() # Your output might differ slightly
VALUE
count 830.0
mean 5.8
std 1.6
min 2.5
25% 4.7
50% 5.6
75% 6.9
max 10.8
We can also plot the unemployment rate from 2006 to 2012 as follows
data['2006':'2012'].plot()
plt.show()
Let's look at one more example of downloading and manipulating data, this time from the World Bank

The World Bank collects and organizes data on a huge range of indicators

For example, here's some data on government debt as a ratio to GDP
If you click on DOWNLOAD DATA you will be given the option to download the data as an Excel file
The next program does this for you, reads an Excel file into a pandas DataFrame, and plots time series for
the US and Australia
r = requests.get(wb_data_query)
with open('gd.xls', 'wb') as output:
    output.write(r.content)
4.1.5 Exercises
Exercise 1
Write a program to calculate the percentage price change over 2013 for the following shares
ticker_list = {'AAPL': 'Apple',
               'AMZN': 'Amazon',
               'BA': 'Boeing',
               'QCOM': 'Qualcomm',
               'KO': 'Coca-Cola',
               'GOOG': 'Google',
               'SNE': 'Sony',
               'PTR': 'PetroChina'}
A dataset of daily closing prices for the above firms can be found in pandas/data/ticker_data.csv,
and can be downloaded here

Plot the result as a bar graph, as follows
4.1.6 Solutions
Exercise 1
ticker = pd.read_csv('https://github.com/QuantEcon/QuantEcon.lectures.code/raw/master/pandas/data/ticker_data.csv')
ticker.set_index('Date', inplace=True)
ticker_list = {'AAPL': 'Apple',
               'AMZN': 'Amazon',
               'BA': 'Boeing',
               'QCOM': 'Qualcomm',
               'KO': 'Coca-Cola',
               'GOOG': 'Google',
               'SNE': 'Sony',
               'PTR': 'PetroChina'}
price_change = pd.Series()
price_change.sort_values(inplace=True)
fig, ax = plt.subplots(figsize=(10,8))
price_change.plot(kind='bar', ax=ax)
plt.show()
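The loop that fills price_change is missing from this extraction; the idea can be sketched on a made-up two-row price table (the real dates and prices come from ticker_data.csv):

```python
import pandas as pd

# made-up first and last closing prices for two tickers
ticker = pd.DataFrame({'AAPL': [100.0, 110.0], 'KO': [40.0, 38.0]},
                      index=['2013-01-02', '2013-12-31'])

price_change = pd.Series(dtype=float)
for tick in ticker.columns:
    start, end = ticker[tick].iloc[0], ticker[tick].iloc[-1]
    price_change[tick] = 100 * (end - start) / start   # percentage change

print(price_change)   # AAPL gained 10%, KO lost 5%
```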
4.2 Pandas for Panel Data
4.2.1 Overview
We will read in a dataset from the OECD of real minimum wages in 32 countries and assign it to realwage
The dataset pandas_panel/realwage.csv can be downloaded here
Make sure the file is in your current working directory
import pandas as pd
realwage = pd.read_csv('https://github.com/QuantEcon/QuantEcon.lectures.code/raw/master/pandas_panel/realwage.csv')
The data is currently in long format, which is difficult to analyse when there are several dimensions to the
data
We will use pivot_table to create a wide format panel, with a MultiIndex to handle higher dimensional data
pivot_table arguments should specify the data (values), the index, and the columns we want in our
resulting dataframe
By passing a list in columns, we can create a MultiIndex in our column axis
realwage = realwage.pivot_table(values='value',
                                index='Time',
                                columns=['Country', 'Series', 'Pay period'])
realwage.head()
To more easily filter our time series data later on, we will convert the index into a DatetimeIndex
realwage.index = pd.to_datetime(realwage.index)
type(realwage.index)
pandas.core.indexes.datetimes.DatetimeIndex
The columns contain multiple levels of indexing, known as a MultiIndex, with levels being ordered
hierarchically (Country > Series > Pay period)
A MultiIndex is the simplest and most flexible way to manage panel data in pandas
type(realwage.columns)
pandas.core.indexes.multi.MultiIndex
realwage.columns.names
Like before, we can select the country (the top level of our MultiIndex)
realwage['United States'].head()
Stacking and unstacking levels of the MultiIndex will be used throughout this lecture to reshape our
dataframe into a format we need
.stack() rotates the lowest level of the column MultiIndex to the row index (.unstack() works
in the opposite direction - try it out)
realwage.stack().head()
We can also pass in an argument to select the level we would like to stack
realwage.stack(level='Country').head()
realwage['2015'].stack(level=(1, 2)).transpose().head()
For the rest of the lecture, we will work with a dataframe of the hourly real minimum wages across countries
and time, measured in 2015 US dollars
To create our filtered dataframe (realwage_f), we can use the xs method to select values at lower levels
in the multiindex, while keeping the higher levels (countries in this case)
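The exact xs call building realwage_f did not survive extraction; on a toy column MultiIndex the mechanics look like this (all labels are made up):

```python
import pandas as pd

columns = pd.MultiIndex.from_tuples(
    [('US', 'Hourly'), ('US', 'Annual'), ('Japan', 'Hourly')],
    names=['Country', 'Pay period'])
df_toy = pd.DataFrame([[7.2, 15000.0, 6.1]], columns=columns)

# select 'Hourly' at the lower level, keeping the Country level intact
hourly = df_toy.xs('Hourly', level='Pay period', axis=1)
print(list(hourly.columns))   # ['US', 'Japan']
```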
Similar to relational databases like SQL, pandas has built in methods to merge datasets together
Using country information from WorldData.info, we'll add the continent of each country to realwage_f
with the merge function
The csv file can be found in pandas_panel/countries.csv, and can be downloaded here
worlddata = pd.read_csv('https://github.com/QuantEcon/QuantEcon.lectures.code/raw/master/pandas_panel/countries.csv', sep=';')
worlddata.head()
First we'll select just the country and continent variables from worlddata and rename the column to
Country
realwage_f.transpose().head()
We can use either left, right, inner, or outer join to merge our datasets:
• left join keeps all countries from the left dataset
• right join keeps all countries from the right dataset
• outer join includes countries that are in either the left or right dataset
• inner join includes only countries common to both the left and right datasets
By default, merge will use an inner join
Here we will pass how='left' to keep all countries in realwage_f, but discard countries in
worlddata that do not have a corresponding data entry in realwage_f
This is illustrated by the red shading in the following diagram
We will also need to specify where the country name is located in each dataframe, which will be the key
that is used to merge the dataframes on
Our left dataframe (realwage_f.transpose()) contains countries in the index, so we set
left_index=True
Our right dataframe (worlddata) contains countries in the Country column, so we set
right_on='Country'
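In code, the merge just described looks roughly like this (with toy frames standing in for realwage_f.transpose() and worlddata):

```python
import pandas as pd

left = pd.DataFrame({'wage': [7.15, 4.22]},
                    index=['United States', 'Korea'])     # countries in the index
right = pd.DataFrame({'Country': ['United States', 'Japan'],
                      'Continent': ['America', 'Asia']})  # countries in a column

merged_toy = pd.merge(left, right, how='left',
                      left_index=True, right_on='Country')
print(merged_toy)   # Korea has no match in `right`, so its Continent is NaN
```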
Countries that appeared in realwage_f but not in worlddata will have NaN in the Continent column
To check whether this has occurred, we can use .isnull() on the continent column and filter the merged
dataframe
merged[merged['Continent'].isnull()]
One way to fill these in is to build a dictionary, missing_continents, that maps each of these country
names to its continent, and apply it with .map()
merged['Country'].map(missing_continents)
17 NaN
23 NaN
32 NaN
100 NaN
38 NaN
108 NaN
41 NaN
225 NaN
53 NaN
58 NaN
45 NaN
68 NaN
233 NaN
86 NaN
88 NaN
91 NaN
247 Asia
117 NaN
122 NaN
123 NaN
138 NaN
153 NaN
151 NaN
174 NaN
175 NaN
247 Europe
247 Europe
198 NaN
200 NaN
227 NaN
241 NaN
240 NaN
Name: Country, dtype: object
merged['Continent'] = merged['Continent'].fillna(merged['Country'].map(missing_continents))
merged[merged['Country'] == 'Korea']
We will also combine the Americas into a single continent - this will make our visualization nicer later on
To do this, we will use .replace() and loop through a list of the continent values we want to replace
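That replacement loop can be sketched as follows, on a toy Continent column:

```python
import pandas as pd

continents = pd.DataFrame({'Continent': ['South America', 'North America', 'Asia']})

# collapse each American sub-region into a single 'America' label
replace = ['Central America', 'North America', 'South America']
for country in replace:
    continents['Continent'] = continents['Continent'].replace(
        to_replace=country, value='America')

print(continents['Continent'].tolist())   # ['America', 'America', 'Asia']
```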
Now that we have all the data we want in a single DataFrame, we will reshape it back into panel form
with a MultiIndex
We should also sort the index using .sort_index() so that we can efficiently filter our
dataframe later on
By default, levels will be sorted top-down
While merging, we lost our DatetimeIndex, as we merged columns that were not in datetime format
merged.columns
Now that we have set the merged columns as the index, we can recreate a DatetimeIndex using
.to_datetime()
merged.columns = pd.to_datetime(merged.columns)
merged.columns = merged.columns.rename('Time')
merged.columns
The DatetimeIndex tends to work more smoothly in the row axis, so we will go ahead and transpose
merged
merged = merged.transpose()
merged.head()
Grouping and summarizing data can be particularly useful for understanding large panel datasets
A simple way to summarize data is to call an aggregation method on the dataframe, such as .mean() or
.max()
For example, we can calculate the average real minimum wage for each country over the period 2006 to
2016 (the default is to aggregate over rows)
merged.mean().head(10)
Continent Country
America Brazil 1.09
Canada 7.82
Chile 1.62
Colombia 1.07
Costa Rica 2.53
Mexico 0.53
United States 7.15
Asia Israel 5.95
Japan 6.18
Korea 4.22
dtype: float64
Using this series, we can plot the average real minimum wage over the past decade for each country in our
data set
import matplotlib.pyplot as plt
import matplotlib
matplotlib.style.use('seaborn')
%matplotlib inline
merged.mean().sort_values(ascending=False).plot(kind='bar', title="Average real minimum wage 2006 - 2016")
plt.show()
plt.show()
Passing in axis=1 to .mean() will aggregate over columns (giving the average minimum wage for all
countries over time)
merged.mean(axis=1).head()
Time
2006-01-01 4.69
2007-01-01 4.84
2008-01-01 4.90
2009-01-01 5.08
2010-01-01 5.11
dtype: float64
merged.mean(axis=1).plot()
plt.title('Average real minimum wage 2006 - 2016')
plt.ylabel('2015 USD')
plt.xlabel('Year')
plt.show()
We can also specify a level of the MultiIndex (in the column axis) to aggregate over
merged.mean(level='Continent', axis=1).head()
We can plot the average minimum wages in each continent as a time series
merged.mean(level='Continent', axis=1).plot()
plt.title('Average real minimum wage')
plt.ylabel('2015 USD')
plt.xlabel('Year')
plt.show()
merged.stack().describe()
Calling an aggregation method on the object applies the function to each group, the results of which are
combined in a new data structure
For example, we can return the number of countries in our dataset for each continent using .size()
In this case, our new data structure is a Series
grouped.size()
Continent
America 7
Asia 4
Europe 19
dtype: int64
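The grouped object above comes from a groupby call; on a toy frame, the split-and-count looks like this:

```python
import pandas as pd

countries = pd.DataFrame({'Continent': ['America', 'America', 'Asia'],
                          'wage': [7.15, 7.82, 6.18]})

grouped_toy = countries.groupby('Continent')   # split rows by continent
print(grouped_toy.size())                      # America 2, Asia 1
```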
Calling .get_group() returns just the countries in a single group; using it, we can create a kernel density
estimate of the distribution of real minimum wages in 2016 for each continent
grouped.groups.keys() will return the keys from the groupby object
continents = grouped.groups.keys()
This lecture has provided an introduction to some of pandas' more advanced features, including multiindices,
merging, grouping and plotting
Other tools that may be useful in panel data analysis include xarray, a python package that extends pandas
to N-dimensional data structures
4.2.6 Exercises
Exercise 1
In these exercises you'll work with a dataset of employment rates in Europe by age and sex from Eurostat
The dataset pandas_panel/employ.csv can be downloaded here
Reading in the csv file returns a panel dataset in long format. Use .pivot_table() to construct a wide
format dataframe with a MultiIndex in the columns
Start off by exploring the dataframe and the variables available in the MultiIndex levels
Write a program that quickly returns all values in the MultiIndex
Exercise 2
Filter the above dataframe to only include employment as a percentage of active population
Create a grouped boxplot using seaborn of employment rates in 2015 by age group and sex
Hint: GEO includes both areas and countries
4.2.7 Solutions
Exercise 1
employ = pd.read_csv('https://github.com/QuantEcon/QuantEcon.lectures.code/raw/master/pandas_panel/employ.csv')
employ = employ.pivot_table(values='Value',
                            index=['DATE'],
                            columns=['UNIT', 'AGE', 'SEX', 'INDIC_EM', 'GEO'])
employ.index = pd.to_datetime(employ.index) # ensure that dates are datetime format
employ.head()
This is a large dataset so it is useful to explore the levels and variables available
employ.columns.names
'United Kingdom'],
dtype='object', name='GEO')
Exercise 2
To easily filter by country, swap GEO to the top level and sort the MultiIndex
employ.columns = employ.columns.swaplevel(0,-1)
employ = employ.sort_index(axis=1)
We need to get rid of a few items in GEO which are not countries
A fast way to get rid of the EU areas is to use a list comprehension to find the level values in GEO that begin
with Euro
geo_list = employ.columns.get_level_values('GEO').unique().tolist()
countries = [x for x in geo_list if not x.startswith('Euro')]
employ = employ[countries]
employ.columns.get_level_values('GEO').unique()
Select only percentage employed in the active population from the dataframe
import seaborn as sns

box = employ_f['2015'].unstack().reset_index()
sns.boxplot(x="AGE", y=0, hue="SEX", data=box, palette=("husl"), showfliers=False)
plt.xlabel('')
plt.xticks(rotation=35)
plt.ylabel('Percentage of population (%)')
plt.title('Employment in Europe (2015)')
plt.legend(bbox_to_anchor=(1,0.5))
plt.show()
4.3 Linear Regression in Python
4.3.1 Overview
Linear regression is a standard tool for analyzing the relationship between two or more variables
In this lecture we'll use the Python package statsmodels to estimate, interpret, and visualize linear
regression models

Along the way we'll discuss a variety of topics, including
• simple and multivariate linear regression
• visualization
• endogeneity and omitted variable bias
• two-stage least squares
As an example, we will replicate results from Acemoglu, Johnson and Robinson's seminal paper [AJR01]

• You can download a copy here
In the paper, the authors emphasize the importance of institutions in economic development
The main contribution is the use of settler mortality rates as a source of exogenous variation in institutional
differences
Such variation is needed to determine whether it is institutions that give rise to greater economic growth,
rather than the other way around
Prerequisites
Comments
[AJR01] wish to determine whether or not differences in institutions can help to explain observed economic
outcomes
How do we measure institutional differences and economic outcomes?
In this paper,
• economic outcomes are proxied by log GDP per capita in 1995, adjusted for exchange rates
• institutional differences are proxied by an index of protection against expropriation on average over
1985-95, constructed by the Political Risk Services Group
These variables and other data used in the paper are available for download on Daron Acemoglu's webpage
We will use pandas' .read_stata() function to read the data contained in the .dta files into dataframes
import pandas as pd
df1 = pd.read_stata('https://github.com/QuantEcon/QuantEcon.lectures.code/raw/master/ols/maketable1.dta')
df1.head()
Let's use a scatterplot to see whether any obvious relationship exists between GDP per capita and the protection against expropriation index
import matplotlib.pyplot as plt
plt.style.use('seaborn')
The plot shows a fairly strong positive relationship between protection against expropriation and log GDP
per capita
Specifically, if higher protection against expropriation is a measure of institutional quality, then better institutions appear to be positively correlated with better economic outcomes (higher GDP per capita)
Given the plot, choosing a linear model to describe this relationship seems like a reasonable assumption
We can write our model as
logpgp95i = β0 + β1 avexpri + ui
where:
• β0 is the intercept of the linear trend line on the y-axis
• β1 is the slope of the linear trend line, representing the marginal effect of protection against risk on
log GDP per capita
• ui is a random error term (deviations of observations from the linear trend due to factors not included
in the model)
Visually, this linear model involves choosing a straight line that best fits the data, as in the following plot
(Figure 2 in [AJR01])
import numpy as np

# Dropping NAs is required to use numpy's polyfit
df1_subset = df1.dropna(subset=['logpgp95', 'avexpr'])

X = df1_subset['avexpr']
y = df1_subset['logpgp95']
labels = df1_subset['shortnam']

# Replace markers with country labels
plt.scatter(X, y, marker='')
for i, label in enumerate(labels):
    plt.annotate(label, (X.iloc[i], y.iloc[i]))

# Fit a linear trend line
plt.plot(np.unique(X), np.poly1d(np.polyfit(X, y, 1))(np.unique(X)), color='black')

plt.xlim([3.3, 10.5])
plt.ylim([4, 10.5])
plt.xlabel('Average Expropriation Risk 1985-95')
plt.ylabel('Log GDP per capita, PPP, 1995')
plt.title('Figure 2: OLS relationship between expropriation risk and income')
plt.show()
The most common technique to estimate the parameters (βs) of the linear model is Ordinary Least Squares
(OLS)
As the name implies, an OLS model is solved by finding the parameters that minimize the sum of squared residuals, i.e.

min_β̂ ∑_{i=1}^{N} û_i²
where ûi is the difference between the observation and the predicted value of the dependent variable
To estimate the constant term β0 , we need to add a column of 1s to our dataset (consider the equation if β0
was replaced with β0 xi and xi = 1)
df1['const'] = 1
Now we can construct our model in statsmodels using the OLS function

We will use pandas dataframes with statsmodels; however, standard arrays can also be used as arguments

import statsmodels.api as sm

reg1 = sm.OLS(endog=df1['logpgp95'], exog=df1[['const', 'avexpr']], missing='drop')
type(reg1)

statsmodels.regression.linear_model.OLS

So far we have simply constructed our model; we need to use .fit() to obtain parameter estimates

results = reg1.fit()
type(results)

statsmodels.regression.linear_model.RegressionResultsWrapper

To view our results, we can call the .summary() method

print(results.summary())
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
This equation describes the line that best fits our data, as shown in Figure 2
We can use this equation to predict the level of log GDP per capita for a value of the index of expropriation
protection
For example, for a country with an index value of 7.07 (the average for the dataset), we find that their predicted level of log GDP per capita in 1995 is 8.38

mean_expr = np.mean(df1_subset['avexpr'])
mean_expr

6.515625

predicted_logpdp95 = 4.63 + 0.53 * 7.07
predicted_logpdp95

8.3771
An easier (and more accurate) way to obtain this result is to use .predict() and set constant = 1 and
avexpri = mean_expr
results.predict(exog=[1, mean_expr])
array([ 8.09156367])
We can obtain an array of predicted logpgp95i for every value of avexpri in our dataset by calling .
predict() on our results
Plotting the predicted values against avexpri shows that the predicted values lie along the linear line that
we fitted above
The observed values of logpgp95i are also plotted for comparison purposes
plt.legend()
plt.title('OLS predicted values')
plt.xlabel('avexpr')
plt.ylabel('logpgp95')
plt.show()
So far we have only accounted for institutions affecting economic performance - almost certainly there are
numerous other factors affecting GDP that are not included in our model
Leaving out variables that affect logpgp95i will result in omitted variable bias, yielding biased and inconsistent parameter estimates
We can extend our bivariate regression model to a multivariate regression model by adding in other factors
that may affect logpgp95i
[AJR01] consider other factors such as:
• the effect of climate on economic outcomes; latitude is used to proxy this
• differences that affect both economic performance and institutions, e.g. cultural or historical factors; controlled for with the use of continent dummies
Let's estimate some of the extended models considered in the paper (Table 2) using data from maketable2.dta

df2 = pd.read_stata('https://github.com/QuantEcon/QuantEcon.lectures.code/raw/master/ols/maketable2.dta')
Now that we have fitted our models, we will use summary_col to display the results in a single table (model numbers correspond to those in the paper)

from statsmodels.iolib.summary2 import summary_col

results_table = summary_col(results=[reg1,reg2,reg3],
float_format='%0.2f',
stars = True,
model_names=['Model 1',
'Model 3',
'Model 4'],
info_dict=info_dict,
regressor_order=['const',
'avexpr',
'lat_abst',
'asia',
'africa'])
print(results_table)
4.3.4 Endogeneity
As [AJR01] discuss, the OLS models likely suffer from endogeneity issues, resulting in biased and inconsistent model estimates
Namely, there is likely a two-way relationship between institutions and economic outcomes:
• richer countries may be able to afford or prefer better institutions
• variables that affect income may also be correlated with institutional differences
• the construction of the index may be biased; analysts may be biased towards seeing countries with
higher income having better institutions
To deal with endogeneity, we can use two-stage least squares (2SLS) regression, which is an extension of
OLS regression
This method requires replacing the endogenous variable avexpri with a variable that is:
1. correlated with avexpri
2. not correlated with the error term (i.e. it should not directly affect the dependent variable, otherwise it would be correlated with ui due to omitted variable bias)
The new set of regressors is called an instrument, which aims to remove endogeneity in our proxy of
institutional differences
The main contribution of [AJR01] is the use of settler mortality rates to instrument for institutional differences
They hypothesize that higher mortality rates of colonizers led to the establishment of institutions that were
more extractive in nature (less protection against expropriation), and these institutions still persist today
Using a scatterplot (Figure 3 in [AJR01]), we can see protection against expropriation is negatively correlated with settler mortality rates, coinciding with the authors' hypothesis and satisfying the first condition of a valid instrument
# Dropping NAs is required to use numpy's polyfit
df1_subset2 = df1.dropna(subset=['logem4', 'avexpr'])

X = df1_subset2['logem4']
y = df1_subset2['avexpr']
labels = df1_subset2['shortnam']

# Replace markers with country labels
plt.scatter(X, y, marker='')
for i, label in enumerate(labels):
    plt.annotate(label, (X.iloc[i], y.iloc[i]))

plt.xlim([1.8, 8.4])
plt.ylim([3.3, 10.4])
plt.xlabel('Log of Settler Mortality')
plt.ylabel('Average Expropriation Risk 1985-95')
plt.title('Figure 3: First-stage relationship between settler mortality and expropriation risk')
plt.show()
The second condition may not be satisfied if settler mortality rates in the 17th to 19th centuries have a direct
effect on current GDP (in addition to their indirect effect through institutions)
For example, settler mortality rates may be related to the current disease environment in a country, which
could affect current economic performance
[AJR01] argue this is unlikely because:
• The majority of settler deaths were due to malaria and yellow fever, and had limited effect on local
people
• The disease burden on local people in Africa or India, for example, did not appear to be higher than
average, supported by relatively high population densities in these areas before colonization
As we appear to have a valid instrument, we can use 2SLS regression to obtain consistent and unbiased
parameter estimates
First stage
The first stage involves regressing the endogenous variable (avexpri ) on the instrument
The instrument is the set of all exogenous variables in our model (and not just the variable we have replaced)
Using model 1 as an example, our instrument is simply a constant and settler mortality rates logem4i
avexpri = δ0 + δ1 logem4i + vi
The data we need to estimate this equation is located in maketable4.dta (only complete data, indicated by baseco = 1, is used for estimation)

# Load in data
df4 = pd.read_stata('https://github.com/QuantEcon/QuantEcon.lectures.code/raw/master/ols/maketable4.dta')
df4 = df4[df4['baseco'] == 1]

# Add a constant variable
df4['const'] = 1

# Fit the first stage regression and print summary
results_fs = sm.OLS(df4['avexpr'],
                    df4[['const', 'logem4']],
                    missing='drop').fit()
print(results_fs.summary())

Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
Second stage
We need to retrieve the predicted values of avexpri using .predict()

We then replace the endogenous variable avexpri with the predicted values avexpr̂i in the original linear model

logpgp95i = β0 + β1 avexpr̂i + ui
df4['predicted_avexpr'] = results_fs.predict()
results_ss = sm.OLS(df4['logpgp95'],
df4[['const', 'predicted_avexpr']]).fit()
print(results_ss.summary())
==============================================================================
Omnibus:                       10.547   Durbin-Watson:                   2.137
Prob(Omnibus):                  0.005   Jarque-Bera (JB):               11.010
Skew:                          -0.790   Prob(JB):                      0.00407
Kurtosis:                       4.277   Cond. No.                         58.1
==============================================================================

Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
The second-stage regression results give us an unbiased and consistent estimate of the effect of institutions
on economic outcomes
The result suggests a stronger positive relationship than what the OLS results indicated
Note that while our parameter estimates are correct, our standard errors are not, and for this reason computing 2SLS manually (in stages with OLS) is not recommended
We can correctly estimate a 2SLS regression in one step using the linearmodels package, an extension of
statsmodels
To install this package, you will need to run pip install linearmodels in your command line
Note that when using IV2SLS, the exogenous and instrument variables are split up in the function arguments
(whereas before the instrument included exogenous variables)
from linearmodels.iv import IV2SLS

iv = IV2SLS(dependent=df4['logpgp95'],
            exog=df4['const'],
            endog=df4['avexpr'],
            instruments=df4['logem4']).fit(cov_type='unadjusted')
print(iv.summary)
Parameter Estimates
==============================================================================
Parameter Std. Err. T-stat P-value Lower CI Upper CI
------------------------------------------------------------------------------
const 1.9097 1.0106 1.8897 0.0588 -0.0710 3.8903
avexpr 0.9443 0.1541 6.1293 0.0000 0.6423 1.2462
==============================================================================
Endogenous: avexpr
Instruments: logem4
Unadjusted Covariance (Homoskedastic)
Debiased: False
Given that we now have consistent and unbiased estimates, we can infer from the model we have estimated
that institutional differences (stemming from institutions set up during colonization) can help to explain
differences in income levels across countries today
[AJR01] use a marginal effect of 0.94 to calculate that the difference in the index between Chile and Nigeria (i.e. institutional quality) implies up to a 7-fold difference in income, emphasizing the significance of institutions in economic development
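The back-of-the-envelope arithmetic behind that figure can be sketched as follows; the index gap of roughly 2.1 points between Chile and Nigeria is an assumed, illustrative value:

```python
import numpy as np

marginal_effect = 0.94   # 2SLS coefficient on avexpr
index_gap = 2.1          # assumed Chile-Nigeria gap in the expropriation index

# In a log-linear model, the implied ratio of income levels is
income_ratio = np.exp(marginal_effect * index_gap)
print(round(income_ratio, 1))  # roughly a 7-fold difference
```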
4.3.5 Summary
We have demonstrated basic OLS and 2SLS regression in statsmodels and linearmodels
If you are familiar with R, you may want to use the formula interface to statsmodels, or consider using rpy2 to call R from within Python
4.3.6 Exercises
Exercise 1
In the lecture, we think the original model suffers from endogeneity bias due to the likely effect income has
on institutional development
Although endogeneity is often best identified by thinking about the data and model, we can formally test for
endogeneity using the Hausman test
We want to test for correlation between the endogenous variable, avexpri , and the errors, ui
H0 : Cov(avexpri , ui ) = 0 (no endogeneity)
H1 : Cov(avexpri , ui ) ̸= 0 (endogeneity)
This test is run in two stages
First, we regress avexpri on the instrument, logem4i
avexpri = π0 + π1 logem4i + υi
Second, we retrieve the residuals υ̂i and include them in the original equation

logpgp95i = β0 + β1 avexpri + α υ̂i + ei
If α is statistically significant (with a p-value < 0.05), then we reject the null hypothesis and conclude that
avexpri is endogenous
Using the above information, estimate a Hausman test and interpret your results
Exercise 2
The OLS parameter β can also be estimated using matrix algebra and numpy (you may need to review the
numpy lecture to complete this exercise)
The linear equation we want to estimate is (written in matrix form)
y = Xβ + u
To solve for the unknown parameter β, we want to minimize the sum of squared residuals

min_β̂ û′û
Rearranging the first equation and substituting into the second equation, we can write

min_β̂ (y − X β̂)′(y − X β̂)

Solving this optimization problem gives the solution for the β̂ coefficients
β̂ = (X ′ X)−1 X ′ y
Using the above information, compute β̂ from model 1 using numpy - your results should be the same as
those in the statsmodels output from earlier in the lecture
4.3.7 Solutions
Exercise 1
# Load in data
df4 = pd.read_stata('https://github.com/QuantEcon/QuantEcon.lectures.code/raw/master/ols/maketable4.dta')

# Add a constant term
df4['const'] = 1

# Estimate the first stage regression
reg1 = sm.OLS(df4['avexpr'],
              df4[['const', 'logem4']],
              missing='drop').fit()

# Retrieve the residuals
df4['resid'] = reg1.resid

# Estimate the second stage, including the residuals
reg2 = sm.OLS(df4['logpgp95'],
              df4[['const', 'avexpr', 'resid']],
              missing='drop').fit()

print(reg2.summary())

Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
The output shows that the coefficient on the residuals is statistically significant, indicating avexpri is endogenous
Exercise 2
# Load in data
df1 = pd.read_stata('https://github.com/QuantEcon/QuantEcon.lectures.code/raw/master/ols/maketable1.dta')
df1 = df1.dropna(subset=['logpgp95', 'avexpr'])

# Add a constant term
df1['const'] = 1

# Define the X and y variables
y = df1['logpgp95']
X = df1[['const', 'avexpr']]

# Compute β_hat
β_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Print out the results from the 2 x 1 vector β_hat
print(f'β_0 = {β_hat[0]:.2}')
print(f'β_1 = {β_hat[1]:.2}')

β_0 = 4.6
β_1 = 0.53
4.4.1 Overview
In a previous lecture we estimated the relationship between dependent and explanatory variables using linear
regression
But what if a linear relationship is not an appropriate assumption for our model?
One widely used alternative is maximum likelihood estimation, which involves specifying a class of distributions, indexed by unknown parameters, and then using the data to pin down these parameter values
The benefit relative to linear regression is that it allows more flexibility in the probabilistic relationships
between variables
Here we illustrate maximum likelihood by replicating Daniel Treisman's (2016) paper, Russia's Billionaires, which connects the number of billionaires in a country to its economic characteristics
The paper concludes that Russia has a higher number of billionaires than economic factors such as market
size and tax rate predict
Let's consider the steps we need to go through in maximum likelihood estimation and how they pertain to this study
Flow of Ideas
The first step with maximum likelihood estimation is to choose the probability distribution believed to be
generating the data
More precisely, we need to make an assumption as to which parametric class of distributions is generating
the data
• e.g., the class of all normal distributions, or the class of all gamma distributions
Each such class is a family of distributions indexed by a finite number of parameters
• e.g., the class of normal distributions is a family of distributions indexed by its mean µ ∈ (−∞, ∞)
and standard deviation σ ∈ (0, ∞)
We'll let the data pick out a particular element of the class by pinning down the parameters
The parameter estimates so produced will be called maximum likelihood estimates
Counting Billionaires
ax.grid()
ax.set_xlabel('$y$', fontsize=14)
ax.set_ylabel('$f(y \mid \mu)$', fontsize=14)
ax.axis(xmin=0, ymin=0)
ax.legend(fontsize=14)
plt.show()
Notice that the Poisson distribution begins to resemble a normal distribution as the mean of y increases
Let's have a look at the distribution of the data we'll be working with in this lecture

Treisman's main source of data is Forbes' annual rankings of billionaires and their estimated net worth

The dataset mle/fp.dta can be downloaded from here or its AER page
import pandas as pd

pd.options.display.max_columns = 10

# Load in data and view
df = pd.read_stata('https://github.com/QuantEcon/QuantEcon.lectures.code/raw/master/mle/fp.dta')
df.head()
Using a histogram, we can view the distribution of the number of billionaires per country, numbil0, in
2008 (the United States is dropped for plotting purposes)
# Keep only the year 2008 and drop the United States for plotting purposes
numbil0_2008 = df[(df['year'] == 2008) & (
    df['country'] != 'United States')].loc[:, 'numbil0']

plt.subplots(figsize=(12, 8))
plt.hist(numbil0_2008, bins=30)
plt.xlim(xmin=0)
plt.grid()
plt.xlabel('Number of billionaires in 2008')
plt.ylabel('Count')
plt.show()
From the histogram, it appears that the Poisson assumption is not unreasonable (albeit with a very low µ
and some outliers)
In Treisman's paper, the dependent variable, the number of billionaires yi in country i, is modeled as a function of GDP per capita, population size, and years of membership in GATT and WTO
Hence, the distribution of yi needs to be conditioned on the vector of explanatory variables xi
The standard formulation, the so-called Poisson regression model, is as follows:

f(yi | xi) = (µi^{yi} / yi!) e^{−µi};   yi = 0, 1, 2, …, ∞   (4.1)

where µi = exp(x′i β)
import numpy as np
from numpy import exp
from scipy.special import factorial
import matplotlib.pyplot as plt

def poisson_pmf(y, µ):
    return µ**y / factorial(y) * exp(-µ)

y_values = range(0, 25)

# Parameter vector and example rows of explanatory variables (illustrative values)
β = np.array([0.26, 0.18])
datasets = [np.array([1, 1]), np.array([1, 2]), np.array([1, 3])]

fig, ax = plt.subplots(figsize=(12, 8))

for X in datasets:
    µ = exp(X @ β)
    distribution = []
    for y_i in y_values:
        distribution.append(poisson_pmf(y_i, µ))
    ax.plot(y_values,
            distribution,
            label=f'$\mu_i$={µ:.1}',
            marker='o',
            markersize=8,
            alpha=0.5)
ax.grid()
ax.legend()
ax.set_xlabel('$y \mid x_i$')
ax.set_ylabel(r'$f(y \mid x_i; \beta )$')
ax.axis(xmin=0, ymin=0)
plt.show()
In our model for number of billionaires, the conditional distribution contains 4 (k = 4) parameters that we
need to estimate
We will label our entire parameter vector as β, where

β = (β0, β1, β2, β3)′
To estimate the model using MLE, we want to maximize the likelihood that our estimate β̂ is the true
parameter β
Intuitively, we want to find the β̂ that best fits our data
First we need to construct the likelihood function L(β), which is similar to a joint probability density
function
Assume we have some data yi = {y1 , y2 } and yi ∼ f (yi )
If y1 and y2 are independent, the joint pmf of these data is f (y1 , y2 ) = f (y1 ) · f (y2 )
If yi follows a Poisson distribution with λ = 7, we can visualize the joint pmf like so
plot_joint_poisson(µ=7, y_n=20)
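The helper plot_joint_poisson is not defined above; a minimal sketch, assuming it surface-plots the product of two independent Poisson pmfs, might look like:

```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401, registers the 3d projection
from scipy.special import factorial

def poisson_pmf_grid(µ, y_n):
    # pmf of a single Poisson(µ) draw evaluated on 0, ..., y_n - 1
    yi = np.arange(0, y_n)
    return µ**yi / factorial(yi) * np.exp(-µ)

def plot_joint_poisson(µ=7, y_n=20):
    pmf = poisson_pmf_grid(µ, y_n)
    # Joint pmf of two independent draws is the outer product of the marginals
    Z = np.outer(pmf, pmf)
    Y1, Y2 = np.meshgrid(np.arange(y_n), np.arange(y_n))

    fig = plt.figure(figsize=(12, 8))
    ax = fig.add_subplot(111, projection='3d')
    ax.plot_surface(Y1, Y2, Z, cmap='terrain', alpha=0.6)
    ax.set_xlabel('$y_1$', fontsize=14)
    ax.set_ylabel('$y_2$', fontsize=14)
    ax.set_zlabel('$f(y_1, y_2)$', fontsize=14)
    plt.show()
```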
Similarly, the joint pmf of our data (which is distributed as a conditional Poisson distribution) can be written as

f(y1, y2, …, yn | x1, x2, …, xn; β) = ∏_{i=1}^{n} (µi^{yi} / yi!) e^{−µi}
Now that we have our likelihood function, we want to find the β̂ that yields the maximum likelihood value

max_β L(β)
In doing so it is generally easier to maximize the log-likelihood (consider differentiating f (x) = x exp(x)
vs. f (x) = log(x) + x)
Given that taking a logarithm is a monotone increasing transformation, a maximizer of the likelihood function will also be a maximizer of the log-likelihood function
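We can confirm this numerically on a toy likelihood: the argmax is unchanged by taking logs.

```python
import numpy as np

β_grid = np.linspace(0.1, 20, 1001)
L = np.exp(-(β_grid - 10)**2)   # a toy likelihood function
logL = np.log(L)                # its log

# The maximizer is identical for the likelihood and the log-likelihood
print(β_grid[np.argmax(L)] == β_grid[np.argmax(logL)])  # True
```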
In our case the log-likelihood is

log L(β) = log( f(y1; β) · f(y2; β) ⋯ f(yn; β) )
         = ∑_{i=1}^{n} log f(yi; β)
         = ∑_{i=1}^{n} log( (µi^{yi} / yi!) e^{−µi} )
         = ∑_{i=1}^{n} yi log µi − ∑_{i=1}^{n} µi − ∑_{i=1}^{n} log yi!
The MLE of the Poisson model, β̂, can be obtained by solving

max_β ( ∑_{i=1}^{n} yi log µi − ∑_{i=1}^{n} µi − ∑_{i=1}^{n} log yi! )
However, no analytical solution exists to the above problem – to find the MLE we need to use numerical
methods
Many distributions do not have nice, analytical solutions and therefore require numerical methods to solve
for parameter estimates
One such numerical method is the Newton-Raphson algorithm
β = np.linspace(1, 20)
logL = -(β - 10) ** 2 - 10
dlogL = -2 * β + 20

fig, (ax1, ax2) = plt.subplots(2, sharex=True, figsize=(12, 8))
ax1.plot(β, logL, lw=2)
ax2.plot(β, dlogL, lw=2)
ax1.set_ylabel(r'$log \mathcal{L(\beta)}$',
rotation=0,
labelpad=35,
fontsize=15)
ax2.set_ylabel(r'$\frac{dlog \mathcal{L(\beta)}}{d \beta}$ ',
rotation=0,
labelpad=35,
fontsize=19)
ax2.set_xlabel(r'$\beta$', fontsize=15)
ax1.grid(), ax2.grid()
plt.axhline(c='black')
plt.show()
The plot shows that the maximum likelihood value (the top plot) occurs when d log L(β)/dβ = 0 (the bottom plot)
Therefore, the likelihood is maximized when β = 10
We can also ensure that this value is a maximum (as opposed to a minimum) by checking that the second
derivative (slope of the bottom plot) is negative
The Newton-Raphson algorithm finds a point where the first derivative is 0
To use the algorithm, we take an initial guess at the maximum value, β0 (the OLS parameter estimates might
be a reasonable guess), then
1. Use the updating rule to iterate the algorithm

β^(k+1) = β^(k) − G(β^(k)) / H(β^(k))

where:

G(β^(k)) = d log L(β^(k)) / dβ^(k)

H(β^(k)) = d² log L(β^(k)) / d(β^(k))²
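As a warm-up, here is the updating rule applied to the scalar example above, log L(β) = −(β − 10)² − 10, whose first derivative is G(β) = −2β + 20 and whose second derivative is H(β) = −2:

```python
def G(β):
    return -2 * β + 20   # first derivative of logL

def H(β):
    return -2.0          # second derivative of logL

β = 2.0                  # initial guess
for _ in range(100):
    step = G(β) / H(β)
    β = β - step         # Newton-Raphson update
    if abs(step) < 1e-8:
        break

print(β)  # → 10.0, the maximizer, found in one step since logL is quadratic
```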
from scipy.special import factorial

class PoissonRegression:

    def __init__(self, y, X, β):
        self.X, self.y, self.β = X, y, β
        self.n, self.k = X.shape

    def µ(self):
        return np.exp(self.X @ self.β.T)

    def logL(self):
        y = self.y
        µ = self.µ()
        return np.sum(y * np.log(µ) - µ - np.log(factorial(y)))

    def G(self):
        µ = self.µ()
        return (self.y - µ) @ self.X

    def H(self):
        X = self.X
        µ = self.µ()
        return -(µ * X.T @ X)
Our function newton_raphson will take a PoissonRegression object that has an initial guess of the parameter vector β^(0)
The algorithm will update the parameter vector according to the updating rule, and recalculate the gradient
and Hessian matrices at the new parameter estimates
Iteration will end when either:
• The difference between the parameter and the updated parameter is below a tolerance level
• The maximum number of iterations has been achieved (meaning convergence is not achieved)
So we can get an idea of what's going on while the algorithm is running, an option display=True is added to print out values at each iteration
def newton_raphson(model, tol=1e-3, max_iter=1000, display=True):

    i = 0
    error = 100  # Initial error value

    # Print header of output
    if display:
        header = f'{"Iteration_k":<13}{"Log-likelihood":<16}{"θ":<60}'
        print(header)
        print("-" * len(header))

    # Iterate until the error is below tolerance or max_iter is reached
    while np.any(np.abs(error) > tol) and i < max_iter:
        H, G = model.H(), model.G()
        β_new = model.β - np.linalg.solve(H, G)  # Newton-Raphson update
        error = β_new - model.β
        model.β = β_new

        # Print iterations
        if display:
            β_list = [f'{t:.3}' for t in list(model.β)]
            update = f'{i:<13}{model.logL():<16.8}{β_list}'
            print(update)

        i += 1

    print(f'Number of iterations: {i}')
    print(f'β_hat = {model.β}')

    return model.β
Let's try out our algorithm with a small dataset of 5 observations and 3 variables in X
X = np.array([[1, 2, 5],
[1, 1, 3],
[1, 4, 2],
[1, 5, 2],
[1, 3, 1]])
y = np.array([1, 0, 1, 1, 0])

# Create an object with Poisson model values
init_β = np.array([0.1, 0.1, 0.1])
poi = PoissonRegression(y, X, init_β)

# Use newton_raphson to find the MLE
β_hat = newton_raphson(poi, display=True)
Iteration_k Log-likelihood Θ
-----------------------------------------------------------
0 -4.34476224 ['-1.4890', '0.2650', '0.2440']
1 -3.5742413 ['-3.3840', '0.5280', '0.4740']
2 -3.39995256 ['-5.0640', '0.7820', '0.7020']
3 -3.37886465 ['-5.9150', '0.9090', '0.8200']
4 -3.3783559 ['-6.0740', '0.9330', '0.8430']
5 -3.37835551 ['-6.0780', '0.9330', '0.8430']
Number of iterations: 6
β_hat = [-6.07848205 0.93340226 0.84329625]
As this was a simple model with few observations, the algorithm achieved convergence in only 6 iterations
You can see that with each iteration, the log-likelihood value increased
Remember, our objective was to maximize the log-likelihood function, which the algorithm has worked to
achieve
Also note that the increase in log L(β (k) ) becomes smaller with each iteration
This is because the gradient is approaching 0 as we reach the maximum, and therefore the numerator in our
updating equation is becoming smaller
The gradient vector should be close to 0 at β̂
poi.G()
array([[ -3.95169226e-07],
[ -1.00114804e-06],
[ -7.73114556e-07]])
The iterative process can be visualized in the following diagram, where the maximum is found at β = 10
β = np.linspace(2, 18)
fig, ax = plt.subplots(figsize=(12, 8))
ax.plot(β, logL(β), lw=2, c='black')
ax.grid(alpha=0.3)
plt.show()
Note that our implementation of the Newton-Raphson algorithm is rather basic; for more robust implementations see, for example, scipy.optimize
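For instance, scipy.optimize.minimize applied to the negative log-likelihood of the small dataset above should recover essentially the same β̂ as our Newton-Raphson routine:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import factorial

X = np.array([[1, 2, 5],
              [1, 1, 3],
              [1, 4, 2],
              [1, 5, 2],
              [1, 3, 1]])
y = np.array([1, 0, 1, 1, 0])

def neg_logL(β):
    # Negative Poisson log-likelihood (minimizing this maximizes logL)
    µ = np.exp(X @ β)
    return -np.sum(y * np.log(µ) - µ - np.log(factorial(y)))

res = minimize(neg_logL, x0=np.zeros(3), method='BFGS')
print(res.x)  # close to [-6.078, 0.933, 0.843]
```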
Now that we know what's going on under the hood, we can apply MLE to an interesting application

We'll use the Poisson regression model in statsmodels to obtain richer output with standard errors, test values, and more
statsmodels uses the same algorithm as above to find the maximum likelihood estimates
Before we begin, let's re-estimate our simple model with statsmodels to confirm we obtain the same coefficients and log-likelihood value
X = np.array([[1, 2, 5],
[1, 1, 3],
[1, 4, 2],
[1, 5, 2],
[1, 3, 1]])
y = np.array([1, 0, 1, 1, 0])

stats_poisson = sm.Poisson(y, X).fit()
print(stats_poisson.summary())
Now let's replicate results from Daniel Treisman's paper, Russia's Billionaires, mentioned earlier in the lecture
Treisman starts by estimating equation (4.1), where:
• yi is number of billionairesi
• xi1 is log GDP per capitai
• xi2 is log populationi
• xi3 is years in GATTi – years of membership in GATT and WTO (to proxy access to international markets)
The paper only considers the year 2008 for estimation
We will set up our variables for estimation like so (you should have the data assigned to df from earlier in
the lecture)
# Add a constant
df['const'] = 1
# Variable sets
reg1 = ['const', 'lngdppc', 'lnpop', 'gattwto08']
Then we can use the Poisson function from statsmodels to fit the model
We'll use robust standard errors as in the author's paper
import statsmodels.api as sm
# Specify model
poisson_reg = sm.Poisson(df[['numbil0']], df[reg1],
missing='drop').fit(cov_type='HC0')
print(poisson_reg.summary())
Here we received a warning message saying "Maximum number of iterations has been exceeded."

Let's try increasing the maximum number of iterations that the algorithm is allowed (the .fit() docstring tells us the default number of iterations is 35)

poisson_reg = sm.Poisson(df[['numbil0']], df[reg1],
                         missing='drop').fit(cov_type='HC0', maxiter=100)
print(poisson_reg.summary())
Model:                        Poisson   Df Residuals:                      193
Method:                           MLE   Df Model:                            3
Date:                Wed, 26 Jul 2017   Pseudo R-squ.:                  0.8574
Time:                        15:41:38   Log-Likelihood:                -438.54
converged:                       True   LL-Null:                       -3074.7
                                        LLR p-value:                     0.000
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
const        -29.0495      2.578    -11.268      0.000     -34.103     -23.997
lngdppc        1.0839      0.138      7.834      0.000       0.813       1.355
lnpop          1.1714      0.097     12.024      0.000       0.980       1.362
gattwto08      0.0060      0.007      0.868      0.386      -0.008       0.019
==============================================================================
results.append(result)
results_table = summary_col(results=results,
float_format='%0.3f',
stars=True,
model_names=reg_names,
info_dict=info_dict,
regressor_order=regressor_order)
results_table.add_title('Table 1 - Explaining the Number of Billionaires in 2008')
print(results_table)
The output suggests that the frequency of billionaires is positively correlated with GDP per capita, population size, and stock market capitalization, and negatively correlated with top marginal income tax rate

To analyze our results by country, we can plot the difference between the predicted and actual values, then sort from highest to lowest and plot the first 15
data = ['const', 'lngdppc', 'lnpop', 'gattwto08', 'lnmcap08', 'rintr',
'topint08', 'nrrents', 'roflaw', 'numbil0', 'country']
results_df = df[data].dropna()
# Calculate difference
results_df['difference'] = results_df['numbil0'] - results_df['prediction']
plt.xlabel('Country')
plt.show()
As we can see, Russia has by far the highest number of billionaires in excess of what is predicted by the
model (around 50 more than expected)
Treisman uses this empirical result to discuss possible reasons for Russia's excess of billionaires, including the origination of wealth in Russia, the political climate, and the history of privatization in the years after the USSR
4.4.7 Summary
In this lecture we used Maximum Likelihood Estimation to estimate the parameters of a Poisson model
statsmodels contains other built-in likelihood models such as Probit and Logit
For further flexibility, statsmodels provides a way to specify the distribution manually using the
GenericLikelihoodModel class - an example notebook can be found here
4.4.8 Exercises
Exercise 1
Suppose we wanted to estimate the probability of an event yi occurring, given some observations
We could use a probit regression model, where the pmf of yi is

f(yi; β) = µi^{yi} (1 − µi)^{1−yi},   yi = 0, 1;   where µi = Φ(x′i β)
Φ represents the cumulative normal distribution and constrains the predicted yi to be between 0 and 1 (as
required for a probability)
β is a vector of coefficients
Following the example in the lecture, write a class to represent the Probit model
To begin, find the log-likelihood function and derive the gradient and Hessian
The scipy module stats.norm contains the functions needed to compute the cdf and pdf of the normal distribution
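For reference, a quick check of the two functions we need (norm.cdf for Φ and norm.pdf for ϕ):

```python
from scipy.stats import norm

print(norm.cdf(0.0))   # Φ(0) = 0.5
print(norm.pdf(0.0))   # ϕ(0) = 1/√(2π) ≈ 0.3989
```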
Exercise 2
Use the following dataset and initial values of β to estimate the MLE with the Newton-Raphson algorithm
developed earlier in the lecture
X = [[1, 2, 4],
     [1, 1, 1],
     [1, 4, 3],
     [1, 5, 6],
     [1, 3, 5]]

y = [1, 0, 1, 1, 0]

β^(0) = [0.1, 0.1, 0.1]
Verify your results with statsmodels - you can import the Probit function with the following import statement

from statsmodels.discrete.discrete_model import Probit
Note that the simple Newton-Raphson algorithm developed in this lecture is very sensitive to initial values,
and therefore you may fail to achieve convergence with different starting values
4.4.9 Solutions
Exercise 1
The log-likelihood is

log L = ∑_{i=1}^{n} [ yi log Φ(x′i β) + (1 − yi) log(1 − Φ(x′i β)) ]
Using the fundamental theorem of calculus, the derivative of a cumulative probability distribution is its
marginal distribution
∂Φ(s)/∂s = ϕ(s)
where ϕ is the marginal normal distribution
The gradient vector of the Probit model is

∂ log L/∂β = ∑_{i=1}^{n} [ yi ϕ(x′i β)/Φ(x′i β) − (1 − yi) ϕ(x′i β)/(1 − Φ(x′i β)) ] xi
Using these results, we can write a class for the Probit model as follows
from scipy.stats import norm

class ProbitRegression:

    def __init__(self, y, X, β):
        self.X, self.y, self.β = X, y, β
        self.n, self.k = X.shape

    def µ(self):
        return norm.cdf(self.X @ self.β.T)

    def ϕ(self):
        return norm.pdf(self.X @ self.β.T)

    def logL(self):
        y = self.y
        µ = self.µ()
        return np.sum(y * np.log(µ) + (1 - y) * np.log(1 - µ))

    def G(self):
        X = self.X
        y = self.y
        µ = self.µ()
        ϕ = self.ϕ()
        return np.sum((X.T * y * ϕ / µ - X.T * (1 - y) * ϕ / (1 - µ)),
                      axis=1)

    def H(self):
        X = self.X
        β = self.β
        y = self.y
        µ = self.µ()
        ϕ = self.ϕ()
        a = (ϕ + (X @ β.T) * µ) / µ**2
        b = (ϕ - (X @ β.T) * (1 - µ)) / (1 - µ)**2
        return -(ϕ * (y * a + (1 - y) * b) * X.T) @ X
Exercise 2
X = np.array([[1, 2, 4],
[1, 1, 1],
[1, 4, 3],
[1, 5, 6],
[1, 3, 5]])
y = np.array([1, 0, 1, 1, 0])
Iteration_k Log-likelihood θ
-----------------------------------------------------------
0 -2.37968841 ['-1.3400', '0.7750', '-0.1570']
1 -2.36875259 ['-1.5350', '0.7750', '-0.0980']
2 -2.36872942 ['-1.5460', '0.7780', '-0.0970']
3 -2.36872942 ['-1.5460', '0.7780', '-0.0970']
Number of iterations: 4
β_hat = [-1.54625858 0.77778952 -0.09709757]
print(Probit(y, X).fit().summary())
Chapter 5: Tools and Techniques
This section of the course contains foundational mathematical and statistical tools and techniques
Contents
• Linear Algebra
– Overview
– Vectors
– Matrices
– Solving Systems of Equations
– Eigenvalues and Eigenvectors
– Further Topics
– Exercises
– Solutions
5.1.1 Overview
Linear algebra is one of the most useful branches of applied mathematics for economists to invest in
For example, many applied problems in economics and finance require the solution of a linear system of
equations, such as
y1 = ax1 + bx2
y2 = cx1 + dx2
The objective here is to solve for the unknowns x1 , . . . , xk given a11 , . . . , ank and y1 , . . . , yn
When considering such problems, it is essential that we first consider at least some of the following questions
• Does a solution actually exist?
• Are there in fact many solutions, and if so how should we interpret them?
• If no solution exists, is there a best approximate solution?
• If a solution exists, how should we compute it?
These are the kinds of topics addressed by linear algebra
In this lecture we will cover the basics of linear and matrix algebra, treating both theory and computation
We admit some overlap with this lecture, where operations on NumPy arrays were first explained
Note that this lecture is more theoretical than most, and contains background material that will be used in
applications as we go along
5.1.2 Vectors
A vector of length n is just a sequence (or array, or tuple) of n numbers, which we write as x = (x1 , . . . , xn )
or x = [x1 , . . . , xn ]
We will write these sequences either horizontally or vertically as we please
(Later, when we wish to perform certain matrix operations, it will become necessary to distinguish between
the two)
The set of all n-vectors is denoted by Rn
For example, R2 is the plane, and a vector in R2 is just a point in the plane
Traditionally, vectors are represented visually as arrows from the origin to the point
The following figure represents three vectors in this manner
Vector Operations
The two most common operators for vectors are addition and scalar multiplication, which we now describe
As a matter of definition, when we add two vectors, we add them element by element
x + y = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} + \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} := \begin{bmatrix} x_1 + y_1 \\ x_2 + y_2 \\ \vdots \\ x_n + y_n \end{bmatrix}
Scalar multiplication is an operation that takes a number γ and a vector x and produces
γx := \begin{bmatrix} γ x_1 \\ γ x_2 \\ \vdots \\ γ x_n \end{bmatrix}
import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(10, 8))
# Set the axes through the origin
for spine in ['left', 'bottom']:
    ax.spines[spine].set_position('zero')
for spine in ['right', 'top']:
    ax.spines[spine].set_color('none')
ax.set(xlim=(-5, 5), ylim=(-5, 5))

x = np.array([2, 2])  # The vector to be scaled (illustrative choice)
scalars = (-2, 2)
for s in scalars:
    v = s * x
    ax.annotate('', xy=v, xytext=(0, 0),
                arrowprops=dict(facecolor='red',
                                shrink=0,
                                alpha=0.5,
                                width=0.5))
    ax.text(v[0] + 0.4, v[1] - 0.2, f'${s} x$', fontsize='16')
plt.show()
In Python, a vector can be represented as a list or tuple, such as x = (2, 4, 6), but is more commonly
represented as a NumPy array
One advantage of NumPy arrays is that scalar multiplication and addition have very natural syntax
x = np.ones(3)
y = np.array((2, 4, 6))
x + y          # Elementwise addition
4 * x          # Scalar multiplication
The inner product of vectors x, y ∈ Rⁿ is defined as

x'y := \sum_{i=1}^{n} x_i y_i

and the norm of x is ∥x∥ := √(x'x)

With x = np.ones(3) and y = np.array((2, 4, 6)) as above, these are computed as follows

np.sum(x * y)          # Inner product of x and y

12.0

np.sqrt(np.sum(x**2))  # Norm of x, method one

1.7320508075688772

np.linalg.norm(x)      # Norm of x, method two

1.7320508075688772
Span
Given a set of vectors A := {a1 , . . . , ak } in Rⁿ, it's natural to think about the new vectors we can create by
performing linear operations
New vectors created in this manner are called linear combinations of A
In particular, y ∈ Rⁿ is a linear combination of A := {a1 , . . . , ak } if

y = β1 a1 + · · · + βk ak for some scalars β1 , . . . , βk

In this context, the values β1 , . . . , βk are called the coefficients of the linear combination
The set of linear combinations of A is called the span of A
The next figure shows the span of A = {a1 , a2 } in R3
The span is a 2 dimensional plane passing through these two points and the origin
import numpy as np
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(10, 8))
ax = fig.add_subplot(projection='3d')

x_min, x_max = -5, 5
α, β = 0.2, 0.1

def f(x, y):
    return α * x + β * y  # The plane spanned by a1 and a2

# Axes through the origin
gs = 3
z = np.linspace(x_min, x_max, gs)
x = np.zeros(gs)
y = np.zeros(gs)
ax.plot(x, y, z, 'k-', lw=2, alpha=0.5)
ax.plot(z, x, y, 'k-', lw=2, alpha=0.5)
ax.plot(y, z, x, 'k-', lw=2, alpha=0.5)

# Lines from the origin to the vectors a1, a2 (illustrative coordinates)
x_coords, y_coords = (3, 3), (4, -4)
for i in (0, 1):
    x = (0, x_coords[i])
    y = (0, y_coords[i])
    z = (0, f(x_coords[i], y_coords[i]))
    ax.plot(x, y, z, 'b-', lw=1.5, alpha=0.6)
plt.show()
Examples
If A contains only one vector a1 ∈ R2 , then its span is just the scalar multiples of a1 , which is the unique
line passing through both a1 and the origin
If A = {e1 , e2 , e3 } consists of the canonical basis vectors of R3 , that is
e_1 := \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad e_2 := \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad e_3 := \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
then the span of A is all of R3 , because, for any x = (x1 , x2 , x3 ) ∈ R3 , we can write
x = x1 e1 + x2 e2 + x3 e3
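As a quick numerical check (a sketch, not part of the original lecture code), we can verify this decomposition with NumPy for an arbitrary illustrative vector:

```python
import numpy as np

e1, e2, e3 = np.eye(3)             # rows of the identity are the canonical basis vectors
x = np.array([2.0, -1.0, 4.0])     # an arbitrary illustrative vector
combo = x[0] * e1 + x[1] * e2 + x[2] * e3
print(np.allclose(x, combo))       # True
```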
Linear Independence
As we'll see, it's often desirable to find families of vectors with relatively large span, so that many vectors
can be described by linear operators on a few vectors
The condition we need for a set of vectors to have a large span is what's called linear independence
In particular, a collection of vectors A := {a1 , . . . , ak } in Rn is said to be
• linearly dependent if some strict subset of A has the same span as A
• linearly independent if it is not linearly dependent
Put differently, a set of vectors is linearly independent if no vector is redundant to the span, and linearly
dependent otherwise
To illustrate the idea, recall the figure that showed the span of vectors {a1 , a2 } in R3 as a plane through the
origin
If we take a third vector a3 and form the set {a1 , a2 , a3 }, this set will be
• linearly dependent if a3 lies in the plane
• linearly independent otherwise
As another illustration of the concept, since Rn can be spanned by n vectors (see the discussion of canonical
basis vectors above), any collection of m > n vectors in Rn must be linearly dependent
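One way to check linear independence numerically, a sketch using np.linalg.matrix_rank (not part of the lecture's own code), is to stack the vectors as columns and compare the rank to the number of columns:

```python
import numpy as np

a1 = np.array([1.0, 0.0, 0.0])
a2 = np.array([0.0, 1.0, 0.0])
a3 = a1 + a2                         # redundant: a linear combination of a1 and a2

A_ind = np.column_stack([a1, a2])
A_dep = np.column_stack([a1, a2, a3])

print(np.linalg.matrix_rank(A_ind))  # 2, equal to the number of columns: independent
print(np.linalg.matrix_rank(A_dep))  # 2 < 3: dependent
```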
The following statements are equivalent to linear independence of A := {a1 , . . . , ak } ⊂ Rn
1. No vector in A can be formed as a linear combination of the other elements
2. If β1 a1 + · · · + βk ak = 0 for scalars β1 , . . . , βk , then β1 = · · · = βk = 0
(The zero in the first expression is the origin of Rn )
Unique Representations
Another nice thing about sets of linearly independent vectors is that each element in the span has a unique
representation as a linear combination of these vectors
In other words, if A := {a1 , . . . , ak } ⊂ Rⁿ is linearly independent and

y = β1 a1 + · · · + βk ak

then no other coefficient sequence γ1 , . . . , γk will produce the same vector y
5.1.3 Matrices
Matrices are a neat way of organizing data for use in linear operations
An n × k matrix is a rectangular array A of numbers with n rows and k columns:
A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1k} \\ a_{21} & a_{22} & \cdots & a_{2k} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nk} \end{bmatrix}
Often, the numbers in the matrix represent coefficients in a system of linear equations, as discussed at the
start of this lecture
For obvious reasons, the matrix A is also called a vector if either n = 1 or k = 1
In the former case, A is called a row vector, while in the latter it is called a column vector
If n = k, then A is called square
The matrix formed by replacing aij by aji for every i and j is called the transpose of A, and denoted A′ or
A⊤
If A = A′ , then A is called symmetric
For a square matrix A, the n elements of the form aii for i = 1, . . . , n are called the principal diagonal
A is called diagonal if the only nonzero entries are on the principal diagonal
If, in addition to being diagonal, each element along the principal diagonal is equal to 1, then A is called the
identity matrix, and denoted by I
Matrix Operations
Just as was the case for vectors, a number of algebraic operations are defined for matrices
Scalar multiplication and addition are immediate generalizations of the vector case:
γA = γ \begin{bmatrix} a_{11} & \cdots & a_{1k} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nk} \end{bmatrix} := \begin{bmatrix} γ a_{11} & \cdots & γ a_{1k} \\ \vdots & & \vdots \\ γ a_{n1} & \cdots & γ a_{nk} \end{bmatrix}

and

A + B = \begin{bmatrix} a_{11} & \cdots & a_{1k} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nk} \end{bmatrix} + \begin{bmatrix} b_{11} & \cdots & b_{1k} \\ \vdots & & \vdots \\ b_{n1} & \cdots & b_{nk} \end{bmatrix} := \begin{bmatrix} a_{11} + b_{11} & \cdots & a_{1k} + b_{1k} \\ \vdots & & \vdots \\ a_{n1} + b_{n1} & \cdots & a_{nk} + b_{nk} \end{bmatrix}
In the latter case, the matrices must have the same shape in order for the definition to make sense
We also have a convention for multiplying two matrices
The rule for matrix multiplication generalizes the idea of inner products discussed above, and is designed to
make multiplication play well with basic linear operations
If A and B are two matrices, then their product AB is formed by taking as its i, j-th element the inner
product of the i-th row of A and the j-th column of B
There are many tutorials to help you visualize this operation, such as this one, or the discussion on the
Wikipedia page
If A is n × k and B is j × m, then to multiply A and B we require k = j, and the resulting matrix AB is
n×m
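For instance, here is a small sketch of the shape rule using NumPy's @ operator (the matrices are arbitrary illustrative values):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])        # 3 x 2
B = np.array([[1, 0, 1],
              [0, 1, 1]])     # 2 x 3
C = A @ B                     # inner dimensions match (k = j = 2)
print(C.shape)                # (3, 3)
```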
As perhaps the most important special case, consider multiplying n × k matrix A and k × 1 column vector
x
According to the preceding rule, this gives us an n × 1 column vector
Ax = \begin{bmatrix} a_{11} & \cdots & a_{1k} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nk} \end{bmatrix} \begin{bmatrix} x_1 \\ \vdots \\ x_k \end{bmatrix} := \begin{bmatrix} a_{11} x_1 + \cdots + a_{1k} x_k \\ \vdots \\ a_{n1} x_1 + \cdots + a_{nk} x_k \end{bmatrix}    (5.2)
Matrices in NumPy
NumPy arrays are also used as matrices, and have fast, efficient functions and methods for all the standard
matrix operations¹
You can create them manually from tuples of tuples (or lists of lists) as follows
A = ((1, 2),
(3, 4))
type(A)
tuple
A = np.array(A)
type(A)
numpy.ndarray
¹ Although there is a specialized matrix data type defined in NumPy, it's more standard to work with ordinary NumPy arrays.
See this discussion.
A.shape
(2, 2)
The shape attribute is a tuple giving the number of rows and columns (see here for more discussion)
To get the transpose of A, use A.transpose() or, more simply, A.T
There are many convenient functions for creating common matrices (matrices of zeros, ones, etc.); see here
Since operations are performed elementwise by default, scalar multiplication and addition have very natural
syntax
A = np.identity(3)
B = np.ones((3, 3))
2 * A
A + B
Matrices as Maps
Each n × k matrix A can be identified with a function f (x) = Ax that maps x ∈ Rk into y = Ax ∈ Rn
These kinds of functions have a special property: they are linear
A function f : Rᵏ → Rⁿ is called linear if, for all x, y ∈ Rᵏ and all scalars α, β, we have

f(αx + βy) = αf(x) + βf(y)
You can check that this holds for the function f (x) = Ax + b when b is the zero vector, and fails when b is
nonzero
In fact, it's known that f is linear if and only if there exists a matrix A such that f (x) = Ax for all x

5.1.4 Solving Systems of Equations

If we compare (5.1) and (5.2), we see that (5.1) can now be written more conveniently as
y = Ax (5.3)
The problem we face is to determine a vector x ∈ Rk that solves (5.3), taking y and A as given
This is a special case of a more general problem: Find an x such that y = f (x)
Given an arbitrary function f and a y, is there always an x such that y = f (x)?
If so, is it always unique?
The answer to both these questions is negative, as the next figure shows
import numpy as np
import matplotlib.pyplot as plt

def f(x):
    return 0.6 * np.cos(4 * x) + 1.4

x = np.linspace(-1, 1, 160)

fig, axes = plt.subplots(2, 1, figsize=(10, 10))
for ax in axes:
    # Set the axes through the origin
    for spine in ['left', 'bottom']:
        ax.spines[spine].set_position('zero')
    for spine in ['right', 'top']:
        ax.spines[spine].set_color('none')
    ax.plot(x, f(x), 'k-', lw=2)

# First panel: a y value that f attains more than once (illustrative value)
axes[0].axhline(1.6, ls='--', color='b')
# Second panel: ybar lies outside the range of f, so y = f(x) has no solution
ybar = 2.6
axes[1].axhline(ybar, ls='--', color='b')
plt.show()
In the first plot there are multiple solutions, as the function is not one-to-one, while in the second there are
no solutions, since y lies outside the range of f
Can we impose conditions on A in (5.3) that rule out these problems?
In this context, the most important thing to recognize about the expression Ax is that it corresponds to a
linear combination of the columns of A
Ax = x1 a1 + · · · + xk ak

where a1 , . . . , ak are the columns of A
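We can confirm this column interpretation numerically; the matrix and vector below are illustrative choices, not from the lecture:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
x = np.array([3.0, -1.0])
combo = x[0] * A[:, 0] + x[1] * A[:, 1]   # x1*a1 + x2*a2
print(np.allclose(A @ x, combo))          # True
```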
The n × n Case
Lets discuss some more details, starting with the case where A is n × n
This is the familiar case where the number of unknowns equals the number of equations
For arbitrary y ∈ Rn , we hope to find a unique x ∈ Rn such that y = Ax
In view of the observations immediately above, if the columns of A are linearly independent, then their span,
and hence the range of f (x) = Ax, is all of Rn
Hence there always exists an x such that y = Ax
Moreover, the solution is unique
In particular, the following are equivalent
1. The columns of A are linearly independent
2. For any y ∈ Rn , the equation y = Ax has a unique solution
The property of having linearly independent columns is sometimes expressed as having full column rank
Inverse Matrices

If the columns of the n × n matrix A are linearly independent, then there exists a matrix A⁻¹, called the
inverse of A, such that A⁻¹A = AA⁻¹ = I
The unique solution to y = Ax is then x = A⁻¹y
Determinants
Another quick comment about square matrices is that to every such matrix we assign a unique number called
the determinant of the matrix you can find the expression for it here
If the determinant of A is not zero, then we say that A is nonsingular
Perhaps the most important fact about determinants is that A is nonsingular if and only if A is of full column
rank
This gives us a useful one-number summary of whether or not a square matrix can be inverted
More Columns than Rows

This is the n × k case with n < k, so there are fewer equations than unknowns
In this case there are either no solutions or infinitely many; in other words, uniqueness never holds
For example, consider the case where k = 3 and n = 2
Thus, the columns of A consist of 3 vectors in R²
This set can never be linearly independent, since it is possible to find two vectors that span R2
(For example, use the canonical basis vectors)
It follows that one column is a linear combination of the other two
For example, let's say that a1 = αa2 + βa3
Then if y = Ax = x1 a1 + x2 a2 + x3 a3 , we can also write

y = x1 (αa2 + βa3 ) + x2 a2 + x3 a3 = (x1 α + x2 )a2 + (x1 β + x3 )a3
Here's an illustration of how to solve linear equations with SciPy's linalg submodule
All of these routines are Python front ends to time-tested and highly optimized FORTRAN code
import numpy as np
from scipy.linalg import inv, solve, det

A = ((1, 2),
     (3, 4))
A = np.array(A)
y = np.ones((2, 1))  # Column vector
det(A)               # Check that A is nonsingular, and hence invertible

-2.0

A_inv = inv(A)  # Compute the inverse
A_inv

array([[-2. ,  1. ],
       [ 1.5, -0.5]])

x = A_inv @ y  # Solution
A @ x          # Should equal y

array([[ 1.],
       [ 1.]])

solve(A, y)  # Produces the same solution

array([[-1.],
       [ 1.]])
Observe how we can solve for x = A⁻¹y either via inv(A) @ y or via solve(A, y)
The latter method uses a different algorithm (LU decomposition) that is numerically more stable, and hence
should almost always be preferred
To obtain the least squares solution x̂ = (A′ A)−1 A′ y, use scipy.linalg.lstsq(A, y)
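As a sketch (with made-up data), scipy.linalg.lstsq agrees with the normal-equations formula when A has linearly independent columns:

```python
import numpy as np
from scipy.linalg import lstsq

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])       # 3 equations, 2 unknowns
y = np.array([1.0, 2.0, 2.0])

x_hat, *_ = lstsq(A, y)                       # least squares solution
manual = np.linalg.inv(A.T @ A) @ A.T @ y     # (A'A)^{-1} A'y
print(np.allclose(x_hat, manual))             # True
```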
5.1.5 Eigenvalues and Eigenvectors

Let A be an n × n square matrix
If λ is scalar and v is a non-zero vector in Rⁿ such that

Av = λv

then we say that λ is an eigenvalue of A, and v is an eigenvector of A
import numpy as np
import matplotlib.pyplot as plt
from scipy.linalg import eig

A = ((1, 2),
     (2, 1))
A = np.array(A)
evals, evecs = eig(A)
evecs = evecs[:, 0], evecs[:, 1]

fig, ax = plt.subplots(figsize=(10, 8))
x = np.linspace(-3, 3, 3)
# Plot each eigenvector as a line through the origin
for v in evecs:
    a = (v[1] / v[0]).real
    ax.plot(x, a * x, 'b-', lw=0.4)
plt.show()
The eigenvalue equation is equivalent to (A − λI)v = 0, and this has a nonzero solution v only when the
columns of A − λI are linearly dependent
This in turn is equivalent to stating that the determinant is zero
Hence to find all eigenvalues, we can look for λ such that the determinant of A − λI is zero
This problem can be expressed as one of solving for the roots of a polynomial in λ of degree n
This in turn implies the existence of n solutions in the complex plane, although some might be repeated
Some nice facts about the eigenvalues of a square matrix A are as follows
1. The determinant of A equals the product of the eigenvalues
2. The trace of A (the sum of the elements on the principal diagonal) equals the sum of the eigenvalues
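These two facts are easy to verify numerically; a sketch using numpy.linalg with the same matrix as the code below:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
evals = np.linalg.eigvals(A)
print(np.isclose(np.prod(evals), np.linalg.det(A)))  # determinant = product of eigenvalues
print(np.isclose(np.sum(evals), np.trace(A)))        # trace = sum of eigenvalues
```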
from scipy.linalg import eig

A = ((1, 2),
     (2, 1))
A = np.array(A)
evals, evecs = eig(A)
evals
evecs
Generalized Eigenvalues
It is sometimes useful to consider the generalized eigenvalue problem, which, for given matrices A and B,
seeks generalized eigenvalues λ and eigenvectors v such that
Av = λBv
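SciPy's scipy.linalg.eig handles this directly via its second argument; here is a small sketch with diagonal matrices chosen so the generalized eigenvalues are easy to read off:

```python
import numpy as np
from scipy.linalg import eig

A = np.array([[3.0, 0.0],
              [0.0, 2.0]])
B = np.array([[1.0, 0.0],
              [0.0, 2.0]])
evals, evecs = eig(A, B)       # solves A v = λ B v
print(sorted(evals.real))      # [1.0, 3.0]
```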
5.1.6 Further Topics

We round out our discussion by briefly mentioning several other important topics
Series Expansions
Recall the usual summation formula for a geometric progression, which states that if |a| < 1, then

\sum_{k=0}^{\infty} a^k = (1 - a)^{-1}
Matrix Norms

The spectral norm of a square matrix A is defined as

∥A∥ := \max_{∥x∥ = 1} ∥Ax∥

The norms on the right-hand side are ordinary vector norms, while the norm on the left-hand side is a matrix
norm; in this case, the so-called spectral norm
For example, for a square matrix S, the condition ∥S∥ < 1 means that S is contractive, in the sense that it
pulls all vectors towards the origin
Neumann's Theorem

If ∥Aᵏ∥ < 1 for some k ∈ ℕ, then I − A is invertible, and

(I − A)^{-1} = \sum_{k=0}^{\infty} A^k    (5.4)
Spectral Radius
A result known as Gelfand's formula tells us that, for any square matrix A,

ρ(A) = \lim_{k \to \infty} ∥A^k∥^{1/k}

Here ρ(A) is the spectral radius, defined as maxᵢ |λᵢ|, where {λᵢ}ᵢ is the set of eigenvalues of A
As a consequence of Gelfand's formula, if all eigenvalues are strictly less than one in modulus, there exists a
k with ∥Aᵏ∥ < 1, in which case (5.4) is valid
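A quick numerical sketch of the Neumann series (5.4), using an arbitrary matrix with spectral radius below one:

```python
import numpy as np

A = np.array([[0.4, 0.1],
              [0.2, 0.3]])                 # spectral radius is 0.5 < 1
total = np.zeros((2, 2))
term = np.eye(2)
for _ in range(200):                       # partial sums of sum_k A^k
    total += term
    term = term @ A
print(np.allclose(total, np.linalg.inv(np.eye(2) - A)))   # True
```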
Positive Definite Matrices

A symmetric n × n matrix A is called positive definite if x′Ax > 0 for every nonzero x ∈ Rⁿ, and positive
semi-definite if x′Ax ≥ 0 for every x ∈ Rⁿ
Analogous definitions exist for negative definite and negative semi-definite matrices
It is notable that if A is positive definite, then all of its eigenvalues are strictly positive, and hence A is
invertible (with positive definite inverse)
Further Reading
5.1.7 Exercises
Exercise 1

Let x be a given n × 1 vector and consider the problem of choosing an n × 1 vector y and an m × 1 vector u to maximize

−y′P y − u′Qu

subject to the linear constraint

y = Ax + Bu

Here
• P is an n × n matrix and Q is an m × m matrix
• both P and Q are symmetric and positive semidefinite
• A is an n × n matrix and B is an n × m matrix
One way to solve the problem is to form the Lagrangian

L = −y′P y − u′Qu + λ′[Ax + Bu − y]

where λ is an n × 1 vector of Lagrange multipliers
Note: If we don't care about the Lagrange multipliers, we can substitute the constraint into the objective
function, and then just maximize −(Ax + Bu)′P (Ax + Bu) − u′Qu with respect to u. You can verify that
this leads to the same maximizer.
5.1.8 Solutions
Solution to Exercise 1

We have the optimization problem

v(x) = \max_{y,u} \{ −y′P y − u′Qu \}

s.t.

y = Ax + Bu
with primitives
• P be a symmetric and positive semidefinite n × n matrix
• Q be a symmetric and positive semidefinite m × m matrix
• A an n × n matrix
• B an n × m matrix
The associated Lagrangian is
L = −y ′ P y − u′ Qu + λ′ [Ax + Bu − y]
1.
Differentiating the Lagrangian with respect to y and setting the derivative equal to zero yields

∂L/∂y = −(P + P′)y − λ = −2P y − λ = 0 ,

since P is symmetric
Accordingly, the first-order condition for maximizing L w.r.t. y implies

λ = −2P y
2.
Differentiating the Lagrangian with respect to u and setting the derivative equal to zero yields

∂L/∂u = −(Q + Q′)u + B′λ = −2Qu + B′λ = 0
Substituting λ = −2P y gives

Qu + B′P y = 0

Substituting the constraint y = Ax + Bu into this equation gives

Qu + B′P (Ax + Bu) = 0

or

(Q + B′P B)u + B′P Ax = 0
which is the first-order condition for maximizing L w.r.t. u
Thus, the optimal choice of u must satisfy
u = −(Q + B ′ P B)−1 B ′ P Ax ,
which follows from the first-order conditions derived above
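To build confidence in this formula, here is a small numerical sketch (random matrices, not from the lecture) checking that the candidate u beats nearby perturbations:

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 3, 2
G = rng.standard_normal((n, n)); P = G.T @ G    # symmetric positive semidefinite
H = rng.standard_normal((m, m)); Q = H.T @ H
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
x = rng.standard_normal(n)

# Candidate maximizer u = -(Q + B'PB)^{-1} B'PA x
u_star = -np.linalg.solve(Q + B.T @ P @ B, B.T @ P @ A @ x)

def objective(u):
    y = A @ x + B @ u
    return -y @ P @ y - u @ Q @ u

# The objective is concave in u, so u_star should beat any perturbation
perturbed = [objective(u_star + 0.1 * rng.standard_normal(m)) for _ in range(100)]
print(all(v <= objective(u_star) + 1e-9 for v in perturbed))   # True
```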
3.
Rewriting our problem by substituting the constraint into the objective function, we get

v(x) = \max_u \{ −(Ax + Bu)′P (Ax + Bu) − u′Qu \}

Denote S := (Q + B′P B)⁻¹B′P A, so that the optimal choice derived above is u = −Sx
Expanding the objective, the cross term is

−2u′B′P Ax = 2x′S′B′P Ax = 2x′A′P B(Q + B′P B)⁻¹B′P Ax

Notice that the term (Q + B′P B)⁻¹ is symmetric as both P and Q are symmetric
Regarding the third term −u′(Q + B′P B)u, we have

−u′(Q + B′P B)u = −x′S′(Q + B′P B)Sx = −x′A′P B(Q + B′P B)⁻¹B′P Ax

while the first term is −x′A′P Ax
Therefore, the solution to the optimization problem v(x) = −x′ P̃ x follows the above result by denoting
P̃ := A′ P A − A′ P B(Q + B ′ P B)−1 B ′ P A
5.2 Orthogonal Projections and Their Applications

Contents
5.2.1 Overview
Orthogonal projection is a cornerstone of vector space methods, with many diverse applications
These include, but are not limited to,
• Least squares projection, also known as linear regression
• Conditional expectations for multivariate normal (Gaussian) distributions
• Gram–Schmidt orthogonalization
• QR decomposition
• Orthogonal polynomials
• etc
In this lecture we focus on
• key ideas
• least squares regression
Further Reading
For background and foundational concepts, see our lecture on linear algebra
For more proofs and greater theoretical detail, see A Primer in Econometric Theory
For a complete set of proofs in a general setting, see, for example, [Rom05]
For an advanced treatment of projection in the context of least squares prediction, see this book chapter
5.2.2 Key Definitions

Assume x, z ∈ Rⁿ
Define ⟨x, z⟩ = \sum_i x_i z_i
Recall ∥x∥² = ⟨x, x⟩
The law of cosines states that ⟨x, z⟩ = ∥x∥∥z∥ cos(θ), where θ is the angle between the vectors x and z
When ⟨x, z⟩ = 0, then cos(θ) = 0 and x and z are said to be orthogonal, and we write x ⊥ z
Given S ⊂ Rⁿ, we say that x ∈ Rⁿ is orthogonal to S (written x ⊥ S) if x ⊥ z for every z ∈ S, and define
S ⊥ := {x ∈ Rⁿ : x ⊥ S}
S ⊥ is a linear subspace of Rⁿ
• To see this, fix x, y ∈ S ⊥ and α, β ∈ R
• Observe that if z ∈ S, then
⟨αx + βy, z⟩ = α⟨x, z⟩ + β⟨y, z⟩ = α × 0 + β × 0 = 0
The Orthogonal Projection Theorem states the following: given y ∈ Rⁿ and linear subspace S ⊂ Rⁿ, there
exists a unique solution to the minimization problem

ŷ := \arg\min_{z ∈ S} ∥y − z∥

The minimizer ŷ is called the orthogonal projection of y onto S
For a linear space Y and a fixed linear subspace S, we have a functional relationship
y ∈ Y 7→ its orthogonal projection ŷ ∈ S
By the OPT, this is a well-defined mapping or operator from Rn to Rn
In what follows we denote this operator by a matrix P
• P y represents the projection ŷ
• This is sometimes expressed as ÊS y = P y, where Ê denotes a wide-sense expectations operator
and the subscript S indicates that we are projecting y onto the linear subspace S
The operator P is called the orthogonal projection mapping onto S
It is immediate from the OPT that, for any y ∈ Rⁿ,
1. P y ∈ S and
2. y − P y ⊥ S
From this we can deduce additional useful properties, such as
1. ∥y∥2 = ∥P y∥2 + ∥y − P y∥2 and
2. ∥P y∥ ≤ ∥y∥
For example, to prove 1, observe that y = P y + y − P y and apply the Pythagorean law
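Both properties are easy to confirm numerically; the sketch below reuses the matrix X and vector y from Exercise 3 later in this lecture:

```python
import numpy as np

X = np.array([[1.0, 0.0],
              [0.0, -6.0],
              [2.0, 2.0]])
y = np.array([1.0, 3.0, -3.0])
P = X @ np.linalg.inv(X.T @ X) @ X.T   # projection onto the column space of X
Py = P @ y
# Pythagorean law: ||y||^2 = ||Py||^2 + ||y - Py||^2
print(np.isclose(y @ y, Py @ Py + (y - Py) @ (y - Py)))   # True
```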
Orthogonal Complement
Let S ⊂ Rn .
The orthogonal complement of S is the linear subspace S ⊥ that satisfies x1 ⊥ x2 for every x1 ∈ S and
x2 ∈ S ⊥
Let Y be a linear space with linear subspace S and its orthogonal complement S ⊥
We write
Y = S ⊕ S⊥
to indicate that for every y ∈ Y there is unique x1 ∈ S and a unique x2 ∈ S ⊥ such that y = x1 + x2 .
Moreover, x1 = ÊS y and x2 = y − ÊS y
This amounts to another version of the OPT:
Theorem. If S is a linear subspace of Rⁿ, ÊS y = P y and ÊS ⊥ y = M y, then

P y ⊥ M y and y = P y + M y for all y ∈ Rⁿ
Orthonormal Basis

A set of vectors {u1 , . . . , uk } ⊂ Rⁿ is called orthonormal if the vectors are pairwise orthogonal and each
has unit norm
If {u1 , . . . , uk } is an orthonormal basis of the linear subspace S, then

x = \sum_{i=1}^{k} ⟨x, u_i⟩ u_i \quad \text{for all} \quad x ∈ S
To see this, observe that since x ∈ span{u1 , . . . , uk }, we can find scalars α1 , . . . , αk that verify
x = \sum_{j=1}^{k} α_j u_j    (5.5)

Taking the inner product of (5.5) with u_i and using orthonormality gives

⟨x, u_i⟩ = \sum_{j=1}^{k} α_j ⟨u_j , u_i⟩ = α_i
When the subspace onto which we are projecting is orthonormal, computing the projection simplifies:
Theorem. If {u1 , . . . , uk } is an orthonormal basis for S, then

P y = \sum_{i=1}^{k} ⟨y, u_i⟩ u_i , \quad ∀ y ∈ Rⁿ    (5.6)
Projection Via Matrix Algebra

Let S be a linear subspace of Rⁿ and let y ∈ Rⁿ
We seek the matrix P such that ÊS y = P y for all y
Theorem. Let the columns of the n × k matrix X form a basis of S. Then

P = X(X′X)⁻¹X′
An expression of the form Xa is precisely a linear combination of the columns of X, and hence an element
of S
Claim 2 is equivalent to the statement

y − X(X′X)⁻¹X′y ⊥ Xa for all a ∈ Rᵏ
Starting with X
It is common in applications to start with n × k matrix X with linearly independent columns and let
S := span X
If the columns of the matrix U form an orthonormal basis of S, then the preceding theorem gives

P y = U (U′U)⁻¹U′y

Since U has orthonormal columns, U′U = I, and hence

P y = U U′y = \sum_{i=1}^{k} ⟨u_i , y⟩ u_i
We have recovered our earlier result about projecting onto the span of an orthonormal basis
The minimizer of ∥y − Xb∥ over b ∈ Rᵏ is

β̂ := (X′X)⁻¹X′y

and the corresponding vector of fitted values is

X β̂ = X(X′X)⁻¹X′y = P y

Because Xb ∈ span(X) for any b, minimizing ∥y − Xb∥ over b amounts to projecting y onto span(X)
If probabilities and hence E are unknown, we cannot solve this problem directly
However, if a sample is available, we can estimate the risk with the empirical risk:
\min_{f ∈ F} \frac{1}{N} \sum_{n=1}^{N} (y_n − f(x_n))^2

If we restrict attention to the class of linear functions f (x) = b′x, the problem becomes

\min_{b ∈ R^K} \sum_{n=1}^{N} (y_n − b′x_n)^2
Solution
β̂ := (X ′ X)−1 X ′ y
ŷ := X β̂ = P y
û := y − ŷ = y − P y = M y
Let's return to the connection between linear independence and orthogonality touched on above
A result of much interest is a famous algorithm for constructing orthonormal sets from linearly independent
sets
The next section gives details
Gram-Schmidt Orthogonalization
Theorem. For each linearly independent set {x1 , . . . , xk } ⊂ Rⁿ, there exists an orthonormal set
{u1 , . . . , uk } with

span{x1 , . . . , xi } = span{u1 , . . . , ui } for i = 1, . . . , k
QR Decomposition
The following result uses the preceding algorithm to produce a useful decomposition
Theorem If X is n × k with linearly independent columns, then there exists a factorization X = QR where
• R is k × k, upper triangular, and nonsingular
• Q is n × k with orthonormal columns
Proof sketch: Let
• xj := colj (X)
• {u1 , . . . , uk } be orthonormal with same span as {x1 , . . . , xk } (to be constructed using
Gram–Schmidt)
• Q be formed from cols ui
Since xj ∈ span{u1 , . . . , uj }, we have
x_j = \sum_{i=1}^{j} ⟨u_i , x_j⟩ u_i \quad \text{for} \quad j = 1, . . . , k

Some rearranging gives X = QR
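SciPy computes this factorization directly; a short sketch verifying the claimed properties:

```python
import numpy as np
from scipy.linalg import qr

X = np.array([[1.0, 0.0],
              [0.0, -6.0],
              [2.0, 2.0]])
Q, R = qr(X, mode='economic')
print(np.allclose(Q @ R, X))             # X = QR
print(np.allclose(Q.T @ Q, np.eye(2)))   # Q has orthonormal columns
print(np.allclose(R, np.triu(R)))        # R is upper triangular
```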
For matrices X and y that overdetermine β in the linear equation system y = Xβ, we found the least
squares approximator β̂ = (X′X)⁻¹X′y
Using the QR decomposition X = QR gives
β̂ = (R′ Q′ QR)−1 R′ Q′ y
= (R′ R)−1 R′ Q′ y
= R−1 (R′ )−1 R′ Q′ y = R−1 Q′ y
Numerical routines would in this case use the alternative form Rβ̂ = Q′ y and back substitution
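A sketch of that numerically stable route, using scipy.linalg.solve_triangular for the back substitution (the data are illustrative):

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([1.0, 2.0, 2.0])

Q, R = qr(X, mode='economic')
beta_qr = solve_triangular(R, Q.T @ y)        # solve R beta = Q'y by back substitution
beta_ne = np.linalg.inv(X.T @ X) @ X.T @ y    # normal equations, for comparison
print(np.allclose(beta_qr, beta_ne))          # True
```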
5.2.8 Exercises
Exercise 1

Show that, for every linear subspace S ⊂ Rⁿ, we have S ∩ S ⊥ = {0}
Exercise 2
Let P = X(X ′ X)−1 X ′ and let M = I − P . Show that P and M are both idempotent and symmetric. Can
you give any intuition as to why they should be idempotent?
Exercise 3
Using Gram-Schmidt orthogonalization, produce a linear projection of y onto the column space of X and
verify this using the projection matrix P := X(X ′ X)−1 X ′ and also using QR decomposition, where:
y := \begin{bmatrix} 1 \\ 3 \\ -3 \end{bmatrix}

and

X := \begin{bmatrix} 1 & 0 \\ 0 & -6 \\ 2 & 2 \end{bmatrix}
5.2.9 Solutions
Exercise 1

If x ∈ S ∩ S ⊥ , then x is orthogonal to itself, so that ⟨x, x⟩ = ∥x∥² = 0 and hence x = 0
Exercise 2
Symmetry and idempotence of M and P can be established using standard rules for matrix algebra. The
intuition behind idempotence of M and P is that both are orthogonal projections. After a point is projected
into a given subspace, applying the projection again makes no difference. (A point inside the subspace is
not shifted by orthogonal projection onto that space because it is already the closest point in the subspace to
itself.)
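The claims can also be checked directly with the X used in Exercise 3:

```python
import numpy as np

X = np.array([[1.0, 0.0],
              [0.0, -6.0],
              [2.0, 2.0]])
P = X @ np.linalg.inv(X.T @ X) @ X.T
M = np.eye(3) - P
for T in (P, M):
    print(np.allclose(T, T.T))     # symmetric
    print(np.allclose(T @ T, T))   # idempotent
```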
Exercise 3
Here's a function that computes the orthonormal vectors using the GS algorithm given in the lecture.
import numpy as np

def gram_schmidt(X):
    """
    Implements Gram-Schmidt orthogonalization.

    Parameters
    ----------
    X : an n x k array with linearly independent columns

    Returns
    -------
    U : an n x k array with orthonormal columns
    """
    # Set up
    n, k = X.shape
    U = np.empty((n, k))
    I = np.eye(n)

    # The first column of U is just the normalized first column of X
    v1 = X[:, 0]
    U[:, 0] = v1 / np.sqrt(np.sum(v1 * v1))

    for i in range(1, k):
        # Set up
        b = X[:, i]       # The vector we're going to project
        Z = X[:, 0:i]     # The first i columns of X
        # Project onto the orthogonal complement of the column span of Z
        M = I - Z @ np.linalg.inv(Z.T @ Z) @ Z.T
        u = M @ b
        # Normalize
        U[:, i] = u / np.sqrt(np.sum(u * u))

    return U
y = [1, 3, -3]
X = [[1,  0],
     [0, -6],
     [2,  2]]
X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
First let's try projection of y onto the column space of X using the ordinary matrix expression:

Py1 = X @ np.linalg.inv(X.T @ X) @ X.T @ y
Py1

array([-0.56521739,  3.26086957, -2.2173913 ])
Now let's do the same using an orthonormal basis created from our gram_schmidt function.
U = gram_schmidt(X)
U
Py2 = U @ U.T @ y
Py2
This is the same answer. So far so good. Finally, let's try the same thing but with the basis obtained via QR
decomposition:

from scipy.linalg import qr

Q, R = qr(X, mode='economic')
Q
array([[-0.4472136 , -0.13187609],
[-0. , -0.98907071],
[-0.89442719, 0.06593805]])
Py3 = Q @ Q.T @ y
Py3
5.3 LLN and CLT

Contents
5.3.1 Overview
This lecture illustrates two of the most important theorems of probability and statistics: The law of large
numbers (LLN) and the central limit theorem (CLT)
These beautiful theorems lie behind many of the most fundamental results in econometrics and quantitative
economic modeling
The lecture is based around simulations that show the LLN and CLT in action
We also demonstrate how the LLN and CLT break down when the assumptions they are based on do not
hold
In addition, we examine several useful extensions of the classical theorems, such as
• The delta method, for smooth functions of random variables
• The multivariate case
Some of these extensions are presented as exercises
5.3.2 Relationships

The CLT refines the LLN
The LLN gives conditions under which sample moments converge to population moments as sample size
increases
The CLT provides information about the rate at which sample moments converge to population moments
as sample size increases
5.3.3 LLN
We begin with the law of large numbers, which tells us when sample averages will converge to their popu-
lation means
The classical law of large numbers concerns independent and identically distributed (IID) random variables
Here is the strongest version of the classical LLN, known as Kolmogorov's strong law
Let X1 , . . . , Xn be independent and identically distributed scalar random variables, with common distribu-
tion F
When it exists, let µ denote the common mean of this sample:

µ := EX = \int x F(dx)
In addition, let

\bar X_n := \frac{1}{n} \sum_{i=1}^{n} X_i

Kolmogorov's strong law states that, if E|X| is finite, then

P \{ \bar X_n \to µ \text{ as } n \to \infty \} = 1    (5.7)
Proof
The proof of Kolmogorov's strong law is nontrivial – see, for example, theorem 8.3.5 of [Dud02]
On the other hand, we can prove a weaker version of the LLN very easily and still get most of the intuition
The version we prove is as follows: If X1 , . . . , Xn is IID with EXi2 < ∞, then, for any ϵ > 0, we have
P \{ |\bar X_n − µ| \geq ϵ \} \to 0 \quad \text{as} \quad n \to \infty    (5.8)
(This version is weaker because we claim only convergence in probability rather than almost sure conver-
gence, and assume a finite second moment)
To see that this is so, fix ϵ > 0, and let σ 2 be the variance of each Xi
Recall the Chebyshev inequality, which tells us that

P \{ |\bar X_n − µ| \geq ϵ \} \leq \frac{E[(\bar X_n − µ)^2]}{ϵ^2}    (5.9)
Now observe that

E[(\bar X_n − µ)^2] = E\left[ \left( \frac{1}{n} \sum_{i=1}^{n} (X_i − µ) \right)^2 \right]
                    = \frac{1}{n^2} \sum_{i=1}^{n} \sum_{j=1}^{n} E(X_i − µ)(X_j − µ)
                    = \frac{1}{n^2} \sum_{i=1}^{n} E(X_i − µ)^2
                    = \frac{σ^2}{n}
Here the crucial step is at the third equality, which follows from independence
Independence means that if i ̸= j, then the covariance term E(Xi − µ)(Xj − µ) drops out
Combining (5.9) with our last result gives

P \{ |\bar X_n − µ| \geq ϵ \} \leq \frac{σ^2}{n ϵ^2}    (5.10)
The claim in (5.8) is now clear
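A small Monte Carlo sketch of the bound (5.10), with standard normal draws (so σ² = 1) and a seeded generator:

```python
import numpy as np

rng = np.random.default_rng(1234)
n, reps, eps = 200, 10_000, 0.1
sigma2 = 1.0                                     # variance of standard normal draws
Xbar = rng.standard_normal((reps, n)).mean(axis=1)
freq = np.mean(np.abs(Xbar) >= eps)              # empirical P{|X̄n - µ| ≥ ε}
bound = sigma2 / (n * eps**2)                    # Chebyshev bound: 0.5
print(freq <= bound)                             # True
```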
Of course, if the sequence X1 , . . . , Xn is correlated, then the cross-product terms E(Xi − µ)(Xj − µ) are
not necessarily zero
While this doesn't mean that the same line of argument is impossible, it does mean that if we want a similar
result then the covariances should be almost zero for most of these terms
In a long sequence, this would be true if, for example, E(Xi − µ)(Xj − µ) approached zero when the
difference between i and j became large
In other words, the LLN can still work if the sequence X1 , . . . , Xn has a kind of asymptotic independence,
in the sense that correlation falls to zero as variables become further apart in the sequence
This idea is very important in time series analysis, and we'll come across it again soon enough
Illustration
Lets now illustrate the classical IID law of large numbers using simulation
In particular, we aim to generate some sequences of IID random variables and plot the evolution of X̄n as n
increases
Below is a figure that does just this (as usual, you can click on it to expand it)
It shows IID observations from three different distributions and plots X̄n against n in each case
The dots represent the underlying observations Xi for i = 1, . . . , 100
In each of the three cases, convergence of X̄n to µ occurs as predicted
import random
import numpy as np
from scipy.stats import t, beta, lognorm, expon, gamma, poisson
import matplotlib.pyplot as plt

n = 100

# == An arbitrary collection of distributions == #
distributions = {"student's t with 10 degrees of freedom": t(10),
                 "β(2, 2)": beta(2, 2),
                 "lognormal LN(0, 1/2)": lognorm(0.5),
                 "γ(5, 1/2)": gamma(5, scale=0.5),
                 "poisson(4)": poisson(4),
                 "exponential with λ = 1": expon()}

num_plots = 3
fig, axes = plt.subplots(num_plots, 1, figsize=(20, 20))
legend_args = {'ncol': 2, 'bbox_to_anchor': (0., 1.02, 1., .102), 'loc': 3, 'mode': 'expand'}
plt.subplots_adjust(hspace=0.5)

for ax in axes:
    # == Choose a randomly selected distribution == #
    name = random.choice(list(distributions.keys()))
    distribution = distributions.pop(name)
    # == Generate n draws from the distribution == #
    data = distribution.rvs(n)
    # == Compute the sample mean at each n == #
    sample_mean = np.empty(n)
    for i in range(n):
        sample_mean[i] = np.mean(data[:i+1])
    # == Plot == #
    ax.plot(list(range(n)), data, 'o', color='grey', alpha=0.5)
    axlabel = '$\\bar X_n$ for $X_i \\sim$' + name
    ax.plot(list(range(n)), sample_mean, 'g-', lw=3, alpha=0.6, label=axlabel)
    m = distribution.mean()
    ax.plot(list(range(n)), [m] * n, 'k--', lw=1.5, label='$\\mu$')
    ax.vlines(list(range(n)), m, data, lw=0.2)
    ax.legend(**legend_args)

plt.show()
The three distributions are chosen at random from a selection stored in the dictionary distributions
Infinite Mean
What happens if the condition E|X| < ∞ in the statement of the LLN is not satisfied?
This might be the case if the underlying distribution is heavy tailed; the best known example is the Cauchy
distribution, which has density
f(x) = \frac{1}{π(1 + x^2)} \qquad (x ∈ R)
The next figure shows 100 independent draws from this distribution
from scipy.stats import cauchy

n = 100
distribution = cauchy()
data = distribution.rvs(n)
fig, ax = plt.subplots(figsize=(10, 6))
ax.plot(list(range(n)), data, 'o', alpha=0.5)
plt.show()
Notice how extreme observations are far more prevalent here than in the previous figure
Let's now have a look at the behavior of the sample mean
n = 1000
distribution = cauchy()
data = distribution.rvs(n)
# == Compute the sample mean at each n == #
sample_mean = np.empty(n)
for i in range(1, n + 1):
    sample_mean[i-1] = np.mean(data[:i])
# == Plot == #
fig, ax = plt.subplots(figsize=(10, 6))
ax.plot(list(range(n)), sample_mean, 'r-', lw=3, alpha=0.6,
        label='$\\bar X_n$')
ax.plot(list(range(n)), [0] * n, 'k--', lw=0.5)
ax.legend()
plt.show()
Here we've increased n to 1000, but the sequence still shows no sign of converging
Will convergence become visible if we take n even larger?
The answer is no
To see this, recall that the characteristic function of the Cauchy distribution is
ϕ(t) = E e^{itX} = \int e^{itx} f(x) dx = e^{−|t|}    (5.11)
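We can check (5.11) by Monte Carlo (a sketch, using a seeded NumPy generator):

```python
import numpy as np

rng = np.random.default_rng(42)
t_val = 1.5
draws = rng.standard_cauchy(1_000_000)
phi_est = np.exp(1j * t_val * draws).mean()        # Monte Carlo estimate of E e^{itX}
print(abs(phi_est - np.exp(-abs(t_val))) < 0.01)   # close to e^{-|t|}
```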
5.3.4 CLT
Next we turn to the central limit theorem, which tells us about the distribution of the deviation between
sample averages and population means
The central limit theorem is one of the most remarkable results in all of mathematics
In the classical IID setting, it tells us the following:
If the sequence X1 , . . . , Xn is IID, with common mean µ and common variance σ 2 ∈ (0, ∞), then
\sqrt{n} (\bar X_n − µ) \stackrel{d}{\to} N(0, σ^2) \quad \text{as} \quad n \to \infty    (5.12)

Here \stackrel{d}{\to} N(0, σ²) indicates convergence in distribution to a centered (i.e., zero mean) normal with
standard deviation σ
Intuition
The striking implication of the CLT is that for any distribution with finite second moment, the simple
operation of adding independent copies always leads to a Gaussian curve
A relatively simple proof of the central limit theorem can be obtained by working with characteristic func-
tions (see, e.g., theorem 9.5.6 of [Dud02])
The proof is elegant but almost anticlimactic, and it provides surprisingly little intuition
In fact all of the proofs of the CLT that we know are similar in this respect
When n = 1, the distribution is flat: one success or no successes have the same probability
Simulation 1
Since the CLT seems almost magical, running simulations that verify its implications is one good way to
build intuition
To this end, we now perform the following simulation
1. Choose an arbitrary distribution F for the underlying observations Xi
√
2. Generate independent draws of Yn := n(X̄n − µ)
3. Use these draws to compute some measure of their distribution such as a histogram
4. Compare the latter to N (0, σ 2 )
Here's some code that does exactly this for the exponential distribution F(x) = 1 − e^{−λx}
(Please experiment with other choices of F , but remember that, to conform with the conditions of the CLT,
the distribution must have finite second moment)
from scipy.stats import expon, norm

# == Set parameters == #
n = 250        # Choice of n
k = 100000     # Number of draws of Y_n
distribution = expon(scale=2)  # Exponential distribution with λ = 1/2
µ, s = distribution.mean(), distribution.std()

# == Draw underlying RVs. Each row contains a draw of X_1,..,X_n == #
data = distribution.rvs((k, n))
# == Compute the mean of each row, producing k draws of \bar X_n == #
sample_means = data.mean(axis=1)
# == Generate observations of Y_n == #
Y = np.sqrt(n) * (sample_means - µ)

# == Plot == #
fig, ax = plt.subplots(figsize=(10, 6))
xmin, xmax = -3 * s, 3 * s
ax.set_xlim(xmin, xmax)
ax.hist(Y, bins=60, alpha=0.5, density=True)
xgrid = np.linspace(xmin, xmax, 200)
ax.plot(xgrid, norm.pdf(xgrid, scale=s), 'k-', lw=2, label='$N(0, \\sigma^2)$')
ax.legend()
plt.show()
Notice the absence of for loops: every operation is vectorized, meaning that the major calculations are all
shifted to highly optimized C code
The program produces figures such as the one below
The fit to the normal density is already tight, and can be further improved by increasing n
You can also experiment with other specifications of F
Simulation 2
Our next simulation is somewhat like the first, except that we aim to track the distribution of
Y_n := \sqrt{n}(\bar X_n − µ) as n increases
In the simulation we'll be working with random variables having µ = 0
Thus, when n = 1, we have Y1 = X1 , so the first distribution is just the distribution of the underlying
random variable
For n = 2, the distribution of Y_2 is that of (X_1 + X_2)/\sqrt{2}, and so on
What we expect is that, regardless of the distribution of the underlying random variable, the distribution of
Yn will smooth out into a bell shaped curve
The next figure shows this process for Xi ∼ f , where f was specified as the convex combination of three
different beta densities
(Taking a convex combination is an easy way to produce an irregular shape for f )
In the figure, the closest density is that of Y1 , while the furthest is that of Y5
import numpy as np
from scipy.stats import beta, gaussian_kde
from matplotlib.collections import PolyCollection
import matplotlib.pyplot as plt

beta_dist = beta(2, 2)

def gen_x_draws(k):
    """
    Returns a flat array containing k independent draws from the
    distribution of X, the underlying random variable. This distribution is
    itself a convex combination of three beta distributions.
    """
    bdraws = beta_dist.rvs((3, k))
    # == Transform rows, so each represents a different distribution == #
    bdraws[0, :] -= 0.5
    bdraws[1, :] += 0.6
    bdraws[2, :] -= 1.1
    # == Set X[i] = bdraws[j, i], where j is a random draw from {0, 1, 2} == #
    js = np.random.randint(0, 3, size=k)
    X = bdraws[js, np.arange(k)]
    # == Rescale, so that the random variable is zero mean == #
    m, sigma = X.mean(), X.std()
    return (X - m) / sigma
nmax = 5
reps = 100000
ns = list(range(1, nmax + 1))
# == Plot == #
a, b = -3, 3
gs = 100
xs = np.linspace(a, b, gs)
# == Build verts == #
greys = np.linspace(0.3, 0.7, nmax)
verts = []
for n in ns:
density = gaussian_kde(Y[:, n-1])
ys = density(xs)
verts.append(list(zip(xs, ys)))
The law of large numbers and central limit theorem work just as nicely in multidimensional settings
To state the results, let's recall some elementary facts about random vectors
A random vector X is just a sequence of k random variables (X1 , . . . , Xk )
Each realization of X is an element of Rk
A collection of random vectors X1, . . . , Xn is called independent if, given any n vectors x1, . . . , xn in R^k,
we have

P{X1 ≤ x1, . . . , Xn ≤ xn} = P{X1 ≤ x1} × · · · × P{Xn ≤ xn}

(where the inequalities are interpreted componentwise)

Letting µ := E[Xi] denote the common mean of the Xi, the vector of sample means is

X̄n := (1/n) ∑_{i=1}^{n} Xi

and the LLN states that

P{ X̄n → µ as n → ∞ } = 1    (5.13)
Here X̄n → µ means that ∥X̄n − µ∥ → 0, where ∥ · ∥ is the standard Euclidean norm
The CLT tells us that, provided Σ is finite,

√n(X̄n − µ) →d N(0, Σ) as n → ∞    (5.14)
5.3.5 Exercises
Exercise 1
√n{g(X̄n) − g(µ)} →d N(0, g′(µ)² σ²) as n → ∞    (5.15)
This theorem is used frequently in statistics to obtain the asymptotic distribution of estimators, many of
which can be expressed as functions of sample means
(These kinds of results are often said to use the delta method)
The proof is based on a Taylor expansion of g around the point µ
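In outline, and ignoring the remainder term, the expansion runs as follows:

```latex
\sqrt{n}\,\{g(\bar X_n) - g(\mu)\}
  \approx g'(\mu)\,\sqrt{n}\,(\bar X_n - \mu)
  \ \stackrel{d}{\to}\ g'(\mu)\, N(0, \sigma^2)
  = N\!\left(0,\ g'(\mu)^2 \sigma^2\right)
```

where the convergence step applies the scalar CLT to √n(X̄n − µ).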
Taking the result as given, let the distribution F of each Xi be uniform on [0, π/2] and let g(x) = sin(x)
Derive the asymptotic distribution of √n{g(X̄n) − g(µ)} and illustrate convergence in the same spirit as
the program illustrate_clt.py discussed above
What happens when you replace [0, π/2] with [0, π]?
What is the source of the problem?
Exercise 2
Here's a result that's often used in developing statistical tests, and is connected to the multivariate central
limit theorem
If you study econometric theory, you will see this result used again and again
Assume the setting of the multivariate CLT discussed above, so that
1. X1 , . . . , Xn is a sequence of IID random vectors, each taking values in Rk
2. µ := E[Xi ], and Σ is the variance-covariance matrix of Xi
3. The convergence
√n(X̄n − µ) →d N(0, Σ)    (5.16)

is valid
In a statistical setting, one often wants the right hand side to be standard normal, so that confidence intervals
are easily computed
This normalization can be achieved on the basis of three observations
First, if X is a random vector in Rk and A is constant and k × k, then
Var[AX] = A Var[X]A′
Second, by the continuous mapping theorem, if Zn →d Z in R^k and A is constant and k × k, then

AZn →d AZ
Third, if S is a k × k symmetric positive definite matrix, then there exists a symmetric positive definite
matrix Q, called the inverse square root of S, such that
QSQ′ = I
Applying the continuous mapping theorem one more time tells us that

∥Zn∥² →d ∥Z∥²

Putting these observations together, it can be shown that

n∥Q(X̄n − µ)∥² →d χ²(k)    (5.17)
where Xi := (Wi, Ui + Wi)′ and
• each Wi is an IID draw from the uniform distribution on [−1, 1]
• each Ui is an IID draw from the uniform distribution on [−2, 2]
• Ui and Wi are independent of each other
Hints:
1. scipy.linalg.sqrtm(A) computes the square root of A. You still need to invert it
2. You should be able to work out Σ from the preceding information
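To illustrate the first hint (with an arbitrary symmetric positive definite S of our own choosing), one can check numerically that Q := inv(sqrtm(S)) satisfies QSQ′ = I:

```python
import numpy as np
from scipy.linalg import inv, sqrtm

# An illustrative symmetric positive definite matrix
S = np.array([[2.0, 0.5],
              [0.5, 1.0]])

Q = inv(sqrtm(S))      # inverse square root of S

# Verify that Q S Q' is (numerically) the identity
print(np.allclose(Q @ S @ Q.T, np.eye(2)))
```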
5.3.6 Solutions
Exercise 1
"""
Illustrates the delta method, a consequence of the central limit theorem.
"""
# == Set parameters == #
n = 250
replications = 100000
distribution = uniform(loc=0, scale=(np.pi / 2))
µ, s = distribution.mean(), distribution.std()
g = np.sin
g_prime = np.cos
# == Plot == #
asymptotic_sd = g_prime(µ) * s
fig, ax = plt.subplots(figsize=(10, 6))
xmin = -3 * g_prime(µ) * s
xmax = -xmin
ax.set_xlim(xmin, xmax)
ax.hist(error_obs, bins=60, alpha=0.5, normed=True)
xgrid = np.linspace(xmin, xmax, 200)
lb = "$N(0, g'(\mu)^2 \sigma^2)$"
ax.plot(xgrid, norm.pdf(xgrid, scale=asymptotic_sd), 'k-', lw=2, label=lb)
ax.legend()
plt.show()
What happens when you replace [0, π/2] with [0, π]?
In this case, the mean µ of this distribution is π/2, and since g ′ = cos, we have g ′ (µ) = 0
Hence the conditions of the delta theorem are not satisfied
Exercise 2
Since linear combinations of normal random variables are normal, the vector QY is also normal
Its mean is clearly 0, and its variance-covariance matrix is

Var[QY] = Q Var[Y] Q′ = QΣQ′ = I

In conclusion, QYn →d QY ∼ N(0, I), which is what we aimed to show
Now we turn to the simulation exercise
Our solution is as follows
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import uniform, chi2
from scipy.linalg import inv, sqrtm

# == Set parameters == #
n = 250
replications = 50000
dw = uniform(loc=-1, scale=2)  # Uniform(-1, 1)
du = uniform(loc=-2, scale=4)  # Uniform(-2, 2)
sw, su = dw.std(), du.std()
vw, vu = sw**2, su**2
Σ = ((vw, vw), (vw, vw + vu))
Σ = np.array(Σ)

# == Compute Σ^{-1/2} == #
Q = inv(sqrtm(Σ))

# == Generate observations of the normalized sample mean == #
error_obs = np.empty((2, replications))
for i in range(replications):
    # == Generate one sequence of shocks == #
    W = dw.rvs(n)
    U = du.rvs(n)
    # == Construct the n observations of the random vector == #
    X = np.empty((2, n))
    X[0, :] = W
    X[1, :] = W + U
    # == Record the normalized sample mean (µ = 0 here) == #
    error_obs[:, i] = np.sqrt(n) * X.mean(axis=1)

# == Premultiply by Q and take the squared norm == #
temp = Q @ error_obs
chisq_obs = np.sum(temp**2, axis=0)

# == Plot == #
fig, ax = plt.subplots(figsize=(10, 6))
xmax = 8
ax.set_xlim(0, xmax)
xgrid = np.linspace(0, xmax, 200)
lb = "Chi-squared with 2 degrees of freedom"
ax.plot(xgrid, chi2.pdf(xgrid, 2), 'k-', lw=2, label=lb)
ax.legend()
ax.hist(chisq_obs, bins=50, density=True)
plt.show()
Contents
We may regard the present state of the universe as the effect of its past and the cause of its future
– Marquis de Laplace
5.4.1 Overview
Primitives
We've made the common assumption that the shocks are independent standardized normal vectors
But some of what we say will be valid under the assumption that {wt+1} is a martingale difference sequence

A martingale difference sequence is a sequence that is zero mean when conditioned on past information
In the present case, since {xt } is our state sequence, this means that it satisfies
E[wt+1 |xt , xt−1 , . . .] = 0
This is a weaker condition than that {wt } is iid with wt+1 ∼ N (0, I)
Examples
By appropriate choice of the primitives, a variety of dynamics can be represented in terms of the linear state
space model
The following examples help to highlight this point
They also illustrate the wise dictum that finding the state is an art
You can confirm that under these definitions, (5.18) and (5.18) agree
The next figure shows dynamics of this process when ϕ0 = 1.1, ϕ1 = 0.8, ϕ2 = −0.8, y0 = y−1 = 1
Vector Autoregressions
Seasonals
A = [[0, 0, 0, 1],
     [1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0]]

It is easy to check that A⁴ = I, which implies that xt is strictly periodic with period 4:¹

xt+4 = xt
Such an xt process can be used to model deterministic seasonals in quarterly time series.
The indeterministic seasonal produces recurrent, but aperiodic, seasonal fluctuations.
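The claim A⁴ = I is easy to confirm numerically, writing out the 4 × 4 cyclic matrix from the example explicitly:

```python
import numpy as np

# The 4 x 4 cyclic shift matrix used for deterministic seasonals
A = np.array([[0, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]])

A4 = np.linalg.matrix_power(A, 4)
print(np.array_equal(A4, np.eye(4, dtype=int)))
```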
Time Trends
A = [[1, 1], [0, 1]],   C = [[0], [0]],   G = [a, b]    (5.20)

and starting at initial condition x0 = [0, 1]′
In fact its possible to use the state-space system to represent polynomial trends of any order
For instance, let
x0 = [0, 0, 1]′,   A = [[1, 1, 0], [0, 1, 1], [0, 0, 1]],   C = [0, 0, 0]′

It follows that

A^t = [[1, t, t(t − 1)/2],
       [0, 1, t],
       [0, 0, 1]]

Then x′t = [t(t − 1)/2, t, 1], so that xt contains linear and quadratic time trends
¹ The eigenvalues of A are (1, −1, i, −i).
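The closed form for A^t above can be checked numerically, for instance at t = 7:

```python
import numpy as np

# Quadratic-trend transition matrix from the example above
A = np.array([[1, 1, 0],
              [0, 1, 1],
              [0, 0, 1]])

t = 7
At = np.linalg.matrix_power(A, t)
expected = np.array([[1, t, t * (t - 1) // 2],
                     [0, 1, t],
                     [0, 0, 1]])
print(np.array_equal(At, expected))
```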
Iterating backwards on the law of motion gives

xt = Axt−1 + Cwt
   = A²xt−2 + ACwt−1 + Cwt
   ⋮
   = ∑_{j=0}^{t−1} A^j C wt−j + A^t x0

In the example above, the first component of the state then satisfies

x1t = ∑_{j=0}^{t−1} wt−j + [1, t] x0
Unconditional Moments
Using (5.18), it's easy to obtain expressions for the (unconditional) means of xt and yt

We'll explain what unconditional and conditional mean soon

Letting µt := E[xt] and using linearity of expectations, we find that

µt+1 = Aµt, with µ0 given
Distributions
In general, knowing the mean and variance-covariance matrix of a random vector is not quite as good as
knowing the full distribution
However, there are some situations where these moments alone tell us all we need to know
These are situations in which the mean vector and covariance matrix are sufficient statistics for the popula-
tion distribution
(Sufficient statistics form a list of objects that characterize a population distribution)
One such situation is when the vector in question is Gaussian (i.e., normally distributed)
This is the case here
In particular, given our Gaussian assumptions on the primitives and the linearity of (5.18), we can see
immediately that both xt and yt are Gaussian for all t ≥ 0
Since xt is Gaussian, to find the distribution, all we need to do is find its mean and variance-covariance
matrix
But in fact we've already done this, in (5.21) and (5.22)
Letting µt and Σt be as defined by these equations, we have
xt ∼ N (µt , Σt ) (5.26)
Ensemble Interpretations
In the right-hand figure, these values are converted into a rotated histogram that shows relative frequencies
from our sample of 20 observations of yT
(The parameters and source code for the figures can be found in file linear_models/paths_and_hist.py)
Here is another figure, this time with 100 observations
Let's now try with 500,000 observations, showing only the histogram (without rotation)
Ensemble means
The ensemble mean

ȳT := (1/I) ∑_{i=1}^{I} y^i_T

approximates the expectation E[yT] = GµT (as implied by the law of large numbers)
Here's a simulation comparing the ensemble averages and population means at time points t = 0, . . . , 50
The parameters are the same as for the preceding figures, and the sample size is relatively small (I = 20)
More generally, as I → ∞ we have

x̄T := (1/I) ∑_{i=1}^{I} x^i_T → µT

and

(1/I) ∑_{i=1}^{I} (x^i_T − x̄T)(x^i_T − x̄T)′ → ΣT
Joint Distributions
To compute the joint distribution of x0, x1, . . . , xT, recall that joint and conditional densities are linked by
the rule

p(x, y) = p(y | x) p(x)

Applying this rule repeatedly, and using the Markov property, the joint distribution is determined by the
distribution of x0 together with the transition densities

p(xt+1 | xt) = N(Axt, CC′)
Autocovariance functions
Σt+j,t = Aj Σt (5.29)
Notice that Σt+j,t in general depends on both j, the gap between the two dates, and t, the earlier date
Stationarity and ergodicity are two properties that, when they hold, greatly aid analysis of linear state space
models
Let's start with the intuition
Visualizing Stability
Let's look at some more time series from the same model that we analyzed above
This picture shows cross-sectional distributions for y at times T, T ′ , T ′′
Note how the time series settle down in the sense that the distributions at T ′ and T ′′ are relatively similar to
each other but unlike the distribution at T
Apparently, the distributions of yt converge to a fixed long-run distribution as t → ∞
When such a distribution exists it is called a stationary distribution
Stationary Distributions
Since
1. in the present case all distributions are Gaussian
2. a Gaussian distribution is pinned down by its mean and variance-covariance matrix
we can restate the definition as follows: ψ∞ is stationary for xt if
ψ∞ = N (µ∞ , Σ∞ )
Let's see what happens to the preceding figure if we start x0 at the stationary distribution
Now the differences in the observed distributions at T, T ′ and T ′′ come entirely from random fluctuations
due to the finite sample size
By
• our choosing x0 ∼ N (µ∞ , Σ∞ )
• the definitions of µ∞ and Σ∞ as fixed points of (5.21) and (5.22) respectively
we've ensured that

µt = µ∞ and Σt = Σ∞ for all t
Moreover, in view of (5.29), the autocovariance function takes the form Σt+j,t = Aj Σ∞ , which depends on
j but not on t
This motivates the following definition
A process {xt } is said to be covariance stationary if
• both µt and Σt are constant in t
• Σt+j,t depends on the time gap j but not on time t
In our setting, {xt } will be covariance stationary if µ0 , Σ0 , A, C assume values that imply that none of
µt , Σt , Σt+j,t depends on t
The difference equation µt+1 = Aµt is known to have unique fixed point µ∞ = 0 if all eigenvalues of A
have moduli strictly less than unity
That is, if (np.absolute(np.linalg.eigvals(A)) < 1).all() == True
The difference equation (5.22) also has a unique fixed point in this case, and, moreover
µt → µ∞ = 0 and Σt → Σ∞ as t → ∞
To allow for a process with a nonzero long-run mean, suppose that A takes the partitioned form

A = [[A1, a],
     [0, 1]]

where

• A1 is an (n − 1) × (n − 1) matrix
• a is an (n − 1) × 1 column vector

Let xt = [x′1t, 1]′ where x1t is (n − 1) × 1
It follows that

x1,t+1 = A1 x1t + a + C1 wt+1

where C1 is the first n − 1 rows of C

Let µ1t = E[x1t] and take expectations on both sides of this expression to get

µ1,t+1 = A1 µ1t + a
Assume now that the moduli of the eigenvalues of A1 are all strictly less than one

Then this difference equation has the unique fixed point

µ1∞ = (I − A1)⁻¹ a
The stationary value of µt itself is then µ∞ := [µ′1∞, 1]′
The stationary values of Σt and Σt+j,t satisfy

Σ∞ = AΣ∞A′ + CC′
Σt+j,t = A^j Σ∞    (5.31)

Notice that here Σt+j,t depends on the time gap j but not on calendar time t
In conclusion, if
• x0 ∼ N (µ∞ , Σ∞ ) and
• the moduli of the eigenvalues of A1 are all strictly less than unity
then the {xt } process is covariance stationary, with constant state component
Note: If the eigenvalues of A1 are less than unity in modulus, then (a) starting from any initial value, the
mean and variance-covariance matrix both converge to their stationary values; and (b) iterations on (5.22)
converge to the fixed point of the discrete Lyapunov equation in the first line of (5.31)
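As a sketch of the convergence described in the note, Σ∞ can be computed by iterating on the covariance update from Σ0 = 0 (A and C below are illustrative values whose eigenvalues satisfy the modulus condition):

```python
import numpy as np

# Illustrative primitives; the eigenvalues of A lie inside the unit circle
A = np.array([[0.8, -0.2],
              [0.1,  0.7]])
C = np.array([[0.5],
              [0.2]])

# Iterate Σ_{t+1} = A Σ_t A' + C C' until (numerical) convergence
Σ = np.zeros((2, 2))
for _ in range(1000):
    Σ = A @ Σ @ A.T + C @ C.T

# The limit solves the discrete Lyapunov equation Σ∞ = A Σ∞ A' + C C'
print(np.allclose(Σ, A @ Σ @ A.T + C @ C.T))
```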
Ergodicity
Ensemble averages across simulations are interesting theoretically, but in real life we usually observe only
a single realization {xt , yt }Tt=0
So now lets take a single realization and form the time series averages
x̄ := (1/T) ∑_{t=1}^{T} xt   and   ȳ := (1/T) ∑_{t=1}^{T} yt
Do these time series averages converge to something interpretable in terms of our basic state-space representation?
The answer depends on something called ergodicity
Ergodicity is the property that time series and ensemble averages coincide
More formally, ergodicity implies that time series sample averages converge to their expectation under the
stationary distribution
In particular,
• (1/T) ∑_{t=1}^{T} xt → µ∞
• (1/T) ∑_{t=1}^{T} (xt − x̄T)(xt − x̄T)′ → Σ∞
• (1/T) ∑_{t=1}^{T} (xt+j − x̄T)(xt − x̄T)′ → A^j Σ∞
In our linear Gaussian setting, any covariance stationary process is also ergodic
In some settings the observation equation yt = Gxt is modified to include an error term
Often this error term represents the idea that the true state can only be observed imperfectly
To include an error term in the observation we introduce
• An iid sequence of ℓ × 1 random vectors vt ∼ N (0, I)
• A k × ℓ matrix H
and extend the linear state-space system to

xt+1 = Axt + Cwt+1
yt = Gxt + Hvt

In this setting, the observation yt is Gaussian with

yt ∼ N(Gµt, GΣt G′ + HH′)
5.4.6 Prediction
The theory of prediction for linear state space systems is elegant and simple
The right-hand side follows from xt+1 = Axt + Cwt+1 and the fact that wt+1 is zero mean and independent
of xt , xt−1 , . . . , x0
That Et [xt+1 ] = E[xt+1 | xt ] is an implication of {xt } having the Markov property
The one-step-ahead forecast error is

xt+1 − Et[xt+1] = Cwt+1
More generally, wed like to compute the j-step ahead forecasts Et [xt+j ] and Et [yt+j ]
With a bit of algebra we obtain

Et[xt+j] = A^j xt + A^{j−1} C Et[wt+1] + · · · + C Et[wt+j]
In view of the iid property, current and past state values provide no information about future values of the
shock
Hence Et [wt+k ] = E[wt+k ] = 0
It now follows from linearity of expectations that the j-step ahead forecast of x is
Et [xt+j ] = Aj xt
It is useful to obtain the covariance matrix of the vector of j-step-ahead prediction errors

xt+j − Et[xt+j] = ∑_{s=0}^{j−1} A^s C wt−s+j    (5.33)

Evidently,

Vj := Et[(xt+j − Et[xt+j])(xt+j − Et[xt+j])′] = ∑_{k=0}^{j−1} A^k CC′ (A^k)′    (5.34)
Vj is the conditional covariance matrix of the errors in forecasting xt+j, conditioned on time t information xt
Under particular conditions, Vj converges to
V∞ = CC ′ + AV∞ A′ (5.36)
In several contexts, we want to compute forecasts of geometric sums of future random variables governed
by the linear state-space system (5.18)
We want the following objects
• Forecast of a geometric sum of future x's, or Et[∑_{j=0}^{∞} β^j xt+j]
• Forecast of a geometric sum of future y's, or Et[∑_{j=0}^{∞} β^j yt+j]
These objects are important components of some famous and interesting dynamic models
For example,
• if {yt} is a stream of dividends, then E[∑_{j=0}^{∞} β^j yt+j | xt] is a model of a stock price
• if {yt} is the money supply, then E[∑_{j=0}^{∞} β^j yt+j | xt] is a model of the price level
Formulas
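Since Et[xt+j] = A^j xt, the forecast of the geometric sum of future x's has the closed form (I − βA)^{−1} xt whenever the spectral radius of βA is below one. A quick numerical sanity check, with illustrative A, β and xt of our own choosing:

```python
import numpy as np

A = np.array([[0.9, 0.1],
              [0.0, 0.5]])
β = 0.96
x_t = np.array([1.0, 2.0])

# Closed form: (I - βA)^{-1} x_t
closed_form = np.linalg.solve(np.eye(2) - β * A, x_t)

# Compare with a long truncated sum Σ_j β^j A^j x_t
truncated = sum((β ** j) * np.linalg.matrix_power(A, j) @ x_t
                for j in range(2000))
print(np.allclose(closed_form, truncated))
```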
5.4.7 Code
Our preceding simulations and calculations are based on code in the file lss.py from the QuantEcon.py
package
The code implements a class for handling linear state space models (simulations, calculating moments, etc.)
One Python construct you might not be familiar with is the use of a generator function in the method
moment_sequence()
Go back and read the relevant documentation if you've forgotten how generator functions work
Examples of usage are given in the solutions to the exercises
5.4.8 Exercises
Exercise 1
Exercise 2
Exercise 3
Exercise 4
5.4.9 Solutions
import numpy as np
import matplotlib.pyplot as plt
from quantecon import LinearStateSpace
Exercise 1
ϕ_0, ϕ_1, ϕ_2 = 1.1, 0.8, -0.8

A = [[1,    0,    0  ],
     [ϕ_0,  ϕ_1,  ϕ_2],
     [0,    1,    0  ]]
C = np.zeros((3, 1))
G = [0, 1, 0]
ar = LinearStateSpace(A, C, G, mu_0=np.ones(3))
x, y = ar.simulate(ts_length=50)
Exercise 2
ar = LinearStateSpace(A, C, G, mu_0=np.ones(4))
x, y = ar.simulate(ts_length=200)
Exercise 3
I = 20
T = 50
ar = LinearStateSpace(A, C, G, mu_0=np.ones(4))
fig, ax = plt.subplots(figsize=(10, 6))
ymin, ymax = -0.5, 1.15
ax.set_ylim(ymin, ymax)
ax.set_xlabel('time', fontsize=16)
ax.set_ylabel('$y_t$', fontsize=16)
ensemble_mean = np.zeros(T)
for i in range(I):
x, y = ar.simulate(ts_length=T)
y = y.flatten()
ax.plot(y, 'c-', lw=0.8, alpha=0.5)
ensemble_mean = ensemble_mean + y
ensemble_mean = ensemble_mean / I
ax.plot(ensemble_mean, color='b', lw=2, alpha=0.8, label='$\\bar y_t$')
m = ar.moment_sequence()
population_means = []
for t in range(T):
µ_x, µ_y, Σ_x, Σ_y = next(m)
population_means.append(float(µ_y))
ax.plot(population_means, color='g', lw=2, alpha=0.8, label='$G\mu_t$')
ax.legend(ncol=2)
plt.show()
Exercise 4
# == AR coefficients and shock scale (illustrative values) == #
ϕ_1, ϕ_2, ϕ_3, ϕ_4 = 0.5, -0.2, 0, 0.5
σ = 0.1

A = [[ϕ_1, ϕ_2, ϕ_3, ϕ_4],
     [1, 0, 0, 0 ],
[0, 1, 0, 0 ],
[0, 0, 1, 0 ]]
C = [[σ],
[0],
[0],
[0]]
G = [1, 0, 0, 0]
T0 = 10
T1 = 50
T2 = 75
T4 = 100
fig, ax = plt.subplots(figsize=(10, 6))
ax.grid(alpha=0.4)
ax.set_ylim(ymin, ymax)
ax.set_ylabel('$y_t$', fontsize=16)
ax.vlines((T0, T1, T2), -1.5, 1.5)
for i in range(80):
rcolor = random.choice(('c', 'g', 'b'))
x, y = ar.simulate(ts_length=T4)
y = y.flatten()
ax.plot(y, color=rcolor, lw=0.8, alpha=0.5)
ax.plot((T0, T1, T2), (y[T0], y[T1], y[T2],), 'ko', alpha=0.5)
plt.show()
Contents
5.5.1 Overview
Markov chains are one of the most useful classes of stochastic processes, being
• simple, flexible and supported by many elegant theoretical results
• valuable for building intuition about random dynamic models
• central to quantitative modeling in their own right
You will find them in many of the workhorse models of economics and finance
In this lecture we review some of the theory of Markov chains
We will also introduce some of the high quality routines for working with Markov chains available in
QuantEcon.py
Prerequisite knowledge is basic probability and linear algebra
5.5.2 Definitions
Stochastic Matrices
Markov Chains
In other words, knowing the current state is enough to know probabilities for future states
¹ Hint: First show that if P and Q are stochastic matrices then so is their product; to check the row sums, try postmultiplying
by a column vector of ones. Finally, argue that P^n is a stochastic matrix using induction.
In particular, the dynamics of a Markov chain are fully determined by the set of values
By construction,
• P (x, y) is the probability of going from x to y in one unit of time (one step)
• P (x, ·) is the conditional distribution of Xt+1 given Xt = x
We can view P as a stochastic matrix where
Pij = P (xi , xj ) 1 ≤ i, j ≤ n
Going the other way, if we take a stochastic matrix P , we can generate a Markov chain {Xt } as follows:
• draw X0 from some specified distribution
• for each t = 0, 1, . . ., draw Xt+1 from P (Xt , ·)
By construction, the resulting process satisfies (5.38)
Example 1
Consider a worker who, at any given time t, is either unemployed (state 0) or employed (state 1)
Suppose that, over a one month period,
1. An unemployed worker finds a job with probability α ∈ (0, 1)
2. An employed worker loses her job and becomes unemployed with probability β ∈ (0, 1)
In terms of a Markov model, we have
• S = {0, 1}
• P (0, 1) = α and P (1, 0) = β
We can write out the transition probabilities in matrix form as
P = [[1 − α, α],
     [β, 1 − β]]
Once we have the values α and β, we can address a range of questions, such as
• What is the average duration of unemployment?
• Over the long-run, what fraction of time does a worker find herself unemployed?
• Conditional on employment, what is the probability of becoming unemployed at least once over the
next 12 months?
Well cover such applications below
Example 2
where
• the frequency is monthly
• the first state represents normal growth
• the second state represents mild recession
• the third state represents severe recession
For example, the matrix tells us that when the state is normal growth, the state will again be normal growth
next month with probability 0.97
In general, large values on the main diagonal indicate persistence in the process {Xt }
This Markov process can also be represented as a directed graph, with edges labeled by transition probabil-
ities
5.5.3 Simulation
One natural way to answer questions about Markov chains is to simulate them
(To approximate the probability of event E, we can simulate many times and count the fraction of times that
E occurs)
Nice functionality for simulating Markov chains exists in QuantEcon.py
• Efficient, bundled with lots of other useful routines for handling Markov chains
However, it's also a good exercise to roll our own routines; let's do that first and then come back to the
methods in QuantEcon.py

In these exercises we'll take the state space to be S = {0, . . . , n − 1}
To simulate a Markov chain, we need its stochastic matrix P and either an initial state or a probability
distribution ψ from which the initial state is drawn
The Markov chain is then constructed as discussed above. To repeat:
1. At time t = 0, X0 is set to some fixed state or chosen from ψ
2. At each subsequent time t, the new state Xt+1 is drawn from P (Xt , ·)
In order to implement this simulation procedure, we need a method for generating draws from a discrete
distribution
For this task well use DiscreteRV from QuantEcon
import quantecon as qe
import numpy as np
ψ = (0.1, 0.9)           # Probabilities over the sample space {0, 1}
d = qe.DiscreteRV(ψ)
d.draw(k=5)              # Generate 5 independent draws from ψ

array([0, 1, 1, 1, 1])
Well write our code as a function that takes the following three arguments
• A stochastic matrix P
• An initial state init
• A positive integer sample_size representing the length of the time series the function should return
def mc_sample_path(P, init=0, sample_size=1000):
    # === make sure P is a NumPy array === #
    P = np.asarray(P)
    # === allocate memory === #
    X = np.empty(sample_size, dtype=int)
    X[0] = init
    # === convert each row of P into a distribution === #
    n = len(P)
    P_dist = [qe.DiscreteRV(P[i, :]) for i in range(n)]
    # === generate the sample path === #
    for t in range(sample_size - 1):
        X[t+1] = P_dist[X[t]].draw()
    return X
P := [[0.4, 0.6],
      [0.2, 0.8]]    (5.39)
As we'll see later, for a long series drawn from P, the fraction of the sample that takes value 0 will be about
0.25
If you run the following code you should get roughly that answer
0.25128
As discussed above, QuantEcon.py has routines for handling Markov chains, including simulation
Heres an illustration using the same P as the preceding example
0.250359
mc.simulate(ts_length=4, init='unemployed')
If we want to simulate with output as indices rather than state values we can use
mc.simulate_indices(ts_length=4)
array([0, 1, 1, 1])
Suppose that
1. {Xt } is a Markov chain with stochastic matrix P
2. the distribution of Xt is known to be ψt
What then is the distribution of Xt+1 , or, more generally, of Xt+m ?
Solution
In words, to get the probability of being at y tomorrow, we account for all ways this can happen and sum
their probabilities
Rewriting this statement in terms of marginal and conditional probabilities gives
ψt+1(y) = ∑_{x∈S} P(x, y) ψt(x)
If we think of ψt+1 and ψt as row vectors (as is traditional in this literature), these n equations are summarized
by the matrix expression
ψt+1 = ψt P (5.40)
In other words, to move the distribution forward one unit of time, we postmultiply by P
By repeating this m times we move forward m steps into the future
Hence, iterating on (5.40), the expression ψt+m = ψt P^m is also valid; here P^m is the m-th power of P
As a special case, we see that if ψ0 is the initial distribution from which X0 is drawn, then ψ0 P m is the
distribution of Xm
This is very important, so let's repeat it
X0 ∼ ψ0 =⇒ Xm ∼ ψ0 P m (5.41)
Xt ∼ ψt =⇒ Xt+m ∼ ψt P m (5.42)
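A small numerical illustration of (5.42), using the worker transition matrix from Example 1 with illustrative values of α and β:

```python
import numpy as np

α, β = 0.1, 0.05      # illustrative hazard rates
P = np.array([[1 - α, α],
              [β, 1 - β]])

ψ = np.array([0.5, 0.5])                  # distribution of X_t
ψ_m = ψ @ np.linalg.matrix_power(P, 6)    # distribution of X_{t+6}

# ψ_m is again a probability distribution
print(ψ_m.sum(), (ψ_m >= 0).all())
```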
Recall the stochastic matrix P for recession and growth considered above
Suppose that the current state is unknown; perhaps statistics are available only at the end of the current
month
We estimate the probability that the economy is in state x to be ψ(x)
The probability of being in recession (either mild or severe) in 6 months time is given by the inner product

ψP^6 · (0, 1, 1)′
The marginal distributions we have been studying can be viewed either as probabilities or as cross-sectional
frequencies in large samples
To illustrate, recall our model of employment / unemployment dynamics for a given worker discussed above
Consider a large (i.e., tending to infinite) population of workers, each of whose lifetime experiences are
described by the specified dynamics, independently of one another
Let ψ be the current cross-sectional distribution over {0, 1}
• For example, ψ(0) is the unemployment rate
The cross-sectional distribution records the fractions of workers employed and unemployed at a given mo-
ment
The same distribution also describes the fractions of a particular worker's career spent being employed and
unemployed, respectively
Irreducibility and aperiodicity are central concepts of modern Markov chain theory
Let's see what they're about
Irreducibility
We can translate this into a stochastic matrix, putting zeros where there's no edge between nodes
0.9 0.1 0
P := 0.4 0.4 0.2
0.1 0.1 0.8
It's clear from the graph that this stochastic matrix is irreducible: we can reach any state from any other state
eventually
We can also test this using QuantEcon.py's MarkovChain class
True
Here's a more pessimistic scenario, where the poor are poor forever
This stochastic matrix is not irreducible, since, for example, rich is not accessible from poor
Let's confirm this
False
mc.communication_classes
[array(['poor'], dtype='<U6'),
array(['middle', 'rich'], dtype='<U6')]
It might be clear to you already that irreducibility is going to be important in terms of long-run outcomes
For example, poverty is a life sentence in the second graph but not the first
We'll come back to this a bit later
Aperiodicity
Loosely speaking, a Markov chain is called periodic if it cycles in a predictable way, and aperiodic otherwise
Here's a trivial example with three states
P = [[0, 1, 0],
[0, 0, 1],
[1, 0, 0]]
mc = qe.MarkovChain(P)
mc.period
More formally, the period of a state x is the greatest common divisor of the set of integers D(x) := {j ≥ 1 : P^j(x, x) > 0}
In the last example, D(x) = {3, 6, 9, . . .} for every state x, so the period is 3
A stochastic matrix is called aperiodic if the period of every state is 1, and periodic otherwise
For example, the stochastic matrix associated with the transition probabilities below is periodic because, for
example, state a has period 2
mc = qe.MarkovChain(P)
mc.period
mc.is_aperiodic
False
As seen in (5.40), we can shift probabilities forward one unit of time via postmultiplication by P
Some distributions are invariant under this updating process; for example,
Example
Recall our model of employment / unemployment dynamics for a given worker discussed above
Assuming α ∈ (0, 1) and β ∈ (0, 1), the uniform ergodicity condition is satisfied
Let ψ ∗ = (p, 1 − p) be the stationary distribution, so that p corresponds to unemployment (state 0)
Using ψ ∗ = ψ ∗ P and a bit of algebra yields
p = β/(α + β)

This is, in some sense, a steady state probability of unemployment; more on interpretation below
Not surprisingly it tends to zero as β → 0, and to one as α → 0
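It is easy to verify the formula for p directly, for illustrative values of α and β, by checking that ψ* = (p, 1 − p) is indeed invariant under P:

```python
import numpy as np

α, β = 0.1, 0.05      # illustrative values
P = np.array([[1 - α, α],
              [β, 1 - β]])

p = β / (α + β)
ψ_star = np.array([p, 1 - p])

# ψ* should satisfy ψ* = ψ* P
print(np.allclose(ψ_star @ P, ψ_star))
```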
As discussed above, a given Markov matrix P can have many stationary distributions
That is, there can be many row vectors ψ such that ψ = ψP
In fact if P has two distinct stationary distributions ψ1 , ψ2 then it has infinitely many, since in this case, as
you can verify,
ψ3 := λψ1 + (1 − λ)ψ2 is a stationary distribution for P for every λ ∈ [0, 1]
Convergence to Stationarity
Part 2 of the Markov chain convergence theorem stated above tells us that the distribution of Xt converges
to the stationary distribution regardless of where we start off
This adds considerable weight to our interpretation of ψ ∗ as a stochastic steady state
The convergence in the theorem is illustrated in the next figure
mc = qe.MarkovChain(P)
ψ_star = mc.stationary_distributions[0]
ax.scatter(ψ_star[0], ψ_star[1], ψ_star[2], c='k', s=60)
plt.show()
Here
• P is the stochastic matrix for recession and growth considered above
• The highest red dot is an arbitrarily chosen initial probability distribution ψ, represented as a vector
in R3
• The other red dots are the distributions ψP t for t = 1, 2, . . .
5.5.7 Ergodicity
(1/m) ∑_{t=1}^{m} 1{Xt = x} → ψ*(x) as m → ∞    (5.43)
Here
• 1{Xt = x} = 1 if Xt = x and zero otherwise
• convergence is with probability one
• the result does not depend on the distribution (or value) of X0
The result tells us that the fraction of time the chain spends at state x converges to ψ ∗ (x) as time goes to
infinity
This gives us another way to interpret the stationary distribution, provided that the convergence result in
(5.43) is valid

The convergence in (5.43) is a special case of a law of large numbers result for Markov chains; see EDTC,
section 4.3.4, for some additional information
Example
Recall our cross-sectional interpretation of the employment / unemployment model discussed above
Assume that α ∈ (0, 1) and β ∈ (0, 1), so that irreducibility and aperiodicity both hold
We saw that the stationary distribution is (p, 1 − p), where
p = β/(α + β)
In the cross-sectional interpretation, this is the fraction of people unemployed
In view of our latest (ergodicity) result, it is also the fraction of time that a worker can expect to spend
unemployed
Thus, in the long-run, cross-sectional averages for a population and time-series averages for a given person
coincide
This is one interpretation of the notion of ergodicity
E[h(Xt )] (5.44)
E[h(Xt+k ) | Xt = x] (5.45)
where
• {Xt } is a Markov chain generated by n × n stochastic matrix P
• h is a given function, which, in expressions involving matrix algebra, we'll think of as the column
vector

h = (h(x1), . . . , h(xn))′
The unconditional expectation (5.44) is easy: we just sum over the distribution of Xt to get

E[h(Xt)] = ∑_{x∈S} (ψP^t)(x) h(x)

or, in matrix notation,

E[h(Xt)] = ψP^t h
For the conditional expectation (5.45), we need to sum over the conditional distribution of Xt+k given
Xt = x
We already know that this is P^k(x, ·), so

E[h(Xt+k) | Xt = x] = (P^k h)(x)
where

(I − βP)^{-1} = I + βP + β²P² + · · ·
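This Neumann-series identity is straightforward to confirm numerically; the stochastic matrix below is an illustrative example:

```python
import numpy as np

# An illustrative 3 x 3 stochastic matrix
P = np.array([[0.971, 0.029, 0.000],
              [0.145, 0.778, 0.077],
              [0.000, 0.508, 0.492]])
β = 0.9

lhs = np.linalg.inv(np.eye(3) - β * P)
rhs = sum((β ** k) * np.linalg.matrix_power(P, k) for k in range(500))
print(np.allclose(lhs, rhs))
```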
5.5.9 Exercises
Exercise 1
According to the discussion above, if a worker's employment dynamics obey the stochastic matrix
P = [[1 − α, α],
     [β, 1 − β]]
with α ∈ (0, 1) and β ∈ (0, 1), then, in the long-run, the fraction of time spent unemployed will be
p := β/(α + β)
In other words, if {Xt } represents the Markov chain for employment, then X̄m → p as m → ∞, where
X̄m := (1/m) ∑_{t=1}^{m} 1{Xt = 0}
(You don't need to add the fancy touches to the graph; see the solution if you're interested)
Exercise 2
where
• ℓi is the total number of outbound links from i
• Lj is the set of all pages i such that i has a link to j
This is a measure of the number of inbound links, weighted by their own ranking (and normalized by 1/ℓi )
There is, however, another interpretation, and it brings us back to Markov chains
Let P be the matrix given by P (i, j) = 1{i → j}/ℓi where 1{i → j} = 1 if i has a link to j and zero
otherwise
The matrix P is a stochastic matrix provided that each page has at least one link
With this definition of P we have
rj = ∑_{i ∈ Lj} ri/ℓi = ∑_{all i} 1{i → j} ri/ℓi = ∑_{all i} P(i, j) ri
When you solve for the ranking, you will find that the highest ranked node is in fact g, while the lowest is a
Exercise 3
In numerical work it is sometimes convenient to replace a continuous model with a discrete one
In particular, Markov chains are routinely generated as discrete approximations to AR(1) processes of the
form
yt+1 = ρyt + ut+1,   where ρ ∈ (−1, 1) and {ut} is iid and N(0, σu²)

The variance of the stationary distribution of this process is

σy² := σu²/(1 − ρ²)
Tauchen's method [Tau86] is the most common method for approximating this continuous state process with
a finite state Markov chain

A routine for this already exists in QuantEcon.py, but let's write our own version as an exercise
As a first step we choose
• n, the number of states for the discrete approximation
• m, an integer that parameterizes the width of the state space
Next we create a state space {x0 , . . . , xn−1 } ⊂ R and a stochastic n × n matrix P such that
• x0 = −m σy
• xn−1 = m σy
• xi+1 = xi + s where s = (xn−1 − x0 )/(n − 1)
Let F be the cumulative distribution function of the normal distribution N (0, σu2 )
The values P (xi , xj ) are computed to approximate the AR(1) process; omitting the derivation, the rules are
as follows:
1. If j = 0, then set
P (xi , xj ) = P (xi , x0 ) = F (x0 − ρxi + s/2)
2. If j = n − 1, then set
P (xi , xj ) = P (xi , xn−1 ) = 1 − F (xn−1 − ρxi − s/2)
3. Otherwise, set

P (xi , xj ) = F (xj − ρxi + s/2) − F (xj − ρxi − s/2)
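The three rules translate directly into code. Here is a minimal sketch (the function name and defaults are our own; QuantEcon.py's `tauchen` is the polished version):

```python
import numpy as np
from scipy.stats import norm

def approx_markov(ρ, σ_u, m=3, n=7):
    """
    Tauchen-style finite-state approximation to y' = ρy + u, u ~ N(0, σ_u²).
    Returns the state vector x and an n x n stochastic matrix P.
    """
    F = norm(scale=σ_u).cdf
    σ_y = np.sqrt(σ_u**2 / (1 - ρ**2))    # stationary std of the AR(1)
    x = np.linspace(-m * σ_y, m * σ_y, n)
    s = x[1] - x[0]                       # step size
    P = np.empty((n, n))
    for i in range(n):
        P[i, 0] = F(x[0] - ρ * x[i] + s/2)
        P[i, n-1] = 1 - F(x[n-1] - ρ * x[i] - s/2)
        for j in range(1, n-1):
            P[i, j] = F(x[j] - ρ * x[i] + s/2) - F(x[j] - ρ * x[i] - s/2)
    return x, P

x, P = approx_markov(0.9, 0.1)
print(P.sum(axis=1))   # each row sums to 1
```

A useful check is that the three rules telescope, so every row of P sums exactly to one.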
5.5.10 Solutions
import numpy as np
import matplotlib.pyplot as plt
from quantecon import MarkovChain
Exercise 1
Compute the fraction of time that the worker spends unemployed, and compare it to the stationary probability
α = β = 0.1
N = 10000
p = β / (α + β)

mc = MarkovChain([[1 - α, α], [β, 1 - β]])
X_bar = (mc.simulate(ts_length=N) == 0).cumsum() / np.arange(1, N + 1)

fig, ax = plt.subplots()
ax.plot(X_bar - p, label=r'$\bar X_m - p$')  # deviation of unemployed fraction from p
ax.legend(loc='upper right')
plt.show()
Exercise 2
First save the data into a file called web_graph_data.txt by executing the next cell
%%file web_graph_data.txt
a -> d;
a -> f;
b -> j;
b -> k;
b -> m;
c -> c;
c -> g;
c -> j;
c -> m;
d -> f;
d -> h;
d -> k;
e -> d;
e -> h;
e -> l;
f -> a;
f -> b;
f -> j;
f -> l;
g -> b;
g -> j;
h -> d;
h -> g;
h -> l;
h -> m;
i -> g;
i -> h;
i -> n;
j -> e;
j -> i;
j -> k;
k -> n;
l -> m;
m -> g;
n -> c;
n -> j;
n -> m;
Overwriting web_graph_data.txt
"""
Return list of pages, ordered by rank
"""
import numpy as np
from operator import itemgetter
infile = 'web_graph_data.txt'
alphabet = 'abcdefghijklmnopqrstuvwxyz'
print(f'{name}: {rank:.4}')
Rankings
***
g: 0.1607
j: 0.1594
m: 0.1195
n: 0.1088
k: 0.09106
b: 0.08326
e: 0.05312
i: 0.05312
c: 0.04834
h: 0.0456
l: 0.03202
d: 0.03056
f: 0.01164
a: 0.002911
Exercise 3
5.6.1 Overview
In a previous lecture we learned about finite Markov chains, a relatively elementary class of stochastic
dynamic models
The present lecture extends this analysis to continuous (i.e., uncountable) state Markov chains
Most stochastic dynamic models studied by economists either fit directly into this class or can be represented
as continuous state Markov chains after minor modifications
In this lecture, our focus will be on continuous Markov models that
• evolve in discrete time
• are often nonlinear
The fact that we accommodate nonlinear models here is significant, because linear stochastic models have
their own highly developed tool set, as we'll see later on
The question that interests us most is: Given a particular stochastic dynamic model, how will the state of the
system evolve over time?
In particular,
• What happens to the distribution of the state variables?
• Is there anything we can say about the average behavior of these variables?
• Is there a notion of steady state or long run equilibrium that's applicable to the model?
– If so, how can we compute it?
Answering these questions will lead us to revisit many of the topics that occupied us in the finite state case,
such as simulation, distribution dynamics, stability, ergodicity, etc.
Note: For some people, the term Markov chain always refers to a process with a finite or discrete state
space. We follow the mainstream mathematical literature (e.g., [MT09]) in using the term to refer to any
discrete time Markov process
You are probably aware that some distributions can be represented by densities and some cannot
(For example, distributions on the real numbers R that put positive probability on individual points have no
density representation)
We are going to start our analysis by looking at Markov chains where the one step transition probabilities
have density representations
The benefit is that the density case offers a very direct parallel to the finite case in terms of notation and
intuition
Once we've built some intuition we'll cover the general case
In our lecture on finite Markov chains, we studied discrete time Markov chains that evolve on a finite state
space S
In this setting, the dynamics of the model are described by a stochastic matrix: a nonnegative square matrix
P = P [i, j] such that each row P [i, ·] sums to one
The interpretation of P is that P [i, j] represents the probability of transitioning from state i to state j in one
unit of time
In symbols,
P{Xt+1 = j | Xt = i} = P [i, j]
Equivalently,
• P can be thought of as a family of distributions P [i, ·], one for each i ∈ S
• P [i, ·] is the distribution of Xt+1 given Xt = i
(As you probably recall, when using NumPy arrays, P [i, ·] is expressed as P[i,:])
In this section, well allow S to be a subset of R, such as
• R itself
• the positive reals (0, ∞)
• a bounded interval (a, b)
The family of discrete distributions P [i, ·] will be replaced by a family of densities p(x, ·), one for each
x∈S
Analogous to the finite state case, p(x, ·) is to be understood as the distribution (density) of Xt+1 given
Xt = x
More formally, a stochastic kernel on S is a function p : S × S → R with the property that
1. p(x, y) ≥ 0 for all x, y ∈ S
2. ∫ p(x, y) dy = 1 for all x ∈ S
(Integrals are over the whole space unless otherwise specified)
For example, let S = R and consider the particular stochastic kernel pw defined by
    pw (x, y) := (1/√(2π)) exp{ −(y − x)² / 2 }        (5.47)
What kind of model does pw represent?
The answer is, the (normally distributed) random walk
    Xt+1 = Xt + ξt+1    where {ξt } is IID and N (0, 1)        (5.48)
To see this, let's find the stochastic kernel p corresponding to (5.48)
Recall that p(x, ·) represents the distribution of Xt+1 given Xt = x
Letting Xt = x in (5.48) and considering the distribution of Xt+1 , we see that p(x, ·) = N (x, 1)
In other words, p is exactly pw , as defined in (5.47)
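We can confirm numerically that pw(x, ·) is a proper density for any x, checking property 2 of a stochastic kernel (the particular values of x below are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

def p_w(x, y):
    "Stochastic kernel of the Gaussian random walk, as in (5.47)."
    return np.exp(-(y - x)**2 / 2) / np.sqrt(2 * np.pi)

for x in (-1.0, 0.0, 2.5):
    total, _ = quad(lambda y: p_w(x, y), -np.inf, np.inf)
    print(x, total)   # total is 1 for every x (up to quadrature error)
```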
In the previous section, we made the connection between stochastic difference equation (5.48) and stochastic
kernel (5.47)
In economics and time series analysis we meet stochastic difference equations of all different shapes and
sizes
It will be useful for us if we have some systematic methods for converting stochastic difference equations
into stochastic kernels
To this end, consider the generic (scalar) stochastic difference equation given by

    Xt+1 = μ(Xt ) + σ(Xt ) ξt+1        (5.49)

Here {ξt } is IID with common density ϕ, and μ and σ are given functions with σ(x) > 0 for all x
This is a special case of (5.49) with µ(x) = αx and σ(x) = (β + γx2 )1/2
Example 3: With stochastic production and a constant savings rate, the one-sector neoclassical growth
model leads to a law of motion for capital per worker such as

    kt+1 = sAt+1 f (kt ) + (1 − δ)kt        (5.51)
Here
• s is the rate of savings
• At+1 is a production shock
– The t + 1 subscript indicates that At+1 is not visible at time t
• δ is a depreciation rate
• f : R+ → R+ is a production function satisfying f (k) > 0 whenever k > 0
(The fixed savings rate can be rationalized as the optimal policy for a particular set of technologies and
preferences (see [LS18], section 3.1.2), although we omit the details here)
Equation (5.51) is a special case of (5.49) with µ(x) = (1 − δ)x and σ(x) = sf (x)
Now lets obtain the stochastic kernel corresponding to the generic model (5.49)
To find it, note first that if U is a random variable with density fU , and V = a + bU for some constants a, b
with b > 0, then the density of V is given by
    fV (v) = (1/b) fU ((v − a)/b)        (5.52)
(The proof is below. For a multidimensional version see EDTC, theorem 8.1.3)
Taking (5.52) as given for the moment, we can obtain the stochastic kernel p for (5.49) by recalling that
p(x, ·) is the conditional density of Xt+1 given Xt = x
In the present case, this is equivalent to stating that p(x, ·) is the density of Y := µ(x) + σ(x) ξt+1 when
ξt+1 ∼ ϕ
Hence, by (5.52),
    p(x, y) = (1/σ(x)) ϕ((y − μ(x))/σ(x))        (5.53)
For example, for the growth model in (5.51), where the shock At+1 has density ϕ, the kernel is

    p(x, y) = (1/(sf (x))) ϕ((y − (1 − δ)x)/(sf (x)))        (5.54)
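The recipe in (5.53) is easy to implement generically; below is a sketch (the factory function `make_kernel` is our own name, not part of any library):

```python
import numpy as np
from scipy.stats import norm

def make_kernel(μ, σ, ϕ):
    "Build the stochastic kernel in (5.53) from μ, σ and the shock density ϕ."
    def p(x, y):
        return ϕ.pdf((y - μ(x)) / σ(x)) / σ(x)
    return p

# With μ(x) = x, σ(x) = 1 and standard normal shocks, we recover
# the random walk kernel p_w from (5.47)
p = make_kernel(lambda x: x, lambda x: 1.0, norm())
print(p(0.0, 0.0))   # 1/√(2π) ≈ 0.3989
```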
Distribution Dynamics
In this section of our lecture on finite Markov chains, we asked the following question: If
1. {Xt } is a Markov chain with stochastic matrix P
2. the distribution of Xt is known to be ψt
then what is the distribution of Xt+1 ?
Letting ψt+1 denote the distribution of Xt+1 , the answer we gave was that
    ψt+1 [j] = Σ_{i ∈ S} P [i, j] ψt [i]
This intuitive equality states that the probability of being at j tomorrow is the probability of visiting i today
and then going on to j, summed over all possible i
In the density case, we just replace the sum with an integral and probability mass functions with densities,
yielding
    ψt+1 (y) = ∫ p(x, y) ψt (x) dx,    ∀y ∈ S        (5.55)

It is convenient to express this rule via the Markov operator corresponding to p, which sends a density ψ
into the new density ψP defined by

    (ψP )(y) = ∫ p(x, y) ψ(x) dx        (5.56)
Note: Unlike most operators, we write P to the right of its argument, instead of to the left (i.e., ψP instead
of P ψ). This is a common convention, with the intention being to maintain the parallel with the finite case
With this notation, we can write (5.55) more succinctly as ψt+1 (y) = (ψt P )(y) for all y, or, dropping the y
and letting = indicate equality of functions,
ψt+1 = ψt P (5.57)
Equation (5.57) tells us that if we specify a distribution for ψ0 , then the entire sequence of future distributions
can be obtained by iterating with P
It's interesting to note that (5.57) is a deterministic difference equation
Thus, by converting a stochastic difference equation such as (5.49) into a stochastic kernel p and hence an
operator P , we convert a stochastic difference equation into a deterministic one (albeit in a much higher
dimensional space)
Note: Some people might be aware that discrete Markov chains are in fact a special case of the continuous
Markov chains we have just described. The reason is that probability mass functions are densities with
respect to the counting measure.
Computation
To learn about the dynamics of a given process, it's useful to compute and study the sequences of densities
generated by the model
One way to do this is to try to implement the iteration described by (5.56) and (5.57) using numerical
integration
However, to produce ψP from ψ via (5.56), you would need to integrate at every y, and there is a continuum
of such y
Another possibility is to discretize the model, but this introduces errors of unknown size
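To make the difficulty concrete, here is a crude grid-based application of (5.56) for the random walk kernel (5.47). The truncated domain and the simple quadrature rule are our own choices, and both are sources of the approximation error just mentioned:

```python
import numpy as np

grid = np.linspace(-10, 10, 401)     # truncated, discretized state space
h = grid[1] - grid[0]

def p(x, y):
    "Random walk kernel, as in (5.47)"
    return np.exp(-(y - x)**2 / 2) / np.sqrt(2 * np.pi)

def apply_P(ψ_vals):
    "Approximate (ψP)(y) = ∫ p(x, y) ψ(x) dx at each grid point y."
    return np.array([np.sum(p(grid, y) * ψ_vals) * h for y in grid])

ψ = np.exp(-grid**2 / 2) / np.sqrt(2 * np.pi)   # start from N(0, 1)
ψ_next = apply_P(ψ)                              # should be close to N(0, 2)
print(np.sum(ψ_next) * h)                        # total mass stays near 1
```

Note that each call to `apply_P` loops over every grid point y, which is exactly the cost that makes this approach unattractive.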
A nicer alternative in the present setting is to combine simulation with an elegant estimator called the look
ahead estimator
Let's go over the ideas with reference to the growth model discussed above, the dynamics of which we repeat
here for convenience:

    kt+1 = sAt+1 f (kt ) + (1 − δ)kt        (5.58)
Our aim is to compute the sequence {ψt } associated with this model and fixed initial condition ψ0
To approximate ψt by simulation, recall that, by definition, ψt is the density of kt given k0 ∼ ψ0
If we wish to generate observations of this random variable, all we need to do is
1. draw k0 from the specified initial condition ψ0
2. draw the shocks A1 , . . . , At from their specified density ϕ
3. compute kt iteratively via (5.58)
If we repeat this n times, we get n independent observations kt1 , . . . , ktn
With these draws in hand, the next step is to generate some kind of representation of their distribution ψt
A naive approach would be to use a histogram, or perhaps a smoothed histogram using SciPy's
gaussian_kde function
However, in the present setting there is a much better way to do this, based on the look-ahead estimator
With this estimator, to construct an estimate of ψt , we actually generate n observations of kt−1 , rather than
kt
Now we take these n observations k_{t−1}^1 , . . . , k_{t−1}^n and form the estimate
    ψ_t^n (y) = (1/n) Σ_{i=1}^{n} p(k_{t−1}^i , y)        (5.59)
Implementation
A class called LAE for estimating densities by this technique can be found in lae.py
Given our use of the __call__ method, an instance of LAE acts as a callable object, which is essentially
a function that can store its own data (see this discussion)
This function returns the right-hand side of (5.59) using
• the data and stochastic kernel that it stores as its instance data
• the value y as its argument
The function is vectorized, in the sense that if psi is such an instance and y is an array, then the call
psi(y) acts elementwise
(This is the reason that we reshaped X and y inside the class to make vectorization work)
Because the implementation is fully vectorized, it is about as efficient as it would be in C or Fortran
Example
The following code is an example of usage for the stochastic growth model described above
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import lognorm, beta
from quantecon import LAE

# == Define parameters == #
s = 0.2
δ = 0.1
a_σ = 0.4                       # A = exp(B) where B ~ N(0, a_σ)
α = 0.4                         # We set f(k) = k**α
ψ_0 = beta(5, 5, scale=0.5)     # Initial distribution
ϕ = lognorm(a_σ)                # Distribution of the shock A

def p(x, y):
    """
    Stochastic kernel for the growth model with Cobb-Douglas production.
    Both x and y must be strictly positive.
    """
    d = s * x**α
    return ϕ.pdf((y - (1 - δ) * x) / d) / d

n = 10000    # Number of observations at each date t
T = 30       # Compute density of k_t at 1,...,T

# == Generate matrix s.t. t-th column is n observations of k_t == #
k = np.empty((n, T))
A = ϕ.rvs((n, T))
k[:, 0] = ψ_0.rvs(n)            # Draw first column from initial distribution
for t in range(T-1):
    k[:, t+1] = s * A[:, t] * k[:, t]**α + (1 - δ) * k[:, t]

# == Generate T instances of LAE using this data, one for each date t == #
laes = [LAE(p, k[:, t]) for t in range(T)]

# == Plot == #
fig, ax = plt.subplots()
ygrid = np.linspace(0.01, 4.0, 200)
greys = [str(g) for g in np.linspace(0.0, 0.8, T)]
greys.reverse()
for ψ, g in zip(laes, greys):
    ax.plot(ygrid, ψ(ygrid), color=g, lw=2, alpha=0.6)
ax.set_xlabel('capital')
ax.set_title(f'Density of $k_1$ (lighter) to $k_T$ (darker) for $T={T}$')
plt.show()
The figure shows part of the density sequence {ψt }, with each density computed via the look ahead estimator
Notice that the sequence of densities shown in the figure seems to be converging; more on this in just a
moment
Another quick comment is that each of these distributions could be interpreted as a cross sectional distribu-
tion (recall this discussion)
Up until now, we have focused exclusively on continuous state Markov chains where all conditional distri-
butions p(x, ·) are densities
As discussed above, not all distributions can be represented as densities
If the conditional distribution of Xt+1 given Xt = x cannot be represented as a density for some x ∈ S,
then we need a slightly different theory
The ultimate option is to switch from densities to probability measures, but not all readers will be familiar
with measure theory
We can, however, construct a fairly general theory using distribution functions
To illustrate the issues, recall that Hopenhayn and Rogerson [HR93] study a model of firm dynamics where
individual firm productivity follows the exogenous process
    Xt+1 = a + ρXt + ξt+1 ,    where {ξt } is IID and N (0, σ²)
If you think about it, you will see that for any given x ∈ [0, 1], the conditional distribution of Xt+1 given
Xt = x puts positive probability mass on 0 and 1
Hence it cannot be represented as a density
What we can do instead is use cumulative distribution functions (cdfs)
To this end, set
This family of cdfs G(x, ·) plays a role analogous to the stochastic kernel in the density case
The distribution dynamics in (5.55) are then replaced by
∫
Ft+1 (y) = G(x, y)Ft (dx) (5.60)
Here Ft and Ft+1 are cdfs representing the distribution of the current state and next period state
The intuition behind (5.60) is essentially the same as for (5.55)
Computation
If you wish to compute these cdfs, you cannot use the look-ahead estimator as before
Indeed, you should not use any density estimator, since the objects you are estimating/computing are not
densities
One good option is simulation as before, combined with the empirical distribution function
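For instance, here is a minimal empirical distribution function evaluated on a grid (a sketch, with standard normal draws standing in for simulated observations of Xt):

```python
import numpy as np

def ecdf(data, y_grid):
    "Fraction of observations <= y, for each y in y_grid."
    data = np.sort(data)
    return np.searchsorted(data, y_grid, side='right') / len(data)

# Usage: estimate F_t from n simulated draws of X_t
draws = np.random.randn(10_000)          # stand-in sample
grid = np.linspace(-3, 3, 7)
print(ecdf(draws, grid))                 # roughly Φ evaluated on the grid
```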
5.6.4 Stability
In our lecture on finite Markov chains we also studied stationarity, stability and ergodicity
Here we will cover the same topics for the continuous case
We will, however, treat only the density case (as in this section), where the stochastic kernel is a family of
densities
The general case is relatively similar; references are given below
Theoretical Results
Analogous to the finite case, given a stochastic kernel p and corresponding Markov operator as defined in
(5.56), a density ψ ∗ on S is called stationary for P if it is a fixed point of the operator P
In other words,
    ψ∗(y) = ∫ p(x, y) ψ∗(x) dx,    ∀y ∈ S        (5.61)
As with the finite case, if ψ ∗ is stationary for P , and the distribution of X0 is ψ ∗ , then, in view of (5.57), Xt
will have this same distribution for all t
Hence ψ ∗ is the stochastic equivalent of a steady state
In the finite case, we learned that at least one stationary distribution exists, although there may be many
When the state space is infinite, the situation is more complicated
Even existence can fail very easily
For example, the random walk model has no stationary density (see, e.g., EDTC, p. 210)
However, there are well-known conditions under which a stationary density ψ ∗ exists
With additional conditions, we can also get a unique stationary density (ψ ∈ D and ψ = ψP =⇒ ψ = ψ∗ , where D is the set of densities on S),
and also global convergence in the sense that
    ∀ ψ ∈ D,    ψP^t → ψ∗  as  t → ∞        (5.62)
This combination of existence, uniqueness and global convergence in the sense of (5.62) is often referred to
as global stability
Under very similar conditions, we get ergodicity, which means that
    (1/n) Σ_{t=1}^{n} h(Xt ) → ∫ h(x) ψ∗(x) dx  as  n → ∞        (5.63)
for any (measurable) function h : S → R such that the right-hand side is finite
Note that the convergence in (5.63) does not depend on the distribution (or value) of X0
This is actually very important for simulation; it means we can learn about ψ∗ (i.e., approximate the right
hand side of (5.63) via the left hand side) without requiring any special knowledge about what to do with
X0
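As a toy illustration (the AR(1) example and parameters are our own), take Xt+1 = ρXt + ξt+1 with {ξt} IID N(0, 1), whose stationary distribution is N(0, 1/(1 − ρ²)). With h(x) = x², the time average in (5.63) should approach the stationary second moment no matter where X0 starts:

```python
import numpy as np

ρ, n = 0.5, 200_000
rng = np.random.default_rng(42)

X = np.empty(n)
X[0] = 20.0                      # deliberately far from the stationary mean
for t in range(n - 1):
    X[t+1] = ρ * X[t] + rng.standard_normal()

time_avg = np.mean(X**2)
stationary_moment = 1 / (1 - ρ**2)      # second moment of N(0, 1/(1-ρ²))
print(time_avg, stationary_moment)      # close for large n
```

The effect of the extreme starting point dies out geometrically, which is why it leaves no trace in the long-run average.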
So what are these conditions we require to get global stability and ergodicity?
In essence, it must be the case that
1. Probability mass does not drift off to the edges of the state space
2. Sufficient mixing obtains
For one such set of conditions see theorem 8.2.14 of EDTC
In addition
• [SLP89] contains a classic (but slightly outdated) treatment of these topics
• From the mathematical literature, [LM94] and [MT09] give outstanding in-depth treatments
• Section 8.1.2 of EDTC provides detailed intuition, and section 8.3 gives additional references
• EDTC, section 11.3.4 provides a specific treatment for the growth model we considered in this lecture
An Example of Stability
As stated above, the growth model treated here is stable under mild conditions on the primitives
• See EDTC, section 11.3.4 for more details
We can see this stability in action (in particular, the convergence in (5.62)) by simulating the path of densities
from various initial conditions
Here is such a figure
All sequences are converging towards the same limit, regardless of their initial condition
The details regarding initial conditions and so on are given in this exercise, where you are asked to replicate
the figure
In the preceding figure, each sequence of densities is converging towards the unique stationary density ψ ∗
Even from this figure we can get a fair idea of what ψ∗ looks like, and where its mass is located
However, there is a much more direct way to estimate the stationary density, and it involves only a slight
modification of the look ahead estimator
Let's say that we have a model of the form (5.49) that is stable and ergodic
Let p be the corresponding stochastic kernel, as given in (5.53)
To approximate the stationary density ψ ∗ , we can simply generate a long time series X0 , X1 , . . . , Xn and
estimate ψ ∗ via
    ψ_n∗ (y) = (1/n) Σ_{t=1}^{n} p(Xt , y)        (5.64)
This is essentially the same as the look ahead estimator (5.59), except that now the observations we generate
are a single time series, rather than a cross section
The justification for (5.64) is that, with probability one as n → ∞,
    (1/n) Σ_{t=1}^{n} p(Xt , y) → ∫ p(x, y) ψ∗(x) dx = ψ∗(y)
where the convergence is by (5.63) and the equality on the right is by (5.61)
The right hand side is exactly what we want to compute
On top of this asymptotic result, it turns out that the rate of convergence for the look ahead estimator is very
good
The first exercise helps illustrate this point
5.6.5 Exercises
Exercise 1
Consider the threshold autoregressive (TAR) model

    Xt+1 = θ|Xt | + (1 − θ²)^{1/2} ξt+1    where {ξt } is IID and N (0, 1)        (5.65)
This is one of those rare nonlinear stochastic models where an analytical expression for the stationary density
is available
In particular, provided that |θ| < 1, there is a unique stationary density ψ ∗ given by
    ψ∗(y) = 2 ϕ(y) Φ[ θy / (1 − θ²)^{1/2} ]        (5.66)
Here ϕ is the standard normal density and Φ is the standard normal cdf
As an exercise, compute the look ahead estimate of ψ ∗ , as defined in (5.64), and compare it with ψ ∗ in (5.66)
to see whether they are indeed close for large n
In doing so, set θ = 0.8 and n = 500
The next figure shows the result of such a computation
The additional density (black line) is a nonparametric kernel density estimate, added to the solution for
illustration
(You can try to replicate it before looking at the solution if you want to)
As you can see, the look ahead estimator is a much tighter fit than the kernel density estimator
If you repeat the simulation you will see that this is consistently the case
Exercise 2
Exercise 3
Each data set is represented by a box, where the top and bottom of the box are the third and first quartiles of
the data, and the red line in the center is the median
initial_conditions = np.linspace(8, 0, J)
5.6.6 Solutions
Exercise 1
Look ahead estimation of a TAR stationary density, where the TAR model is

    Xt+1 = θ|Xt | + (1 − θ²)^{1/2} ξt+1

and ξt ∼ N (0, 1). Try running at n = 10, 100, 1000, 10000 to get an idea of the speed of convergence.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm, gaussian_kde
from quantecon import LAE

ϕ = norm()
n = 500
θ = 0.8
# == Frequently used constants == #
d = np.sqrt(1 - θ**2)
δ = θ / d

def ψ_star(y):
    "True stationary density of the TAR Model"
    return 2 * norm.pdf(y) * norm.cdf(δ * y)

def p(x, y):
    "Stochastic kernel for the TAR model"
    return ϕ.pdf((y - θ * np.abs(x)) / d) / d

Z = ϕ.rvs(n)
X = np.empty(n)
X[0] = ϕ.rvs()          # Initial condition
for t in range(n-1):
    X[t+1] = θ * np.abs(X[t]) + d * Z[t]
ψ_est = LAE(p, X)
k_est = gaussian_kde(X)
Exercise 2
# == Define parameters == #
s = 0.2
δ = 0.1
a_σ = 0.4 # A = exp(B) where B ~ N(0, a_σ)
α = 0.4 # f(k) = k**α
ϕ = lognorm(a_σ)
for i in range(4):
ax = axes[i]
ax.set_xlim(0, xmax)
ψ_0 = beta(5, 5, scale=0.5, loc=i*2) # Initial distribution
Exercise 3
θ = 0.9
d = np.sqrt(1 - θ**2)
δ = θ / d
for j in range(J):
axes[j].set_ylim(-4, 8)
axes[j].set_title(f'time series from t = {initial_conditions[j]}')
Z = np.random.randn(k, n)
X[:, 0] = initial_conditions[j]
for t in range(1, n):
X[:, t] = θ * np.abs(X[:, t-1]) + d * Z[:, t]
axes[j].boxplot(X)
plt.show()
5.6.7 Appendix
5.7.1 Overview
This lecture provides a simple and intuitive introduction to the Kalman filter, for those who either
• have heard of the Kalman filter but don't know how it works, or
• know the Kalman filter equations, but don't know where they come from
For additional (more advanced) reading on the Kalman filter, see
• [LS18], section 2.7.
• [AM05]
The second reference presents a comprehensive treatment of the Kalman filter
Required knowledge: Familiarity with matrix manipulations, multivariate normal distributions, covariance
matrices, etc.
The Kalman filter has many applications in economics, but for now let's pretend that we are rocket scientists
A missile has been launched from country Y and our mission is to track it
Let x ∈ R² denote the current location of the missile, a pair indicating latitude-longitude coordinates on a
map
At the present moment in time, the precise location x is unknown, but we do have some beliefs about x
One way to summarize our knowledge is a point prediction x̂
• But what if the President wants to know the probability that the missile is currently over the Sea of
Japan?
• Then it is better to summarize our initial beliefs with a bivariate probability density p
– ∫_E p(x) dx indicates the probability that we attach to the missile being in region E
The density p is called our prior for the random variable x
To keep things tractable in our example, we assume that our prior is Gaussian. In particular, we take
p = N (x̂, Σ) (5.67)
where x̂ is the mean of the distribution and Σ is a 2 × 2 covariance matrix. In our simulations, we will
suppose that
    x̂ = (  0.2 )        Σ = ( 0.4   0.3  )
        ( −0.2 ) ,           ( 0.3   0.45 )        (5.68)
This density p(x) is shown below as a contour map, with the center of the red ellipse being equal to x̂
Z = gen_gaussian_plot_vals(x_hat, Σ)
ax.contourf(X, Y, Z, 6, alpha=0.6, cmap=cm.jet)
cs = ax.contour(X, Y, Z, 6, colors="black")
ax.clabel(cs, inline=1, fontsize=10)
plt.show()
We are now presented with some good news and some bad news
The good news is that the missile has been located by our sensors, which report that the current location is
y = (2.3, −1.9)
The next figure shows the original prior p(x) and the new reported location y
Z = gen_gaussian_plot_vals(x_hat, Σ)
ax.contourf(X, Y, Z, 6, alpha=0.6, cmap=cm.jet)
cs = ax.contour(X, Y, Z, 6, colors="black")
ax.clabel(cs, inline=1, fontsize=10)
ax.text(float(y[0]), float(y[1]), "$y$", fontsize=20, color="black")
plt.show()
The bad news is that our sensors are imprecise; the reading takes the form

    y = Gx + v,    where v ∼ N (0, R)        (5.69)

Here G and R are 2 × 2 matrices with R positive definite. Both are assumed known, and the noise term v is
assumed to be independent of x
How then should we combine our prior p(x) = N (x̂, Σ) and this new information y to improve our under-
standing of the location of the missile?
As you may have guessed, the answer is to use Bayes' theorem, which tells us to update our prior p(x) to
p(x | y) via
    p(x | y) = p(y | x) p(x) / p(y)

where p(y) = ∫ p(y | x) p(x) dx
In solving for p(x | y), we observe that
• p(x) = N (x̂, Σ)
• In view of (5.69), the conditional density p(y | x) is N (Gx, R)
• p(y) does not depend on x, and enters into the calculations only as a normalizing constant
Because we are in a linear and Gaussian framework, the updated density can be computed by calculating
population linear regressions
In particular, the solution is known¹ to be
p(x | y) = N (x̂F , ΣF )
where
x̂F := x̂ + ΣG′ (GΣG′ + R)−1 (y − Gx̂) and ΣF := Σ − ΣG′ (GΣG′ + R)−1 GΣ (5.70)
Here ΣG′ (GΣG′ + R)−1 is the matrix of population regression coefficients of the hidden object x − x̂ on
the surprise y − Gx̂
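The update in (5.70) is a few lines of NumPy. Here it is applied to the example values from (5.68), with G equal to the identity and R = 0.5Σ as used in the figures:

```python
import numpy as np

x_hat = np.array([0.2, -0.2])
Σ = np.array([[0.4, 0.3],
              [0.3, 0.45]])
G = np.eye(2)
R = 0.5 * Σ
y = np.array([2.3, -1.9])

# (5.70): regression of the hidden x - x̂ on the surprise y - Gx̂
M = Σ @ G.T @ np.linalg.inv(G @ Σ @ G.T + R)
x_hat_F = x_hat + M @ (y - G @ x_hat)
Σ_F = Σ - M @ G @ Σ

print(x_hat_F)   # filtered mean, pulled toward the observation y
print(Σ_F)       # filtered covariance, smaller than Σ
```

With these particular values M = (2/3)I, so the filtered mean is a weighted average placing weight 2/3 on the observation y and 1/3 on the prior mean x̂.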
This new density p(x | y) = N (x̂F , ΣF ) is shown in the next figure via contour lines and the color map
The original density is left in as contour lines for comparison
Z = gen_gaussian_plot_vals(x_hat, Σ)
cs1 = ax.contour(X, Y, Z, 6, colors="black")
ax.clabel(cs1, inline=1, fontsize=10)
M = Σ * G.T * linalg.inv(G * Σ * G.T + R)
¹ See, for example, page 93 of [Bis06]. To get from his expressions to the ones used above, you will also need to apply the
Woodbury matrix identity.
plt.show()
Our new density twists the prior p(x) in a direction determined by the new information y − Gx̂
In generating the figure, we set G to the identity matrix and R = 0.5Σ for Σ defined in (5.68)
The missile's location evolves according to the law of motion

    xt+1 = Axt + wt+1 ,    where wt ∼ N (0, Q)        (5.71)

Our aim is to combine this law of motion and our current distribution p(x | y) = N (x̂F , ΣF ) to come up
with a new predictive distribution for the location in one unit of time
In view of (5.71), all we have to do is introduce a random vector xF ∼ N (x̂F , ΣF ) and work out the
distribution of AxF + w where w is independent of xF and has distribution N (0, Q)
Since linear combinations of Gaussians are Gaussian, AxF + w is Gaussian
Elementary calculations and the expressions in (5.70) tell us that

    E[AxF + w] = Ax̂F = Ax̂ + AΣG′(GΣG′ + R)⁻¹(y − Gx̂)

and

    Var[AxF + w] = AΣF A′ + Q = AΣA′ − AΣG′(GΣG′ + R)⁻¹GΣA′ + Q
The matrix AΣG′ (GΣG′ + R)−1 is often written as KΣ and called the Kalman gain
• The subscript Σ has been added to remind us that KΣ depends on Σ, but not y or x̂
Using this notation, we can summarize our results as follows
Our updated prediction is the density N (x̂new , Σnew ) where

    x̂new := Ax̂ + KΣ (y − Gx̂)
    Σnew := AΣA′ − KΣ GΣA′ + Q
• The density pnew (x) = N (x̂new , Σnew ) is called the predictive distribution
The predictive distribution is the new density shown in the following figure, where the update has used
parameters
    A = ( 1.2   0.0 )
        ( 0.0  −0.2 ) ,    Q = 0.3 ∗ Σ
# Density 1
Z = gen_gaussian_plot_vals(x_hat, Σ)
# Density 2
M = Σ * G.T * linalg.inv(G * Σ * G.T + R)
x_hat_F = x_hat + M * (y - G * x_hat)
Σ_F = Σ - M * G * Σ
Z_F = gen_gaussian_plot_vals(x_hat_F, Σ_F)
cs2 = ax.contour(X, Y, Z_F, 6, colors="black")
ax.clabel(cs2, inline=1, fontsize=10)
# Density 3
new_x_hat = A * x_hat_F
new_Σ = A * Σ_F * A.T + Q
new_Z = gen_gaussian_plot_vals(new_x_hat, new_Σ)
cs3 = ax.contour(X, Y, new_Z, 6, colors="black")
ax.clabel(cs3, inline=1, fontsize=10)
ax.contourf(X, Y, new_Z, 6, alpha=0.6, cmap=cm.jet)
ax.text(float(y[0]), float(y[1]), "$y$", fontsize=20, color="black")
plt.show()
These are the standard dynamic equations for the Kalman filter (see, for example, [LS18], page 58)
5.7.3 Convergence
5.7.4 Implementation
The class Kalman from the QuantEcon.py package implements the Kalman filter
• Instance data consists of:
– the moments (x̂t , Σt ) of the current prior
– An instance of the LinearStateSpace class from QuantEcon.py
The latter represents a linear state space model of the form
Q := CC ′ and R := HH ′
• The class Kalman from the QuantEcon.py package has a number of methods, some that we will wait
to use until we study more advanced applications in subsequent lectures
• Methods pertinent for this lecture are:
– prior_to_filtered, which updates (x̂t , Σt ) to (x̂Ft , ΣFt )
– filtered_to_forecast, which updates the filtering distribution to the predictive distribution,
which becomes the new prior (x̂t+1 , Σt+1 )
– update, which combines the last two methods
– stationary_values, which computes the solution to (5.73) and the corresponding (stationary)
Kalman gain
You can view the program on GitHub
5.7.5 Exercises
Exercise 1
Consider the following simple application of the Kalman filter, loosely based on [LS18], section 2.9.2
Suppose that
• all variables are scalars
• the hidden state {xt } is in fact constant, equal to some θ ∈ R unknown to the modeler
State dynamics are therefore given by (5.71) with A = 1, Q = 0 and x0 = θ
The measurement equation is yt = θ + vt where vt is N (0, 1) and iid
The task of this exercise is to simulate the model and, using the code from kalman.py, plot the first five
predictive densities pt (x) = N (x̂t , Σt )
As shown in [LS18], sections 2.9.1–2.9.2, these distributions asymptotically put all mass on the unknown
value θ
In the simulation, take θ = 10, x̂0 = 8 and Σ0 = 1
Your figure should – modulo randomness – look something like this
Exercise 2
The preceding figure gives some support to the idea that probability mass converges to θ
To get a better idea, choose a small ϵ > 0 and calculate
    zt := 1 − ∫_{θ−ϵ}^{θ+ϵ} pt (x) dx
for t = 0, 1, 2, . . . , T
Plot zt against t, setting ϵ = 0.1 and T = 600
Your figure should show the error erratically declining, something like this
Exercise 3
As discussed above, if the shock sequence {wt } is not degenerate, then it is not in general possible to predict
xt without error at time t − 1 (and this would be the case even if we could observe xt−1 )
Lets now compare the prediction x̂t made by the Kalman filter against a competitor who is allowed to
observe xt−1
This competitor will use the conditional expectation E[xt | xt−1 ], which in this case is Axt−1
The conditional expectation is known to be the optimal prediction method in terms of minimizing mean
squared error
(More precisely, the minimizer of E ∥xt − g(xt−1 )∥2 with respect to g is g ∗ (xt−1 ) := E[xt | xt−1 ])
Thus we are comparing the Kalman filter against a competitor who has more information (in the sense of
being able to observe the latent state) and behaves optimally in terms of minimizing squared error
Our horse race will be assessed in terms of squared error
In particular, your task is to generate a graph plotting observations of both ∥xt − Axt−1 ∥2 and ∥xt − x̂t ∥2
against t for t = 1, . . . , 50
For the parameters, set G = I, R = 0.5I and Q = 0.3I, where I is the 2 × 2 identity
Set
    A = ( 0.5   0.4 )
        ( 0.6   0.3 )
Observe how, after an initial learning period, the Kalman filter performs quite well, even relative to the
competitor who predicts optimally with knowledge of the latent state
Exercise 4
The interpretation is that more randomness in the law of motion for xt causes more (permanent) uncertainty
in prediction
5.7.6 Solutions
Exercise 1
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
from quantecon import Kalman, LinearStateSpace

# == parameters == #
θ = 10  # Constant value of state x_t
A, C, G, H = 1, 0, 1, 1
ss = LinearStateSpace(A, C, G, H, mu_0=θ)

x_hat_0, Σ_0 = 8, 1
kalman = Kalman(ss, x_hat_0, Σ_0)

N = 5  # Number of predictive densities to plot
x, y = ss.simulate(N)
y = y.flatten()

# == set up plot == #
fig, ax = plt.subplots(figsize=(10,8))
xgrid = np.linspace(θ - 5, θ + 2, 200)

for i in range(N):
    # == record the current predicted mean and variance == #
    m, v = [float(z) for z in (kalman.x_hat, kalman.Sigma)]
    # == plot, update filter == #
    ax.plot(xgrid, norm.pdf(xgrid, loc=m, scale=np.sqrt(v)), label=f'$t={i}$')
    kalman.update(y[i])

ax.legend()
plt.show()
Exercise 2
from scipy.integrate import quad

ϵ = 0.1
θ = 10  # Constant value of state x_t
A, C, G, H = 1, 0, 1, 1
ss = LinearStateSpace(A, C, G, H, mu_0=θ)
x_hat_0, Σ_0 = 8, 1
kalman = Kalman(ss, x_hat_0, Σ_0)
T = 600
z = np.empty(T)
x, y = ss.simulate(T)
y = y.flatten()
for t in range(T):
    # Record the current predicted mean and variance and compute z_t
    m, v = [float(temp) for temp in (kalman.x_hat, kalman.Sigma)]
    f = lambda x: norm.pdf(x, loc=m, scale=np.sqrt(v))
    integral, error = quad(f, θ - ϵ, θ + ϵ)
    z[t] = 1 - integral
    kalman.update(y[t])
Exercise 3
from scipy.linalg import eigvals

# === Define A, C, G, H === #
G = np.identity(2)
H = np.sqrt(0.5) * np.identity(2)
A = [[0.5, 0.4],
     [0.6, 0.3]]
C = np.sqrt(0.3) * np.identity(2)

# === Set up state space model, initial value x_0 set to zero === #
ss = LinearStateSpace(A, C, G, H, mu_0=np.zeros(2))

# == Initialize the Kalman filter == #
kn = Kalman(ss)

# == Print eigenvalues of A == #
print("Eigenvalues of A:")
print(eigvals(A))

# == Print stationary Σ == #
S, K = kn.stationary_values()
print("Stationary prediction error variance:")
print(S)

# == Simulate and record the two squared errors == #
T = 50
x, y = ss.simulate(T)

e1 = np.empty(T-1)   # Kalman filter error
e2 = np.empty(T-1)   # conditional expectation error

for t in range(1, T):
    kn.update(y[:, t])
    e1[t-1] = np.sum((x[:, t] - kn.x_hat.flatten())**2)
    e2[t-1] = np.sum((x[:, t] - np.array(A) @ x[:, t-1])**2)

fig, ax = plt.subplots(figsize=(9, 6))
ax.plot(range(1, T), e1, 'k-', lw=2, alpha=0.6, label='Kalman filter error')
ax.plot(range(1, T), e2, 'g-', lw=2, alpha=0.6, label='Conditional expectation error')
ax.legend()
plt.show()
ax.legend()
plt.show()
Eigenvalues of A:
[ 0.9+0.j -0.1+0.j]
Stationary prediction error variance:
[[ 0.40329108 0.1050718 ]
[ 0.1050718 0.41061709]]
SIX
DYNAMIC PROGRAMMING
This section of the course contains foundational models for dynamic economic modeling. Most are single
agent problems that take the activities of other agents as given. Later we will look at full equilibrium
problems.
Contents
• Shortest Paths
– Overview
– Outline of the Problem
– Finding Least-Cost Paths
– Solving for J
– Exercises
– Solutions
6.1.1 Overview
The shortest path problem is a classic problem in mathematics and computer science with applications in
• Economics (sequential decision making, analysis of social networks, etc.)
• Operations research and transportation
• Robotics and artificial intelligence
• Telecommunication network design and routing
• etc., etc.
Variations of the methods we discuss in this lecture are used millions of times every day, in applications
such as
• Google Maps
• routing packets on the internet
For us, the shortest path problem also provides a nice introduction to the logic of dynamic programming
Dynamic programming is an extremely powerful optimization technique that we apply in many lectures on
this site
The shortest path problem is one of finding how to traverse a graph from one specified node to another at
minimum cost
Consider the following graph

[graph figure omitted in this pdf; one possible path from node A to node G is A, D, F, G at cost 8]
Suppose that we know J(v) for each node v, where J(v) is the minimum cost-to-go from v (that is, the
total cost of the cheapest path from v to the destination)
In that case, the best path can be found by starting at A and, at each node v, moving to the node w that
solves

    min_{w ∈ F_v} { c(v, w) + J(w) }

where
• F_v is the set of nodes that can be reached from v in one step
• c(v, w) is the cost of traveling from v to w
Hence, if we know the function J, then finding the best path is almost trivial
But how to find J?
Some thought will convince you that, for every node v, the function J satisfies

    J(v) = min_{w ∈ F_v} { c(v, w) + J(w) }

This is known as the Bellman equation, after the mathematician Richard Bellman
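To see the Bellman equation at work, here is a minimal sketch (using a made-up four-node graph, not the graph from the lecture) that solves for J by repeatedly applying the equation until the values stop changing:

```python
# A minimal sketch: solve the Bellman equation on a made-up four-node graph
# by iterating until J stops changing

graph = {                      # node -> list of (destination, cost) pairs
    'A': [('B', 1), ('C', 5)],
    'B': [('C', 1), ('D', 7)],
    'C': [('D', 3)],
    'D': [],                   # the destination node
}

def update_J(J, graph, dest='D'):
    "One application of the Bellman operator."
    next_J = {}
    for node, edges in graph.items():
        if node == dest:
            next_J[node] = 0
        else:
            next_J[node] = min(cost + J[d] for d, cost in edges)
    return next_J

J = {node: 1e10 for node in graph}   # initial guess: a large cost everywhere
J['D'] = 0
while True:
    next_J = update_J(J, graph)
    if next_J == J:
        break
    J = next_J
```

Here J['A'] comes out at 5, attained by the path A, B, C, D.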
6.1.5 Exercises
Exercise 1
Use the algorithm given above to find the optimal path (and its cost) for the following graph
You can put it in a Jupyter notebook cell and hit Shift-Enter; it will be saved in the local directory as the
file graph.txt
%%file graph.txt
node0, node1 0.04, node8 11.11, node14 72.21
node1, node46 1247.25, node6 20.59, node13 64.94
node2, node66 54.18, node31 166.80, node45 1561.45
node3, node20 133.65, node6 2.06, node11 42.43
node4, node75 3706.67, node5 0.73, node7 1.02
node5, node45 1382.97, node7 3.33, node11 34.54
node6, node31 63.17, node9 0.72, node10 13.10
node7, node50 478.14, node9 3.15, node10 5.85
node8, node69 577.91, node11 7.45, node12 3.18
node9, node70 2454.28, node13 4.42, node20 16.53
node10, node89 5352.79, node12 1.87, node16 25.16
node11, node94 4961.32, node18 37.55, node20 65.08
node12, node84 3914.62, node24 34.32, node28 170.04
node13, node60 2135.95, node38 236.33, node40 475.33
node14, node67 1878.96, node16 2.70, node24 38.65
node15, node91 3597.11, node17 1.01, node18 2.57
node16, node36 392.92, node19 3.49, node38 278.71
node17, node76 783.29, node22 24.78, node23 26.45
node18, node91 3363.17, node23 16.23, node28 55.84
node19, node26 20.09, node20 0.24, node28 70.54
node20, node98 3523.33, node24 9.81, node33 145.80
node21, node56 626.04, node28 36.65, node31 27.06
node22, node72 1447.22, node39 136.32, node40 124.22
node23, node52 336.73, node26 2.66, node33 22.37
Writing graph.txt
Here the line node0, node1 0.04, node8 11.11, node14 72.21 means that from node0 we
can go to
• node1 at cost 0.04
• node8 at cost 11.11
• node14 at cost 72.21
and so on
According to our calculations, the optimal path and its cost are as displayed in the solution below
Your code should replicate this result
6.1.6 Solutions
Exercise 1
def read_graph(in_file):
    """ Read in the graph from the data file.  The graph is stored
    as a dictionary, where the keys are the nodes, and the values
    are a list of pairs (d, c), where d is a node and c is a number.
    If (d, c) is in the list for node n, then d can be reached from
    n at cost c.
    """
    graph = {}
    infile = open(in_file)
    for line in infile:
        elements = line.split(',')
        node = elements.pop(0)
        graph[node] = []
        if node != 'node99':
            for element in elements:
                destination, cost = element.split()
                graph[node].append((destination, float(cost)))
    infile.close()
    return graph

def update_J(J, graph):
    "The Bellman operator."
    next_J = {}
    for node in graph:
        if node == 'node99':
            next_J[node] = 0
        else:
            next_J[node] = min(cost + J[dest] for dest, cost in graph[node])
    return next_J

def print_best_path(J, graph):
    """ Print out the best path from node0 to node99 and its cost. """
    sum_costs = 0
    current_location = 'node0'
    while current_location != 'node99':
        print(current_location)
        running_min = 1e100
        for destination, cost in graph[current_location]:
            cost_of_path = cost + J[destination]
            if cost_of_path < running_min:
                running_min = cost_of_path
                minimizer = destination
                edge_cost = cost
        current_location = minimizer
        sum_costs += edge_cost
    print('node99\n')
    print('Cost: ', sum_costs)

## Main loop
graph = read_graph('graph.txt')
M = 1e10
J = {}
for node in graph:
    J[node] = M
J['node99'] = 0

while True:
    next_J = update_J(J, graph)
    if next_J == J:
        break
    else:
        J = next_J

print_best_path(J, graph)
node0
node8
node11
node18
node23
node33
node41
node53
node56
node57
node60
node67
node70
node73
node76
node85
node87
node88
node93
node94
node96
node97
node98
node99
Cost: 160.55000000000007
6.2 Job Search I: The McCall Search Model
Questioning a McCall worker is like having a conversation with an out-of-work friend: "Maybe
you are setting your sights too high", or "Why did you quit your old job before you had a new
one lined up?" This is real social science: an attempt to model, to understand, human behavior
by visualizing the situation people find themselves in, the options they face and the pros and
cons as they themselves see them. – Robert E. Lucas, Jr.
6.2.1 Overview
The McCall search model [McC70] helped transform economists' way of thinking about labor markets
To clarify vague notions such as involuntary unemployment, McCall modeled the decision problem of un-
employed agents directly, in terms of factors such as
• current and likely future wages
• impatience
• unemployment compensation
To solve the decision problem he used dynamic programming
Here we set up McCall's model and adopt the same solution method
As we'll see, McCall's model is not only interesting in its own right but also an excellent vehicle for learning
dynamic programming
The smaller is β, the more the worker discounts future utility relative to current utility
The variable Yt is income, equal to
• his wage Wt when employed
• unemployment compensation c when unemployed
A Trade Off
In order to optimally trade off current and future rewards, we need to think about two things:
1. the current payoffs we get from different choices
2. the different states that those choices will lead to next period (in this case, either employment or
unemployment)
To weigh these two aspects of the decision problem, we need to assign values to states
To this end, let V (w) be the total lifetime value accruing to an unemployed worker who enters the current
period unemployed but with wage offer w in hand
More precisely, V (w) denotes the value of the objective function (6.11) when an agent in this situation
makes optimal decisions now and at all future points in time
Of course V (w) is not trivial to calculate because we don't yet know what decisions are optimal and what
aren't!
But think of V as a function that assigns to each possible wage w the maximal lifetime value that can be
obtained with that offer in hand
A crucial observation is that this function V must satisfy the recursion
    V(w) = max { w/(1 − β), c + β ∑_{i=1}^n V(w_i) p_i }        (6.4)
Suppose for now that we are able to solve (6.4) for the unknown function V
Once we have this function in hand we can behave optimally (i.e., make the right choice between accept and
reject)
All we have to do is select the maximal choice on the r.h.s. of (6.4)
The optimal action is best thought of as a policy, which is, in general, a map from states to actions
In our case, the state is the current wage offer w
Given any w, we can read off the corresponding best choice (accept or reject) by picking the max on the
r.h.s. of (6.4)
Thus, we have a map from R to {0, 1}, with 1 meaning accept and zero meaning reject
We can write the policy as follows
    σ(w) := 1{ w/(1 − β) ≥ c + β ∑_{i=1}^n V(w_i) p_i }
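Once V is known, computing the policy is a one-liner. Here is a sketch with made-up numbers; in particular, the value function below is hypothetical, not one obtained by actually solving the model:

```python
import numpy as np

# Hypothetical wage grid, offer probabilities, parameters and value function
w_vals = np.array([0.5, 1.0, 1.5])
p_vals = np.array([0.2, 0.5, 0.3])
c, β = 0.6, 0.9
v = np.array([10.9, 10.9, 15.0])   # made-up values of V at the grid points

# σ(w) = 1 iff the stopping value w/(1-β) weakly exceeds the continuation value
continuation = c + β * np.sum(v * p_vals)
σ = (w_vals / (1 - β) >= continuation).astype(int)
```

With these numbers only the highest wage offer is accepted.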
To put the above ideas into action, we need to compute the value function at points w1 , . . . , wn
In doing so, we can identify these values with the vector v = (vi ) where vi := V (wi )
In view of (6.4), this vector satisfies the nonlinear system of equations
    v_i = max { w_i/(1 − β), c + β ∑_{j=1}^n v_j p_j }   for i = 1, . . . , n        (6.5)

It turns out that there is exactly one vector v = (v_i)_{i=1}^n in R^n that satisfies this equation
The Algorithm
Step 1: pick an arbitrary initial guess v ∈ R^n
Step 2: compute a new vector v′ via

    v′_i = max { w_i/(1 − β), c + β ∑_{j=1}^n v_j p_j }   for i = 1, . . . , n        (6.6)
Step 3: calculate a measure of the deviation between v and v ′ , such as maxi |vi − vi′ |
Step 4: if the deviation is larger than some fixed tolerance, set v = v ′ and go to step 2, else continue
Step 5: return v
This algorithm returns an arbitrarily good approximation to the true solution to (6.5), which represents the
value function
(Arbitrarily good means here that the approximation converges to the true solution as the tolerance goes to
zero)
The algorithm above can be expressed in terms of the operator T that maps a vector v into the vector Tv
defined by

    (T v)_i = max { w_i/(1 − β), c + β ∑_{j=1}^n v_j p_j }   for i = 1, . . . , n        (6.7)

(A new vector Tv is obtained from a given vector v by evaluating the r.h.s. at each i)
One can show that the conditions of the Banach contraction mapping theorem are satisfied by T as a self-
mapping on Rn
One implication is that T has a unique fixed point in Rn
Moreover, it's immediate from the definition of T that this fixed point is precisely the value function
The iterative algorithm presented above corresponds to iterating with T from some initial guess v
The Banach contraction mapping theorem tells us that this iterative process generates a sequence that con-
verges to the fixed point
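The iteration with T can be sketched in a few lines of NumPy. The three-point wage grid, probabilities and parameters below are made up for illustration, not the lecture's defaults:

```python
import numpy as np

# Toy value function iteration for (6.5): made-up wage grid and parameters
w_vals = np.array([0.5, 1.0, 1.5])
p_vals = np.array([0.2, 0.5, 0.3])
c, β = 0.6, 0.9

v = w_vals / (1 - β)              # initial guess: accept every offer
for _ in range(1000):
    # One application of the operator T
    v_new = np.maximum(w_vals / (1 - β), c + β * np.sum(v * p_vals))
    if np.max(np.abs(v_new - v)) < 1e-10:
        break
    v = v_new
```

At termination, v satisfies the fixed point condition (6.5) up to the tolerance.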
Implementation
import numpy as np
from numba import jit
import matplotlib.pyplot as plt
import quantecon as qe
from quantecon.distributions import BetaBinomial

# A default wage offer distribution, built from a beta-binomial
n, a, b = 50, 200, 100                   # default parameters
w_min, w_max = 10, 60                    # range of wage values
w_vals = np.linspace(w_min, w_max, n+1)  # wage grid
dist = BetaBinomial(n, a, b)
p_vals = dist.pdf()                      # probabilities over the grid
First lets have a look at the sequence of approximate value functions that the algorithm above generates
Default parameter values are embedded in the function
Our initial guess v is the value of accepting at every given wage
def plot_value_function_seq(ax,
                            c=25,
                            β=0.99,
                            w_vals=w_vals,
                            p_vals=p_vals,
                            num_plots=6):

    v = w_vals / (1 - β)
    v_next = np.empty_like(v)
    for i in range(num_plots):
        ax.plot(w_vals, v, label=f"iterate {i}")
        # Update guess
        for j, w in enumerate(w_vals):
            stop_val = w / (1 - β)
            cont_val = c + β * np.sum(v * p_vals)
            v_next[j] = max(stop_val, cont_val)
        v[:] = v_next
    ax.legend(loc='lower right')
Here's a more serious iteration effort, one that continues until the measured deviation between successive
iterates is below tol
We'll be using JIT compilation via Numba to turbocharge our loops
@jit(nopython=True)
def compute_reservation_wage(c=25,
                             β=0.99,
                             w_vals=w_vals,
                             p_vals=p_vals,
                             max_iter=500,
                             tol=1e-6):

    v = w_vals / (1 - β)
    v_next = np.empty_like(v)
    i = 0
    error = tol + 1
    while i < max_iter and error > tol:
        for j, w in enumerate(w_vals):
            stop_val = w / (1 - β)
            cont_val = c + β * np.sum(v * p_vals)
            v_next[j] = max(stop_val, cont_val)
        error = np.max(np.abs(v_next - v))
        i += 1
        v[:] = v_next

    # The reservation wage makes the worker indifferent between
    # accepting and rejecting
    return (1 - β) * (c + β * np.sum(v * p_vals))

compute_reservation_wage()
47.316499710024964
Comparative Statics
Now that we know how to compute the reservation wage, let's see how it varies with parameters
In particular, let's look at what happens when we change β and c
grid_size = 25
c_vals = np.linspace(10.0, 30.0, grid_size)   # grid of compensation values
β_vals = np.linspace(0.9, 0.99, grid_size)    # grid of discount factors
R = np.empty((grid_size, grid_size))

for i, c in enumerate(c_vals):
    for j, β in enumerate(β_vals):
        R[i, j] = compute_reservation_wage(c=c, β=β)

fig, ax = plt.subplots(figsize=(10, 5.7))

cs1 = ax.contourf(c_vals, β_vals, R.T, alpha=0.75)
ctr1 = ax.contour(c_vals, β_vals, R.T)
plt.clabel(ctr1, inline=1, fontsize=13)
plt.colorbar(cs1, ax=ax)

ax.set_title("reservation wage")
ax.set_xlabel("$c$", fontsize=16)
ax.set_ylabel("$β$", fontsize=16)
ax.ticklabel_format(useOffset=False)
plt.show()
As expected, the reservation wage increases both with patience and with unemployment compensation
The approach to dynamic programming just described is very standard and broadly applicable
For this particular problem, there's also an easier way, which circumvents the need to compute the value
function
Let ψ denote the value of not accepting a job in this period but then behaving optimally in all subsequent
periods
That is,
    ψ = c + β ∑_{i=1}^n V(w_i) p_i        (6.8)

Substituting the Bellman equation (6.4) into this expression, ψ must satisfy

    ψ = c + β ∑_{i=1}^n max { w_i/(1 − β), ψ } p_i        (6.9)

This is a nonlinear equation in the single scalar ψ, which we can solve by iterating on

    ψ′ = c + β ∑_{i=1}^n max { w_i/(1 − β), ψ } p_i        (6.10)
@jit(nopython=True)
def compute_reservation_wage_two(c=25,
                                 β=0.99,
                                 w_vals=w_vals,
                                 p_vals=p_vals,
                                 max_iter=500,
                                 tol=1e-5):

    # == First compute ψ == #
    ψ = np.sum(w_vals * p_vals) / (1 - β)
    i = 0
    error = tol + 1
    while i < max_iter and error > tol:
        s = np.maximum(w_vals / (1 - β), ψ)
        ψ_next = c + β * np.sum(s * p_vals)
        error = np.abs(ψ_next - ψ)
        i += 1
        ψ = ψ_next

    # == Now compute the reservation wage, w̄ = (1 - β)ψ == #
    return (1 - β) * ψ
6.2.5 Exercises
Exercise 1
Compute the average duration of unemployment when β = 0.99 and c takes the following values
c_vals = np.linspace(10, 40, 25)
That is, start the agent off as unemployed, compute the reservation wage given the parameters, and then
simulate to see how long it takes to accept
Repeat a large number of times and take the average
Plot mean unemployment duration as a function of c in c_vals
6.2.6 Solutions
Exercise 1
cdf = np.cumsum(p_vals)

@jit(nopython=True)
def compute_stopping_time(w_bar, seed=1234):
    np.random.seed(seed)
    t = 1
    while True:
        # Generate a wage draw
        w = w_vals[qe.random.draw(cdf)]
        if w >= w_bar:
            stopping_time = t
            break
        else:
            t += 1
    return stopping_time

@jit(nopython=True)
def compute_mean_stopping_time(w_bar, num_reps=100000):
    obs = np.empty(num_reps)
    for i in range(num_reps):
        obs[i] = compute_stopping_time(w_bar, seed=i)
    return obs.mean()

c_vals = np.linspace(10, 40, 25)
stop_times = np.empty_like(c_vals)
for i, c in enumerate(c_vals):
    w_bar = compute_reservation_wage(c=c)
    stop_times[i] = compute_mean_stopping_time(w_bar)

fig, ax = plt.subplots(figsize=(9, 6.5))
ax.plot(c_vals, stop_times, label="mean unemployment duration")
ax.set(xlabel="unemployment compensation", ylabel="months")
ax.legend()
plt.show()
6.3 Job Search II: Search and Separation
6.3.1 Overview
Previously we looked at the McCall job search model [McC70] as a way of understanding unemployment
and worker decisions
One unrealistic feature of the model is that every job is permanent
In this lecture we extend the McCall model by introducing job separation
Once separation enters the picture, the agent comes to view
• the loss of a job as a capital loss, and
• a spell of unemployment as an investment in searching for an acceptable job
The agent's objective is to maximize the expected discounted sum of utility

    E ∑_{t=0}^∞ β^t u(Y_t)        (6.11)
The only difference from the baseline model is that we've added some flexibility over preferences by intro-
ducing a utility function u
It satisfies u′ > 0 and u′′ < 0
Here's what happens at the start of a given period in our model with search and separation
If currently employed, the worker consumes his wage w, receiving utility u(w)
If currently unemployed, he
• receives and consumes unemployment compensation c
• receives an offer to start work next period at a wage w′ drawn from a known distribution p1 , . . . , pn
He can either accept or reject the offer
If he accepts the offer, he enters next period employed with wage w′
If he rejects the offer, he enters next period unemployed
When employed, the agent faces a constant probability α of becoming unemployed at the end of the period
(Note: we do not allow for job search while employed; this topic is taken up in a later lecture)
Let
• V (w) be the total lifetime value accruing to a worker who enters the current period employed with
wage w
• U be the total lifetime value accruing to a worker who is unemployed this period
Here value means the value of the objective function (6.11) when the worker makes optimal decisions at all
future points in time
Suppose for now that the worker can calculate the function V and the constant U and use them in his decision
making
Then V and U should satisfy

    V(w) = u(w) + β[(1 − α)V(w) + αU]        (6.12)

and

    U = u(c) + β ∑_i max { U, V(w_i) } p_i        (6.13)
Let's interpret these two equations in light of the fact that today's tomorrow is tomorrow's today
• The left hand sides of equations (6.12) and (6.13) are the values of a worker in a particular situation
today
• The right hand sides of the equations are the discounted (by β) expected values of the possible situa-
tions that worker can be in tomorrow
• But tomorrow the worker can be in only one of the situations whose values today are on the left sides
of our two equations
Equation (6.13) incorporates the fact that a currently unemployed worker will maximize his own welfare
In particular, if his next period wage offer is w′ , he will choose to remain unemployed unless U < V (w′ )
Equations (6.12) and (6.13) are the Bellman equations for this model
Equations (6.12) and (6.13) provide enough information to solve out for both V and U
Before discussing this, however, lets make a small extension to the model
Stochastic Offers
Lets suppose now that unemployed workers dont always receive job offers
Instead, lets suppose that unemployed workers only receive an offer with probability γ
If our worker does receive an offer, the wage offer is drawn from p as before
He either accepts or rejects the offer
Otherwise the model is the same
With some thought, you will be able to convince yourself that V and U should now satisfy

    V(w) = u(w) + β[(1 − α)V(w) + αU]        (6.14)

and

    U = u(c) + β(1 − γ)U + βγ ∑_i max { U, V(w_i) } p_i        (6.15)
We'll use the same iterative approach to solving the Bellman equations that we adopted in the first job search
lecture
Here this amounts to
1. make guesses for U and V
2. plug these guesses into the right hand sides of (6.14) and (6.15)
3. update the left hand sides from this rule and then repeat

In other words, we iterate using the rules

    V_{n+1}(w_i) = u(w_i) + β[(1 − α)V_n(w_i) + αU_n]        (6.16)

and

    U_{n+1} = u(c) + β(1 − γ)U_n + βγ ∑_i max { U_n, V_n(w_i) } p_i        (6.17)

starting from some initial conditions U_0, V_0
6.3.4 Implementation
import numpy as np
from quantecon.distributions import BetaBinomial
from numba import jit
@jit
def u(c, σ):
    if c > 0:
        return (c**(1 - σ) - 1) / (1 - σ)
    else:
        return -10e6
class McCallModel:
    """
    Stores the parameters and functions associated with a given model.
    """

    def __init__(self,
                 α=0.2,       # Job separation rate
                 β=0.98,      # Discount rate
                 γ=0.7,       # Job offer rate
                 c=6.0,       # Unemployment compensation
                 σ=2.0,       # Utility parameter
                 w_vec=None,  # Possible wage values
                 p_vec=None): # Probabilities over w_vec

        self.α, self.β, self.γ, self.c, self.σ = α, β, γ, c, σ

        # Add a default wage vector and probabilities over the vector using
        # the beta-binomial distribution
        if w_vec is None:
            n = 60                               # number of possible outcomes for wage
            self.w_vec = np.linspace(10, 20, n)  # wages between 10 and 20
            a, b = 600, 400                      # shape parameters
            dist = BetaBinomial(n-1, a, b)
            self.p_vec = dist.pdf()
        else:
            self.w_vec = w_vec
            self.p_vec = p_vec
@jit
def _update_bellman(α, β, γ, c, σ, w_vec, p_vec, V, V_new, U):
    """
    A jitted function to update the Bellman equations.  Note that V_new is
    modified in place (i.e., modified by this function).  The new value of U
    is returned.
    """
    for w_idx, w in enumerate(w_vec):
        # w_idx indexes the vector of possible wages
        V_new[w_idx] = u(w, σ) + β * ((1 - α) * V[w_idx] + α * U)

    U_new = u(c, σ) + β * (1 - γ) * U + \
            β * γ * np.sum(np.maximum(U, V) * p_vec)

    return U_new
def solve_mccall_model(mcm, tol=1e-5, max_iter=2000):
    """
    Iterates to convergence on the Bellman equations.

    Parameters
    ----------
    mcm : an instance of McCallModel
    tol : float
        error tolerance
    max_iter : int
        the maximum number of iterations
    """
    V = np.ones(len(mcm.w_vec))  # Initial guess of V
    U = 1                        # Initial guess of U
    V_new = np.empty_like(V)
    i, error = 0, tol + 1
    while error > tol and i < max_iter:
        U_new = _update_bellman(mcm.α, mcm.β, mcm.γ, mcm.c, mcm.σ,
                                mcm.w_vec, mcm.p_vec, V, V_new, U)
        error = max(np.max(np.abs(V_new - V)), np.abs(U_new - U))
        V[:], U = V_new, U_new
        i += 1
    return V, U
The approach is to iterate until successive iterates are closer together than some small tolerance level
We then return the current iterate as an approximate solution
Let's plot the approximate solutions U and V to see what they look like
We'll use the default parameterizations found in the code above
mcm = McCallModel()
V, U = solve_mccall_model(mcm)
plt.show()
The value V is increasing because higher w generates a higher wage flow conditional on staying employed
Once V and U are known, the agent can use them to make decisions in the face of a given wage offer
If V (w) > U , then working at wage w is preferred to unemployment
If V (w) < U , then remaining unemployed will generate greater lifetime value
Suppose in particular that V crosses U (as it does in the preceding figure)
Then, since V is increasing, there is a unique smallest w in the set of possible wages such that V (w) ≥ U
We denote this wage w̄ and call it the reservation wage
Optimal behavior for the worker is characterized by w̄
• if the wage offer w in hand is greater than or equal to w̄, then the worker accepts
• if the wage offer w in hand is less than w̄, then the worker rejects
Here's a function called compute_reservation_wage that takes an instance of a McCall model and returns the
reservation wage associated with a given model
It uses np.searchsorted to obtain the first w in the set of possible wages such that V (w) > U
If V (w) < U for all w, then the function returns np.inf
def compute_reservation_wage(mcm, return_values=False):
    """
    Computes the reservation wage of an instance of the McCall model
    by finding the smallest w such that V(w) > U.

    If V(w) > U for all w, then the reservation wage w_bar is set to
    the lowest wage in mcm.w_vec.

    If V(w) < U for all w, then w_bar is set to np.inf.

    Parameters
    ----------
    mcm : an instance of McCallModel
    return_values : bool (optional, default=False)
        Return the value functions as well

    Returns
    -------
    w_bar : scalar
        The reservation wage
    """
    V, U = solve_mccall_model(mcm)
    w_idx = np.searchsorted(V - U, 0)

    if w_idx == len(V):
        w_bar = np.inf
    else:
        w_bar = mcm.w_vec[w_idx]

    if not return_values:
        return w_bar
    else:
        return w_bar, V, U
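As an aside on the mechanics: applied to the (increasing) array of V(w) − U values, np.searchsorted returns the first index at which the difference is nonnegative. A quick example on a made-up array:

```python
import numpy as np

# Made-up increasing values of V(w) - U on a four-point wage grid
V_minus_U = np.array([-3.0, -1.0, 0.5, 2.0])

# First index where inserting 0 keeps the array sorted, i.e. the first
# index with V(w) - U >= 0
idx = np.searchsorted(V_minus_U, 0)
print(idx)  # 2 -> the third wage is the reservation wage
```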
Let's use it to look at how the reservation wage varies with parameters
In each instance below we'll show you a figure and then ask you to reproduce it in the exercises
As expected, higher unemployment compensation causes the worker to hold out for higher wages
In effect, the cost of continuing job search is reduced
Again, the results are intuitive: More patient workers will hold out for higher wages
Finally, let's look at how w̄ varies with the job separation rate α
Higher α translates to a greater chance that a worker will face termination in each period once employed
6.3.6 Exercises
Exercise 1
Exercise 2
grid_size = 25
γ_vals = np.linspace(0.05, 0.95, grid_size)
6.3.7 Solutions
Exercise 1
Using the compute_reservation_wage function mentioned earlier in the lecture, we can create an array of
reservation wages for different values of c, β and α and plot the results like so
grid_size = 25
c_vals = np.linspace(2, 12, grid_size)  # values of unemployment compensation
w_bar_vals = np.empty_like(c_vals)

mcm = McCallModel()

fig, ax = plt.subplots(figsize=(10, 6))

for i, c in enumerate(c_vals):
    mcm.c = c
    w_bar = compute_reservation_wage(mcm)
    w_bar_vals[i] = w_bar

ax.set_xlabel('unemployment compensation')
ax.set_ylabel('reservation wage')
txt = r'$\bar w$ as a function of $c$'
ax.plot(c_vals, w_bar_vals, 'b-', lw=2, alpha=0.7, label=txt)
ax.legend(loc='upper left')
ax.grid()
plt.show()
Exercise 2
grid_size = 25
γ_vals = np.linspace(0.05, 0.95, grid_size)
w_bar_vals = np.empty_like(γ_vals)

mcm = McCallModel()

fig, ax = plt.subplots(figsize=(10, 6))

for i, γ in enumerate(γ_vals):
    mcm.γ = γ
    w_bar = compute_reservation_wage(mcm)
    w_bar_vals[i] = w_bar

ax.set_xlabel('job offer rate')
ax.set_ylabel('reservation wage')
ax.plot(γ_vals, w_bar_vals, 'b-', lw=2, alpha=0.7,
        label=r'$\bar w$ as a function of $\gamma$')
ax.legend(loc='upper left')
ax.grid()
plt.show()
6.4 A Problem that Stumped Milton Friedman
6.4.1 Overview
This lecture describes a statistical decision problem encountered by Milton Friedman and W. Allen Wal-
lis during World War II when they were analysts at the U.S. Government's Statistical Research Group at
Columbia University
This problem led Abraham Wald [Wal47] to formulate sequential analysis, an approach to statistical deci-
sion problems intimately related to dynamic programming
In this lecture, we apply dynamic programming algorithms to Friedman and Wallis and Walds problem
Key ideas in play will be:
• Bayes Law
• Dynamic programming
• Type I and type II statistical errors
– a type I error occurs when you reject a null hypothesis that is true
– a type II error is when you accept a null hypothesis that is false
• Abraham Walds sequential probability ratio test
• The power of a statistical test
• The critical region of a statistical test
• A uniformly most powerful test
On pages 137-139 of his 1998 book Two Lucky People with Rose Friedman [FF98], Milton Friedman
described a problem presented to him and Allen Wallis during World War II, when they worked at the US
Government's Statistical Research Group at Columbia University
Let's listen to Milton Friedman tell us what happened
In order to understand the story, it is necessary to have an idea of a simple statistical problem, and of the
standard procedure for dealing with it. The actual problem out of which sequential analysis grew will serve.
The Navy has two alternative designs (say A and B) for a projectile. It wants to determine which is superior.
To do so it undertakes a series of paired firings. On each round it assigns the value 1 or 0 to A accordingly as
its performance is superior or inferior to that of B and conversely 0 or 1 to B. The Navy asks the statistician
how to conduct the test and how to analyze the results.
The standard statistical answer was to specify a number of firings (say 1,000) and a pair of percentages (e.g.,
53% and 47%) and tell the client that if A receives a 1 in more than 53% of the firings, it can be regarded
as superior; if it receives a 1 in fewer than 47%, B can be regarded as superior; if the percentage is between
47% and 53%, neither can be so regarded.
When Allen Wallis was discussing such a problem with (Navy) Captain Garret L. Schyler, the captain
objected that such a test, to quote from Allen's account, may prove wasteful. If a wise and seasoned ordnance
officer like Schyler were on the premises, he would see after the first few thousand or even few hundred
[rounds] that the experiment need not be completed either because the new method is obviously inferior or
because it is obviously superior beyond what was hoped for . . .
Friedman and Wallis struggled with the problem but, after realizing that they were not able to solve it,
described the problem to Abraham Wald
That started Wald on the path that led him to Sequential Analysis [Wal47]
Well formulate the problem using dynamic programming
The following presentation of the problem closely follows Dimitri Bertsekas's treatment in Dynamic Pro-
gramming and Stochastic Control [Ber75]
A decision maker observes iid draws of a random variable z
He (or she) wants to know which of two probability distributions f0 or f1 governs z
After a number of draws, also to be determined, he makes a decision as to which of the distributions is
generating the draws he observes
To help formalize the problem, let x ∈ {x0 , x1 } be a hidden state that indexes the two distributions:
{
f0 (v) if x = x0 ,
P{z = v | x} =
f1 (v) if x = x1
Before observing any outcomes, the decision maker believes that the probability that x = x0 is p_{−1} ∈ (0, 1)
After observing k + 1 draws, he updates this value to

    p_k = P{x = x0 | z_k, z_{k−1}, . . . , z_0},

which is calculated recursively by applying Bayes' law
The decision maker then believes that z_{k+1} has density

    f(v) = p_k f0(v) + (1 − p_k) f1(v)

This is a mixture of distributions f0 and f1, with the weight on f0 being the posterior probability that
x = x0 ¹
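The recursive Bayes update can be sketched as follows, with two made-up discrete densities standing in for f0 and f1:

```python
import numpy as np

# A sketch of the recursive Bayes update for p_k. The discrete densities f0
# and f1 below are made up for illustration
grid = np.arange(4)
f0 = np.array([0.4, 0.3, 0.2, 0.1])
f1 = np.array([0.1, 0.2, 0.3, 0.4])

def bayes_update(p, z):
    "Posterior probability of x = x0 after observing the draw z."
    return p * f0[z] / (p * f0[z] + (1 - p) * f1[z])

p = 0.5                            # prior P{x = x0}
rng = np.random.default_rng(42)
for _ in range(20):
    z = rng.choice(grid, p=f0)     # truth: the data are generated by f0
    p = bayes_update(p, z)
```

Since the data really come from f0, the belief p typically drifts toward 1.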
To help illustrate this kind of distribution, let's inspect some mixtures of beta distributions
The density of a beta probability distribution with parameters a and b is

    f(z; a, b) = Γ(a + b) z^{a−1} (1 − z)^{b−1} / (Γ(a) Γ(b))    where    Γ(t) := ∫_0^∞ x^{t−1} e^{−x} dx
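As a quick check of the formula, here is a direct implementation using the standard library's gamma function:

```python
from math import gamma

def beta_pdf(z, a, b):
    "Density of the Beta(a, b) distribution at z in (0, 1)."
    return gamma(a + b) / (gamma(a) * gamma(b)) * z**(a - 1) * (1 - z)**(b - 1)

print(beta_pdf(0.5, 2, 2))  # 1.5, the peak of the symmetric Beta(2, 2) density
```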
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as st
def make_distribution_plots(f0, f1):
    """
    Plot the two distributions f0, f1, together with some
    mixtures p * f0 + (1 - p) * f1.
    """
    fig, axes = plt.subplots(2, figsize=(10, 8))

    axes[0].set_title("Original Distributions")
    axes[0].plot(f0, lw=2, label="$f_0$")
    axes[0].plot(f1, lw=2, label="$f_1$")

    axes[1].set_title("Mixtures")
    for p in 0.25, 0.5, 0.75:
        y = p * f0 + (1 - p) * f1
        axes[1].plot(y, lw=2, label=f"$p_k$ = {p}")

    for ax in axes:
        ax.legend(fontsize=14)
        ax.set_xlabel("$k$ values", fontsize=14)
        ax.set_ylabel("probability of $z_k$", fontsize=14)
        ax.set_ylim(0, 0.07)

    plt.tight_layout()
    plt.show()
¹ Because the decision maker believes that z_{k+1} is drawn from a mixture of two i.i.d. distributions, he does not believe that
the sequence [z_{k+1}, z_{k+2}, . . .] is i.i.d. Instead, he believes that it is exchangeable. See [Kre88], chapter 11, for a discussion
of exchangeability.
p_m1 = np.linspace(0, 1, 50)
f0 = np.clip(st.beta.pdf(p_m1, a=1, b=1), 1e-8, np.inf)
f0 = f0 / np.sum(f0)
f1 = np.clip(st.beta.pdf(p_m1, a=9, b=9), 1e-8, np.inf)
f1 = f1 / np.sum(f1)

make_distribution_plots(f0, f1)
After observing zk , zk−1 , . . . , z0 , the decision maker chooses among three distinct actions:
• He decides that x = x0 and draws no more zs
• He decides that x = x1 and draws no more zs
• He postpones deciding now and instead chooses to draw a zk+1
Associated with these three actions, the decision maker can suffer three kinds of losses:
• A loss L0 if he decides x = x0 when actually x = x1
• A loss L1 if he decides x = x1 when actually x = x0
• A cost c if he postpones deciding and chooses instead to draw another z
If we regard x = x0 as a null hypothesis and x = x1 as an alternative hypothesis, then L1 and L0 are losses
associated with two types of statistical errors.
• a type I error is an incorrect rejection of a true null hypothesis (a false positive)
• a type II error is a failure to reject a false null hypothesis (a false negative)
So when we treat x = x0 as the null hypothesis
• We can think of L1 as the loss associated with a type I error
• We can think of L0 as the loss associated with a type II error
Intuition
Lets try to guess what an optimal decision rule might look like before we go further
Suppose at some given point in time that p is close to 1
Then our prior beliefs and the evidence so far point strongly to x = x0
If, on the other hand, p is close to 0, then x = x1 is strongly favored
Finally, if p is in the middle of the interval [0, 1], then we have little information in either direction
This reasoning suggests a decision rule such as the one shown in the figure
As well see, this is indeed the correct form of the decision rule
The key problem is to determine the threshold values α, β, which will depend on the parameters listed above
You might like to pause at this point and try to predict the impact of a parameter such as c or L0 on α or β
A Bellman equation
Let J(p) be the total loss for a decision maker with current belief p who chooses optimally
With some thought, you will agree that J should satisfy the Bellman equation
    J(p) = min { (1 − p)L0, pL1, c + E[J(p′)] }        (6.18)

where p′ is the random variable defined by Bayes' law

    p′ = p f0(z) / (p f0(z) + (1 − p) f1(z))

when p is fixed and z is drawn from the current best guess, which is the distribution f defined by

    f(v) = p f0(v) + (1 − p) f1(v)

As shorthand for the expected loss associated with drawing again, define

    A(p) := E[J(p′)]

where p ∈ [0, 1]
Here
• (1 − p)L0 is the expected loss associated with accepting x0 (i.e., the cost of making a type II error)
• pL1 is the expected loss associated with accepting x1 (i.e., the cost of making a type I error)
• c + A(p) is the expected cost associated with drawing one more z
The optimal decision rule is characterized by two numbers α, β ∈ (0, 1) with β < α, such that the decision
maker should
• accept x = x0 if p ≥ α
• accept x = x1 if p ≤ β
• draw another z if β ≤ p ≤ α
Our aim is to compute the value function J, and from it the associated cutoffs α and β
One sensible approach is to write the three components of J that appear on the right side of the Bellman
equation as separate functions
Later, doing this will help us obey the "don't repeat yourself" (DRY) golden rule of coding
6.4.4 Implementation
import scipy.interpolate as interp

def bellman_operator(pgrid, c, f0, f1, L0, L1, J):
    "Evaluate the Bellman operator at each p in pgrid."
    m = np.size(pgrid)
    J_out = np.zeros(m)
    J_interp = interp.UnivariateSpline(pgrid, J, k=1, ext=0)
    for p_ind, p in enumerate(pgrid):
        # Expected loss of continuing: draw z from p*f0 + (1-p)*f1,
        # update the belief by Bayes' law, and evaluate J there
        curr_dist = p * f0 + (1 - p) * f1
        tp1_dist = np.clip(p * f0 / curr_dist, 0, 1)
        EJ = curr_dist @ J_interp(tp1_dist)
        # Minimum of: accept x0, accept x1, continue drawing
        J_out[p_ind] = min((1 - p) * L0, p * L1, c + EJ)
    return J_out
# Build a grid
pg = np.linspace(0, 1, 251)
# Turn the Bellman operator into a function with one argument
bell_op = lambda vf: bellman_operator(pg, 0.5, f0, f1, 5.0, 5.0, vf)
# Pass it to qe's built-in iteration routine
J = qe.compute_fixed_point(bell_op,
                           np.zeros(pg.size),  # Initial guess
                           error_tol=1e-6,
                           verbose=True,
                           print_skip=5)
Iteration    Distance       Elapsed (seconds)
---------------------------------------------
5            8.042e-02      4.868e-02
10           6.418e-04      8.845e-02
15           4.482e-06      1.280e-01
The distance column shows the maximal distance between successive iterates
This converges to zero quickly, indicating a successful iterative procedure
Iteration terminates when the distance falls below some threshold
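The routine qe.compute_fixed_point used above follows the usual successive-approximation pattern, which can be sketched generically (the operator here is a made-up scalar contraction, not the Bellman operator):

```python
import numpy as np

def compute_fixed_point_sketch(T, v_init, error_tol=1e-6, max_iter=500):
    "Iterate v <- T(v) until successive iterates are within error_tol."
    v = v_init
    for _ in range(max_iter):
        v_new = T(v)
        if np.max(np.abs(v_new - v)) < error_tol:
            return v_new
        v = v_new
    return v

# Example: T(v) = 0.5 v + 1 is a contraction with fixed point 2
v_star = compute_fixed_point_sketch(lambda v: 0.5 * v + 1, np.array([0.0]))
```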
return np.clip(p_tp1, 0, 1)
Parameters
----------
p : Scalar(Float64)
The current believed probability that model 0 is the true
model.
J : Function
The current value function for a decision to continue
Returns
-------
EJ : Scalar(Float64)
The expected value of the value function tomorrow
"""
# Pull out information
f0, f1 = self.f0, self.f1
return EJ
J_out = np.empty(m)
J_interp = interp.UnivariateSpline(pgrid, J, k=1, ext=0)
return J_out
    def solve_model(self):
        J = qe.compute_fixed_point(self.bellman_operator, np.zeros(self.m),
                                   error_tol=1e-7, verbose=False)
        self.J = J
        return J
decision_made = True
decision = 0
return decision, p, t
if decision == 0:
correct = True
else:
correct = False
return correct, p, t
if decision == 1:
correct = True
else:
correct = False
return correct, p, t
        # Allocate space
        tdist = np.empty(ndraws, int)
        cdist = np.empty(ndraws, bool)

        for i in range(ndraws):
            correct, p, t = simfunc()
            tdist[i] = t
            cdist[i] = correct
Now let's use our class to solve Bellman equation (6.18) and verify that it gives similar output
# Set up distributions
p_m1 = np.linspace(0, 1, 50)
f0 = np.clip(st.beta.pdf(p_m1, a=1, b=1), 1e-8, np.inf)
f0 = f0 / np.sum(f0)
f1 = np.clip(st.beta.pdf(p_m1, a=9, b=9), 1e-8, np.inf)
f1 = f1 / np.sum(f1)
# Create an instance
wf = WaldFriedman(0.5, 5.0, 5.0, f0, f1, m=251)
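The clipping and normalization steps above convert a continuous density into a proper discrete distribution on the grid. The same pattern can be checked in a self-contained way, using a Beta(9, 9) density computed from the gamma function so that the snippet does not depend on scipy:

```python
from math import gamma

def beta_pdf(x, a, b):
    """Density of the Beta(a, b) distribution at x in [0, 1]."""
    const = gamma(a + b) / (gamma(a) * gamma(b))
    return const * x ** (a - 1) * (1 - x) ** (b - 1)

# Discretize on a grid of 50 points, clip away zeros, normalize to sum to 1
grid = [i / 49 for i in range(50)]
f1 = [max(beta_pdf(x, 9, 9), 1e-8) for x in grid]
total = sum(f1)
f1 = [v / total for v in f1]
```

The small clip floor (1e-8) keeps the endpoint values strictly positive, so later Bayesian updates never divide by zero.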
6.4.5 Analysis
Now that our routines are working, let's inspect the solutions
We'll start with the following parameterization
'''
c: Cost of another draw
L0: Cost of selecting x0 when x1 is true
L1: Cost of selecting x1 when x0 is true
a0, b0: Parameters for f0 (beta distribution)
a1, b1: Parameters for f1 (beta distribution)
m: Size of grid
'''
J = qe.compute_fixed_point(wf.bellman_operator, np.zeros(m),
                           error_tol=1e-7, verbose=False,
                           print_skip=10, max_iter=500)
lb, ub = wf.find_cutoff_rule(J)
# Get draws
ndraws = 500
cdist, tdist = wf.stopping_dist(ndraws=ndraws)
ax[0, 0].legend()
ax[0, 1].plot(wf.pgrid, J)
ax[0, 1].annotate(r"$\beta$", xy=(lb + 0.025, 0.5), size=14)
ax[0, 1].annotate(r"$\alpha$", xy=(ub + 0.025, 0.5), size=14)
ax[0, 1].vlines(lb, 0.0, wf.payoff_choose_f1(lb), linestyle="--")
ax[0, 1].vlines(ub, 0.0, wf.payoff_choose_f0(ub), linestyle="--")
ax[0, 1].set(ylim=(0, 0.5 * max(L0, L1)), ylabel="cost",
xlabel="$p_k$", title="Value function $J$")
plt.tight_layout()
plt.show()
analysis_plot()
Value Function
In the top left subfigure we have the two beta distributions, f0 and f1
In the top right we have the corresponding value function J
It equals pL1 for p ≤ β, and (1 − p)L0 for p ≥ α
The slopes of the two linear pieces of the value function are determined by L1 and −L0
The value function is smooth in the interior region, where the posterior probability assigned to f0 is in the
indecisive region p ∈ (β, α)
The decision maker continues to sample until the probability that he attaches to model f0 falls below β or
above α
Simulations
The bottom two subfigures show the outcomes of 500 simulations of the decision process
On the left is a histogram of the stopping times, which equal the number of draws of zk required to make a
decision
The average number of draws is around 6.6
On the right is the fraction of correct decisions at the stopping time
In this case the decision maker is correct 80% of the time
Comparative statics
analysis_plot(c=2.5)
A notebook implementation
To facilitate comparative statics, we provide a Jupyter notebook that generates the same plots, but with
sliders
With these sliders you can adjust parameters and immediately observe
• effects on the smoothness of the value function in the indecisive middle range as we increase the
number of grid points in the piecewise linear approximation.
• effects of different settings for the cost parameters L0 , L1 , c, the parameters of two beta distributions
f0 and f1 , and the number of points and linear functions m to use in the piece-wise continuous
approximation to the value function.
• various simulations from f0 and associated distributions of waiting times to making a decision
• associated histograms of correct and incorrect decisions
For several reasons, it is useful to describe the theory underlying the test that Navy Captain G. S. Schuyler
had been told to use and that led him to approach Milton Friedman and Allan Wallis to convey his conjecture
that superior practical procedures existed
Evidently, the Navy had told Captain Schuyler to use what it knew to be a state-of-the-art Neyman-Pearson
test
We'll rely on Abraham Wald's [Wal47] elegant summary of Neyman-Pearson theory
For our purposes, watch for these features of the setup:
• the assumption of a fixed sample size n
• the application of laws of large numbers, conditioned on alternative probability models, to interpret
the probabilities α and β defined in the Neyman-Pearson theory
Recall that, in the sequential analytic formulation above:
• The sample size n is not fixed but rather an object to be chosen; technically n is a random variable
• The parameters β and α characterize cut-off rules used to determine n as a random variable
• Laws of large numbers make no appearances in the sequential construction
In chapter 1 of Sequential Analysis [Wal47] Abraham Wald summarizes the Neyman-Pearson approach to
hypothesis testing
Wald frames the problem as making a decision about a probability distribution that is partially known
(You have to assume that something is already known in order to state a well posed problem. Usually,
something means a lot.)
By limiting what is unknown, Wald uses the following simple structure to illustrate the main ideas.
• a decision maker wants to decide which of two distributions f0 , f1 govern an i.i.d. random variable z
• The null hypothesis H0 is the statement that f0 governs the data.
• The alternative hypothesis H1 is the statement that f1 governs the data.
• The problem is to devise and analyze a test of hypothesis H0 against the alternative hypothesis H1
on the basis of a sample of a fixed number n of independent observations z1 , z2 , . . . , zn of the random
variable z.
To quote Abraham Wald,
• A test procedure leading to the acceptance or rejection of the hypothesis in question is simply a rule
specifying, for each possible sample of size n, whether the hypothesis should be accepted or rejected
on the basis of the sample. This may also be expressed as follows: A test procedure is simply a
subdivision of the totality of all possible samples of size n into two mutually exclusive parts, say part
1 and part 2, together with the application of the rule that the hypothesis be accepted if the observed
sample is contained in part 2. Part 1 is also called the critical region. Since part 2 is the totality of
all samples of size n which are not included in part 1, part 2 is uniquely determined by part 1. Thus,
choosing a test procedure is equivalent to determining a critical region.
Let's listen to Wald a little longer:
• As a basis for choosing among critical regions the following considerations have been advanced by
Neyman and Pearson: In accepting or rejecting H0 we may commit errors of two kinds. We commit
an error of the first kind if we reject H0 when it is true; we commit an error of the second kind if we
accept H0 when H1 is true. After a particular critical region W has been chosen, the probability of
committing an error of the first kind, as well as the probability of committing an error of the second
kind is uniquely determined. The probability of committing an error of the first kind is equal to the
probability, determined by the assumption that H0 is true, that the observed sample will be included
in the critical region W . The probability of committing an error of the second kind is equal to the
probability, determined on the assumption that H1 is true, that the probability will fall outside the
critical region W . For any given critical region W we shall denote the probability of an error of the
first kind by α and the probability of an error of the second kind by β.
Let's listen carefully to how Wald applies a law of large numbers to interpret α and β:
• The probabilities α and β have the following important practical interpretation: Suppose that we
draw a large number of samples of size n. Let M be the number of such samples drawn. Suppose
that for each of these M samples we reject H0 if the sample is included in W and accept H0 if the
sample lies outside W . In this way we make M statements of rejection or acceptance. Some of these
statements will in general be wrong. If H0 is true and if M is large, the probability is nearly 1 (i.e.,
it is practically certain) that the proportion of wrong statements (i.e., the number of wrong statements
divided by M ) will be approximately α. If H1 is true, the probability is nearly 1 that the proportion of
wrong statements will be approximately β. Thus, we can say that in the long run [ here Wald applies
a law of large numbers by driving M → ∞ (our comment, not Wald's) ] the proportion of wrong
statements will be α if H0 is true and β if H1 is true.
The quantity α is called the size of the critical region, and the quantity 1 − β is called the power of the
critical region.
Wald notes that
• one critical region W is more desirable than another if it has smaller values of α and β. Although
either α or β can be made arbitrarily small by a proper choice of the critical region W , it is impossible
to make both α and β arbitrarily small for a fixed value of n, i.e., a fixed sample size.
Wald summarizes Neyman and Pearson's setup as follows:
• Neyman and Pearson show that a region consisting of all samples (z1 , z2 , . . . , zn ) which satisfy the
inequality
f1 (z1 ) · · · f1 (zn ) / [ f0 (z1 ) · · · f0 (zn ) ] ≥ k
is a most powerful critical region for testing the hypothesis H0 against the alternative hypothesis H1 .
The term k on the right side is a constant chosen so that the region will have the required size α.
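As a concrete illustration of this critical region, the sketch below computes the likelihood ratio for a sample and compares it with a threshold k. The densities f0 (uniform on [0, 1]) and f1 (linearly increasing) and the value of k are hypothetical choices for illustration only:

```python
def likelihood_ratio(sample, f0, f1):
    """Product of f1(z) over product of f0(z) for the observed sample."""
    ratio = 1.0
    for z in sample:
        ratio *= f1(z) / f0(z)
    return ratio

f0 = lambda z: 1.0        # uniform density on [0, 1]
f1 = lambda z: 2.0 * z    # density increasing on [0, 1]

# The sample falls in the critical region (reject H0) when the ratio >= k
k = 1.5
sample = [0.9, 0.95]
reject_H0 = likelihood_ratio(sample, f0, f1) >= k
```

High draws are much more likely under f1 than under f0, so this sample pushes the ratio above k.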
Wald goes on to discuss Neyman and Pearson's concept of uniformly most powerful test.
Here is how Wald introduces the notion of a sequential test
• A rule is given for making one of the following three decisions at any stage of the experiment (at the
m th trial for each integral value of m ): (1) to accept the hypothesis H , (2) to reject the hypothesis H
, (3) to continue the experiment by making an additional observation. Thus, such a test procedure is
carried out sequentially. On the basis of the first observation one of the aforementioned decisions is
made. If the first or second decision is made, the process is terminated. If the third decision is made,
a second trial is performed. Again, on the basis of the first two observations one of the three decisions
is made. If the third decision is made, a third trial is performed, and so on. The process is continued
until either the first or the second decision is made. The number n of observations required by such a
test procedure is a random variable, since the value of n depends on the outcome of the observations.
Contents
6.5.1 Overview
In this lecture we consider an extension of the previously studied job search model of McCall [McC70]
In the McCall model, an unemployed worker decides when to accept a permanent position at a specified
wage, given
• his or her discount rate
• the level of unemployment compensation
• the distribution from which wage offers are drawn
In the version considered below, the wage distribution is unknown and must be learned
• The following is based on the presentation in [LS18], section 6.6
Model features
• Infinite horizon dynamic programming with two states and one binary control
• Bayesian updating to learn the unknown distribution
6.5.2 Model
Let's first review the basic McCall model [McC70] and then add the variation we want to consider
Recall that, in the baseline model, an unemployed worker is presented in each period with a permanent job
offer at wage Wt
At time t, our worker either
1. accepts the offer and works permanently at constant wage Wt
2. rejects the offer, receives unemployment compensation c and reconsiders next period
The wage sequence {Wt } is iid and generated from known density h
The worker aims to maximize the expected discounted sum of earnings E ∑_{t=0}^∞ β^t y_t

The function V satisfies the recursion

    V(w) = max { w/(1 − β) ,  c + β ∫ V(w′) h(w′) dw′ }                    (6.19)

The optimal policy has the form 1{w ≥ w̄}, where w̄ is a constant called the reservation wage
Now lets extend the model by considering the variation presented in [LS18], section 6.6
The model is as above, apart from the fact that
• the density h is unknown
• the worker learns about h by starting with a prior and updating based on wage offers that he/she
observes
The worker knows there are two possible distributions F and G with densities f and g
At the start of time, nature selects h to be either f or g, the wage distribution from which the entire sequence
{Wt } will be drawn
This choice is not observed by the worker, who puts prior probability π0 on f being chosen
Update rule: the worker's time t estimate of the distribution is πt f + (1 − πt )g, where πt updates via
    πt+1 = πt f(wt+1) / [ πt f(wt+1) + (1 − πt) g(wt+1) ]                  (6.20)
This last expression follows from Bayes rule, which tells us that
    P{h = f | W = w} = P{W = w | h = f} P{h = f} / P{W = w},
    where P{W = w} = ∑_{ψ∈{f,g}} P{W = w | h = ψ} P{h = ψ}
The fact that (6.20) is recursive allows us to progress to a recursive solution method
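The update (6.20) is a one-liner in code. The following self-contained sketch uses two illustrative densities on [0, 1] (stand-ins, not the Beta specifications used elsewhere in this lecture):

```python
def update_belief(π, w, f, g):
    """Bayesian update of the probability π that h = f, after observing w."""
    return π * f(w) / (π * f(w) + (1 - π) * g(w))

f = lambda w: 2 * w        # density putting more mass on high wages
g = lambda w: 2 * (1 - w)  # density putting more mass on low wages

# A high wage draw shifts belief toward f
π_next = update_belief(0.5, 0.75, f, g)
```

Symmetrically, a low wage draw shifts the belief toward g.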
Letting
    hπ(w) := πf(w) + (1 − π)g(w)   and   q(w, π) := πf(w) / [ πf(w) + (1 − π)g(w) ]
we can express the value function for the unemployed worker recursively as follows
    V(w, π) = max { w/(1 − β) ,  c + β ∫ V(w′, π′) hπ(w′) dw′ }  where π′ = q(w′, π)    (6.21)
Notice that the current guess π is a state variable, since it affects the worker's perception of probabilities for
future rewards
Parameterization
Looking Forward
What kind of optimal policy might result from (6.21) and the parameterization specified above?
Intuitively, if we accept at wa and wa ≤ wb , then, all other things being equal, we should also accept at wb
This suggests a policy of accepting whenever w exceeds some threshold value w̄
But w̄ should depend on π; in fact, it should be decreasing in π, because
• f is a less attractive offer distribution than g
• larger π means more weight on f and less on g
Thus larger π depresses the worker's assessment of her future prospects, and relatively low current offers
become more attractive
Summary: We conjecture that the optimal policy is of the form 1{w ≥ w̄(π)} for some decreasing function
w̄
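The conjectured policy can be written directly in code. In the sketch below, w̄ is a hypothetical decreasing function w̄(π) = 2 − π, chosen purely to illustrate the shape of such a policy; the true w̄ must be computed from the model:

```python
w_bar = lambda π: 2 - π   # hypothetical decreasing reservation wage function

def accept_offer(w, π):
    """Conjectured optimal policy: accept iff w >= w_bar(π)."""
    return w >= w_bar(π)

# The same wage offer is accepted when beliefs put more weight on f
high_π_accepts = accept_offer(1.5, 0.9)   # w_bar(0.9) = 1.1
low_π_accepts = accept_offer(1.5, 0.1)    # w_bar(0.1) = 1.9
```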
Let's set about solving the model and see how our results match with our intuition
We begin by solving via value function iteration (VFI), which is natural but ultimately turns out to be second
best
The code is as follows
class SearchProblem:
"""
A class to store a given parameterization of the "offer distribution
unknown" model.
Parameters
----------
β : scalar(float), optional(default=0.95)
The discount parameter
c : scalar(float), optional(default=0.6)
The unemployment compensation
F_a : scalar(float), optional(default=1)
First parameter of β distribution on F
F_b : scalar(float), optional(default=1)
Second parameter of β distribution on F
G_a : scalar(float), optional(default=3)
First parameter of β distribution on G
G_b : scalar(float), optional(default=1.2)
Second parameter of β distribution on G
w_max : scalar(float), optional(default=2)
Maximum wage possible
w_grid_size : scalar(int), optional(default=40)
Size of the grid on wages
π_grid_size : scalar(int), optional(default=40)
Size of the grid on probabilities
Attributes
----------
β, c, w_max : see Parameters
w_grid : np.ndarray
Grid points over wages, ndim=1
π_grid : np.ndarray
Grid points over π, ndim=1
grid_points : np.ndarray
Combined grid points, ndim=2
F : scipy.stats._distn_infrastructure.rv_frozen
Beta distribution with params (F_a, F_b), scaled by w_max
G : scipy.stats._distn_infrastructure.rv_frozen
Beta distribution with params (G_a, G_b), scaled by w_max
f : function
Density of F
g : function
Density of G
π_min : scalar(float)
Minimum of grid over π
π_max : scalar(float)
Maximum of grid over π
"""
Returns
-------
new_π : scalar(float)
The updated probability
"""
return new_π
Parameters
----------
v : array_like(float, ndim=1, length=len(π_grid))
An approximate value function represented as a
one-dimensional array.
Returns
-------
new_v : array_like(float, ndim=1, length=len(π_grid))
The updated value function
"""
        # == Simplify names == #
        f, g, β, c, q = self.f, self.g, self.β, self.c, self.q

        vf = LinearNDInterpolator(self.grid_points, v)
        N = len(v)
        new_v = np.empty(N)
        for i in range(N):
            w, π = self.grid_points[i, :]
            v1 = w / (1 - β)
            integrand = lambda m: vf(m, q(m, π)) * (π * f(m)
                                                    + (1 - π) * g(m))
            integral, error = fixed_quad(integrand, 0, self.w_max)
            v2 = c + β * integral
            new_v[i] = max(v1, v2)
        return new_v
Parameters
----------
v : array_like(float, ndim=1, length=len(π_grid))
An approximate value function represented as a
one-dimensional array.
Returns
-------
policy : array_like(float, ndim=1, length=len(π_grid))
The decision to accept or reject an offer where 1 indicates
accept and 0 indicates reject
"""
        # == Simplify names == #
        f, g, β, c, q = self.f, self.g, self.β, self.c, self.q

        vf = LinearNDInterpolator(self.grid_points, v)
        N = len(v)
        policy = np.zeros(N, dtype=int)
        for i in range(N):
            w, π = self.grid_points[i, :]
            v1 = w / (1 - β)
            integrand = lambda m: vf(m, q(m, π)) * (π * f(m)
                                                    + (1 - π) * g(m))
            integral, error = fixed_quad(integrand, 0, self.w_max)
            v2 = c + β * integral
            policy[i] = v1 > v2  # Evaluates to 1 or 0
        return policy
    def res_wage_operator(self, ϕ):
        """
        Parameters
        ----------
        ϕ : array_like(float, ndim=1, length=len(π_grid))
            This is the current reservation wage guess

        Returns
        -------
        new_ϕ : array_like(float, ndim=1, length=len(π_grid))
            The updated reservation wage guess.
        """
        # == Simplify names == #
        β, c, f, g, q = self.β, self.c, self.f, self.g, self.q
        # == Turn ϕ into a function == #
        ϕ_f = lambda p: np.interp(p, self.π_grid, ϕ)

        new_ϕ = np.empty(len(ϕ))
        for i, π in enumerate(self.π_grid):
            def integrand(x):
                "Integral expression on right-hand side of operator"
                return np.maximum(x, ϕ_f(q(x, π))) * (π * f(x) + (1 - π) * g(x))
            integral, error = fixed_quad(integrand, 0, self.w_max)
            new_ϕ[i] = (1 - β) * c + β * integral
        return new_ϕ
The class SearchProblem is used to store parameters and methods needed to compute optimal actions
The Bellman operator is implemented as the method .bellman_operator(), while .get_greedy()
computes an approximate optimal policy from a guess v of the value function
We will omit a detailed discussion of the code because there is a more efficient solution method
These ideas are implemented in the .res_wage_operator() method
Before explaining it, let's look at solutions computed from value function iteration
Here's the value function:
sp = SearchProblem(w_grid_size=100, π_grid_size=100)
v_init = np.zeros(len(sp.grid_points)) + sp.c / (1 - sp.β)
v = compute_fixed_point(sp.bellman_operator, v_init)
policy = sp.get_greedy(v)
Z = np.empty((w_plot_grid_size, π_plot_grid_size))
for i in range(w_plot_grid_size):
    for j in range(π_plot_grid_size):
        Z[i, j] = vf(w_plot_grid[i], π_plot_grid[j])
fig, ax = plt.subplots(figsize=(6, 6))
ax.contourf(π_plot_grid, w_plot_grid, Z, 12, alpha=0.6, cmap=cm.jet)
cs = ax.contour(π_plot_grid, w_plot_grid, Z, 12, colors="black")
ax.clabel(cs, inline=1, fontsize=10)
ax.set_xlabel('$\pi$', fontsize=14)
ax.set_ylabel('$w$', fontsize=14, rotation=0, labelpad=15)
plt.show()
Z = np.empty((w_plot_grid_size, π_plot_grid_size))
for i in range(w_plot_grid_size):
    for j in range(π_plot_grid_size):
        Z[i, j] = pf(w_plot_grid[i], π_plot_grid[j])
plt.show()
This section illustrates the point that when it comes to programming, a bit of mathematical analysis goes a
long way
To begin, note that when w = w̄(π), the worker is indifferent between accepting and rejecting
Hence the two choices on the right-hand side of (6.21) have equal value:
    w̄(π)/(1 − β) = c + β ∫ V(w′, π′) hπ(w′) dw′                           (6.22)
Together, (6.21) and (6.22) give
    V(w, π) = max { w/(1 − β) ,  w̄(π)/(1 − β) }                           (6.23)
Combining (6.22) and (6.23), we obtain
    w̄(π)/(1 − β) = c + β ∫ max { w′/(1 − β) ,  w̄(π′)/(1 − β) } hπ(w′) dw′
Multiplying by 1 − β, substituting in π ′ = q(w′ , π) and using ◦ for composition of functions yields
    w̄(π) = (1 − β)c + β ∫ max { w′ , w̄ ◦ q(w′, π) } hπ(w′) dw′           (6.24)
Equation (6.24) can be understood as a functional equation, where w̄ is the unknown function
• Let's call it the reservation wage functional equation (RWFE)
• The solution w̄ to the RWFE is the object that we wish to compute
To solve the RWFE, we will first show that its solution is the fixed point of a contraction mapping
To this end, let
• b[0, 1] be the bounded real-valued functions on [0, 1]
• ∥ψ∥ := supx∈[0,1] |ψ(x)|
Consider the operator Q mapping ψ ∈ b[0, 1] into Qψ ∈ b[0, 1] via
    (Qψ)(π) = (1 − β)c + β ∫ max { w′ , ψ ◦ q(w′, π) } hπ(w′) dw′         (6.25)
Comparing (6.24) and (6.25), we see that the set of fixed points of Q exactly coincides with the set of
solutions to the RWFE
To see this, observe that for any ψ, ϕ ∈ b[0, 1] and any π ∈ [0, 1],

    |(Qψ)(π) − (Qϕ)(π)| ≤ β ∫ | max { w′ , ψ ◦ q(w′, π) } − max { w′ , ϕ ◦ q(w′, π) } | hπ(w′) dw′    (6.26)

Working case by case, it is easy to check that for real numbers a, b, c we always have

    | max {a, b} − max {a, c} | ≤ |b − c|                                  (6.27)

Combining (6.26) and (6.27) yields

    |(Qψ)(π) − (Qϕ)(π)| ≤ β ∫ | ψ ◦ q(w′, π) − ϕ ◦ q(w′, π) | hπ(w′) dw′ ≤ β ∥ψ − ϕ∥    (6.28)
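The elementary bound |max{a, b} − max{a, c}| ≤ |b − c| used in this step can be spot-checked numerically:

```python
import itertools

def check_max_inequality(vals):
    """Verify |max{a, b} - max{a, c}| <= |b - c| on all triples from vals."""
    return all(abs(max(a, b) - max(a, c)) <= abs(b - c)
               for a, b, c in itertools.product(vals, repeat=3))

holds = check_max_inequality([-2.0, -0.5, 0.0, 0.3, 1.0, 2.5])
```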
In other words, Q is a contraction of modulus β on the complete metric space (b[0, 1], ∥ · ∥)
Hence
• A unique solution w̄ to the RWFE exists in b[0, 1]
• Qk ψ → w̄ uniformly as k → ∞, for any ψ ∈ b[0, 1]
Implementation
These ideas are implemented in the .res_wage_operator() method from odu.py as shown above
The method corresponds to action of the operator Q
The following exercise asks you to exploit these facts to compute an approximation to w̄
6.5.5 Exercises
Exercise 1
Use the default parameters and the .res_wage_operator() method to compute an optimal policy
Your result should coincide closely with the figure for the optimal policy shown above
Try experimenting with different parameters, and confirm that the change in the optimal policy coincides
with your intuition
6.5.6 Solutions
Exercise 1
This code solves the Offer Distribution Unknown model by iterating on a guess of the reservation wage
function
You should find that the run time is much shorter than that of the value function approach in odu_vfi.py
sp = SearchProblem(π_grid_size=50)
ϕ_init = np.ones(len(sp.π_grid))
w_bar = compute_fixed_point(sp.res_wage_operator, ϕ_init)
6.5.7 Appendix
The next piece of code is just a fun simulation to see the effect of a change in the underlying distribution
on the unemployment rate
At a point in the simulation, the distribution becomes significantly worse
It takes a while for agents to learn this, and in the meantime they are too optimistic, and turn down too many
jobs
As a result, the unemployment rate spikes
The code takes a few minutes to run
# Set up model and compute the function w_bar
sp = SearchProblem(π_grid_size=50, F_a=1, F_b=1)
π_grid, f, g, F, G = sp.π_grid, sp.f, sp.g, sp.F, sp.G
ϕ_init = np.ones(len(sp.π_grid))
w_bar_vals = compute_fixed_point(sp.res_wage_operator, ϕ_init)
w_bar = lambda x: np.interp(x, π_grid, w_bar_vals)
class Agent:
"""
Holds the employment state and beliefs of an individual agent.
"""
num_agents = 5000
separation_rate = 0.025 # Fraction of jobs that end in each period
separation_num = int(num_agents * separation_rate)
agent_indices = list(range(num_agents))
agents = [Agent() for i in range(num_agents)]
sim_length = 600
H = G # Start with distribution G
change_date = 200 # Change to F after this many periods
unempl_rate = []
for i in range(sim_length):
    if i % 20 == 0:
        print(f"date = {i}")
    if i == change_date:
        H = F
    # Randomly select separation_num agents and set employment status to 0
    np.random.shuffle(agent_indices)
    separation_list = agent_indices[:separation_num]
    for agent_index in separation_list:
        agents[agent_index].employed = 0
    # Update agents
    for agent in agents:
        agent.update(H)
    employed = [agent.employed for agent in agents]
    unempl_rate.append(1 - np.mean(employed))
10 5.174e-03 2.204e-01
15 9.652e-04 3.433e-01
Converged in 15 steps
date = 0
date = 20
date = 40
date = 60
date = 80
date = 100
date = 120
date = 140
date = 160
date = 180
date = 200
date = 220
date = 240
date = 260
date = 280
date = 300
date = 320
date = 340
date = 360
date = 380
date = 400
date = 420
date = 440
date = 460
date = 480
date = 500
date = 520
date = 540
date = 560
date = 580
Contents
6.6.1 Overview
Model features
• Career and job within career both chosen to maximize expected discounted wage flow
• Infinite horizon dynamic programming with two state variables
6.6.2 Model
    E ∑_{t=0}^∞ β^t wt                                                     (6.30)
where
    I   = θ + ϵ + βV(θ, ϵ)

    II  = θ + ∫ ϵ′ G(dϵ′) + β ∫ V(θ, ϵ′) G(dϵ′)

    III = ∫ θ′ F(dθ′) + ∫ ϵ′ G(dϵ′) + β ∫ ∫ V(θ′, ϵ′) G(dϵ′) F(dθ′)
Evidently I, II and III correspond to stay put, new job and new life, respectively
Parameterization
As in [LS18], section 6.5, we will focus on a discrete version of the model, parameterized as follows:
• both θ and ϵ take values in the set np.linspace(0, B, N) an even grid of N points between 0
and B inclusive
• N = 50
• B=5
• β = 0.95
The distributions F and G are discrete distributions generating draws from the grid points np.
linspace(0, B, N)
A very useful family of discrete distributions is the Beta-binomial family, with probability mass function
    p(k | n, a, b) = C(n, k) B(k + a, n − k + b) / B(a, b),   k = 0, . . . , n

where C(n, k) denotes the binomial coefficient and B is the beta function
Interpretation:
• draw q from a β distribution with shape parameters (a, b)
• run n independent binary trials, each with success probability q
• p(k | n, a, b) is the probability of k successes in these n trials
Nice properties:
• very flexible class of distributions, including uniform, symmetric unimodal, etc.
• only three parameters
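The figure code below calls a function gen_probs(n, a, b) whose definition does not appear in this section. Here is a minimal standard-library sketch consistent with the pmf above; the name matches the call site, but the body is a reconstruction, not necessarily the lecture's implementation:

```python
from math import comb, gamma

def beta_function(a, b):
    """Euler beta function B(a, b) = Γ(a)Γ(b) / Γ(a + b)."""
    return gamma(a) * gamma(b) / gamma(a + b)

def gen_probs(n, a, b):
    """Beta-binomial pmf p(k | n, a, b) for k = 0, ..., n."""
    return [comb(n, k) * beta_function(k + a, n - k + b) / beta_function(a, b)
            for k in range(n + 1)]

probs = gen_probs(50, 1, 1)  # a = b = 1 gives the uniform distribution
```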
Here's a figure showing the effect of different shape parameters when n = 50
n = 50
a_vals = [0.5, 1, 100]
b_vals = [0.5, 1, 100]
fig, ax = plt.subplots()
for a, b in zip(a_vals, b_vals):
    ab_label = f'$a = {a:.1f}$, $b = {b:.1f}$'
    ax.plot(list(range(0, n+1)), gen_probs(n, a, b), '-o', label=ab_label)
ax.legend()
plt.show()
The code for solving the DP problem described above is found in this file, which is repeated here for
convenience
class CareerWorkerProblem:
"""
An instance of the class is an object with data on a particular parameterization of the model
Parameters
----------
β : scalar(float), optional(default=0.95)
    Discount factor
B : scalar(float), optional(default=5.0)
    Upper bound for both ϵ and θ
N : scalar(int), optional(default=50)
    Number of possible realizations for both ϵ and θ
F_a : scalar(int or float), optional(default=1)
Parameter `a` from the career distribution
F_b : scalar(int or float), optional(default=1)
Parameter `b` from the career distribution
G_a : scalar(int or float), optional(default=1)
Parameter `a` from the job distribution
G_b : scalar(int or float), optional(default=1)
Parameter `b` from the job distribution
Attributes
----------
β, B, N : see Parameters
θ : array_like(float, ndim=1)
    A grid of values from 0 to B
ϵ : array_like(float, ndim=1)
    A grid of values from 0 to B
F_probs : array_like(float, ndim=1)
The probabilities of different values for F
G_probs : array_like(float, ndim=1)
The probabilities of different values for G
F_mean : scalar(float)
The mean of the distribution for F
G_mean : scalar(float)
The mean of the distribution for G
"""
Parameters
----------
v : array_like(float)
A 2D NumPy array representing the value function
Interpretation: :math:`v[i, j] = v(\theta_i, \epsilon_j)`
Returns
-------
new_v : array_like(float)
The updated value function Tv as an array of shape v.shape
"""
        new_v = np.empty(v.shape)
        for i in range(self.N):
            for j in range(self.N):
                # stay put
                v1 = self.θ[i] + self.ϵ[j] + self.β * v[i, j]
                # new job
                v2 = self.θ[i] + self.G_mean + self.β * v[i, :] @ self.G_probs
                # new life
                v3 = (self.G_mean + self.F_mean
                      + self.β * self.F_probs @ v @ self.G_probs)
                new_v[i, j] = max(v1, v2, v3)
        return new_v
Parameters
----------
v : array_like(float)
A 2D NumPy array representing the value function
Interpretation: :math:`v[i, j] = v(\theta_i, \epsilon_j)`
Returns
-------
policy : array_like(float)
A 2D NumPy array, where policy[i, j] is the optimal action
at :math:`(\theta_i, \epsilon_j)`.
"""
        policy = np.empty(v.shape, dtype=int)
        for i in range(self.N):
            for j in range(self.N):
                v1 = self.θ[i] + self.ϵ[j] + self.β * v[i, j]
                v2 = self.θ[i] + self.G_mean + self.β * v[i, :] @ self.G_probs
                v3 = (self.G_mean + self.F_mean
                      + self.β * self.F_probs @ v @ self.G_probs)
                if v1 > max(v2, v3):
                    action = 1
                elif v2 > max(v1, v3):
                    action = 2
                else:
                    action = 3
                policy[i, j] = action
        return policy
The optimal policy can be represented as follows (see Exercise 3 for code)
Interpretation:
• If both job and career are poor or mediocre, the worker will experiment with new job and new career
• If career is sufficiently good, the worker will hold it and experiment with new jobs until a sufficiently
good one is found
• If both job and career are good, the worker will stay put
Notice that the worker will always hold on to a sufficiently good career, but not necessarily hold on to even
the best paying job
The reason is that high lifetime wages require both variables to be large, and the worker cannot change
careers without changing jobs
• Sometimes a good job must be sacrificed in order to change to a better career
6.6.4 Exercises
Exercise 1
Using the default parameterization in the class CareerWorkerProblem, generate and plot typical sample
paths for θ and ϵ when the worker follows the optimal policy
In particular, modulo randomness, reproduce the following figure (where the horizontal axis represents time)
Hint: To generate the draws from the distributions F and G, use the class DiscreteRV
Exercise 2
Let's now consider how long it takes for the worker to settle down to a permanent job, given a starting point
of (θ, ϵ) = (0, 0)
In other words, we want to study the distribution of the random variable
T ∗ := the first point in time from which the worker’s job no longer changes
Evidently, the worker's job becomes permanent if and only if (θt , ϵt ) enters the stay put region of (θ, ϵ) space
Letting S denote this region, T ∗ can be expressed as the first passage time to S under the optimal policy:
T ∗ := inf{t ≥ 0 | (θt , ϵt ) ∈ S}
Collect 25,000 draws of this random variable and compute the median (which should be about 7)
Repeat the exercise with β = 0.99 and interpret the change
Exercise 3
As best you can, reproduce the figure showing the optimal policy
Hint: The get_greedy() method returns a representation of the optimal policy where values 1, 2 and 3
correspond to stay put, new job and new life respectively. Use this and contourf from matplotlib.
pyplot to produce the different shadings.
Now set G_a = G_b = 100 and generate a new figure with these parameters. Interpret.
6.6.5 Solutions
Exercise 1
wp = CareerWorkerProblem()
v_init = np.ones((wp.N, wp.N)) * 100
v = compute_fixed_point(wp.bellman_operator, v_init, verbose=False,
                        max_iter=200)
optimal_policy = wp.get_greedy(v)
F = np.cumsum(wp.F_probs)
G = np.cumsum(wp.G_probs)
def gen_path(T=20):
    i = j = 0
    θ_index = []
    ϵ_index = []
    for t in range(T):
        if optimal_policy[i, j] == 1:    # Stay put
            pass
        elif optimal_policy[i, j] == 2:  # New job
            j = int(qe.random.draw(G))
        else:                            # New life
            i, j = int(qe.random.draw(F)), int(qe.random.draw(G))
        θ_index.append(i)
        ϵ_index.append(j)
    return wp.θ[θ_index], wp.ϵ[ϵ_index]
plt.show()
Exercise 2
wp = CareerWorkerProblem()
v_init = np.ones((wp.N, wp.N)) * 100
v = compute_fixed_point(wp.bellman_operator, v_init, max_iter=200,
                        print_skip=25)
optimal_policy = wp.get_greedy(v)
F = np.cumsum(wp.F_probs)
G = np.cumsum(wp.G_probs)
def gen_first_passage_time():
    t = 0
    i = j = 0
    while True:
        if optimal_policy[i, j] == 1:    # Stay put
            return t
        elif optimal_policy[i, j] == 2:  # New job
            j = int(qe.random.draw(G))
        else:                            # New life
            i, j = int(qe.random.draw(F)), int(qe.random.draw(G))
        t += 1
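Collecting draws and summarizing them takes only a couple of lines. The sketch below is self-contained: it substitutes a hypothetical toy stopping rule (stop each period with probability one half, purely illustrative) for gen_first_passage_time; with the model objects in scope, you would call gen_first_passage_time instead:

```python
import random
import statistics

def toy_first_passage_time(p_stop=0.5):
    """Hypothetical stand-in stopping rule: stop w.p. p_stop each period."""
    t = 0
    while random.random() > p_stop:
        t += 1
    return t

random.seed(42)
samples = [toy_first_passage_time() for _ in range(25000)]
median_T = statistics.median(samples)
```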
To compute the median with β = 0.99 instead of the default value β = 0.95, replace wp
= CareerWorkerProblem() with wp = CareerWorkerProblem(β=0.99) and increase the
max_iter=200 in v = compute_fixed_point(...) to max_iter=1000
The medians are subject to randomness, but should be about 7 and 14 respectively
Not surprisingly, more patient workers will wait longer to settle down to their final job
Exercise 3
wp = CareerWorkerProblem()
v_init = np.ones((wp.N, wp.N)) * 100
v = compute_fixed_point(wp.bellman_operator, v_init, max_iter=200,
                        print_skip=25)
optimal_policy = wp.get_greedy(v)
fig, ax = plt.subplots(figsize=(6,6))
tg, eg = np.meshgrid(wp.θ, wp.ϵ)
lvls = (0.5, 1.5, 2.5, 3.5)
ax.contourf(tg, eg, optimal_policy.T, levels=lvls, cmap=cm.winter, alpha=0.5)
ax.contour(tg, eg, optimal_policy.T, colors='k', levels=lvls, linewidths=2)
ax.set_xlabel('θ', fontsize=14)
ax.set_ylabel('ϵ', fontsize=14)
ax.text(1.8, 2.5, 'new life', fontsize=14)
ax.text(4.5, 2.5, 'new job', fontsize=14, rotation='vertical')
ax.text(4.0, 4.5, 'stay put', fontsize=14)
plt.show()
Now we want to set G_a = G_b = 100 and generate a new figure with these parameters
To do this replace: wp = CareerWorkerProblem() with wp =
CareerWorkerProblem(G_a=100, G_b=100)
In the new figure, you will see that the region for which the worker will stay put has grown because the
distribution for ϵ has become more concentrated around the mean, making high-paying jobs less realistic
Contents
– Implementation
– Solving for Policies
– Exercises
– Solutions
6.7.1 Overview
Model features
6.7.2 Model
Let
• xt denote the time-t job-specific human capital of a worker employed at a given firm
• wt denote current wages
Let wt = xt (1 − st − ϕt ), where
• ϕt is investment in job-specific human capital for the current role
• st is search effort, devoted to obtaining new offers from other firms.
For as long as the worker remains in the current job, evolution of {xt } is given by xt+1 = G(xt , ϕt )
When search effort at t is st , the worker receives a new job offer with probability π(st ) ∈ [0, 1]
Value of offer is Ut+1 , where {Ut } is iid with common distribution F
Worker has the right to reject the current offer and continue with existing job
In particular, xt+1 = Ut+1 if accepts and xt+1 = G(xt , ϕt ) if rejects
Letting bt+1 ∈ {0, 1} be binary with bt+1 = 1 indicating an offer, we can write

    xt+1 = (1 − bt+1)G(xt, ϕt) + bt+1 max{G(xt, ϕt), Ut+1}    (6.31)

Agent's objective: maximize expected discounted sum of wages via controls {st} and {ϕt}
Taking the expectation of V (xt+1 ) and using (6.31), the Bellman equation for this problem can be written
as
    V(x) = max_{s+ϕ≤1} { x(1 − s − ϕ) + β(1 − π(s))V[G(x, ϕ)] + βπ(s) ∫ V[G(x, ϕ) ∨ u]F(du) }    (6.32)
Parameterization
Back-of-the-Envelope Calculations
Before we solve the model, let's make some quick calculations that provide intuition on what the solution should look like
To begin, observe that the worker has two instruments to build capital and hence wages:
1. invest in capital specific to the current job via ϕ
2. search for a new job with better job-specific capital match via s
Since wages are x(1 − s − ϕ), marginal cost of investment via either ϕ or s is identical
Our risk neutral worker should focus on whatever instrument has the highest expected return
The relative expected return will depend on x
For example, suppose first that x = 0.05
• If s = 1 and ϕ = 0, then since G(x, ϕ) = 0, taking expectations of (6.31) gives expected next period
capital equal to π(s)EU = EU = 0.5
• If s = 0 and ϕ = 1, then next period capital is G(x, ϕ) = G(0.05, 1) ≈ 0.23
Both rates of return are good, but the return from search is better
Next suppose that x = 0.4
• If s = 1 and ϕ = 0, then expected next period capital is again 0.5
• If s = 0 and ϕ = 1, then G(x, ϕ) = G(0.4, 1) ≈ 0.8
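These back-of-the-envelope numbers are easy to reproduce, assuming (as in the implementation below) that G(x, ϕ) = A(xϕ)^α with the default parameters A = 1.4 and α = 0.6:

```python
# Returns to investing all effort in ϕ, under G(x, ϕ) = A (x ϕ)**α
A, α = 1.4, 0.6   # default JvWorker parameters

G = lambda x, ϕ: A * (x * ϕ)**α

low = G(0.05, 1)   # ≈ 0.23: next period capital at x = 0.05 with s = 0, ϕ = 1
high = G(0.4, 1)   # ≈ 0.81: next period capital at x = 0.4 with s = 0, ϕ = 1
```

At x = 0.4 the return to investment dominates the expected return to search (0.5), which is why the worker's preferred instrument switches as x grows.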
6.7.3 Implementation
import numpy as np
from scipy.integrate import fixed_quad as integrate
from scipy.optimize import minimize
import scipy.stats as stats
class JvWorker:
    r"""
    A Jovanovic-type model of employment with on-the-job search. The
    value function is given by

    .. math::

        V(x) = \max_{ϕ, s} w(x, ϕ, s)

    for

    .. math::

        w(x, ϕ, s) := x(1 - ϕ - s)
                        + β (1 - π(s)) V(G(x, ϕ))
                        + β π(s) E V[ \max(G(x, ϕ), U)]

    Here

    * x = human capital
    * s = search effort
    * :math:`ϕ` = investment in human capital
    * :math:`π(s)` = probability of new offer given search level s
    * :math:`x(1 - ϕ - s)` = wage
    * :math:`G(x, ϕ)` = new human capital when current job retained
    * U = RV with distribution F -- new draw of human capital

    Parameters
    ----------
    A : scalar(float), optional(default=1.4)

    Attributes
    ----------
    A, α, β : see Parameters
    x_grid : array_like(float)
        The grid over the human capital
    """
    def bellman_operator(self, V, return_policies=False):
        """
        Apply the Bellman operator associated with the model to V.

        Parameters
        ----------
        V : array_like(float)
            Array representing an approximate value function
Returns
-------
s_policy : array_like(float)
The greedy policy computed from V. Only returned if
return_policies == True
new_V : array_like(float)
The updated value function Tv, as an array representing the
values TV(x) over x in x_grid.
"""
        # === simplify names, set up arrays, etc. === #
        G, π, F, β = self.G, self.π, self.F, self.β
        Vf = lambda x: np.interp(x, self.x_grid, V)
        N = len(self.x_grid)
        new_V, s_policy, ϕ_policy = np.empty(N), np.empty(N), np.empty(N)
        a, b = F.ppf(0.005), F.ppf(0.995)  # Quantiles, for integration
        ϵ = 1e-4  # A small number, to stay off the boundary

        for i, x in enumerate(self.x_grid):

            # === set up objective function === #
            def w(z):
                s, ϕ = z
                h = lambda u: Vf(np.maximum(G(x, ϕ), u)) * F.pdf(u)
                integral, err = integrate(h, a, b)
                q = π(s) * integral + (1 - π(s)) * Vf(G(x, ϕ))
                return -x * (1 - ϕ - s) - β * q

            # === maximize -w by searching on a grid === #
            search_grid = np.linspace(ϵ, 1.0, 15)
            max_val = -1.0
            for s in search_grid:
                for ϕ in search_grid:
                    current_val = -w((s, ϕ)) if s + ϕ <= 1.0 else -1.0
                    if current_val > max_val:
                        max_val, max_s, max_ϕ = current_val, s, ϕ
            new_V[i] = max_val
            s_policy[i], ϕ_policy[i] = max_s, max_ϕ

        if return_policies:
            return s_policy, ϕ_policy
        else:
            return new_V
where
    w(s, ϕ) := − { x(1 − s − ϕ) + β(1 − π(s))V[G(x, ϕ)] + βπ(s) ∫ V[G(x, ϕ) ∨ u]F(du) }    (6.33)
Here we are minimizing instead of maximizing to fit with SciPy's optimization routines
When we represent V , it will be with a NumPy array V giving values on grid x_grid
But to evaluate the right-hand side of (6.33), we need a function, so we replace the arrays V and x_grid
with a function Vf that gives linear interpolation of V on x_grid
Hence in the preliminaries of bellman_operator
• from the array V we define a linear interpolation Vf of its values
– c1 is used to implement the constraint s + ϕ ≤ 1
Let's plot the optimal policies and see what they look like
The code is as follows
axes[-1].set_xlabel("x")
plt.show()
The horizontal axis is the state x, while the vertical axis gives s(x) and ϕ(x)
Overall, the policies match well with our predictions from the Back-of-the-Envelope Calculations section above
• Worker switches from one investment strategy to the other depending on relative return
• For low values of x, the best option is to search for a new job
• Once x is larger, worker does better by investing in human capital specific to the current position
6.7.5 Exercises
Exercise 1
Let's look at the dynamics for the state process {xt } associated with these policies
The dynamics are given by (6.31) when ϕt and st are chosen according to the optimal policies, and P{bt+1 =
1} = π(st )
Since the dynamics are random, analysis is a bit subtle
One way to do it is to plot, for each x in a relatively fine grid called plot_grid, a large number K of
realizations of xt+1 given xt = x. Plot this with one dot for each realization, in the form of a 45 degree
diagram. Set
K = 50
plot_grid_max, plot_grid_size = 1.2, 100
plot_grid = np.linspace(0, plot_grid_max, plot_grid_size)
fig, ax = plt.subplots()
ax.set_xlim(0, plot_grid_max)
ax.set_ylim(0, plot_grid_max)
By examining the plot, argue that under the optimal policies, the state xt will converge to a constant value x̄
close to unity
Argue that at the steady state, st ≈ 0 and ϕt ≈ 0.6
Exercise 2
In the preceding exercise we found that st converges to zero and ϕt converges to about 0.6
Since these results were calculated at a value of β close to one, lets compare them to the best choice for an
infinitely patient worker
Intuitively, an infinitely patient worker would like to maximize steady state wages, which are a function of
steady state capital
You can take it as given (it's certainly true) that the infinitely patient worker does not search in the long run (i.e., st = 0 for large t)
Thus, given ϕ, steady state capital is the positive fixed point x∗(ϕ) of the map x ↦ G(x, ϕ)
Steady state wages can be written as w∗ (ϕ) = x∗ (ϕ)(1 − ϕ)
Graph w∗ (ϕ) with respect to ϕ, and examine the best choice of ϕ
Can you give a rough interpretation for the value that you see?
6.7.6 Solutions
Exercise 1
import random
wp = JvWorker(grid_size=25)
G, π, F = wp.G, wp.π, wp.F # Simplify names
plt.show()
Exercise 2
wp = JvWorker(grid_size=25)
def xbar(ϕ):
    return (wp.A * ϕ**wp.α)**(1 / (1 - wp.α))
plt.show()
Contents
6.8.1 Overview
In this lecture we're going to study a simple optimal growth model with one agent
The model is a version of the standard one sector infinite horizon growth model studied in
• [SLP89], chapter 2
• [LS18], section 3.1
• EDTC, chapter 1
• [Sun96], chapter 12
The technique we use to solve the model is dynamic programming
Our treatment of dynamic programming follows on from earlier treatments in our lectures on shortest paths
and job search
We'll discuss some of the technical details of dynamic programming as we go along
kt+1 + ct ≤ yt (6.34)
In what follows,
• The sequence {ξt } is assumed to be IID
• The common distribution of each ξt will be denoted ϕ
• The production function f is assumed to be increasing and continuous
• Depreciation of capital is not made explicit but can be incorporated into the production function
While many other treatments of the stochastic growth model use kt as the state variable, we will use yt
This will allow us to treat a stochastic model while maintaining only one state variable
We consider alternative states and timing specifications in some of our other lectures
Optimization
    E [ ∑_{t=0}^∞ β^t u(ct) ]    (6.35)
subject to

    yt+1 = f(yt − ct)ξt+1,  0 ≤ ct ≤ yt,  y0 given    (6.36)

where
• u is a bounded, continuous and strictly increasing utility function and
• β ∈ (0, 1) is a discount factor
In (6.36) we are assuming that the resource constraint (6.34) holds with equality which is reasonable because
u is strictly increasing and no output will be wasted at the optimum
In summary, the agents aim is to select a path c0 , c1 , c2 , . . . for consumption that is
1. nonnegative,
2. feasible in the sense of (6.34),
3. optimal, in the sense that it maximizes (6.35) relative to all other feasible consumption sequences, and
4. adapted, in the sense that the action ct depends only on observable outcomes, not future outcomes
such as ξt+1
In the present context
• yt is called the state variable: it summarizes the state of the world at the start of each period
• ct is called the control variable: a value chosen by the agent each period after observing the state
One way to think about solving this problem is to look for the best policy function
A policy function is a map from past and present observables into current action
Well be particularly interested in Markov policies, which are maps from the current state yt into a current
action ct
For dynamic programming problems such as this one (in fact for any Markov decision process), the optimal
policy is always a Markov policy
In other words, the current state yt provides a sufficient statistic for the history in terms of making an optimal
decision today
This is quite intuitive but if you wish you can find proofs in texts such as [SLP89] (section 4.1)
Hereafter we focus on finding the best Markov policy
In our context, a Markov policy is a function σ : R+ → R+, with the understanding that states are mapped to actions via

    ct = σ(yt) for all t    (6.37)
In other words, a feasible consumption policy is a Markov policy that respects the resource constraint
The set of all feasible consumption policies will be denoted by Σ
Each σ ∈ Σ determines a continuous state Markov process {yt} for output via

    yt+1 = f(yt − σ(yt))ξt+1,  y0 given    (6.38)
This is the time path for output when we choose and stick with the policy σ
We insert this process into the objective function to get
    E [ ∑_{t=0}^∞ β^t u(ct) ] = E [ ∑_{t=0}^∞ β^t u(σ(yt)) ]    (6.39)
This is the total expected present value of following policy σ forever, given initial income y0
The aim is to select a policy that makes this number as large as possible
The next section covers these ideas more formally
Optimality
The policy value function vσ associated with a given policy σ is the mapping defined by
    vσ(y) = E [ ∑_{t=0}^∞ β^t u(σ(yt)) ]    (6.40)
The value function is then defined as

    v∗(y) := sup_{σ∈Σ} vσ(y)    (6.41)

The value function gives the maximal value that can be obtained from state y, after considering all feasible policies
A policy σ ∈ Σ is called optimal if it attains the supremum in (6.41) for all y ∈ R+
With our assumptions on utility and production function, the value function as defined in (6.41) also satisfies
a Bellman equation
For this problem, the Bellman equation takes the form
    w(y) = max_{0≤c≤y} { u(c) + β ∫ w(f(y − c)z)ϕ(dz) }    (y ∈ R+)    (6.42)
The term ∫ w(f(y − c)z)ϕ(dz) can be understood as the expected next period value when
• w is used to measure value
• the state is y
• consumption is set to c
As shown in EDTC, theorem 10.1.11 and a range of other texts, the value function v∗ satisfies the Bellman equation
In other words, (6.42) holds when w = v∗
The intuition is that maximal value from a given state can be obtained by optimally trading off
• current reward from a given action, vs
• expected discounted future value of the state resulting from that action
The Bellman equation is important because it gives us more information about the value function
It also suggests a way of computing the value function, which we discuss below
Greedy policies
The primary importance of the value function is that we can use it to compute optimal policies
The details are as follows
Given a continuous function w on R+ , we say that σ ∈ Σ is w-greedy if σ(y) is a solution to
    max_{0≤c≤y} { u(c) + β ∫ w(f(y − c)z)ϕ(dz) }    (6.43)
for every y ∈ R+
In other words, σ ∈ Σ is w-greedy if it optimally trades off current and future rewards when w is taken to
be the value function
In our setting, we have the following key result
A feasible consumption policy is optimal if and only if it is v ∗ -greedy
The intuition is similar to the intuition for the Bellman equation, which was provided after (6.42)
See, for example, theorem 10.1.11 of EDTC
Hence, once we have a good approximation to v ∗ , we can compute the (approximately) optimal policy by
computing the corresponding greedy policy
The advantage is that we are now solving a much lower dimensional optimization problem
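For instance, given an approximation to v∗ stored on a grid, a w-greedy policy can be computed by solving the one-dimensional problem (6.43) at each grid point. A minimal sketch follows; the stand-in value function, grid and parameter values here are purely illustrative, not the model's solution:

```python
import numpy as np
from scipy.optimize import fminbound

α, β, µ, s = 0.4, 0.96, 0.0, 0.1                 # illustrative parameters
grid = np.linspace(1e-4, 4, 50)
w_vals = 5 * np.log(grid)                        # stand-in for an approximate v*
w_hat = lambda x: np.interp(x, grid, w_vals)     # piecewise linear interpolant
shocks = np.exp(µ + s * np.random.randn(500))    # lognormal shock draws

def greedy(y):
    # Maximize u(c) + β E w((y - c)**α z) over 0 < c < y, with u = log
    objective = lambda c: -(np.log(c) + β * np.mean(w_hat((y - c)**α * shocks)))
    return fminbound(objective, 1e-10, y - 1e-10)

σ = np.array([greedy(y) for y in grid])          # greedy consumption at each grid point
```

Each evaluation is a scalar maximization over c, rather than a search over function space.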
    T w(y) := max_{0≤c≤y} { u(c) + β ∫ w(f(y − c)z)ϕ(dz) }    (y ∈ R+)    (6.44)
In other words, T sends the function w into the new function T w defined by (6.44)
By construction, the set of solutions to the Bellman equation (6.42) exactly coincides with the set of fixed
points of T
For example, if T w = w, then, for any y ≥ 0,
    w(y) = T w(y) = max_{0≤c≤y} { u(c) + β ∫ w(f(y − c)z)ϕ(dz) }
One can also show that T is a contraction mapping on the set of continuous bounded functions on R+ under
the supremum distance
Unbounded Utility
The results stated above assume that the utility function is bounded
In practice economists often work with unbounded utility functions and so will we
In the unbounded setting, various optimality theories exist
Unfortunately, they tend to be case specific, as opposed to valid for a large range of applications
Nevertheless, their main conclusions are usually in line with those stated for the bounded case just above (as
long as we drop the word bounded)
Consult, for example, section 12.2 of EDTC, [Kam12] or [MdRV10]
6.8.3 Computation
Lets now look at computing the value function and the optimal policy
The first step is to compute the value function by value function iteration
In theory, the algorithm is as follows
1. Begin with a function w, an initial condition
2. Solving (6.44), obtain the function T w
3. Unless some stopping condition is satisfied, set w = T w and go to step 2
This generates the sequence w, T w, T 2 w, . . .
However, there is a problem we must confront before we implement this procedure: The iterates can neither
be calculated exactly nor stored on a computer
To see the issue, consider (6.44)
Even if w is a known function, unless T w can be shown to have some special structure, the only way to
store it is to record the value T w(y) for every y ∈ R+
Clearly this is impossible
What we will do instead is use fitted value function iteration
The procedure is to record the value of the function T w at only finitely many grid points y1 < y2 < · · · < yI
and reconstruct it from this information when required
More precisely, the algorithm will be
1. Begin with an array of values {w1 , . . . , wI } representing the values of some initial function w on the
grid points {y1 , . . . , yI }
2. Build a function ŵ on the state space R+ by interpolation or approximation, based on these data points
3. Obtain and record the value T ŵ(yi ) on each grid point yi by repeatedly solving (6.44)
4. Unless some stopping condition is satisfied, set {w1 , . . . , wI } = {T ŵ(y1 ), . . . , T ŵ(yI )} and go to
step 2
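The loop structure of this algorithm can be sketched in a few lines. Here the true Bellman update is replaced by a trivial pointwise contraction w ↦ 0.5w + 1 (an assumption purely for illustration), whose fixed point is w ≡ 2:

```python
import numpy as np

grid = np.linspace(0, 1, 11)     # step 1: grid and initial array of values
w = np.zeros_like(grid)

def T_hat(w_vals):
    # Step 2: build ŵ by linear interpolation; step 3: evaluate on the grid.
    # The map 0.5*ŵ(y) + 1 stands in for the true update (6.44).
    w_hat = lambda x: np.interp(x, grid, w_vals)
    return 0.5 * w_hat(grid) + 1

error, tol = 1.0, 1e-8
while error > tol:               # step 4: iterate until convergence
    w_new = T_hat(w)
    error = np.max(np.abs(w_new - w))
    w = w_new
```

The real implementation below has exactly this shape, with the maximization step of (6.44) inside T_hat.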
How should we go about step 2?
This is a problem of function approximation, and there are many ways to approach it
What's important here is that the function approximation scheme must not only produce a good approximation to T w, but also combine well with the broader iteration algorithm described above
One good choice in both respects is continuous piecewise linear interpolation (see this paper for further discussion)
The next figure illustrates piecewise linear interpolation of an arbitrary function on grid points
0, 0.2, 0.4, 0.6, 0.8, 1
import numpy as np
import matplotlib.pyplot as plt
def f(x):
    y1 = 2 * np.cos(6 * x) + np.sin(14 * x)
    return y1 + 2.5

c_grid = np.linspace(0, 1, 6)

def Af(x):
    return np.interp(x, c_grid, f(c_grid))
plt.show()
Another advantage of piecewise linear interpolation is that it preserves useful shape properties such as
monotonicity and concavity / convexity
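A quick check of the monotonicity claim: interpolating increasing data with np.interp yields a function that is still increasing everywhere between the grid points.

```python
import numpy as np

grid = np.linspace(0, 1, 6)          # coarse grid
vals = np.sqrt(grid)                 # strictly increasing, concave data

f_hat = lambda x: np.interp(x, grid, vals)

fine = np.linspace(0, 1, 200)
y = f_hat(fine)
is_monotone = bool(np.all(np.diff(y) >= 0))   # interpolant is still increasing
```

Many alternative schemes (e.g. high-order polynomial fits) do not share this property.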
Here's a function that implements the Bellman operator using linear interpolation
def bellman_operator(w, grid, β, u, f, shocks, Tw=None, compute_policy=False):
    """
    Apply the Bellman operator for a given model and initial condition.

    Parameters
----------
w : array_like(float, ndim=1)
The value of the input function on different grid points
grid : array_like(float, ndim=1)
The set of grid points
β : scalar
The discount factor
u : function
The utility function
f : function
The production function
shocks : numpy array
An array of draws from the shock, for Monte Carlo integration (to
compute expectations).
Tw : array_like(float, ndim=1) optional (default=None)
Array to write output values to
compute_policy : Boolean, optional (default=False)
Whether or not to compute policy function
"""
    # === Apply linear interpolation to w === #
    w_func = lambda x: np.interp(x, grid, w)

    # == Initialize Tw if necessary == #
    if Tw is None:
        Tw = np.empty_like(w)

    if compute_policy:
        σ = np.empty_like(w)

    # == set Tw[i] = max_c { u(c) + β E w(f(y - c) z) } == #
    for i, y in enumerate(grid):
        def objective(c):
            return - u(c) - β * np.mean(w_func(f(y - c) * shocks))
        c_star = fminbound(objective, 1e-10, y)
        if compute_policy:
            σ[i] = c_star
        Tw[i] = - objective(c_star)

    if compute_policy:
        return Tw, σ
    else:
        return Tw
An Example
    v∗(y) = ln(1 − αβ)/(1 − β) + [(µ + α ln(αβ))/(1 − α)] [1/(1 − β) − 1/(1 − αβ)] + [1/(1 − αβ)] ln y    (6.45)

    σ∗(y) = (1 − αβ)y
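As a quick consistency check, the closed-form policy σ∗(y) = (1 − αβ)y satisfies the Euler equation exactly: with log utility and f(k) = k^α, the shock terms cancel, so the residual can be computed without any integration (parameter values below are illustrative).

```python
α, β = 0.4, 0.96                  # illustrative parameter values
σ_star = lambda y: (1 - α * β) * y

y = 2.0
c = σ_star(y)
k = y - c                         # capital carried into next period equals αβ y

# Euler residual u'(c) - β E[u'(c') f'(k) z] with u = log, so u'(c) = 1/c.
# Next period income is k**α z and consumption (1 - αβ) k**α z; the z's cancel.
lhs = 1 / c
rhs = β * α * k**(α - 1) / ((1 - α * β) * k**α)
residual = abs(lhs - rhs)         # zero up to floating point
```

This is the kind of sanity check worth running before trusting a numerical scheme against the analytical benchmark.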
Let's wrap this model in a class because we'll use it in some later lectures too
class LogLinearOG:
"""
Log linear optimal growth model, with log utility, CD production and
multiplicative lognormal shock, so that
y = f(k, z) = z k^α
The class holds parameters and true value and policy functions.
"""
A First Test
To test our code, we want to see if we can replicate the analytical solution numerically, using fitted value
function iteration
First, having run the code for the log linear model shown above, let's generate an instance
lg = LogLinearOG()
# == Unpack parameters / functions for convenience == #
α, β, µ, s = lg.α, lg.β, lg.µ, lg.s
v_star = lg.v_star
We need a grid and some shock draws for Monte Carlo integration
w = bellman_operator(v_star(grid),
                     grid,
                     β,
                     np.log,
                     lambda k: k**α,
                     shocks)
The two functions are essentially indistinguishable, so we are off to a good start
Now let's have a look at iterating with the Bellman operator, starting off from an arbitrary initial condition
The initial condition we'll start with is w(y) = 5 ln(y)
def solve_optgrowth(initial_w, tol=1e-6, max_iter=500):
    w = initial_w
    error = tol + 1
    i = 0
    Tw = np.empty(len(grid))  # Storage array for bellman_operator
    while error > tol and i < max_iter:
        w_new = bellman_operator(w,
                                 grid,
                                 β,
                                 np.log,
                                 lambda k: k**α,
                                 shocks,
                                 Tw)
        error = np.max(np.abs(w_new - w))
        w[:] = w_new
        i += 1
    return w

initial_w = 5 * np.log(grid)
T = lambda w: bellman_operator(w,
                               grid,
                               β,
                               np.log,
                               lambda k: k**α,
                               shocks,
                               compute_policy=False)
To compute an approximate optimal policy, we take the approximate value function we just calculated and
then compute the corresponding greedy policy
The next figure compares the result to the exact solution, which, as mentioned above, is σ(y) = (1 − αβ)y
Tw, σ = bellman_operator(v_star_approx,
                         grid,
                         β,
                         np.log,
                         lambda k: k**α,
                         shocks,
                         compute_policy=True)
cstar = (1 - α * β) * grid
ax.plot(grid, cstar, lw=2, alpha=0.6, label='true policy function')
ax.legend(loc='lower right')
plt.show()
The figure shows that we've done a good job in this instance of approximating the true policy
6.8.4 Exercises
Exercise 1
s = 0.05
shocks = np.exp(µ + s * np.random.randn(shock_size))
Otherwise, the parameters and primitives are the same as the log linear model discussed earlier in the lecture
Notice that more patient agents typically have higher wealth
Replicate the figure modulo randomness
6.8.5 Solutions
Exercise 1
Here's one solution (assuming as usual that you've executed everything above)
def simulate_og(σ, y0=0.1, ts_length=100):
    """Compute a time series of output given consumption policy σ."""
    y = np.empty(ts_length)
    ξ = np.random.randn(ts_length-1)
    y[0] = y0
    for t in range(ts_length-1):
        y[t+1] = (y[t] - σ(y[t]))**α * np.exp(µ + s * ξ[t])
    return y
Tw = np.empty(len(grid))
initial_w = 5 * np.log(grid)

v_star_approx = compute_fixed_point(bellman_operator,
                                    initial_w,
                                    1e-5,         # error_tol
                                    500,          # max_iter
                                    False,        # verbose
                                    5,            # print_skip
                                    'iteration',
                                    grid,
                                    β,
                                    np.log,
                                    lambda k: k**α,
                                    shocks,
                                    Tw=Tw,
                                    compute_policy=False)
Tw, σ = bellman_operator(v_star_approx,
                         grid,
                         β,
                         np.log,
                         lambda k: k**α,
                         shocks,
                         compute_policy=True)
ax.legend(loc='lower right')
plt.show()
Contents
6.9.1 Overview
In this lecture we'll continue our earlier study of the stochastic optimal growth model
In that lecture we solved the associated discounted dynamic programming problem using value function
iteration
The beauty of this technique is its broad applicability
With numerical problems, however, we can often attain higher efficiency in specific applications by deriving
methods that are carefully tailored to the application at hand
The stochastic optimal growth model has plenty of structure to exploit for this purpose, especially when we
adopt some concavity and smoothness assumptions over primitives
Well use this structure to obtain an Euler equation based method thats more efficient than value function
iteration for this and some other closely related applications
In a subsequent lecture we'll see that the numerical implementation part of the Euler equation method can be further adjusted to obtain even more efficiency
Let's take the model set out in the stochastic growth model lecture and add the assumptions that
1. u and f are continuously differentiable and strictly concave
2. f (0) = 0
3. limc→0 u′ (c) = ∞ and limc→∞ u′ (c) = 0
4. limk→0 f ′ (k) = ∞ and limk→∞ f ′ (k) = 0
The last two conditions are usually called Inada conditions
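For the Cobb-Douglas case f(k) = k^α, conditions 3-4 are easy to see numerically, since f′(k) = αk^{α−1} (the value of α below is illustrative):

```python
α = 0.4
f_prime = lambda k: α * k**(α - 1)

near_zero = f_prime(1e-12)   # very large: f'(k) → ∞ as k → 0
far_out = f_prime(1e12)      # very small: f'(k) → 0 as k → ∞
```

The same pattern holds for log or CRRA utility, whose marginal utility diverges at zero consumption.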
Recall the Bellman equation
    v∗(y) = max_{0≤c≤y} { u(c) + β ∫ v∗(f(y − c)z)ϕ(dz) }    for all y ∈ R+    (6.46)
    u′(c∗(y)) = β ∫ (v∗)′(f(y − c∗(y))z)f′(y − c∗(y))zϕ(dz)    (6.48)

Combining the envelope condition (6.47), i.e. (v∗)′(y) = u′(c∗(y)), with the first-order condition (6.48) gives the famous Euler equation
    (u′ ◦ c∗)(y) = β ∫ (u′ ◦ c∗)(f(y − c∗(y))z)f′(y − c∗(y))zϕ(dz)    (6.49)
    (u′ ◦ σ)(y) = β ∫ (u′ ◦ σ)(f(y − σ(y))z)f′(y − σ(y))zϕ(dz)    (6.50)
over interior consumption policies σ, one solution of which is the optimal policy c∗
Our aim is to solve the functional equation (6.50) and hence obtain c∗
    T w(y) := max_{0≤c≤y} { u(c) + β ∫ w(f(y − c)z)ϕ(dz) }    (6.51)
Just as we introduced the Bellman operator to solve the Bellman equation, we will now introduce an operator
over policies to help us solve the Euler equation
This operator K will act on the set of all σ ∈ Σ that are continuous, strictly increasing and interior (i.e.,
0 < σ(y) < y for all strictly positive y)
Henceforth we denote this set of policies by P
1. The operator K takes as its argument a σ ∈ P and
2. returns a new function Kσ, where Kσ(y) is the c ∈ (0, y) that solves
    u′(c) = β ∫ (u′ ◦ σ)(f(y − c)z)f′(y − c)zϕ(dz)    (6.52)
We call this operator the Coleman operator to acknowledge the work of [Col90] (although many people
have studied this and other closely related iterative techniques)
In essence, Kσ is the consumption policy that the Euler equation tells you to choose today when your future
consumption policy is σ
The important thing to note about K is that, by construction, its fixed points coincide with solutions to the
functional equation (6.50)
In particular, the optimal policy c∗ is a fixed point
Indeed, for fixed y, the value Kc∗ (y) is the c that solves
    u′(c) = β ∫ (u′ ◦ c∗)(f(y − c)z)f′(y − c)zϕ(dz)
• diverges to +∞ as c ↓ 0
Sketching these curves and using the information above will convince you that they cross exactly once as c
ranges over (0, y)
With a bit more analysis, one can show in addition that Kσ ∈ P whenever σ ∈ P
How does Euler equation time iteration compare with value function iteration?
Both can be used to compute the optimal policy, but is one faster or more accurate?
There are two parts to this story
First, on a theoretical level, the two methods are essentially isomorphic
In particular, they converge at the same rate
Well prove this in just a moment
The other side to the story is the speed of the numerical implementation
It turns out that, once we actually implement these two routines, time iteration is faster and more accurate
than value function iteration
More on this below
Equivalent Dynamics
τ ◦g =h◦τ
g = τ −1 ◦ h ◦ τ (6.53)
Here's a similar figure that traces out the action of the maps on a point x ∈ X
    gⁿ = τ⁻¹ ◦ hⁿ ◦ τ
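Here is a concrete instance of this equivalence: with τ = log on (0, ∞), the map g(x) = x² is conjugate to h(v) = 2v, since log(x²) = 2 log x. Iterating g directly and iterating h in the transformed space give the same trajectory (the maps chosen here are an illustrative example, not the economic operators):

```python
import numpy as np

g = lambda x: x**2          # dynamics in the original space
h = lambda v: 2 * v         # conjugate dynamics: τ∘g = h∘τ with τ = log
τ, τ_inv = np.log, np.exp

def iterate(f, x, n):
    for _ in range(n):
        x = f(x)
    return x

x0, n = 1.5, 5
direct = iterate(g, x0, n)              # g^n(x0)
via_h = τ_inv(iterate(h, τ(x0), n))     # τ^{-1}(h^n(τ(x0))), identical
```

This is exactly the relationship claimed below between T and K, with M playing the role of τ.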
Back to Economics
A Bijection
Let V be all strictly concave, continuously differentiable functions v mapping R+ to itself and satisfying
v(0) = 0 and v ′ (y) > u′ (y) for all positive y
For v ∈ V let

    M v := h ◦ v′    where h := (u′)⁻¹
Commutative Operators
It is an additional solved exercise (see below) to show that T and K commute under M , in the sense that
M ◦T =K ◦M (6.54)
    Tⁿ = M⁻¹ ◦ Kⁿ ◦ M
6.9.4 Implementation
Weve just shown that the operators T and K have the same rate of convergence
However, it turns out that, once numerical approximation is taken into account, significant differences arise
In particular, the image of policy functions under K can be calculated faster and with greater accuracy than
the image of value functions under T
Our intuition for this result is that
• the Coleman operator exploits more information because it uses first order and envelope conditions
• policy functions generally have less curvature than value functions, and hence admit more accurate
approximations based on grid point information
The Operator
import numpy as np
from scipy.optimize import brentq
def coleman_operator(g, grid, β, u_prime, f, f_prime, shocks, Kg=None):
    """
    The approximate Coleman operator, updating the policy g on the grid.

    Parameters
----------
g : array_like(float, ndim=1)
The value of the input policy function on grid points
grid : array_like(float, ndim=1)
The set of grid points
β : scalar
The discount factor
u_prime : function
The derivative u'(c) of the utility function
f : function
The production function f(k)
f_prime : function
The derivative f'(k)
shocks : numpy array
An array of draws from the shock, for Monte Carlo integration (to
compute expectations).
Kg : array_like(float, ndim=1) optional (default=None)
Array to write output values to
"""
    g_func = lambda x: np.interp(x, grid, g)  # Linear interpolation of g
    if Kg is None:
        Kg = np.empty_like(g)
    for i, y in enumerate(grid):  # Solve (6.52) at each grid point
        def h(c):
            vals = u_prime(g_func(f(y - c) * shocks)) * f_prime(y - c) * shocks
            return u_prime(c) - β * np.mean(vals)
        Kg[i] = brentq(h, 1e-10, y - 1e-10)
    return Kg
It has some similarities to the code for the Bellman operator in our optimal growth lecture
For example, it evaluates integrals by Monte Carlo and approximates functions using linear interpolation
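The Monte Carlo step can be isolated and checked on its own. The sketch below estimates ∫ w(f(k)z)ϕ(dz) for a lognormal shock by averaging over draws; w, f and the shock parameters are illustrative choices, not the model's primitives:

```python
import numpy as np

np.random.seed(0)

w = lambda x: np.log(1 + x)            # stand-in value function
f = lambda k: k**0.4                   # stand-in production function
µ, s = 0.0, 0.1
draws = np.exp(µ + s * np.random.randn(100_000))   # lognormal shock draws

k = 1.5
estimate = np.mean(w(f(k) * draws))    # Monte Carlo estimate of E[w(f(k)z)]
```

The error of such an estimate shrinks like one over the square root of the number of draws, which is why both operators reuse a single fixed array of shocks.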
Here's that Bellman operator code again, which needs to be executed because we'll use it in some tests below
from scipy.optimize import fminbound
def bellman_operator(w, grid, β, u, f, shocks, Tw=None, compute_policy=False):
    """
    Apply the Bellman operator for a given model and initial condition.
    Tw will be overwritten.

    Parameters
----------
w : array_like(float, ndim=1)
The value of the input function on different grid points
grid : array_like(float, ndim=1)
The set of grid points
β : scalar
The discount factor
u : function
The utility function
f : function
The production function
shocks : numpy array
An array of draws from the shock, for Monte Carlo integration (to
compute expectations).
Tw : array_like(float, ndim=1) optional (default=None)
Array to write output values to
"""
    # === Apply linear interpolation to w === #
    w_func = lambda x: np.interp(x, grid, w)

    # == Initialize Tw if necessary == #
    if Tw is None:
        Tw = np.empty_like(w)
    if compute_policy:
        σ = np.empty_like(w)

    # == Maximize over c at each grid point == #
    for i, y in enumerate(grid):
        def objective(c):
            return - u(c) - β * np.mean(w_func(f(y - c) * shocks))
        c_star = fminbound(objective, 1e-10, y)
        if compute_policy:
            σ[i] = c_star
        Tw[i] = - objective(c_star)

    if compute_policy:
        return Tw, σ
    else:
        return Tw
As we did for value function iteration, let's start by testing our method on a model that does have an analytical solution
We assume the following imports
Now lets bring in the log-linear growth model we used in the value function iteration lecture
class LogLinearOG:
"""
Log linear optimal growth model, with log utility, CD production and
multiplicative lognormal shock, so that
y = f(k, z) = z k^α
The class holds parameters and true value and policy functions.
"""
lg = LogLinearOG()
We also need a grid and some shock draws for Monte Carlo integration
c_star_new = coleman_operator(c_star(grid),
                              grid, β, u_prime,
                              f, f_prime, shocks)
fig, ax = plt.subplots()
ax.plot(grid, c_star(grid), label="optimal policy $c^*$")
ax.plot(grid, c_star_new, label="$Kc^*$")
ax.legend(loc='upper left')
plt.show()
We can't really distinguish the two plots, so we are looking good, at least for this test
Next let's try iterating from an arbitrary initial condition and see if we converge towards c∗
The initial condition we'll use is the one that eats the whole pie: c(y) = y
g = grid
n = 15
fig, ax = plt.subplots(figsize=(9, 6))
lb = 'initial condition $c(y) = y$'
ax.plot(grid, g, color=plt.cm.jet(0), lw=2, alpha=0.6, label=lb)
for i in range(n):
    new_g = coleman_operator(g, grid, β, u_prime, f, f_prime, shocks)
    g = new_g
    ax.plot(grid, g, color=plt.cm.jet(i / n), lw=2, alpha=0.6)
plt.show()
We see that the policy has converged nicely, in only a few steps
Now let's compare the accuracy of iteration using the Coleman and Bellman operators
We'll generate
1. K n c where c(y) = y
2. (M ◦ T n ◦ M −1 )c where c(y) = y
In each case we'll compare the resulting policy to c∗
The theory on equivalent dynamics says we will get the same policy function and hence the same errors
But in fact we expect the first method to be more accurate for reasons discussed above
g_init = grid
w_init = u(grid)
sim_length = 20
g, w = g_init, w_init
for i in range(sim_length):
    new_g = coleman_operator(g, grid, β, u_prime, f, f_prime, shocks)
    new_w = bellman_operator(w, grid, β, u, f, shocks)
    g, w = new_g, new_w

# Recover the value-function-based greedy policy from the final w
new_w, vf_g = bellman_operator(w, grid, β, u, f, shocks, compute_policy=True)

fig, ax = plt.subplots()
pf_error = c_star(grid) - g
vf_error = c_star(grid) - vf_g
ax.legend(loc='lower left')
plt.show()
As you can see, time iteration is much more accurate for a given number of iterations
6.9.5 Exercises
Exercise 1
Exercise 2
Exercise 3
Consider the same model as above but with the CRRA utility function
    u(c) = (c^{1−γ} − 1) / (1 − γ)
Iterate 20 times with Bellman iteration and Euler equation time iteration
• start time iteration from c(y) = y
• start value function iteration from v(y) = u(y)
• set γ = 1.5
Compare the resulting policies and check that they are close
Exercise 4
Do the same exercise, but now, rather than plotting results, time how long 20 iterations takes in each case
6.9.6 Solutions
Solution to Exercise 1
    u′(c(y)) = β ∫ v′(f(y − c(y))z)f′(y − c(y))zϕ(dz)    (6.55)
             = β ∫ (u′ ◦ ((u′)⁻¹ ◦ v′))(f(y − c(y))z)f′(y − c(y))zϕ(dz)
             = β ∫ v′(f(y − c(y))z)f′(y − c(y))zϕ(dz)
Solution to Exercise 2
Solution to Exercise 3
Here's the code, which will execute if you've run all the code above
α = 0.65
β = 0.95
µ = 0
s = 0.1
γ = 1.5
grid_min = 1e-6
grid_max = 4
grid_size = 200
shock_size = 250
def f(k):
    return k**α

def f_prime(k):
    return α * k**(α - 1)

def u(c):
    return (c**(1 - γ) - 1) / (1 - γ)

def u_prime(c):
    return c**(-γ)

def crra_bellman(w):
    return bellman_operator(w, grid, β, u, f, shocks)

def crra_coleman(g):
    return coleman_operator(g, grid, β, u_prime, f, f_prime, shocks)
g_init = grid
w_init = u(grid)
sim_length = 20
g, w = g_init, w_init
for i in range(sim_length):
    new_g = crra_coleman(g)
    new_w = crra_bellman(w)
    g, w = new_g, new_w
fig, ax = plt.subplots()
ax.legend(loc="upper left")
plt.show()
Solution to Exercise 4
g_init = grid
w_init = u(grid)
sim_length = 100
w = w_init
qe.util.tic()
for i in range(sim_length):
    new_w = crra_bellman(w)
    w = new_w
qe.util.toc()
g = g_init
qe.util.tic()
for i in range(sim_length):
    new_g = crra_coleman(g)
    g = new_g
qe.util.toc()
If you run this you'll find that the two operators execute at about the same speed
However, as we saw above, time iteration is numerically far more accurate for a given number of iterations
Contents
6.10.1 Overview
Let's start by reminding ourselves of the theory and then see how the numerics fit in
Theory
Take the model set out in the time iteration lecture, following the same terminology and notation
The Euler equation is
    (u′ ◦ c∗)(y) = β ∫ (u′ ◦ c∗)(f(y − c∗(y))z)f′(y − c∗(y))zϕ(dz)    (6.56)
As we saw, the Coleman operator is a nonlinear operator K engineered so that c∗ is a fixed point of K
It takes as its argument a continuous strictly increasing consumption policy g ∈ Σ
It returns a new function Kg, where (Kg)(y) is the c ∈ (0, ∞) that solves
    u′(c) = β ∫ (u′ ◦ g)(f(y − c)z)f′(y − c)zϕ(dz)    (6.57)
Exogenous Grid
As discussed in the lecture on time iteration, to implement the method on a computer we need numerical
approximation
In particular, we represent a policy function by a set of values on a finite grid
The function itself is reconstructed from this representation when necessary, using interpolation or some
other method
Previously, to obtain a finite representation of an updated consumption policy we
• fixed a grid of income points {yi }
• calculated the consumption value ci corresponding to each yi using (6.57) and a root finding routine
Each ci is then interpreted as the value of the function Kg at yi
Thus, with the points {yi , ci } in hand, we can reconstruct Kg via approximation
Iteration then continues
Endogenous Grid
The method discussed above requires a root finding routine to find the ci corresponding to a given income
value yi
Root finding is costly because it typically involves a significant number of function evaluations
The endogenous grid method avoids this by fixing a grid {ki} for next-period capital k = y − c and computing consumption directly from the inverted Euler equation:

ci = (u′)^{−1} { β ∫ (u′ ◦ g)(f(ki)z) f′(ki) z ϕ(dz) }    (6.58)
6.10.3 Implementation
Let's implement this version of the Coleman operator and see how it performs
The Operator
import numpy as np


def coleman_egm(g, k_grid, β, u_prime, u_prime_inv, f, f_prime, shocks):
    """
    The approximate Coleman operator, updated using the endogenous grid
    method.

    Parameters
    ----------
    g : function
        The current guess of the policy function
    k_grid : array_like(float, ndim=1)
        The set of *exogenous* grid points, for capital k = y - c
    β : scalar
        The discount factor
    u_prime : function
        The derivative u'(c) of the utility function
    u_prime_inv : function
        The inverse of u' (which exists by assumption)
    f : function
        The production function f(k)
    f_prime : function
        The derivative f'(k)
    shocks : numpy array
        An array of draws from the shock, for Monte Carlo integration (to
        compute expectations).

    """
    # Allocate memory for value of consumption on endogenous grid points
    c = np.empty_like(k_grid)

    # Solve for updated consumption values via (6.58) -- no root finding
    for i, k in enumerate(k_grid):
        vals = u_prime(g(f(k) * shocks)) * f_prime(k) * shocks
        c[i] = u_prime_inv(β * np.mean(vals))

    # Determine the endogenous income grid and return the updated policy
    y = k_grid + c  # y_i = k_i + c_i
    Kg = lambda x: np.interp(x, y, c)
    return Kg
import numpy as np
from scipy.optimize import brentq


def coleman_operator(g, grid, β, u_prime, f, f_prime, shocks, Kg=None):
    """
    The standard (exogenous grid) approximate Coleman operator.

    Parameters
    ----------
    g : array_like(float, ndim=1)
        The value of the input policy function on grid points
    grid : array_like(float, ndim=1)
        The set of grid points
    β : scalar
        The discount factor
    u_prime : function
        The derivative u'(c) of the utility function
    f : function
        The production function f(k)
    f_prime : function
        The derivative f'(k)
    shocks : numpy array
        An array of draws from the shock, for Monte Carlo integration
    Kg : array_like(float, ndim=1), optional
        Storage array for the updated policy values

    """
    # === Apply linear interpolation to g === #
    g_func = lambda x: np.interp(x, grid, g)

    # == Initialize Kg if necessary == #
    if Kg is None:
        Kg = np.empty_like(g)

    # == Solve the Euler equation (6.57) at each grid point by root finding == #
    for i, y in enumerate(grid):
        def h(c):
            vals = u_prime(g_func(f(y - c) * shocks)) * f_prime(y - c) * shocks
            return u_prime(c) - β * np.mean(vals)
        Kg[i] = brentq(h, 1e-10, y - 1e-10)

    return Kg
Let's test out the code above on some example parameterizations, after the following imports
As we did for value function iteration and time iteration, let's start by testing our method with the log-linear benchmark
The first step is to bring in the log-linear growth model that we used in the value function iteration lecture
class LogLinearOG:
"""
Log linear optimal growth model, with log utility, CD production and
multiplicative lognormal shock, so that
y = f(k, z) = z k^α
The class holds parameters and true value and policy functions.
"""
lg = LogLinearOG()
We also need a grid over capital and some shock draws for Monte Carlo integration
# With log utility, u'(c) = 1/c is its own inverse, so u_prime is passed as u_prime_inv too
c_star_new = coleman_egm(c_star,
                         k_grid, β, u_prime, u_prime, f, f_prime, shocks)
ax.legend(loc='upper left')
plt.show()
max(abs(c_star_new(k_grid) - c_star(k_grid)))
4.4408920985006262e-16
Next let's try iterating from an arbitrary initial condition and see if we converge towards c∗
Let's start from the consumption policy that eats the whole pie: c(y) = y
g = lambda x: x
n = 15
fig, ax = plt.subplots(figsize=(9, 6))
lb = 'initial condition $c(y) = y$'
for i in range(n):
    new_g = coleman_egm(g, k_grid, β, u_prime, u_prime, f, f_prime, shocks)
    g = new_g
    ax.plot(k_grid, g(k_grid), color=plt.cm.jet(i / n), lw=2, alpha=0.6)
plt.show()
We see that the policy has converged nicely, in only a few steps
6.10.4 Speed
Now let's compare the clock times per iteration for the standard Coleman operator (with an exogenous grid) and the EGM version
We'll do so using the CRRA model adopted in the exercises of the Euler equation time iteration lecture
Here's the model and some convenient functions
## Define the model
α = 0.65
β = 0.95
µ = 0
s = 0.1
γ = 1.5       # CRRA preference parameter, as in the exercises
γ_inv = 1 / γ
grid_min = 1e-6
grid_max = 4
grid_size = 200
shock_size = 250

k_grid = np.linspace(grid_min, grid_max, grid_size)
shocks = np.exp(µ + s * np.random.randn(shock_size))

def f(k):
    return k**α

def f_prime(k):
    return α * k**(α - 1)

def u(c):
    return (c**(1 - γ) - 1) / (1 - γ)

def u_prime(c):
    return c**(-γ)

def u_prime_inv(c):
    return c**(-γ_inv)

def crra_coleman(g):
    return coleman_operator(g, k_grid, β, u_prime, f, f_prime, shocks)

def crra_coleman_egm(g):
    return coleman_egm(g, k_grid, β, u_prime, u_prime_inv, f, f_prime, shocks)
sim_length = 20
6.11.1 Overview
Linear quadratic (LQ) control refers to a class of dynamic optimization problems that have found applications in almost every scientific field
This lecture provides an introduction to LQ control and its economic applications
As we will see, LQ systems have a simple structure that makes them an excellent workhorse for a wide
variety of economic problems
Moreover, while the linear-quadratic structure is restrictive, it is in fact far more flexible than it may appear
initially
These themes appear repeatedly below
Mathematically, LQ control problems are closely related to the Kalman filter
• Recursive formulations of linear-quadratic control problems and Kalman filtering problems both involve matrix Riccati equations
• Classical formulations of linear control and linear filtering problems make use of similar matrix decompositions (see for example this lecture and this lecture)
In reading what follows, it will be useful to have some familiarity with
• matrix manipulations
• vectors of random variables
• dynamic programming and the Bellman equation (see for example this lecture and this lecture)
For additional reading on LQ control, see, for example,
• [LS18], chapter 5
• [HS08], chapter 4
• [HLL96], section 3.5
In order to focus on computation, we leave longer proofs to these sources (while trying to provide as much
intuition as possible)
6.11.2 Introduction
The linear part of LQ is a linear law of motion for the state, while the quadratic part refers to preferences
Let's begin with the former, move on to the latter, and then put them together into an optimization problem

The law of motion for the state is linear:

xt+1 = A xt + B ut + C wt+1 ,   t = 0, 1, 2, . . .    (6.59)

Here
• ut is a control vector, incorporating choices available to a decision maker confronting the current state
xt
• {wt } is an uncorrelated zero mean shock process satisfying Ewt wt′ = I, where the right-hand side is
the identity matrix
Regarding the dimensions
• xt is n × 1, A is n × n
• ut is k × 1, B is n × k
• wt is j × 1, C is n × j
Example 1
at+1 + ct = (1 + r)at + yt
Here at is assets, r is a fixed interest rate, ct is current consumption, and yt is current non-financial income
If we suppose that {yt } is serially uncorrelated and N (0, σ 2 ), then, taking {wt } to be standard normal, we
can write the system as
This is clearly a special case of (6.59), with assets being the state and consumption being the control
Example 2
One unrealistic feature of the previous model is that non-financial income has a zero mean and is often
negative
This can easily be overcome by adding a sufficiently large mean
Hence in this example we take yt = σwt+1 + µ for some positive real number µ
Another alteration that's useful to introduce (we'll see why soon) is to change the control variable from consumption to the deviation of consumption from some ideal quantity c̄
(Most parameterizations will be such that c̄ is large relative to the amount of consumption that is attainable
in each period, and hence the household wants to increase consumption)
For this reason, we now take our control to be ut := ct − c̄
In terms of these variables, the budget constraint at+1 = (1 + r)at − ct + yt becomes
How can we write this new system in the form of equation (6.59)?
If, as in the previous example, we take at as the state, then we run into a problem: the law of motion contains
some constant terms on the right-hand side
This means that we are dealing with an affine function, not a linear one (recall this discussion)
Fortunately, we can easily circumvent this problem by adding an extra state variable
In particular, if we write
( at+1 )   ( 1 + r   −c̄ + µ ) ( at )   ( −1 )        ( σ )
(   1  ) = (   0        1    ) (  1 ) + (  0 ) ut  +  ( 0 ) wt+1    (6.61)

which is of the form (6.59) once we set

        ( at )        ( 1 + r   −c̄ + µ )        ( −1 )        ( σ )
xt :=   (  1 ),  A := (   0        1    ),  B := (  0 ),  C := ( 0 )    (6.62)
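As a quick consistency check, the following sketch (illustrative parameter values, arbitrary consumption choices) iterates the stacked system xt+1 = Axt + But + Cwt+1 alongside the scalar budget constraint at+1 = (1 + r)at − ct + yt and confirms that the two agree:

```python
import numpy as np

r, c_bar, mu, sigma = 0.05, 2.0, 1.0, 0.25
A = np.array([[1 + r, -c_bar + mu], [0.0, 1.0]])
B = np.array([[-1.0], [0.0]])
C = np.array([[sigma], [0.0]])

rng = np.random.default_rng(0)
x = np.array([3.0, 1.0])            # stacked state (a_0, 1)
a = 3.0                             # scalar assets
for t in range(10):
    w = rng.standard_normal()
    c = 1.8                         # an arbitrary consumption choice
    u = c - c_bar                   # control is deviation from ideal level
    y = sigma * w + mu              # income y_t = σ w_{t+1} + µ
    a = (1 + r) * a - c + y         # scalar budget constraint
    x = A @ x + B.flatten() * u + C.flatten() * w

print(abs(x[0] - a) < 1e-9)         # first state tracks assets exactly
```

The second state component stays fixed at 1, which is what allows the affine constant −c̄ + µ to ride along inside a linear system.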
Preferences
In the LQ model, the aim is to minimize a flow of losses, where time-t loss is given by the quadratic
expression
Here
• R is assumed to be n × n, symmetric and nonnegative definite
• Q is assumed to be k × k, symmetric and positive definite
Note: In fact, for many economic problems, the definiteness conditions on R and Q can be relaxed. It is
sufficient that certain submatrices of R and Q be nonnegative definite. See [HS08] for details
Example 1
A very simple example that satisfies these assumptions is to take R and Q to be identity matrices, so that
current loss is
Thus, for both the state and the control, loss is measured as squared distance from the origin
(In fact the general case (6.63) can also be understood in this way, but with R and Q identifying other –
non-Euclidean – notions of distance from the zero vector)
Intuitively, we can often think of the state xt as representing deviation from a target, such as
• deviation of inflation from some target level
• deviation of a firm's capital stock from some desired quantity
The aim is to put the state close to the target, while using controls parsimoniously
Example 2
Under this specification, the households current loss is the squared deviation of consumption from the ideal
level c̄
Let's now be precise about the optimization problem we wish to consider, and look at how to solve it
The Objective
We will begin with the finite horizon case, with terminal time T ∈ N
In this case, the aim is to choose a sequence of controls {u0 , . . . , uT −1 } to minimize the objective

E { ∑_{t=0}^{T−1} β^t (x′t R xt + u′t Q ut) + β^T x′T Rf xT }    (6.64)
Information
There's one constraint we've neglected to mention so far, which is that the decision maker who solves this LQ problem knows only the present and the past, not the future
To clarify this point, consider the sequence of controls {u0 , . . . , uT −1 }
When choosing these controls, the decision maker is permitted to take into account the effects of the shocks
{w1 , . . . , wT } on the system
However, it is typically assumed (and will be assumed here) that the time-t control ut can be made with knowledge of past and present shocks only
The fancy measure-theoretic way of saying this is that ut must be measurable with respect to the σ-algebra
generated by x0 , w1 , w2 , . . . , wt
This is in fact equivalent to stating that ut can be written in the form ut = gt (x0 , w1 , w2 , . . . , wt ) for some
Borel measurable function gt
(Just about every function that's useful for applications is Borel measurable, so, for the purposes of intuition, you can read that last phrase as "for some function gt")
Now note that xt will ultimately depend on the realizations of x0 , w1 , w2 , . . . , wt
In fact it turns out that xt summarizes all the information about these historical shocks that the decision
maker needs to set controls optimally
More precisely, it can be shown that any optimal control ut can always be written as a function of the current
state alone
Hence in what follows we restrict attention to control policies (i.e., functions) of the form ut = gt (xt )
Actually, the preceding discussion applies to all standard dynamic programming problems
What's special about the LQ case is that – as we shall soon see – the optimal ut turns out to be a linear function of xt
Solution
To solve the finite horizon LQ problem we can use a dynamic programming strategy based on backwards
induction that is conceptually similar to the approach adopted in this lecture
For reasons that will soon become clear, we first introduce the notation JT (x) = x′ Rf x
Now consider the problem of the decision maker in the second to last period
In particular, let the time be T − 1, and suppose that the state is xT −1
The decision maker must trade off current and (discounted) final losses, and hence solves
The function JT −1 will be called the T − 1 value function, and JT −1 (x) can be thought of as representing
total loss-to-go from state x at time T − 1 when the decision maker behaves optimally
Now let's step back to T − 2
For a decision maker at T − 2, the value JT −1 (x) plays a role analogous to that played by the terminal loss
JT (x) = x′ Rf x for the decision maker at T − 1
That is, JT −1 (x) summarizes the future loss associated with moving to state x
The decision maker chooses her control u to trade off current loss against future loss, where
• the next period state is xT −1 = AxT −2 + Bu + CwT −1 , and hence depends on the choice of current
control
• the cost of landing in state xT −1 is JT −1 (xT −1 )
Her problem is therefore
Letting
The first equality is the Bellman equation from dynamic programming theory specialized to the finite horizon
LQ problem
Now that we have {J0 , . . . , JT }, we can obtain the optimal controls
As a first step, let's find out what the value functions look like
It turns out that every Jt has the form Jt (x) = x′ Pt x + dt where Pt is an n × n matrix and dt is a constant
We can show this by induction, starting from PT := Rf and dT = 0
Using this notation, (6.65) becomes
To obtain the minimizer, we can take the derivative of the r.h.s. with respect to u and set it equal to zero
Applying the relevant rules of matrix calculus, this gives
JT −1 (x) = x′ PT −1 x + dT −1

where

PT −1 := R + βA′PT A − β² A′PT B (Q + βB′PT B)^{−1} B′PT A    (6.68)

and

dT −1 := β trace(C ′ PT C)    (6.69)
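The backward recursion for the matrices Pt can be sketched in a few lines of NumPy using the standard discounted Riccati update. The matrices below are illustrative stand-ins rather than values from the lecture; the point is only the shape of the backward induction step:

```python
import numpy as np

β = 0.95
A = np.array([[1.05, -1.0], [0.0, 1.0]])
B = np.array([[-1.0], [0.0]])
R = np.zeros((2, 2))
Q = np.array([[1.0]])
Rf = np.diag([1e6, 0.0])   # terminal loss matrix
T = 45

# Iterate the Riccati recursion backward from P_T = Rf
P = Rf
for t in range(T):
    BPB = Q + β * B.T @ P @ B
    P = R - β**2 * A.T @ P @ B @ np.linalg.solve(BPB, B.T @ P @ A) \
        + β * A.T @ P @ A

print(np.allclose(P, P.T))   # value matrices stay symmetric
```

Each pass produces the value matrix one period earlier, so after T passes we hold P0.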
6.11.4 Implementation
We will use code from lqcontrol.py in QuantEcon.py to solve finite and infinite horizon linear quadratic
control problems
In the module, the various updating, simulation and fixed point methods are wrapped in a class called LQ,
which includes
• Instance data:
– The required parameters Q, R, A, B and optional parameters C, β, T, R_f, N specifying a given
LQ model
An Application
Early Keynesian models assumed that households have a constant marginal propensity to consume from
current income
Data contradicted the constancy of the marginal propensity to consume
In response, Milton Friedman, Franco Modigliani and others built models based on a consumer's preference for an intertemporally smooth consumption stream
(See, for example, [Fri56] or [MB54])
One property of those models is that households purchase and sell financial assets to make consumption
streams smoother than income streams
The household savings problem outlined above captures these ideas
The optimization problem for the household is to choose a consumption sequence in order to minimize
E { ∑_{t=0}^{T−1} β^t (ct − c̄)² + β^T q aT² }    (6.74)
Here q is a large positive constant, the role of which is to induce the consumer to target zero debt at the end
of her life
(Without such a constraint, the optimal choice is to choose ct = c̄ in each period, letting assets adjust
accordingly)
As before we set yt = σwt+1 + µ and ut := ct − c̄, after which the constraint can be written as in (6.60)
We saw how this constraint could be manipulated into the LQ formulation xt+1 = Axt + But + Cwt+1 by
setting xt = (at 1)′ and using the definitions in (6.62)
To match with this state and control, the objective function (6.74) can be written in the form of (6.64) by
choosing
              ( 0  0 )              ( q  0 )
Q := 1,  R := ( 0  0 ),  and  Rf := ( 0  0 )
Now that the problem is expressed in LQ form, we can proceed to the solution by applying (6.70) and (6.72)
After generating shocks w1 , . . . , wT , the dynamics for assets and consumption can be simulated via (6.73)
The following figure was computed using r = 0.05, β = 1/(1 + r), c̄ = 2, µ = 1, σ = 0.25, T = 45 and q = 10^6
The shocks {wt } were taken to be iid and standard normal
import numpy as np
import matplotlib.pyplot as plt
from quantecon import LQ
# == Model parameters == #
r = 0.05
β = 1/(1 + r)
T = 45
c_bar = 2
σ = 0.25
µ = 1
q = 1e6
# == Formulate as an LQ problem == #
Q = 1
R = np.zeros((2, 2))
Rf = np.zeros((2, 2))
Rf[0, 0] = q
A = [[1 + r, -c_bar + µ],
     [0, 1]]
B = [[-1],
     [0]]
C = [[σ],
     [0]]
# == Compute solutions and simulate == #
lq = LQ(Q, R, A, B, C, beta=β, T=T, Rf=Rf)
x0 = (0, 1)
xp, up, wp = lq.compute_sequence(x0)
# == Convert back to assets, consumption and income == #
assets = xp[0, :]             # a_t
c = up.flatten() + c_bar      # c_t
income = σ * wp[0, 1:] + µ    # y_t
# == Plot results == #
n_rows = 2
fig, axes = plt.subplots(n_rows, 1, figsize=(12, 10))
plt.subplots_adjust(hspace=0.5)
for ax in axes:
    ax.grid()
    ax.set_xlabel('Time')
    ax.legend(ncol=2, **legend_args)
plt.show()
The top panel shows the time path of consumption ct and income yt in the simulation
As anticipated by the discussion on consumption smoothing, the time path of consumption is much smoother
than that for income
(But note that consumption becomes more irregular towards the end of life, when the zero final asset re-
quirement impinges more on consumption choices)
The second panel in the figure shows that the time path of assets at is closely correlated with cumulative
unanticipated income, where the latter is defined as
zt := σ ∑_{j=0}^{t} wj
A key message is that unanticipated windfall gains are saved rather than consumed, while unanticipated
negative shocks are met by reducing assets
(Again, this relationship breaks down towards the end of life due to the zero final asset requirement)
# == Plot results == #
n_rows = 2
fig, axes = plt.subplots(n_rows, 1, figsize=(12, 10))
plt.subplots_adjust(hspace=0.5)
for ax in axes:
    ax.grid()
    ax.set_xlabel('Time')
    ax.legend(ncol=2, **legend_args)
plt.show()
We now have a slowly rising consumption stream and a hump-shaped build up of assets in the middle periods
to fund rising consumption
However, the essential features are the same: consumption is smooth relative to income, and assets are
strongly positively correlated with cumulative unanticipated income
Let's now consider a number of standard extensions to the LQ problem treated above
Time-Varying Parameters
However, the loss of generality is not as large as you might first imagine
In fact, we can tackle many models with time-varying parameters by suitable choice of state variables
One illustration is given below
For further examples and a more systematic treatment, see [HS13], section 2.4
In some LQ problems, preferences include a cross-product term u′t N xt , so that the objective function becomes

E { ∑_{t=0}^{T−1} β^t (x′t R xt + u′t Q ut + 2u′t N xt) + β^T x′T Rf xT }    (6.75)
Infinite Horizon
Finally, we consider the infinite horizon case, with cross-product term, unchanged dynamics and objective
function given by
E ∑_{t=0}^{∞} β^t (x′t R xt + u′t Q ut + 2u′t N xt)    (6.78)
In the infinite horizon case, optimal policies can depend on time only if time itself is a component of the
state vector xt
In other words, there exists a fixed matrix F such that ut = −F xt for all t
That decision rules are constant over time is intuitive after all, the decision maker faces the same infinite
horizon at every stage, with only the current state changing
Equation (6.79) is also called the LQ Bellman equation, and the map that sends a given P into the right-hand
side of (6.79) is called the LQ Bellman operator
The stationary optimal policy for this model is
d := (β/(1 − β)) trace(C ′ P C)    (6.81)
The state evolves according to the time-homogeneous process xt+1 = (A − BF )xt + Cwt+1
An example infinite horizon problem is treated below
Certainty Equivalence
Linear quadratic control problems of the class discussed above have the property of certainty equivalence
By this we mean that the optimal policy F is not affected by the parameters in C, which specify the shock
process
This can be confirmed by inspecting (6.80) or (6.77)
It follows that we can ignore uncertainty when solving for optimal behavior, and plug it back in when
examining optimal state dynamics
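A quick numerical illustration of certainty equivalence: iterate the Bellman operator on P to (approximate) convergence and compute F. The computation below (with illustrative stand-in matrices) involves A, B, Q, R and β only; C never enters, so the optimal policy is unchanged whatever the shock loadings are:

```python
import numpy as np

β = 0.95
A = np.array([[1.05, 0.0], [0.0, 1.0]])
B = np.array([[1.0], [0.0]])
R = np.diag([1.0, 0.0])
Q = np.array([[0.5]])

# Iterate the (discounted) Riccati map on P to a fixed point
P = R.copy()
for _ in range(2000):
    BPB = Q + β * B.T @ P @ B
    P_new = R - β**2 * A.T @ P @ B @ np.linalg.solve(BPB, B.T @ P @ A) \
            + β * A.T @ P @ A
    if np.max(np.abs(P_new - P)) < 1e-12:
        P = P_new
        break
    P = P_new

# Stationary policy u_t = -F x_t; note that C appears nowhere above
F = β * np.linalg.solve(Q + β * B.T @ P @ B, B.T @ P @ A)
print(F.shape)   # (1, 2)
```

Uncertainty (the matrix C) affects only the constant d in the value function, not the decision rule.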
E { ∑_{t=0}^{T−1} β^t (ct − c̄)² + β^T q aT² }    (6.82)
The fact that at+1 is a linear function of (at , 1, t, t²) suggests taking these four variables as the state vector xt
Once a good choice of state and control (recall ut = ct − c̄) has been made, the remaining specifications fall
into place relatively easily
Thus, for the dynamics we set
      ( at )        ( 1 + r   −c̄   m1   m2 )        ( −1 )        ( σ )
xt := (  1 ),  A := (   0      1    0    0 ),  B := (  0 ),  C := ( 0 )    (6.84)
      (  t )        (   0      1    1    0 )        (  0 )        ( 0 )
      ( t² )        (   0      1    2    1 )        (  0 )        ( 0 )
If you expand the expression xt+1 = Axt + But + Cwt+1 using this specification, you will find that assets
follow (6.83) as desired, and that the other state variables also update appropriately
To implement preference specification (6.82) we take
              ( 0  0  0  0 )             ( q  0  0  0 )
Q := 1,  R := ( 0  0  0  0 )  and  Rf := ( 0  0  0  0 )    (6.85)
              ( 0  0  0  0 )             ( 0  0  0  0 )
              ( 0  0  0  0 )             ( 0  0  0  0 )
The next figure shows a simulation of consumption and assets computed using the compute_sequence
method of lqcontrol.py with initial assets set to zero
In the previous application, we generated income dynamics with an inverted U shape using polynomials,
and placed them in an LQ framework
It is arguably the case that this income process still contains unrealistic features
A more common earning profile is where
1. income grows over working life, fluctuating around an increasing trend, with growth flattening off in
later years
     { p(t) + σwt+1    if t ≤ K
yt = {                                    (6.86)
     { s               otherwise
Here
• p(t) := m1 t + m2 t2 with the coefficients m1 , m2 chosen such that p(K) = µ and p(0) = p(2K) = 0
• s is retirement income
We suppose that preferences are unchanged and given by (6.74)
The budget constraint is also unchanged and given by at+1 = (1 + r)at − ct + yt
Our aim is to solve this problem and simulate paths using the LQ techniques described in this lecture
In fact this is a nontrivial problem, as the kink in the dynamics (6.86) at K makes it very difficult to express
the law of motion as a fixed-coefficient linear system
However, we can still use our LQ methods here by suitably linking two component LQ problems
These two LQ problems describe the consumer's behavior during her working life (lq_working) and retirement (lq_retired)
(This is possible because in the two separate periods of life, the respective income processes [polynomial
trend and constant] each fit the LQ framework)
The basic idea is that although the whole problem is not a single time-invariant LQ problem, it is still a
dynamic programming problem, and hence we can use appropriate Bellman equations at every stage
Based on this logic, we can
1. solve lq_retired by the usual backwards induction procedure, iterating back to the start of retirement
2. take the start-of-retirement value function generated by this process, and use it as the terminal condition Rf to feed into the lq_working specification
3. solve lq_working by backwards induction from this choice of Rf , iterating back to the start of
working life
This process gives the entire life-time sequence of value functions and optimal policies
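Schematically, the two backward passes can be sketched as follows (the matrices are illustrative stand-ins, not the lecture's calibration; what matters is the order of operations and the hand-off of the start-of-retirement value matrix as the working-life terminal condition):

```python
import numpy as np

def riccati_step(P, A, B, Q, R, β):
    """One step of the discounted finite-horizon Riccati recursion."""
    BPB = Q + β * B.T @ P @ B
    return R - β**2 * A.T @ P @ B @ np.linalg.solve(BPB, B.T @ P @ A) \
           + β * A.T @ P @ A

β = 0.95
Q = np.array([[1.0]])
R = np.zeros((2, 2))
A_w = np.array([[1.05, 1.0], [0.0, 1.0]])   # working-life dynamics
A_r = np.array([[1.05, 0.5], [0.0, 1.0]])   # retirement dynamics
B = np.array([[-1.0], [0.0]])
Rf = np.diag([1e4, 0.0])                    # end-of-life penalty on debt

# 1. Solve the retirement problem back to the start of retirement
P = Rf
for t in range(20):          # 20 retirement periods
    P = riccati_step(P, A_r, B, Q, R, β)

# 2.-3. Use the start-of-retirement P as the terminal condition
#       for the working-life problem, and iterate back to t = 0
for t in range(40):          # 40 working periods
    P = riccati_step(P, A_w, B, Q, R, β)

print(np.allclose(P, P.T))   # value matrices stay symmetric throughout
```

The lecture's actual solution does the same thing through the LQ class rather than a hand-rolled recursion.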
The next figure shows one simulation based on this procedure
The full set of parameters used in the simulation is discussed in Exercise 2, where you are asked to replicate
the figure
Once again, the dominant feature observable in the simulation is consumption smoothing
The asset path fits well with standard life cycle theory, with dissaving early in life followed by later saving
Assets peak at retirement and subsequently decline
E ∑_{t=0}^{∞} β^t πt    where    πt := pt qt − c qt − γ(qt+1 − qt)²    (6.87)
t=0
Here
• γ(qt+1 − qt )2 represents adjustment costs
• c is average cost of production
This can be formulated as an LQ problem and then solved and simulated, but first lets study the problem
and try to get some intuition
One way to start thinking about the problem is to consider what would happen if γ = 0
Without adjustment costs there is no intertemporal trade-off, so the monopolist will choose output to maxi-
mize current profit in each period
It's not difficult to show that profit-maximizing output is

q̄t := (a0 − c + dt) / (2a1)
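This closed form is easy to confirm with a grid search, assuming (as in the setup above) inverse demand pt = a0 − a1 qt + dt and zero adjustment costs; the parameter values below are illustrative:

```python
import numpy as np

a0, a1, c, d = 5.0, 0.5, 2.0, 0.3
q_bar = (a0 - c + d) / (2 * a1)             # closed-form maximizer

# Static profit π(q) = (a0 - a1 q + d) q - c q on a fine grid
q_grid = np.linspace(0, 10, 100001)
profit = (a0 - a1 * q_grid + d) * q_grid - c * q_grid
q_best = q_grid[np.argmax(profit)]

print(abs(q_best - q_bar) < 1e-3)           # grid maximizer ≈ closed form
```

With γ = 0 the monopolist would simply set qt = q̄t period by period; positive γ makes it costly to chase movements in dt.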
In light of this discussion, what we might expect for general γ is that
• if γ is close to zero, then qt will track the time path of q̄t relatively closely
• if γ is larger, then qt will be smoother than q̄t , as the monopolist seeks to avoid adjustment costs
This intuition turns out to be correct
The following figures show simulations produced by solving the corresponding LQ problem
The only difference in parameters across the figures is the size of γ
min E ∑_{t=0}^{∞} β^t { a1 (qt − q̄t)² + γ ut² }    (6.88)
It's now relatively straightforward to find R and Q such that (6.88) can be written as (6.78)
Furthermore, the matrices A, B and C from (6.59) can be found by writing down the dynamics of each
element of the state
Exercise 3 asks you to complete this process, and reproduce the preceding figures
6.11.7 Exercises
Exercise 1
Exercise 2
Exercise 3
6.11.8 Solutions
Exercise 1
yt = m1 t + m2 t2 + σwt+1
where {wt } is iid N (0, 1) and the coefficients m1 and m2 are chosen so that p(t) = m1 t + m2 t2 has an
inverted U shape with
• p(0) = 0, p(T/2) = µ, and
• p(T ) = 0.
# == Model parameters == #
r = 0.05
β = 1/(1 + r)
T = 50
c_bar = 1.5
σ = 0.15
µ = 2
q = 1e4
m1 = T * (µ/(T/2)**2)
m2 = -(µ/(T/2)**2)
# == Formulate as an LQ problem == #
Q = 1
R = np.zeros((4, 4))
Rf = np.zeros((4, 4))
Rf[0, 0] = q
A = [[1 + r, -c_bar, m1, m2],
     [0, 1, 0, 0],
     [0, 1, 1, 0],
     [0, 1, 2, 1]]
B = [[-1],
     [0],
     [0],
     [0]]
C = [[σ],
     [0],
     [0],
     [0]]
ap = xp[0, :] # Assets
c = up.flatten() + c_bar # Consumption
time = np.arange(1, T+1)
income = σ * wp[0, 1:] + m1 * time + m2 * time**2 # Income
# == Plot results == #
n_rows = 2
fig, axes = plt.subplots(n_rows, 1, figsize=(12, 10))
plt.subplots_adjust(hspace=0.5)
for ax in axes:
    ax.grid()
    ax.set_xlabel('Time')
    ax.legend(ncol=2, **legend_args)
plt.show()
Exercise 2
This is a permanent income / life-cycle model with polynomial growth in income over working life fol-
lowed by a fixed retirement income. The model is solved by combining two LQ programming problems as
described in the lecture.
# == Model parameters == #
r = 0.05
β = 1/(1 + r)
T = 60
K = 40
c_bar = 4
σ = 0.35
µ = 4
q = 1e4
s = 1
m1 = 2 * µ/K
m2 = -µ/K**2
up = np.column_stack((up_w, up_r))
c = up.flatten() + c_bar # Consumption
# == Plot results == #
n_rows = 2
fig, axes = plt.subplots(n_rows, 1, figsize=(12, 10))
plt.subplots_adjust(hspace=0.5)
for ax in axes:
    ax.grid()
    ax.set_xlabel('Time')
    ax.legend(ncol=2, **legend_args)
plt.show()
Exercise 3
The first task is to find the matrices A, B, C, Q, R that define the LQ problem
Recall that xt = (q̄t , qt , 1)′, while ut = qt+1 − qt
Letting m0 := (a0 − c)/(2a1) and m1 := 1/(2a1), we can write q̄t = m0 + m1 dt , and then, with some manipulation,

q̄t+1 = (1 − ρ) m0 + ρ q̄t + m1 σ wt+1

We repeat the objective function here for convenience:
min E ∑_{t=0}^{∞} β^t { a1 (qt − q̄t)² + γ ut² }
# == Useful constants == #
m0 = (a0-c)/(2 * a1)
m1 = 1/(2 * a1)
# == Formulate LQ problem == #
Q = γ
R = [[ a1, -a1, 0],
[-a1, a1, 0],
[ 0, 0, 0]]
A = [[ρ, 0, m0 * (1 - ρ)],
     [0, 1, 0],
     [0, 0, 1]]
B = [[0],
     [1],
     [0]]
C = [[m1 * σ],
     [0],
     [0]]
time = range(len(q))
6.12.1 Overview
This lecture describes a rational expectations version of the famous permanent income model of Milton Friedman [Fri56]
Robert Hall cast Friedman's model within a linear-quadratic setting [Hal78]
Like Hall, we formulate an infinite-horizon linear-quadratic savings problem
We use the model as a vehicle for illustrating
• alternative formulations of the state of a dynamic system
• the idea of cointegration
• impulse response functions
• the idea that changes in consumption are useful as predictors of movements in income
Background readings on the linear-quadratic-Gaussian permanent income model are Hall's [Hal78] and chapter 2 of [LS18]
In this section we state and solve the savings and consumption problem faced by the consumer
Preliminaries
The stochastic process {Xt} is said to be a martingale if

Et [Xt+1 ] = Xt ,   t = 0, 1, 2, . . .
Here Et := E[· | Ft ] is a conditional mathematical expectation conditional on the time t information set Ft
The latter is just a collection of random variables that the modeler declares to be visible at t
• When not explicitly defined, it is usually understood that Ft = {Xt , Xt−1 , . . . , X0 }
Martingales have the feature that the history of past outcomes provides no predictive power for changes
between current and future outcomes
For example, the current wealth of a gambler engaged in a fair game has this property
One common class of martingales is the family of random walks
A random walk is a process {Xt} satisfying

Xt+1 = Xt + wt+1

for some iid zero-mean shock sequence {wt}, so that

Xt = ∑_{j=1}^{t} wj + X0
Not every martingale arises as a random walk (see, for example, Wald's martingale)
A consumer has preferences over consumption streams that are ordered by the utility functional
E0 [ ∑_{t=0}^{∞} β^t u(ct) ]    (6.89)
where
• Et is the mathematical expectation conditioned on the consumers time t information
• ct is time t consumption
• u is a strictly concave one-period utility function
• β ∈ (0, 1) is a discount factor
The consumer maximizes (6.89) by choosing a consumption and borrowing plan {ct , bt+1 }, t ≥ 0, subject to the sequence of budget constraints
ct + bt = (1/(1 + r)) bt+1 + yt ,   t ≥ 0    (6.90)
Here
• yt is an exogenous endowment process
• r > 0 is a time-invariant risk-free net interest rate
• bt is one-period risk-free debt maturing at t
The consumer also faces initial conditions b0 and y0 , which can be fixed or random
Assumptions
For the remainder of this lecture, we follow Friedman and Hall in assuming that (1 + r)−1 = β
Regarding the endowment process, we assume it has the state-space representation
where
• {wt } is an iid vector process with Ewt = 0 and Ewt wt′ = I
• the spectral radius of A satisfies ρ(A) < 1/√β
• U is a selection vector that pins down yt as a particular linear combination of components of zt .
The restriction on ρ(A) prevents income from growing so fast that certain discounted geometric sums of quadratic forms (described below) become infinite
Regarding preferences, we assume the quadratic utility function
Note: Along with this quadratic utility specification, we allow consumption to be negative. However, by
choosing parameters appropriately, we can make the probability that the model generates negative consump-
tion paths over finite time horizons as low as desired.
We also impose the boundedness condition

E0 [ ∑_{t=0}^{∞} β^t bt² ] < ∞    (6.92)
t=0
This condition rules out an always-borrow scheme that would allow the consumer to enjoy bliss consumption
forever
First-Order Conditions

First-order conditions for maximizing (6.89) subject to (6.90) are the Euler equations

Et [u′(ct+1)] = u′(ct) ,   t = 0, 1, . . .    (6.93)

With our quadratic preference specification, (6.93) has the striking implication that consumption follows a martingale:

Et [ct+1 ] = ct    (6.94)
Note: One way to solve the consumers problem is to apply dynamic programming as in this lecture. We
do this later. But first we use an alternative approach that is revealing and shows the work that dynamic
programming does for us behind the scenes
Solving the budget constraint (6.90) forward (using (6.92) to eliminate the terminal debt term) gives

bt = ∑_{j=0}^{∞} β^j (yt+j − ct+j)    (6.95)
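Identity (6.95) can be confirmed numerically by iterating the budget constraint (6.90) backward from zero terminal debt and comparing the implied b0 with the discounted sum; the income and consumption paths below are arbitrary random sequences:

```python
import numpy as np

β = 0.96
T = 200
rng = np.random.default_rng(1)
y = rng.uniform(0.5, 1.5, T)
c = rng.uniform(0.5, 1.5, T)

# Iterate b_t = β b_{t+1} + y_t - c_t backward from b_T = 0
b = 0.0
for t in reversed(range(T)):
    b = β * b + y[t] - c[t]

# Compare with the discounted present value of (y - c)
pv = sum(β**j * (y[j] - c[j]) for j in range(T))
print(abs(b - pv) < 1e-10)
```

With an infinite horizon, the terminal term β^T bT vanishes under (6.92), which is exactly what (6.95) uses.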
j=0
Take conditional expectations on both sides of (6.95) and use the martingale property of consumption and
the law of iterated expectations to deduce
bt = ∑_{j=0}^{∞} β^j Et [yt+j] − ct/(1 − β)    (6.96)
1. A linear marginal utility is essential for deriving (6.94) from (6.93). Suppose instead that we had imposed the following more
standard assumptions on the utility function: u′(c) > 0, u′′(c) < 0, u′′′(c) > 0 and required that c ≥ 0. The Euler equation
remains (6.93). But the fact that u′′′ > 0 implies via Jensen's inequality that Et [u′(ct+1)] > u′(Et [ct+1]). This inequality together
with (6.93) implies that Et [ct+1] > ct (consumption is said to be a submartingale), so that consumption stochastically diverges to
+∞. The consumer's savings also diverge to +∞.
2. An optimal decision rule is a map from the current state into current actions, in this case consumption.
ct = (1 − β) [ ∑_{j=0}^{∞} β^j Et [yt+j] − bt ] = (r/(1 + r)) [ ∑_{j=0}^{∞} β^j Et [yt+j] − bt ]    (6.97)
The state vector confronting the consumer at t is (bt , zt )
Here
• zt is an exogenous component, unaffected by consumer behavior
• bt is an endogenous component (since it depends on the decision rule)
Note that zt contains all variables useful for forecasting the consumers future endowment
It is plausible that current decisions ct and bt+1 should be expressible as functions of zt and bt
This is indeed the case
In fact, from this discussion we see that
∑_{j=0}^{∞} β^j Et [yt+j] = Et [ ∑_{j=0}^{∞} β^j yt+j ] = U (I − βA)^{−1} zt
ct = (r/(1 + r)) [ U (I − βA)^{−1} zt − bt ]    (6.98)
bt+1 = (1 + r)(bt + ct − yt )
= (1 + r)bt + r[U (I − βA)−1 zt − bt ] − (1 + r)U zt
= bt + U [r(I − βA)−1 − (1 + r)I]zt
= bt + U (I − βA)−1 (A − I)zt
To get from the second last to the last expression in this chain of equalities is not trivial
A key step is to use the fact that (1 + r)β = 1 and (I − βA)^{−1} = ∑_{j=0}^{∞} β^j A^j
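The second fact is the usual Neumann series identity, which holds whenever ρ(A) < 1/β and is easy to check numerically (A below is an arbitrary matrix satisfying the radius condition):

```python
import numpy as np

β = 0.95
A = np.array([[0.5, 0.2], [0.1, 0.7]])   # spectral radius well below 1/β

lhs = np.linalg.inv(np.eye(2) - β * A)

# Truncated geometric sum Σ_j (βA)^j; terms decay geometrically
M = β * A
rhs = sum(np.linalg.matrix_power(M, j) for j in range(500))

print(np.allclose(lhs, rhs))
```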
A State-Space Representation
We can summarize our dynamics in the form of a linear state-space system governing consumption, debt
and income:
and
     (           U                   0       )          ( yt )
Ũ =  ( (1 − β)U (I − βA)^{−1}    −(1 − β)  ) ,   ỹt =  ( ct )
We can use the following formulas from linear state space models to compute population mean µt = Ext
and covariance Σt := E[(xt − µt )(xt − µt )′ ]
µy,t = Ũ µt
Σy,t = Ũ Σt Ũ′    (6.103)
To gain some preliminary intuition on the implications of (6.99), let's look at a highly stylized example where
income is just iid
(Later examples will investigate more realistic income streams)
In particular, let {wt}∞_{t=1} be iid and scalar standard normal, and let

zt = [z1t  1]′,   A = [[0, 0], [0, 1]],   U = [1  µ],   C = [σ  0]′

so that yt = U zt = z1t + µ = µ + σwt is iid N(µ, σ²) for t ≥ 1

If we also set b0 = z10 = 0, then applying (6.98) and the implied law of motion for debt gives

bt = −σ ∑_{j=1}^{t−1} wj   and   ct = µ + (1 − β)σ ∑_{j=1}^{t} wj
Thus income is iid and debt and consumption are both Gaussian random walks
Defining assets as −bt , we see that assets are just the cumulative sum of unanticipated incomes prior to the
present date
The next figure shows a typical realization with r = 0.05, µ = 1, and σ = 0.15
import numpy as np
import matplotlib.pyplot as plt

r = 0.05
β = 1 / (1 + r)
T = 60
σ = 0.15
µ = 1

def time_path():
    w = np.random.randn(T+1)  # w_0, w_1, ..., w_T
    w[0] = 0
    b = np.zeros(T+1)
    for t in range(1, T+1):
        b[t] = w[1:t].sum()
    b = -σ * b
    c = µ + (1 - β) * (σ * w - b)
    return w, b, c

w, b, c = time_path()

fig, ax = plt.subplots(figsize=(10, 6))
p_args = {'lw': 2, 'alpha': 0.7}          # plot style arguments
legend_args = {'ncol': 3}

ax.plot(range(T+1), µ + σ * w, 'g-',
        label="non-financial income", **p_args)
ax.plot(range(T+1), c, 'k-', label="consumption", **p_args)
ax.plot(range(T+1), b, 'b-', label="debt", **p_args)
ax.legend(**legend_args)
plt.show()
In this section we shed more light on the evolution of savings, debt and consumption by representing their
dynamics in several different ways
Hall's Representation
Hall [Hal78] suggested an insightful way to summarize the implications of LQ permanent income theory
First, to represent the solution for bt , shift (6.97) forward one period and eliminate bt+1 by using (6.90) to
obtain
ct+1 = (1 − β) ∑_{j=0}^∞ β^j Et+1[yt+j+1] − (1 − β)β^{-1}(ct + bt − yt)

If we add and subtract β^{-1}(1 − β) ∑_{j=0}^∞ β^j Et[yt+j] from the right side of the preceding equation and rearrange, we obtain

ct+1 − ct = (1 − β) ∑_{j=0}^∞ β^j {Et+1[yt+j+1] − Et[yt+j+1]}    (6.104)
The right side is the time t + 1 innovation to the expected present value of the endowment process {yt }
We can represent the optimal decision rule for (ct , bt+1 ) in the form of (6.104) and (6.96), which we repeat:
bt = ∑_{j=0}^∞ β^j Et[yt+j] − (1 / (1 − β)) ct    (6.105)
Equation (6.105) asserts that the consumer's debt due at t equals the expected present value of its endowment
minus the expected present value of its consumption stream
A high debt thus indicates a large expected present value of surpluses yt − ct
Recalling again our discussion on forecasting geometric sums, we have
Et ∑_{j=0}^∞ β^j yt+j = U(I − βA)^{-1} zt

Et+1 ∑_{j=0}^∞ β^j yt+j+1 = U(I − βA)^{-1} zt+1

Et ∑_{j=0}^∞ β^j yt+j+1 = U(I − βA)^{-1} A zt
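These formulas are straightforward to verify numerically; the following sketch uses arbitrary illustrative values of A, U, and zt (any choice with the spectral radius of βA below one will do):

```python
import numpy as np

β = 0.95
A = np.array([[0.9, 0.0],
              [0.0, 1.0]])             # illustrative A; second state is a constant
U = np.array([[1.0, 1.0]])
z = np.array([[0.3], [1.0]])

inv = np.linalg.inv(np.eye(2) - β * A)

# Brute-force truncation of E_t Σ_j β^j y_{t+j+1}, using E_t z_{t+j} = A^j z_t
J = 500
lhs = sum(β**j * (U @ np.linalg.matrix_power(A, j + 1) @ z) for j in range(J + 1))

# Closed form: shifting the forecast one period multiplies by A
rhs = U @ inv @ A @ z

print(np.allclose(lhs, rhs))
```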
Using these formulas together with (6.91) and substituting into (6.104) and (6.105) gives the following
representation for the consumer's optimal decision rule:

ct+1 = ct + (1 − β)U(I − βA)^{-1}C wt+1
bt = U(I − βA)^{-1} zt − (1 / (1 − β)) ct    (6.106)
Cointegration
Representation (6.106) reveals that the joint process {ct , bt } possesses the property that Engle and Granger
[EG87] called cointegration
Cointegration is a tool that allows us to apply powerful results from the theory of stationary stochastic
processes to (certain transformations of) nonstationary models
To apply cointegration in the present context, suppose that zt is asymptotically stationary4
Despite this, both ct and bt will be non-stationary because they have unit roots (see (6.99) for bt )
Nevertheless, there is a linear combination of ct , bt that is asymptotically stationary
In particular, from the second equality in (6.106) we have
(1 − β)bt + ct = (1 − β)Et ∑_{j=0}^∞ β^j yt+j    (6.108)
Equation (6.108) asserts that the cointegrating residual on the left side equals the conditional expectation of
the geometric sum of future incomes on the right6
Cross-Sectional Implications
Consider again (6.106), this time in light of our discussion of distribution dynamics in the lecture on linear
systems
The dynamics of ct are given by

ct+1 = ct + (1 − β)U(I − βA)^{-1}C wt+1

or

ct = c0 + ∑_{j=1}^{t} ŵj   for   ŵt+1 := (1 − β)U(I − βA)^{-1}C wt+1

4 This would be the case if, for example, the spectral radius of A is strictly less than one
6 See [JYC88], [LL01], [LL04] for interesting applications of related ideas.
The unit root affecting ct causes the time t variance of ct to grow linearly with t
In particular, since {ŵt} is iid, we have

Var[ct] = Var[c0] + t σ̂²   where   σ̂² := (1 − β)² U(I − βA)^{-1}CC′(I − βA′)^{-1}U′
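A short simulation confirms the linear growth of the variance. This is a sketch; the innovation standard deviation s below is an arbitrary stand-in for the model-implied σ̂:

```python
import numpy as np

rng = np.random.default_rng(0)
s = 0.1                                  # stand-in for σ̂ (illustrative)
T, n_paths = 30, 50_000

# c_t = c_0 + Σ_{j=1}^t ŵ_j with c_0 = 0 and ŵ iid N(0, s²)
shocks = s * rng.standard_normal((n_paths, T))
c = np.cumsum(shocks, axis=1)

t = np.arange(1, T + 1)
print(np.allclose(c.var(axis=0), t * s**2, rtol=0.05))   # Var(c_t) ≈ t s²
```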
Impulse response functions measure responses to various impulses (i.e., temporary shocks)
The impulse response function of {ct } to the innovation {wt } is a box
In particular, the response of ct+j to a unit increase in the innovation wt+1 is (1 − β)U(I − βA)^{-1}C for all j ≥ 1
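Using the iid income example above (with the same r, µ, σ as the previous figure), the box shape is easy to see directly: after a single unit innovation, consumption jumps by (1 − β)σ and stays there. A minimal sketch:

```python
import numpy as np

r, µ, σ = 0.05, 1.0, 0.15
β = 1 / (1 + r)
T, shock_date = 15, 5

w = np.zeros(T + 1)
w[shock_date] = 1.0                       # one unit innovation at a single date

# c_t = µ + (1 - β) σ Σ_{j=1}^t w_j  (the iid income example)
c = µ + (1 - β) * σ * np.cumsum(w)

irf = c - µ                               # deviation from the no-shock path
print(irf[shock_date:])                   # constant at (1 - β) σ from the shock on
```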
It's useful to express the innovation to the expected present value of the endowment process in terms of a
moving average representation for income yt
The endowment process defined by (6.91) has the moving average representation

yt+1 = d(L)wt+1

where
• d(L) = ∑_{j=0}^∞ dj L^j for some sequence dj, where L is the lag operator3
It follows that

Et+1[yt+j+1] − Et[yt+j+1] = dj wt+1

so that, in view of (6.104), ct+1 − ct = (1 − β)d(β)wt+1
The object d(β) = ∑_{j=0}^∞ β^j dj is the present value of the moving average coefficients in the representation for the
endowment process yt
Example 1
Formula (6.114) shows how an increment σ1 w1t+1 to the permanent component of income z1t+1 leads to
• a permanent one-for-one increase in consumption and
• no increase in savings −bt+1
But an increment σ2 w2t+1 to the purely transitory component of income is saved in full, raising consumption
only by the annuity value (1 − β)σ2 w2t+1 of the increment
This confirms that none of σ1 w1t is saved, while all of σ2 w2t is saved

3 Representation (6.91) implies that d(L) = U(I − AL)^{-1}C.
5 A moving average representation for a process yt is said to be fundamental if the linear space spanned by the history of yt is
equal to the linear space spanned by the history of wt. A time-invariant innovations representation, attained via the Kalman filter,
is by construction fundamental.
The next figure illustrates these very different reactions to transitory and permanent income shocks using
impulse-response functions
import numpy as np
import matplotlib.pyplot as plt

r = 0.05
β = 1 / (1 + r)
T = 20  # Time horizon
S = 5   # Impulse date
σ1 = σ2 = 0.15

def time_path(permanent=False):
    "Time path of consumption and debt given shock sequence"
    w1 = np.zeros(T+1)
    w2 = np.zeros(T+1)
    b = np.zeros(T+1)
    c = np.zeros(T+1)
    if permanent:
        w1[S+1] = 1.0
    else:
        w2[S+1] = 1.0
    for t in range(1, T):
        b[t+1] = b[t] - σ2 * w2[t]
        c[t+1] = c[t] + σ1 * w1[t+1] + (1 - β) * σ2 * w2[t+1]
    return b, c

L = 0.175

fig, axes = plt.subplots(2, 1, figsize=(10, 8))
for ax, permanent, shock in zip(axes, (False, True), ('transitory', 'permanent')):
    b, c = time_path(permanent)
    ax.set_title(f'Impulse response: {shock} income shock')
    ax.plot(c, 'g-', label="consumption")
    ax.plot(b, 'b-', label="debt")
    ax.grid(alpha=0.5)
    ax.set(xlabel=r'Time', ylim=(-L, L))

axes[0].legend(loc='lower right')
plt.tight_layout()
plt.show()
Example 2
Assume now that at time t the consumer observes yt , and its history up to t, but not zt
Under this assumption, it is appropriate to use an innovation representation to form A, C, U in (6.106)
The discussion in sections 2.9.1 and 2.11.3 of [LS18] shows that the pertinent state space representation for
yt is
[yt+1  at+1]′ = [[1, −(1 − K)], [0, 0]] [yt  at]′ + [1  1]′ at+1

yt = [1  0] [yt  at]′
where
• K := the stationary Kalman gain
• at := yt − E[yt | yt−1 , . . . , y0 ]
In the same discussion in [LS18] it is shown that K ∈ [0, 1] and that K increases as σ1 /σ2 does
In other words, K increases as the ratio of the standard deviation of the permanent shock to that of the
transitory shock increases
Please see our first look at the Kalman filter
Applying formulas (6.106) implies

ct+1 − ct = [1 − β(1 − K)]at+1

where the endowment process can now be represented in terms of the univariate innovation to yt as

yt+1 − yt = at+1 − (1 − K)at

This indicates how the fraction K of the innovation to yt that is regarded as permanent influences the fraction
of the innovation that is saved
The model described above significantly changed how economists think about consumption
While Hall's model does a remarkably good job as a first approximation to consumption data, it's widely
believed that it doesn't capture important aspects of some consumption/savings data
For example, liquidity constraints and precautionary savings appear to be present sometimes
Further discussion can be found in, e.g., [HM82], [Par99], [Dea91], [Car01]
6.13.1 Overview
This lecture continues our analysis of the linear-quadratic (LQ) permanent income model of savings and
consumption
As we saw in our previous lecture on this topic, Robert Hall [Hal78] used the LQ permanent income model
to restrict and interpret intertemporal comovements of nondurable consumption, nonfinancial income, and
financial wealth
For example, we saw how the model asserts that for any covariance stationary process for nonfinancial
income
• consumption is a random walk
• financial wealth has a unit root and is cointegrated with consumption
Other applications use the same LQ framework
For example, a model isomorphic to the LQ permanent income model has been used by Robert Barro
[Bar79] to interpret intertemporal comovements of a government's tax collections, its expenditures net of
debt service, and its public debt
This isomorphism means that in analyzing the LQ permanent income model, we are in effect also analyzing
the Barro tax smoothing model
It is just a matter of appropriately relabeling the variables in Hall's model
In this lecture, we'll
• show how the solution to the LQ permanent income model can be obtained using LQ control methods
• represent the model as a linear state space system as in this lecture
• apply QuantEcon's LinearStateSpace class to characterize statistical features of the consumer's optimal
consumption and borrowing plans
We'll then use these characterizations to construct a simple model of cross-section wealth and consumption
dynamics in the spirit of Truman Bewley [Bew86]
(Later we'll study other Bewley models; see this lecture)
The model will prove useful for illustrating concepts such as
• stationarity
• ergodicity
• ensemble moments and cross section observations
6.13.2 Setup
Let's recall the basic features of the model discussed in the permanent income model lecture
Consumer preferences are ordered by

E0 ∑_{t=0}^∞ β^t u(ct)    (6.119)

where u(ct) = −(ct − γ)²
The consumer maximizes (6.119) by choosing a consumption, borrowing plan {ct, bt+1}∞_{t=0} subject to
the sequence of budget constraints

ct + bt = (1 / (1 + r)) bt+1 + yt,   t ≥ 0    (6.120)

and the no-Ponzi condition

E0 ∑_{t=0}^∞ β^t b²t < ∞    (6.121)
The interpretation of all variables and parameters is the same as in the previous lecture
We continue to assume that (1 + r)β = 1
The dynamics of {yt } again follow the linear state space model
The restrictions on the shock process and parameters are the same as in our previous lecture
The LQ permanent income model of consumption is mathematically isomorphic with a version of Barro's
[Bar79] model of tax smoothing.
In the LQ permanent income model
• the household faces an exogenous process of nonfinancial income
• the household wants to smooth consumption across states and time
In the Barro tax smoothing model
• a government faces an exogenous sequence of government purchases (net of interest payments on its
debt)
• a government wants to smooth tax collections across states and time
If we set
• Tt, total tax collections in Barro's model, to consumption ct in the LQ permanent income model, and
• Gt, exogenous government expenditures in Barro's model, to nonfinancial income yt in the permanent
income model,
then the two models are mathematically equivalent
For the purposes of this lecture, let's assume {yt} is a second-order univariate autoregressive process:

yt+1 = α + ρ1 yt + ρ2 yt−1 + σ wt+1

We can map this into the linear state space framework in (6.122), as discussed in our lecture on linear models
To do so we take

zt = [1  yt  yt−1]′,   A = [[1, 0, 0], [α, ρ1, ρ2], [0, 1, 0]],   C = [0  σ  0]′,   and   U = [0  1  0]
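A one-step numerical check that this triple reproduces the AR(2) law of motion yt+1 = α + ρ1 yt + ρ2 yt−1 + σwt+1 (the particular values of yt, yt−1, and the shock below are arbitrary):

```python
import numpy as np

α, ρ1, ρ2, σ = 10.0, 0.9, 0.0, 1.0
A = np.array([[1., 0., 0.],
              [α,  ρ1, ρ2],
              [0., 1., 0.]])
C = np.array([[0.], [σ], [0.]])
U = np.array([[0., 1., 0.]])

y_t, y_tm1, w = 5.0, 4.0, 0.7            # arbitrary current income, lag, and shock
z = np.array([[1.], [y_t], [y_tm1]])

z_next = A @ z + C * w                   # state-space transition
y_next_ss = (U @ z_next).item()          # income implied by the state space
y_next_ar2 = α + ρ1 * y_t + ρ2 * y_tm1 + σ * w

print(np.isclose(y_next_ss, y_next_ar2))
```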
Previously we solved the permanent income model by solving a system of linear expectational difference
equations subject to two boundary conditions
Here we solve the same model using LQ methods based on dynamic programming
After confirming that answers produced by the two methods agree, we apply QuantEcon's LinearStateSpace
class to illustrate features of the model
Why solve a model in two distinct ways?
Because by doing so we gather insights about the structure of the model
Our earlier approach based on solving a system of expectational difference equations brought to the fore the
role of the consumer's expectations about future nonfinancial income
On the other hand, formulating the model in terms of an LQ dynamic programming problem reminds us that
• finding the state (of a dynamic programming problem) is an art, and
• iterations on a Bellman equation implicitly jointly solve both a forecasting problem and a control
problem
The LQ Problem
Recall from our lecture on LQ theory that the optimal linear regulator problem is to choose a decision rule
for ut to minimize
E ∑_{t=0}^∞ β^t {x′t R xt + u′t Q ut},

subject to the law of motion

xt+1 = Ã xt + B̃ ut + C̃ wt+1    (6.123)
where wt+1 is iid with mean vector zero and Ewt wt′ = I
The tildes in Ã, B̃, C̃ are to avoid clashing with notation in (6.122)
The value function for this problem is v(x) = −x′ P x − d, where
• P is the unique positive semidefinite solution of the corresponding matrix Riccati equation
• The scalar d is given by d = β(1 − β)−1 trace(P C̃ C̃ ′ )
The optimal policy is ut = −F xt , where F := β(Q + β B̃ ′ P B̃)−1 B̃ ′ P Ã
Under an optimal decision rule F , the state vector xt evolves according to xt+1 = (Ã − B̃F )xt + C̃wt+1
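To make these formulas concrete, here is a minimal scalar sketch that iterates the discounted Riccati map to a fixed point and then forms F; all numerical values are arbitrary illustrative choices, not the matrices of our model:

```python
# Scalar discounted LQ: minimize E Σ β^t (R x² + Q u²), x' = A x + B u + C w
A, B, Q, R, β = 1.0, 1.0, 1.0, 1.0, 0.95

P = 0.0
for _ in range(10_000):
    # Riccati map: P ↦ R + βA'PA - β²A'PB (Q + βB'PB)^{-1} B'PA
    P_new = R + β * A * P * A - β**2 * A * P * B * (Q + β * B * P * B)**-1 * B * P * A
    if abs(P_new - P) < 1e-12:
        P = P_new
        break
    P = P_new

F = β * (Q + β * B * P * B)**-1 * B * P * A   # optimal policy u = -F x

# The converged P solves its own Riccati equation
residual = R + β * A * P * A - β**2 * A * P * B * (Q + β * B * P * B)**-1 * B * P * A - P
print(abs(residual) < 1e-9, F > 0)
```

In practice one would use a library routine (as the LQ class does below); the point here is only that P and F are computable by fixed-point iteration on the stated map.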
Please confirm for yourself that, with these definitions, the LQ dynamics (6.123) match the dynamics of zt
and bt described above
To map utility into the quadratic form x′t Rxt + u′t Qut we can set
• Q := 1 (remember that we are minimizing) and
• R := a 4 × 4 matrix of zeros
6.13.4 Implementation
import numpy as np
import scipy.linalg as la
import matplotlib.pyplot as plt
import quantecon as qe

# Set parameters
α, β, ρ1, ρ2, σ = 10.0, 0.95, 0.9, 0.0, 1.0
R = 1 / β   # gross interest rate satisfying βR = 1
A = np.array([[1., 0., 0.],
              [α,  ρ1, ρ2],
              [0., 1., 0.]])
C = np.array([[0.], [σ], [0.]])
G = np.array([[0., 1., 0.]])

# These choices will initialize the state vector of an individual at zero debt
# and the ergodic distribution of the endowment process. Use these to create
# the Bewley economy. (mxo and sxo, the stationary mean and covariance of the
# income state, are computed later from a LinearStateSpace instance.)
# mxbewley = mxo
# sxbewley = sxo

# Stack the income state on top of debt to form the LQ state
A12 = np.zeros((3, 1))
ALQ_l = np.hstack([A, A12])
ALQ_r = np.array([[0, -R, 0, R]])
ALQ = np.vstack([ALQ_l, ALQ_r])

RLQ = np.zeros((4, 4))   # no direct state costs
QLQ = np.array([1.0])    # unit cost on the control
BLQ = np.array([0., 0., 0., R]).reshape(4, 1)
CLQ = np.array([0., σ, 0., 0.]).reshape(4, 1)
β_LQ = β

print(f"A = \n {ALQ}")
print(f"B = \n {BLQ}")
print(f"R = \n {RLQ}")
print(f"Q = \n {QLQ}")
A =
[[ 1. 0. 0. 0. ]
[ 10. 0.9 0. 0. ]
[ 0. 1. 0. 0. ]
[ 0. -1.0526 0. 1.0526]]
B =
[[ 0. ]
[ 0. ]
[ 0. ]
[ 1.0526]]
R =
[[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]]
Q =
[ 1.]
We'll save the implied optimal policy function and soon compare it with what we get by employing an
alternative solution method
In our first lecture on the infinite horizon permanent income problem we used a different solution method
The method was based around
• deducing the Euler equations that are the first-order conditions with respect to consumption and savings
• using the budget constraints and boundary condition to complete a system of expectational linear
difference equations
• solving those equations to obtain the solution
Expressed in state space notation, the solution took the form

ct = (1 − β)[U(I − βA)^{-1} zt − bt]
bt+1 = bt + U(I − βA)^{-1}(A − I) zt
# Use the above formulas to create the optimal policies for b_{t+1} and c_t
b_pol = G @ la.inv(np.eye(3, 3) - β * A) @ (A - np.eye(3, 3))
c_pol = (1 - β) * G @ la.inv(np.eye(3, 3) - β * A)
# Use the following values to start everyone off at b=0, initial incomes zero
µ_0 = np.array([1., 0., 0., 0.])
Σ_0 = np.zeros((4, 4))
A_LSS calculated as we have here should equal ABF calculated above using the LQ model
ABF - A_LSS
array([[ 0. , 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. ],
[-0.0001, -0. , 0. , 0. ]])
[[ 65.5172 0.3448 0. ]]
[[ 65.5172 0.3448 -0. -0.05 ]]
We have verified that the two methods give the same solution
Now let's create instances of the LinearStateSpace class and use it to do some interesting experiments
To do this, we'll use the outcomes from our second method
We generate 25 paths of the exogenous non-financial income process and the associated optimal consumption
and debt paths.
In a first set of graphs, darker lines depict a particular sample path, while the lighter lines describe 24 other
paths
A second graph plots a collection of simulations against the population distribution that we extract from the
LinearStateSpace instance LSS
Comparing sample paths with population distributions at each date t is a useful exercise; see our discussion
of the laws of large numbers
# Simulation/Moment Parameters
moment_generator = LSS.moment_sequence()
for i in range(npaths):
sims = LSS.simulate(T)
bsim[i, :] = sims[0][-1, :]
csim[i, :] = sims[1][1, :]
ysim[i, :] = sims[1][0, :]
# Get T
T = bsim.shape[1]
# Plot debt
ax[1].plot(bsim[0, :], label="b", color="r")
ax[1].plot(bsim.T, alpha=.1, color="r")
ax[1].legend(loc=4)
ax[1].set(xlabel="t", ylabel="debt")
fig.tight_layout()
return fig
# Consumption fan
ax[0].plot(xvals, cons_mean, color="k")
ax[0].plot(csim.T, color="k", alpha=.25)
ax[0].fill_between(xvals, c_perc_95m, c_perc_95p, alpha=.25, color="b")
ax[0].fill_between(xvals, c_perc_90m, c_perc_90p, alpha=.25, color="r")
ax[0].set(title="Consumption/Debt over time",
ylim=(cmean-15, cmean+15), ylabel="consumption")
# Debt fan
ax[1].plot(xvals, debt_mean, color="k")
ax[1].plot(bsim.T, color="k", alpha=.25)
ax[1].fill_between(xvals, d_perc_95m, d_perc_95p, alpha=.25, color="b")
ax[1].fill_between(xvals, d_perc_90m, d_perc_90p, alpha=.25, color="r")
ax[1].set(xlabel="t", ylabel="debt")
fig.tight_layout()
return fig
Now let's create figures with initial conditions of zero for y0 and b0
plt.show()
(1 − β)bt + ct = (1 − β)Et ∑_{j=0}^∞ β^j yt+j    (6.124)

So at time 0 we have

c0 = (1 − β)E0 ∑_{t=0}^∞ β^t yt
This tells us that consumption starts at the income that would be paid by an annuity whose value equals the
expected discounted value of nonfinancial income at time t = 0
To support that level of consumption, the consumer borrows a lot early and consequently builds up substantial debt
In fact, he or she incurs so much debt that eventually, in the stochastic steady state, he consumes less each
period than his nonfinancial income
He uses the gap between consumption and nonfinancial income mostly to service the interest payments due
on his debt
Thus, when we look at the panel of debt in the accompanying graph, we see that this is a group of ex ante
identical people each of whom starts with zero debt
All of them accumulate debt in anticipation of rising nonfinancial income
They expect their nonfinancial income to rise toward the invariant distribution of income, a consequence of
our having started them at y−1 = y−2 = 0
Cointegration residual
The following figure plots realizations of the left side of (6.124), which, as discussed in our last lecture, is
called the cointegrating residual
As mentioned above, the right side can be thought of as an annuity payment on the expected present value
of future income Et ∑_{j=0}^∞ β^j yt+j
Early along a realization, ct is approximately constant while (1 − β)bt and (1 − β)Et ∑_{j=0}^∞ β^j yt+j both rise
markedly as the household's present value of income and borrowing rise pretty much together
This example illustrates the following point: the definition of cointegration implies that the cointegrating
residual is asymptotically covariance stationary, not covariance stationary
The cointegrating residual for the specification with zero income and zero debt initially has a notable
transient component that dominates its behavior early in the sample.
By altering initial conditions, we shall remove this transient in our second example to be presented below
return fig
cointegration_figure(bsim0, csim0)
plt.show()
When we set y−1 = y−2 = 0 and b0 = 0 in the preceding exercise, we make debt head north early in the
sample
Average debt in the cross-section rises and approaches an asymptote
We can regard these as outcomes of a small open economy that borrows from abroad at the fixed gross
interest rate R = r + 1 in anticipation of rising incomes
So with the economic primitives set as above, the economy converges to a steady state in which there is an
excess aggregate supply of risk-free loans at a gross interest rate of R
This excess supply is filled by foreign lenders willing to make those loans
We can use virtually the same code to rig a poor man's Bewley [Bew86] model in the following way
• as before, we start everyone at b0 = 0
• but instead of starting everyone at y−1 = y−2 = 0, we draw [y−1  y−2]′ from the invariant distribution
of the {yt} process
plt.show()
cointegration_figure(bsimb, csimb)
plt.show()
6.14. Consumption and Tax Smoothing with Complete and Incomplete Markets 697
QuantEcon.lectures-python3 PDF, Release 2018-Sep-29
6.14.1 Overview
We'll spend most of this lecture studying the finite-state Markov specification, but will briefly treat the linear
state space specification before concluding
This lecture can be viewed as a followup to Optimal Savings II: LQ Techniques and a warm up for a model
of tax smoothing described in Optimal Taxation with State-Contingent Debt
Linear-quadratic versions of the Lucas-Stokey tax-smoothing model are described in Optimal Taxation in
an LQ Economy
The key difference between those lectures and this one is
• Here the decision maker takes all prices as exogenous, meaning that his decisions do not affect them
• In Optimal Taxation in an LQ Economy and Optimal Taxation with State-Contingent Debt, the decision
maker – the government in the case of these lectures – recognizes that his decisions affect prices
So these later lectures are partly about how the government should manipulate prices of government debt
6.14.2 Background
In particular, the state of the world is given by st, which follows a Markov chain with transition probability
matrix

Pij = P{st+1 = s̄j | st = s̄i}

As before, the consumer's preferences are ordered by

E [ ∑_{t=0}^∞ β^t u(ct) ]   where u(ct) = −(ct − γ)² and 0 < β < 1    (6.125)
We can regard these as Barro [Bar79] tax-smoothing models if we set ct = Tt and Gt = yt , where Tt is
total tax collections and {Gt } is an exogenous government expenditures process
Market Structure
The two models differ in how effectively the market structure allows the consumer to transfer resources
across time and Markov states, there being more transfer opportunities in the complete markets setting than
in the incomplete markets setting
Watch how these differences in opportunities affect
• how smooth consumption is across time and Markov states
• how the consumer chooses to make his levels of indebtedness behave over time and across Markov
states
where bt is the consumer's one-period debt that falls due at time t and bt+1(s̄j | st) are the consumer's time t
sales of the time t + 1 consumption good in Markov state s̄j, a source of time t revenues
An analogue of Hall's assumption that the one-period risk-free gross interest rate is β^{-1} is

q(s̄j | s̄i) = βPij    (6.126)

To understand this, observe that in state s̄i it costs ∑_j q(s̄j | s̄i) = ∑_j βPij = β to purchase one unit of
consumption next period for sure, so the implied risk-free gross interest rate is β^{-1}
This confirms that (6.126) is a natural analogue of Hall's assumption about the risk-free one-period interest
rate
First-order necessary conditions for maximizing the consumer's expected utility are

β (u′(ct+1) / u′(ct)) P{st+1 | st} = q(st+1 | st)

or, under our assumption (6.126) on Arrow security prices,

ct+1 = ct    (6.127)
Thus, our consumer sets ct = c̄ for all t ≥ 0 for some value c̄ that it is our job now to determine
Guess: We'll make the plausible guess that

bt+1(s̄j | st = s̄i) = b(s̄j),   i = 1, 2,  j = 1, 2    (6.128)

so that the amount borrowed today turns out to depend only on tomorrow's Markov state. (Why is this a
plausible guess?)
To determine c̄, we shall pursue the implications of the consumer's budget constraints in each Markov state
today and our guess (6.128) about the consumer's debt level choices
For t ≥ 1, these imply

b(s̄1) + c̄ = y(s̄1) + β[P11 b(s̄1) + P12 b(s̄2)]
b(s̄2) + c̄ = y(s̄2) + β[P21 b(s̄1) + P22 b(s̄2)]    (6.129)

or

[ b(s̄1) ]   [ c̄ ]   [ y(s̄1) ]     [ P11  P12 ] [ b(s̄1) ]
[ b(s̄2) ] + [ c̄ ] = [ y(s̄2) ] + β [ P21  P22 ] [ b(s̄2) ]
If we substitute (6.130) into the first equation of (6.129) and rearrange, we discover that
b(s̄1 ) = b0 (6.131)
We can then use the second equation of (6.129) to deduce the restriction
y(s̄1 ) − y(s̄2 ) + [q(s̄1 | s̄1 ) − q(s̄1 | s̄2 ) − 1]b0 + [q(s̄2 | s̄1 ) + 1 − q(s̄2 | s̄2 )]b(s̄2 ) = 0, (6.132)
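Equivalently, with b(s̄1) = b0 the two state-by-state budget constraints in (6.129) form a 2 × 2 linear system in (c̄, b(s̄2)). Here is a sketch using the default parameter values of the ConsumptionProblem class defined below:

```python
import numpy as np

β, b0 = 0.96, 3.0                         # defaults from ConsumptionProblem below
y = np.array([2.0, 1.5])
P = np.array([[0.8, 0.2],
              [0.4, 0.6]])

# Rearranged constraints:
#   c̄ - βP12 b2 = y1 - b0 + βP11 b0
#   c̄ + (1 - βP22) b2 = y2 + βP21 b0
M = np.array([[1.0, -β * P[0, 1]],
              [1.0, 1 - β * P[1, 1]]])
rhs = np.array([y[0] - b0 + β * P[0, 0] * b0,
                y[1] + β * P[1, 0] * b0])
c_bar, b2 = np.linalg.solve(M, rhs)
print(c_bar, b2)
```

Both budget constraints hold exactly at the solution, which is what the consumption_complete() function below computes via (6.132).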
Key outcomes
The preceding calculations indicate that in the complete markets version of our model, we obtain the fol-
lowing striking results:
• The consumer chooses to make consumption perfectly constant across time and Markov states
We computed the constant level of consumption c̄ and indicated how that level depends on the underlying
specifications of preferences, Arrow securities prices, the stochastic process of exogenous nonfinancial
income, and the initial debt level b0
• The consumers debt neither accumulates, nor decumulates, nor drifts. Instead the debt level each
period is an exact function of the Markov state, so in the two-state Markov case, it switches between
two values
• We have verified guess (6.128)
We computed how one of those debt levels depends entirely on initial debt – it equals it – and how the other
value depends on virtually all remaining parameters of the model
Code
Here's some code that, among other things, contains a function called consumption_complete()
This function computes b(s̄1 ), b(s̄2 ), c̄ as outcomes given a set of parameters, under the assumption of com-
plete markets
import numpy as np
import quantecon as qe
import scipy.linalg as la
class ConsumptionProblem:
"""
The data for a consumption problem, including some default values.
"""
def __init__(self,
β=.96,
y=[2, 1.5],
b0=3,
P=np.asarray([[.8, .2],
[.4, .6]])):
"""
Parameters
----------
β : discount factor
P : 2x2 transition matrix
y : list containing the two income levels
b0 : debt in period 0 (= state_1 debt level)
"""
self.β = β
self.y = y
self.b0 = b0
self.P = P
def consumption_complete(cp):
    """
    Computes endogenous values for the complete market case.

    Parameters
    ----------
    cp : instance of ConsumptionProblem

    Returns
    -------
    c_bar : constant consumption
    b1, b2 : debt levels in state_1 and state_2
    """
    β, P, y, b0 = cp.β, cp.P, cp.y, cp.b0  # Unpack
    y1, y2 = y
    b1 = b0                                # b1 is known to equal b0
    Q = β * P                              # assumed price system
    # Solve (6.132) for b2, then the state-1 budget constraint for c_bar
    b2 = (y2 - y1 - (Q[0, 0] - Q[1, 0] - 1) * b1) / (Q[0, 1] + 1 - Q[1, 1])
    c_bar = y1 - b0 + Q[0, :] @ np.asarray([b1, b2])
    return c_bar, b1, b2

def consumption_incomplete(cp, N_simul=150):
    """
    Computes endogenous values for the incomplete market case.

    Parameters
    ----------
    cp : instance of ConsumptionProblem
    N_simul : int
    """
    β, P, y, b0 = cp.β, cp.P, cp.y, cp.b0  # Unpack
    mc = qe.MarkovChain(P)                 # simulate the Markov state with quantecon
    # Useful variables
    y = np.asarray(y).reshape(2, 1)
    v = np.linalg.inv(np.eye(2) - β * P) @ y   # expected discounted income by state
    s_path = mc.simulate(N_simul, init=0)
    b_path, c_path = np.ones(N_simul + 1), np.ones(N_simul)
    b_path[0] = b0
    # Decision rules (6.137) for consumption and (6.136) for debt
    for i, s in enumerate(s_path):
        c_path[i] = (1 - β) * (v[s, 0] - b_path[i])
        b_path[i + 1] = b_path[i] + ((1 - β) * v[s, 0] - y[s, 0]) / β
    return c_path, b_path[:-1], y[s_path], s_path
cp = ConsumptionProblem()
c_bar, b1, b2 = consumption_complete(cp)
debt_complete = np.asarray([b1, b2])
np.isclose(c_bar + b2 - cp.y[1] - (cp.β * cp.P)[1, :] @ debt_complete, 0)
True
Below, we'll take the outcomes produced by this code – in particular the implied consumption and debt paths
– and compare them with outcomes from an incomplete markets model in the spirit of Hall [Hal78] and
Barro [Bar79] (and also, for those who love history, Gallatin (1807) [Gal37])
This is a version of the original models of Hall (1978) and Barro (1979) in which the decision maker's
ability to substitute intertemporally is constrained by his ability to buy or sell only one security, a risk-free
one-period bond bearing a constant gross interest rate that equals β^{-1}
Given an initial debt b0 at time 0, the consumer faces a sequence of budget constraints
ct + bt = yt + βbt+1 , t≥0
where β is the time t price of a risk-free claim on one unit of consumption at time t + 1
First-order conditions for the consumer's problem are

∑_j u′(ct+1,j) Pij = u′(ct,i)

For our quadratic utility specification, this implies

∑_j ct+1,j Pij = ct,i    (6.133)

which asserts that consumption is a martingale
bt = Et ∑_{j=0}^∞ β^j yt+j − (1 − β)^{-1} ct    (6.134)
and

ct = (1 − β) [ Et ∑_{j=0}^∞ β^j yt+j − bt ]    (6.135)

Equation (6.135) expresses ct as a net interest rate factor 1 − β times the sum of the expected present value
of nonfinancial income Et ∑_{j=0}^∞ β^j yt+j and financial wealth −bt
Substituting (6.135) into the one-period budget constraint and rearranging leads to
bt+1 − bt = β^{-1} [ (1 − β)Et ∑_{j=0}^∞ β^j yt+j − yt ]    (6.136)
Now let's do a useful calculation that will yield a convenient expression for the key term Et ∑_{j=0}^∞ β^j yt+j in
our finite Markov chain setting
Define

vt := Et ∑_{j=0}^∞ β^j yt+j
In our finite Markov chain setting, vt = v(1) when st = s̄1 and vt = v(2) when st = s̄2
Therefore, we can write
v(1) = y(1) + βP11 v(1) + βP12 v(2)
v(2) = y(2) + βP21 v(1) + βP22 v(2)
or
⃗v = ⃗y + βP⃗v
where ⃗v = [v(1)  v(2)]′ and ⃗y = [y(1)  y(2)]′
We can also write the last expression as
⃗v = (I − βP )−1 ⃗y
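This formula is easy to check numerically; a sketch using the default β, y, and P of the ConsumptionProblem class above:

```python
import numpy as np

β = 0.96
y = np.array([[2.0], [1.5]])              # the two income levels
P = np.array([[0.8, 0.2],
              [0.4, 0.6]])

v = np.linalg.inv(np.eye(2) - β * P) @ y  # expected discounted income by state

print(np.allclose(v, y + β * P @ v))      # v solves v = y + βPv
```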
In our finite Markov chain setting, from expression (6.135), consumption at date t when debt is bt and the
Markov state today is st = i is evidently
c(bt, i) = (1 − β) ( [(I − βP)^{-1}⃗y]i − bt )    (6.137)
Summary of Outcomes
In contrast to outcomes in the complete markets model, in the incomplete markets model
• consumption drifts over time as a random walk; the level of consumption at time t depends on the
level of debt that the consumer brings into the period as well as the expected discounted present value
of nonfinancial income at t
• the consumer's debt drifts upward over time in response to low realizations of nonfinancial income and
drifts downward over time in response to high realizations of nonfinancial income
• the drift over time in the consumer's debt and the dependence of current consumption on today's debt
level account for the drift over time in consumption
The code above also contains a function called consumption_incomplete() that uses (6.137) and (6.138) to
• simulate paths of yt, ct, bt+1
• plot these against values of c̄, b(s1), b(s2) found in a corresponding complete markets economy
Let's try this, using the same parameters in both complete and incomplete markets economies
np.random.seed(1)
N_simul = 150
cp = ConsumptionProblem()

c_bar, b1, b2 = consumption_complete(cp)
debt_complete = np.asarray([b1, b2])
c_path, debt_path, y_path, s_path = consumption_incomplete(cp, N_simul)

fig, ax = plt.subplots(1, 2, figsize=(15, 5))

ax[0].set_title('Consumption paths')
ax[0].plot(np.arange(N_simul), c_path, label='incomplete market')
ax[0].plot(np.arange(N_simul), c_bar * np.ones(N_simul), label='complete market')
ax[0].plot(np.arange(N_simul), y_path, label='income', alpha=.6, ls='--')
ax[0].legend()
ax[0].set_xlabel('Periods')
ax[1].set_title('Debt paths')
ax[1].plot(np.arange(N_simul), debt_path, label='incomplete market')
ax[1].plot(np.arange(N_simul), debt_complete[s_path], label='complete market')
ax[1].plot(np.arange(N_simul), y_path, label='income', alpha=.6, ls='--')
ax[1].legend()
ax[1].axhline(0, color='k', ls='--')
ax[1].set_xlabel('Periods')
plt.show()
In the graph on the left, for the same sample path of nonfinancial income yt, notice that
• consumption is constant when there are complete markets, but it takes a random walk in the incomplete
markets version of the model
• the consumer's debt oscillates between two values that are functions of the Markov state in the complete
markets model, while the consumer's debt drifts in a unit root fashion in the incomplete markets
economy
We can simply relabel variables to acquire tax-smoothing interpretations of our two models
fig, ax = plt.subplots(1, 2, figsize=(15, 5))
ax[0].legend()
ax[0].set_xlabel('Periods')
ax[0].set_ylim([1.4, 2.1])
plt.show()
where
Qij = βPij
is the price of one unit of output next period in state j when today's Markov state is i, and bi is the government's
level of assets in Markov state i
That is, bi is the amount of the one-period loans owned by the government that fall due at time t
As above, well assume that the initial Markov state is state 1
In addition, to simplify our example, well set the governments initial asset level to 0, so that b1 = 0
Here's our code to compute a quantitative example with zero debt in peace time:
# Parameters
β = .96
y = [1, 2]
b0 = 0
P = np.asarray([[.8, .2],
[.4, .6]])
cp = ConsumptionProblem(β, y, b0, P)
Q = β * P
N_simul = 150

c_bar, b1, b2 = consumption_complete(cp)
debt_complete = np.asarray([b1, b2])
print(f"P \n {P}")
print(f"Q \n {Q}")
print(f"Govt expenditures in peace and war = {y}")
print(f"Constant tax collections = {c_bar}")
print(f"Govt assets in two states = {debt_complete}")
msg = """
Now let's check the government's budget constraint in peace and war.
Our assumptions imply that the government always purchases 0 units of the
Arrow peace security.
"""
print(msg)
AS1 = Q[0, 1] * b2
print(f"Spending on Arrow war security in peace = {AS1}")
AS2 = Q[1, 1] * b2
print(f"Spending on Arrow war security in war = {AS2}")
print("\n")
print("Government tax collections plus asset levels in peace and war")
TB1 = c_bar + b1
print(f"T+b in peace = {TB1}")
TB2 = c_bar + b2
print(f"T+b in war = {TB2}")
print("\n")
print("Total government spending in peace and war")
G1 = y[0] + AS1
G2 = y[1] + AS2
print(f"Peace = {G1}")
print(f"War = {G2}")
print("\n")
print("Let's see ex post and ex ante returns on Arrow securities")
Π = np.reciprocal(Q)
exret = Π
print(f"Ex post returns to purchase of Arrow securities = {exret}")
exant = Π * P
print(f"Ex ante returns to purchase of Arrow securities {exant}")
P
[[0.8 0.2]
[0.4 0.6]]
Q
[[0.768 0.192]
[0.384 0.576]]
Govt expenditures in peace and war = [1, 2]
Constant tax collections = 1.3116883116883118
Govt assets in two states = [0. 1.62337662]
Now let's check the government's budget constraint in peace and war.
Our assumptions imply that the government always purchases 0 units of the
Arrow peace security.
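A quick consequence of the pricing rule Q = βP is that the ex ante return on every Arrow security collapses to the risk-free gross rate 1/β. A minimal check, using the same parameter values as above:

```python
import numpy as np

# With Arrow prices Q = βP, the ex ante return P[i, j] / Q[i, j] on
# every security equals the risk-free gross rate 1/β
β = .96
P = np.array([[.8, .2],
              [.4, .6]])
Q = β * P

ex_post = np.reciprocal(Q)   # return 1/Q[i, j] if state j is realized
ex_ante = ex_post * P        # probability-weighted (ex ante) returns

print(np.allclose(ex_ante, 1 / β))  # True
```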
Explanation
In this example, the government always purchases 0 units of the Arrow security that pays off in peace time
(Markov state 1)
But it purchases a positive amount of the security that pays off in war time (Markov state 2)
We recommend plugging the quantities computed above into the government budget constraints in the two
Markov states and staring at them
This is an example in which the government purchases insurance against the possibility that war breaks out
or continues
• the insurance does not pay off so long as peace continues
Also, start the system in Markov state 2 (war) with initial government assets −10, so that the government
starts the war in debt and b2 = −10
Now we'll use a setting like that in the first lecture on the permanent income model
In that model, there were
• incomplete markets: the consumer could trade only a single risk-free one-period bond bearing gross
one-period risk-free interest rate equal to β −1
• the consumer's exogenous nonfinancial income was governed by a linear state space model driven by
Gaussian shocks, the kind of model studied in an earlier lecture about linear state space models
We'll write down a complete markets counterpart of that model
So now we'll suppose that nonfinancial income is governed by the state space system
where ϕ(· | µ, Σ) is a multivariate Gaussian distribution with mean vector µ and covariance matrix Σ
Let b(xt+1 ) be a vector of state-contingent debt due at t + 1 as a function of the t + 1 state xt+1 .
Using the pricing function assumed in (6.139), the value at t of b(xt+1 ) is
β ∫ b(x_{t+1}) ϕ(x_{t+1} | Ax_t, CC′) dx_{t+1} = β E_t b_{t+1}
In the complete markets setting, the consumer faces a sequence of budget constraints
ct + bt = yt + βEt bt+1 , t ≥ 0
We assume as before that the consumer cares about the expected value of
∑_{t=0}^∞ β^t u(c_t),   0 < β < 1
In the incomplete markets version of the model, we assumed that u(ct ) = −(ct − γ)2 , so that the above
utility functional became
− ∑_{t=0}^∞ β^t (c_t − γ)^2,   0 < β < 1
But in the complete markets version, we can assume a more general form of utility function that satisfies
u′ > 0 and u′′ < 0
The first-order condition for the consumer's problem with complete markets and our assumption about Arrow
securities prices is
or
b_t = S_y (I − βA)^{−1} x_t − (1/(1 − β)) c̄    (6.140)
where the value of c̄ satisfies
b̄_0 = S_y (I − βA)^{−1} x_0 − (1/(1 − β)) c̄    (6.141)
where b̄_0 is an initial level of the consumer's debt, specified as a parameter of the problem
Thus, in the complete markets version of the consumption-smoothing model, c_t = c̄, ∀t ≥ 0 is determined
by (6.141) and the consumer's debt is a fixed function of the state x_t described by (6.140)
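To make the mapping from (6.140) and (6.141) concrete, here is a minimal sketch with hypothetical values for A, S_y and x_0 (a constant plus an AR(1) income component): c̄ is backed out from (6.141), and the debt function then follows from (6.140).

```python
import numpy as np

β = 0.95
A = np.array([[1.0, 0.0],
              [0.0, 0.9]])     # state: a constant and an AR(1) component
S_y = np.array([[1.0, 1.0]])   # nonfinancial income y_t = S_y x_t
x0 = np.array([[1.0], [0.5]])
b0_bar = 0.0                   # initial debt b̄_0

# discounted income term S_y (I - βA)^{-1} x_0
m = (S_y @ np.linalg.inv(np.eye(2) - β * A) @ x0).item()

# c̄ from (6.141): b̄_0 = S_y (I - βA)^{-1} x_0 - c̄ / (1 - β)
c_bar = (1 - β) * (m - b0_bar)

# state-contingent debt as a function of the state, from (6.140)
def b(x):
    return (S_y @ np.linalg.inv(np.eye(2) - β * A) @ x).item() - c_bar / (1 - β)

print(c_bar, b(x0))  # b(x0) recovers b̄_0 = 0 by construction
```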
Here's an example that shows how in this setting the availability of insurance against fluctuating nonfinancial
income allows the consumer completely to smooth consumption across time and across states of the world.
# Debt
x_hist, y_hist = lss.simulate(T)
b_hist = np.squeeze(S_y @ rm @ x_hist - cbar / (1 - β))
# Define parameters
N_simul = 150
α, ρ1, ρ2 = 10.0, 0.9, 0.0
σ = 1.0
# Consumption plots
ax[0].set_title('Cons and income', fontsize=17)
ax[0].plot(np.arange(N_simul), c_hist_com, label='consumption')
ax[0].plot(np.arange(N_simul), y_hist_com, label='income', alpha=.6, linestyle='--')
ax[0].legend()
ax[0].set_xlabel('Periods')
ax[0].set_ylim([-5.0, 110])
# Debt plots
ax[1].set_title('Debt and income')
ax[1].plot(np.arange(N_simul), b_hist_com, label='debt')
ax[1].plot(np.arange(N_simul), y_hist_com, label='Income', alpha=.6, linestyle='--')
ax[1].legend()
ax[1].axhline(0, color='k')
ax[1].set_xlabel('Periods')
plt.show()
Interpretation of Graph
The incomplete markets version of the model with nonfinancial income being governed by a linear state
space system is described in the first lecture on the permanent income model and the followup lecture on the
permanent income model
In that version, consumption follows a random walk and the consumer's debt follows a process with a unit
root
We leave it to the reader to apply the usual isomorphism to deduce the corresponding implications for a
tax-smoothing model like Barro's [Bar79]
In optimal taxation in an LQ economy and recursive optimal taxation, we study complete-markets models
in which the government recognizes that it can manipulate Arrow securities prices
In optimal taxation with incomplete markets, we study an incomplete-markets model in which the government manipulates asset prices
6.15 The Income Fluctuation Problem
Contents
6.15.1 Overview
Next we study an optimal savings problem for an infinitely lived consumer, the common ancestor described
in [LS18], section 1.3
This is an essential sub-problem for many representative macroeconomic models
• [Aiy94]
• [Hug93]
• etc.
It is related to the decision problem in the stochastic optimal growth model and yet differs in important ways
For example, the choice problem for the agent includes an additive income term that leads to an occasionally
binding constraint
Our presentation of the model will be relatively brief
• For further details on economic intuition, implication and models, see [LS18]
• Proofs of all mathematical results stated below can be found in this paper
To solve the model we will use Euler equation based time iteration, similar to this lecture
This method turns out to be
• Globally convergent under mild assumptions, even when utility is unbounded (both above and below)
• More efficient numerically than value function iteration
References
Other useful references include [Dea91], [DH10], [Kuh13], [Rab02], [Rei09] and [SE77]
Let's write down the model and then discuss how to solve it
Set Up
Consider a household that chooses a state-contingent consumption plan {ct }t≥0 to maximize
E ∑_{t=0}^∞ β^t u(c_t)
subject to
c_t + a_{t+1} ≤ R a_t + z_t,   c_t ≥ 0,   a_t ≥ −b,   t = 0, 1, . . .    (6.142)
Here
• β ∈ (0, 1) is the discount factor
• at is asset holdings at time t, with ad-hoc borrowing constraint at ≥ −b
• ct is consumption
• zt is non-capital income (wages, unemployment compensation, etc.)
• R := 1 + r, where r > 0 is the interest rate on savings
Non-capital income {zt } is assumed to be a Markov process taking values in Z ⊂ (0, ∞) with stochastic
kernel Π
This means that Π(z, B) is the probability that zt+1 ∈ B given zt = z
The expectation of f (zt+1 ) given zt = z is written as
∫ f(ź) Π(z, dź)
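When Z is a finite set, Π is just a Markov matrix and the integral above reduces to a weighted sum over states. A small sketch, with hypothetical values for Z and Π:

```python
import numpy as np

# Finite-state sketch of the stochastic kernel Π: row i of a Markov
# matrix is the conditional distribution of z_{t+1} given z_t = z_i
z_vals = np.array([0.5, 1.0])            # hypothetical state space Z
Π = np.array([[0.6, 0.4],
              [0.05, 0.95]])

f = lambda z: np.log(z)                  # any function of next period's state

# E[f(z_{t+1}) | z_t = z_i] = Σ_j Π[i, j] f(z_j), the discrete analog
# of ∫ f(ź) Π(z, dź)
expectations = Π @ f(z_vals)
print(expectations)
```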
We assume throughout that
1. βR < 1
2. u is smooth, strictly increasing and strictly concave with lim_{c→0} u′(c) = ∞ and lim_{c→∞} u′(c) = 0
The asset space is [−b, ∞) and the state is the pair (a, z) ∈ S := [−b, ∞) × Z
A feasible consumption path from (a, z) ∈ S is a consumption sequence {ct } such that {ct } and its induced
asset path {at } satisfy
1. (a0 , z0 ) = (a, z)
2. the feasibility constraints in (6.142), and
3. measurability of ct w.r.t. the filtration generated by {z1 , . . . , zt }
The meaning of the third point is just that consumption at time t can only be a function of outcomes that
have already been observed
V(a, z) := sup E { ∑_{t=0}^∞ β^t u(c_t) }    (6.143)
where the supremum is over all feasible consumption paths from (a, z).
An optimal consumption path from (a, z) is a feasible consumption path from (a, z) that attains the supremum in (6.143)
To pin down such paths we can use a version of the Euler equation, which in the present setting is
u′(c_t) ≥ βR E_t[u′(c_{t+1})]    (6.144)
and
u′(c_t) = βR E_t[u′(c_{t+1})]  whenever  c_t < R a_t + z_t + b    (6.145)
In essence, this says that the natural arbitrage relation u′ (ct ) = βR Et [u′ (ct+1 )] holds when the choice of
current consumption is interior
Interiority means that ct is strictly less than its upper bound Rat + zt + b
(The lower boundary case ct = 0 never arises at the optimum because u′ (0) = ∞)
When ct does hit the upper bound Rat + zt + b, the strict inequality u′ (ct ) > βR Et [u′ (ct+1 )] can occur
because ct cannot increase sufficiently to attain equality
With some thought and effort, one can show that (6.144) and (6.145) are equivalent to
u′(c_t) = max{ βR E_t[u′(c_{t+1})],  u′(R a_t + z_t + b) }    (6.146)
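A one-step numerical illustration of (6.146), with u(c) = log c so that u′(c) = 1/c, and hypothetical values for the state and for the conditional distribution of next period's consumption:

```python
import numpy as np

# One-step check of (6.146) at a single state, a minimal sketch
R, β, b = 1.01, 0.96, 0.0
a, z = 1.0, 0.5
c_next = np.array([0.6, 0.9])   # hypothetical next-period consumption values
probs = np.array([0.6, 0.4])    # conditional distribution over those values

du = lambda c: 1 / c            # u(c) = log(c), so u'(c) = 1/c

# right-hand side of (6.146)
rhs = max(β * R * probs @ du(c_next), du(R * a + z + b))

# inverting u' recovers the consumption implied by the Euler equation;
# here the expectation branch of the max binds, so c is interior
c = 1 / rhs
print(c < R * a + z + b)  # True: c is strictly below its upper bound
```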
Optimality Results
Moreover, there exists an optimal consumption function c∗ : S → [0, ∞) such that the path from (a, z)
generated by
(a0 , z0 ) = (a, z), zt+1 ∼ Π(zt , dy), ct = c∗ (at , zt ) and at+1 = Rat + zt − ct
satisfies both (6.146) and (6.147), and hence is the unique optimal path from (a, z)
In summary, to solve the optimization problem, we need to compute c∗
6.15.3 Computation
Time Iteration
We can rewrite (6.146) to make it a statement about functions rather than random variables
In particular, consider the functional equation
u′ ∘ c(a, z) = max{ γ ∫ u′ ∘ c(Ra + z − c(a, z), ź) Π(z, dź),  u′(Ra + z + b) }    (6.148)

u′(t) = max{ γ ∫ u′ ∘ c(Ra + z − t, ź) Π(z, dź),  u′(Ra + z + b) }    (6.149)
where γ := βR
An alternative to time iteration is value function iteration, based on the Bellman operator
T v(a, z) = max_{0 ≤ c ≤ Ra+z+b} { u(c) + β ∫ v(Ra + z − c, ź) Π(z, dź) }    (6.151)
We have to be careful with VFI (i.e., iterating with T) in this setting because u is not assumed to be bounded
• In fact u is typically unbounded both above and below, e.g., u(c) = log c
• In which case, the standard DP theory does not apply
• T^n v is not guaranteed to converge to the value function for arbitrary continuous bounded v
Nonetheless, we can always try the popular strategy of iterate and hope
We can then check the outcome by comparing with that produced by TI
The latter is known to converge, as described above
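The iterate and hope strategy is just successive approximation with a convergence check on the distance between iterates. A generic sketch, illustrated on a simple contraction rather than the Bellman operator itself:

```python
import numpy as np

# A generic "iterate and hope" loop: apply an operator T repeatedly
# and monitor the sup-norm distance between successive iterates
def iterate_to_fixed_point(T, v_init, tol=1e-6, max_iter=500):
    v = v_init
    for i in range(max_iter):
        v_new = T(v)
        error = np.max(np.abs(v_new - v))
        v = v_new
        if error < tol:
            break
    return v

# hypothetical contraction of modulus 0.5 with fixed point 2.0
T = lambda v: 0.5 * v + 1.0
v_star = iterate_to_fixed_point(T, np.array([0.0]))
print(v_star)  # close to the fixed point 2.0
```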
Implementation
Here's the code for a class called ConsumerProblem that stores primitives, as well as
• a bellman_operator function, which implements the Bellman operator T specified above
• a coleman_operator function, which implements the Coleman operator K specified above
• an initialize function, which generates suitable initial conditions for iteration
import numpy as np
from scipy.optimize import fminbound, brentq
class ConsumerProblem:
    """
    A class that stores primitives for the income fluctuation problem. The
    income process is assumed to be a finite state Markov chain.

    Parameters
    ----------
    r : scalar(float), optional(default=0.01)
        A strictly positive scalar giving the interest rate
    β : scalar(float), optional(default=0.96)
        The discount factor, must satisfy (1 + r) * β < 1
    Π : array_like(float), optional(default=((0.60, 0.40), (0.05, 0.95)))
        A 2D NumPy array giving the Markov matrix for {z_t}
    z_vals : array_like(float), optional(default=(0.5, 1.0))
        The state space of {z_t}
    b : scalar(float), optional(default=0)
        The borrowing constraint
    grid_max : scalar(float), optional(default=16)
        Max of the grid used to solve the problem
    grid_size : scalar(int), optional(default=50)
        Number of grid points to solve problem, a grid on [-b, grid_max]
    u : callable, optional(default=np.log)
        The utility function
    du : callable, optional(default=lambda x: 1/x)
        The derivative of u

    Attributes
    ----------
    r, β, Π, z_vals, b, u, du : see Parameters
    asset_grid : np.ndarray
        One dimensional grid for assets
    """

    def __init__(self,
                 r=0.01,
                 β=0.96,
                 Π=((0.6, 0.4), (0.05, 0.95)),
                 z_vals=(0.5, 1.0),
                 b=0,
                 grid_max=16,
                 grid_size=50,
                 u=np.log,
                 du=lambda x: 1/x):
        self.u, self.du = u, du
        self.r, self.R = r, 1 + r
        self.β, self.b = β, b
        self.Π, self.z_vals = np.array(Π), tuple(z_vals)
        self.asset_grid = np.linspace(-b, grid_max, grid_size)
def bellman_operator(V, cp, return_policy=False):
    """
    The approximate Bellman operator, which computes and returns the
    updated value function TV (or the V-greedy policy if requested).

    Parameters
    ----------
    V : array_like(float)
        A NumPy array of dim len(cp.asset_grid) times len(cp.z_vals)
    cp : ConsumerProblem
        An instance of ConsumerProblem that stores primitives
    return_policy : bool, optional(default=False)
        Indicates whether to return the greedy policy given V or the
        updated value function TV. Default is TV.

    Returns
    -------
    array_like(float)
        Returns either the greedy policy given V or the updated value
        function TV.
    """
    # === Simplify names, set up arrays === #
    R, Π, β, u, b = cp.R, cp.Π, cp.β, cp.u, cp.b
    asset_grid, z_vals = cp.asset_grid, cp.z_vals
    new_V = np.empty(V.shape)
    new_c = np.empty(V.shape)
    z_idx = list(range(len(z_vals)))

    # === Linear interpolation of V along the asset grid === #
    vf = lambda a, i_z: np.interp(a, asset_grid, V[:, i_z])

    # === Solve r.h.s. of Bellman equation at each (a, z) === #
    for i_a, a in enumerate(asset_grid):
        for i_z, z in enumerate(z_vals):
            def obj(c):  # objective function to be *minimized*
                y = sum(vf(R * a + z - c, j) * Π[i_z, j] for j in z_idx)
                return - u(c) - β * y
            c_star = fminbound(obj, np.min(z_vals), R * a + z + b)
            new_c[i_a, i_z], new_V[i_a, i_z] = c_star, -obj(c_star)

    if return_policy:
        return new_c
    else:
        return new_V
def coleman_operator(c, cp):
    """
    The approximate Coleman operator.

    Parameters
    ----------
    c : array_like(float)
        A NumPy array of dim len(cp.asset_grid) times len(cp.z_vals)
    cp : ConsumerProblem
        An instance of ConsumerProblem that stores primitives

    Returns
    -------
    array_like(float)
        The updated policy, where updating is by the Coleman
        operator.
    """
    # === simplify names, set up arrays === #
    R, Π, β, du, b = cp.R, cp.Π, cp.β, cp.du, cp.b
    asset_grid, z_vals = cp.asset_grid, cp.z_vals
    z_size = len(z_vals)
    γ = R * β
    vals = np.empty(z_size)

    # === linear interpolation to get consumption function === #
    def cf(a):
        # cf(a) returns the array (c(a, z))_{z in z_vals}, constructed by
        # linear interpolation over the asset grid from the values in c
        for i in range(z_size):
            vals[i] = np.interp(a, asset_grid, c[:, i])
        return vals

    # === solve for each (a, z) to get Kc === #
    Kc = np.empty(c.shape)
    for i_a, a in enumerate(asset_grid):
        for i_z, z in enumerate(z_vals):
            def h(t):
                expectation = np.dot(du(cf(R * a + z - t)), Π[i_z, :])
                return du(t) - max(γ * expectation, du(R * a + z + b))
            Kc[i_a, i_z] = brentq(h, np.min(z_vals), R * a + z + b)
    return Kc
def initialize(cp):
    """
    Creates suitable initial conditions V and c for value function and time
    iteration respectively.

    Parameters
    ----------
    cp : ConsumerProblem
        An instance of ConsumerProblem that stores primitives

    Returns
    -------
    V : array_like(float)
        Initial condition for value function iteration
    c : array_like(float)
        Initial condition for Coleman operator iteration
    """
    # === Simplify names, set up arrays === #
    R, β, u, b = cp.R, cp.β, cp.u, cp.b
    asset_grid, z_vals = cp.asset_grid, cp.z_vals
    shape = len(asset_grid), len(z_vals)
    V, c = np.empty(shape), np.empty(shape)

    # === Populate V and c with feasible initial guesses === #
    for i_a, a in enumerate(asset_grid):
        for i_z, z in enumerate(z_vals):
            c_max = R * a + z + b
            c[i_a, i_z] = c_max
            V[i_a, i_z] = u(c_max) / (1 - β)
    return V, c
Both bellman_operator and coleman_operator use linear interpolation along the asset grid to
approximate the value and consumption functions
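Concretely, the interpolation step is a single call to np.interp, which evaluates the piecewise-linear function through the grid values at any off-grid point (here with a hypothetical coarse grid):

```python
import numpy as np

# np.interp performs the univariate linear interpolation both operators
# rely on: values off the asset grid come from neighboring grid nodes
asset_grid = np.linspace(0, 16, 5)      # a deliberately coarse grid
c_vals = np.sqrt(1 + asset_grid)        # hypothetical consumption values

a = 6.0                                 # a point between grid nodes 4 and 8
c_approx = np.interp(a, asset_grid, c_vals)
print(c_approx)  # midway between c_vals at a=4 and a=8
```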
The following exercises walk you through several applications where policy functions are computed
In exercise 1 you will see that while VFI and TI produce similar results, the latter is much faster
Intuition behind this fact was provided in a previous lecture on time iteration
6.15.4 Exercises
Exercise 1
The first exercise is to replicate the following figure, which compares TI and VFI as solution methods
cp = ConsumerProblem()
v, c = initialize(cp)
Exercise 2
Exercise 3
Now let's consider the long run asset levels held by households
We'll take r = 0.03 and otherwise use default parameters
The following figure is a 45 degree diagram showing the law of motion for assets when consumption is
optimal
m = ConsumerProblem(r=0.03, grid_max=4)
v_init, c_init = initialize(m)
K = lambda c: coleman_operator(c, m)
c = qe.compute_fixed_point(K, c_init, verbose=False)
a = m.asset_grid
R, z_vals = m.R, m.z_vals
a′ = h(a, z) := Ra + z − c∗ (a, z)
• The histogram in the figure used a single time series {at } of length 500,000
• Given the length of this time series, the initial condition (a0 , z0 ) will not matter
• You might find it helpful to use the MarkovChain class from quantecon
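For reference, simulating a long {z_t} path from Π needs only a few lines of NumPy; quantecon's MarkovChain class provides the same functionality via its simulate method:

```python
import numpy as np

# Simulating a long {z_t} path directly from the Markov matrix Π
Π = np.array([[0.6, 0.4],
              [0.05, 0.95]])
z_vals = np.array([0.5, 1.0])

rng = np.random.default_rng(1234)
T = 50_000
states = np.empty(T, dtype=int)
states[0] = 0
for t in range(T - 1):
    # draw next state from row states[t] of Π
    states[t + 1] = rng.choice(2, p=Π[states[t]])
z_path = z_vals[states]

# the sample mean approximates the stationary mean of z; here the
# stationary distribution is (1/9, 8/9), so the mean is about 0.944
print(z_path.mean())
```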
Exercise 4
Following on from exercises 2 and 3, let's look at how savings and aggregate asset holdings vary with the
interest rate
• Note: [LS18] section 18.6 can be consulted for more background on the topic treated in this exercise
For a given parameterization of the model, the mean of the stationary distribution can be interpreted as
aggregate capital in an economy with a unit mass of ex-ante identical households facing idiosyncratic shocks
Lets look at how this measure of aggregate capital varies with the interest rate and borrowing constraint
The next figure plots aggregate capital against the interest rate for b in (1, 3)
The horizontal axis is aggregate capital computed as the mean of the stationary distribution
Exercise 4 is to replicate the figure, making use of code from previous exercises
Try to explain why the measure of aggregate capital is equal to −b when r = 0 for both cases shown here
6.15.5 Solutions
Exercise 1
cp = ConsumerProblem()
K = 80

# Bellman iteration
V, c = initialize(cp)
print("Starting value function iteration")
for i in range(K):
    # print(f"Current iterate = {i}")
    V = bellman_operator(V, cp)
c1 = bellman_operator(V, cp, return_policy=True)

# Policy iteration
print("Starting policy function iteration")
V, c2 = initialize(cp)
for i in range(K):
    # print(f"Current iterate = {i}")
    c2 = coleman_operator(c2, cp)
Exercise 2
ax.set_xlabel('asset level')
ax.set_ylabel('consumption (low income)')
ax.legend(loc='upper left')
plt.show()
Exercise 3
return a
cp = ConsumerProblem(r=0.03, grid_max=4)
a = compute_asset_series(cp)
fig, ax = plt.subplots(figsize=(10, 8))
ax.hist(a, bins=20, alpha=0.5, normed=True)
ax.set_xlabel('assets')
ax.set_xlim(-0.05, 0.75)
plt.show()
Exercise 4
M = 25
r_vals = np.linspace(0, 0.04, M)
fig, ax = plt.subplots(figsize=(10,8))
6.16 Robustness
Contents
• Robustness
– Overview
– The Model
– Constructing More Robust Policies
– Robustness as Outcome of a Two-Person Zero-Sum Game
– The Stochastic Case
– Implementation
– Application
– Appendix
6.16.1 Overview
This lecture modifies a Bellman equation to express a decision maker's doubts about transition dynamics
His specification doubts make the decision maker want a robust decision rule
Robust means insensitive to misspecification of transition dynamics
The decision maker has a single approximating model
He calls it approximating to acknowledge that he doesn't completely trust it
He fears that outcomes will actually be determined by another model that he cannot describe explicitly
All that he knows is that the actual data-generating model is in some (uncountable) set of models that
surrounds his approximating model
He quantifies the discrepancy between his approximating model and the genuine data-generating model by
using a quantity called entropy
(We'll explain what entropy means below)
He wants a decision rule that will work well enough no matter which of those other models actually governs
outcomes
This is what it means for his decision rule to be robust to misspecification of an approximating model
This may sound like too much to ask for, but . . .
. . . a secret weapon is available to design robust decision rules
The secret weapon is max-min control theory
A value-maximizing decision maker enlists the aid of an (imaginary) value-minimizing model chooser to
construct bounds on the value attained by a given decision rule under different models of the transition
dynamics
The original decision maker uses those bounds to construct a decision rule with an assured performance
level, no matter which model actually governs outcomes
Note: In reading this lecture, please don't think that our decision maker is paranoid when he conducts a
worst-case analysis. By designing a rule that works well against a worst-case, his intention is to construct a
rule that will work well across a set of models.
Our robust decision maker wants to know how well a given rule will work when he does not know a single
transition law . . .
. . . he wants to know sets of values that will be attained by a given decision rule F under a set of transition
laws
Ultimately, he wants to design a decision rule F that shapes these sets of values in ways that he prefers
With this in mind, consider the following graph, which relates to a particular decision problem to be ex-
plained below
Inspiring Video
If you want to understand more about why one serious quantitative researcher is interested in this approach,
we recommend Lars Peter Hansen's Nobel lecture
Other References
For simplicity, we present ideas in the context of a class of problems with linear transition laws and quadratic
objective functions
To fit in with our earlier lecture on LQ control, we will treat loss minimization rather than value maximization
To begin, recall the infinite horizon LQ problem, where an agent chooses a sequence of controls {ut } to
minimize
∑_{t=0}^∞ β^t { x_t′ R x_t + u_t′ Q u_t }    (6.152)
subject to the transition law
x_{t+1} = A x_t + B u_t + C w_{t+1},   t ≥ 0    (6.153)
As before,
• xt is n × 1, A is n × n
• ut is k × 1, B is n × k
• wt is j × 1, C is n × j
• R is n × n and Q is k × k
Here xt is the state, ut is the control, and wt is a shock vector.
For now we take {w_t} := {w_t}_{t=1}^∞ to be a deterministic, single fixed sequence
We also allow for model uncertainty on the part of the agent solving this optimization problem
In particular, the agent takes wt = 0 for all t ≥ 0 as a benchmark model, but admits the possibility that this
model might be wrong
As a consequence, she also considers a set of alternative models expressed in terms of sequences {wt } that
are close to the zero sequence
She seeks a policy that will do well enough for a set of alternative models whose members are pinned down
by sequences {wt }
Soon we'll quantify the quality of a model specification in terms of the maximal size of the expression
∑_{t=0}^∞ β^{t+1} w_{t+1}′ w_{t+1}
If our agent takes {wt } as a given deterministic sequence, then, drawing on intuition from earlier lectures
on dynamic programming, we can anticipate Bellman equations such as
The penalty parameter θ controls how much we penalize the maximizing agent for harming the minimizing
agent
By raising θ more and more, we more and more limit the ability of the maximizing agent to distort outcomes
relative to the approximating model
So bigger θ is implicitly associated with smaller distortion sequences {wt }
where
D(P) := P + PC(θI − C′PC)^{−1} C′P
and I is a j × j identity matrix. Substituting this expression for the maximum into (6.154) yields
P = B(D(P ))
The operator B is the standard (i.e., non-robust) LQ Bellman operator, and P = B(P) is the standard matrix
Riccati equation coming from the Bellman equation (see this discussion)
Under some regularity conditions (see [HS08]), the operator B ◦ D has a unique positive definite fixed point,
which we denote below by P̂
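To see the fixed-point machinery at work, here is a scalar (1 × 1) sketch of the operators D and B with hypothetical parameter values; iterating B ∘ D converges quickly to P̂:

```python
import numpy as np

# A scalar (1x1) sketch of D and of the standard LQ Bellman operator B,
# iterated to the fixed point of B ∘ D.  All parameter values here are
# hypothetical.
A, B_mat, C, R, Q, β, θ = 1.0, 1.0, 0.5, 1.0, 1.0, 0.95, 10.0

def D(P):
    # D(P) = P + P C (θ I - C' P C)^{-1} C' P, scalar case
    return P + P * C / (θ - C * P * C) * C * P

def B_op(P):
    # standard LQ Bellman (Riccati) operator, scalar case
    return R + β * A * P * A \
        - β**2 * A * P * B_mat / (Q + β * B_mat * P * B_mat) * B_mat * P * A

P = 1.0
for _ in range(200):
    P = B_op(D(P))
print(P)  # approximates the fixed point P̂ of B ∘ D
```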
A robust policy, indexed by θ, is u = −F̂ x where
F̂ := β(Q + βB′D(P̂)B)^{−1} B′D(P̂)A    (6.158)
We also define
K̂ := (θI − C′P̂C)^{−1} C′P̂(A − BF̂)    (6.159)
The interpretation of K̂ is that wt+1 = K̂xt on the worst-case path of {xt }, in the sense that this vector is
the maximizer of (6.155) evaluated at the fixed rule u = −F̂ x
Note that P̂ , F̂ , K̂ are all determined by the primitives and θ
Note also that if θ is very large, then D is approximately equal to the identity mapping
Hence, when θ is large, P̂ and F̂ are approximately equal to their standard LQ values
Furthermore, when θ is large, K̂ is approximately equal to zero
Conversely, smaller θ is associated with greater fear of model misspecification, and greater concern for
robustness
What we have done above can be interpreted in terms of a two-person zero-sum game in which F̂ , K̂ are
Nash equilibrium objects
Agent 1 is our original agent, who seeks to minimize loss in the LQ program while admitting the possibility
of misspecification
Agent 2 is an imaginary malevolent player
Agent 2's malevolence helps the original agent to compute bounds on his value function across a set of
models
We begin with agent 2's problem
Agent 2's Problem
Agent 2
1. knows a fixed policy F specifying the behavior of agent 1, in the sense that ut = −F xt for all t
2. responds by choosing a shock sequence {wt } from a set of paths sufficiently close to the benchmark
sequence {0, 0, 0, . . .}
A natural way to say sufficiently close to the zero sequence is to restrict the summed inner product
∑_{t=1}^∞ w_t′ w_t to be small
However, to obtain a time-invariant recursive formulation, it turns out to be convenient to restrict a discounted inner product
∑_{t=1}^∞ β^t w_t′ w_t ≤ η    (6.160)
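For a given finite-horizon perturbation path, the discounted measure in (6.160) is straightforward to evaluate; a small sketch with a hypothetical constant path:

```python
import numpy as np

# Discounted entropy β Σ_t β^t w'_{t+1} w_{t+1} of a short hypothetical
# perturbation path, transcribing the measure used above
β = 0.95
w = np.full((3, 2), 0.1)       # three periods of a constant 2-vector shock
ent = β * np.sum(β**np.arange(3) * np.sum(w**2, axis=1))
print(ent)
```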
Now let F be a fixed policy, and let JF (x0 , w) be the present-value cost of that policy given sequence
w := {wt } and initial condition x0 ∈ Rn
Substituting −F xt for ut in (6.152), this value can be written as
J_F(x_0, w) := ∑_{t=0}^∞ β^t x_t′ (R + F′QF) x_t    (6.161)
t=0
where
x_{t+1} = (A − BF) x_t + C w_{t+1}    (6.162)
or, equivalently,
min_w ∑_{t=0}^∞ β^t { −x_t′ (R + F′QF) x_t + βθ w_{t+1}′ w_{t+1} }    (6.163)
subject to (6.162)
What's striking about this optimization problem is that it is once again an LQ discounted dynamic programming problem, with w = {w_t} as the sequence of controls
The expression for the optimal policy can be found by applying the usual LQ formula (see here)
We denote it by K(F, θ), with the interpretation wt+1 = K(F, θ)xt
The remaining step for agent 2's problem is to set θ to enforce the constraint (6.160), which can be done by
choosing θ = θη such that
β ∑_{t=0}^∞ β^t x_t′ K(F, θ_η)′ K(F, θ_η) x_t = η    (6.164)
Here xt is given by (6.162) which in this case becomes xt+1 = (A − BF + CK(F, θ))xt
Define the minimized object on the right side of problem (6.163) as Rθ (x0 , F ).
Because minimizers minimize we have
Rθ(x_0, F) ≤ ∑_{t=0}^∞ β^t { −x_t′ (R + F′QF) x_t } + βθ ∑_{t=0}^∞ β^t w_{t+1}′ w_{t+1},
and hence
Rθ(x_0, F) − θ · ent ≤ ∑_{t=0}^∞ β^t { −x_t′ (R + F′QF) x_t }    (6.165)
where
ent := β ∑_{t=0}^∞ β^t w_{t+1}′ w_{t+1}
ent = β ∑_{t=0}^∞ β^t x_t′ K(F, θ)′ K(F, θ) x_t    (6.166)
To construct the lower bound on the set of values associated with all perturbations w satisfying the entropy
constraint (6.160) at a given entropy level, we proceed as follows:
Note: This procedure sweeps out a set of separating hyperplanes indexed by different values for the Lagrange multiplier θ
Vθ̃(x_0, F) = max_w ∑_{t=0}^∞ β^t { −x_t′ (R + F′QF) x_t − β θ̃ w_{t+1}′ w_{t+1} }    (6.167)
Vθ̃(x_0, F) + θ̃ · ent ≥ ∑_{t=0}^∞ β^t { −x_t′ (R + F′QF) x_t }    (6.168)
where
ent ≡ β ∑_{t=0}^∞ β^t w_{t+1}′ w_{t+1}
ent = β ∑_{t=0}^∞ β^t x_t′ K(F, θ̃)′ K(F, θ̃) x_t    (6.169)
To construct the upper bound on the set of values associated with all perturbations w with a given entropy we
proceed much as we did for the lower bound
Now in the interest of reshaping these sets of values by choosing F, we turn to agent 1's problem
Agent 1's Problem
min_{{u_t}} ∑_{t=0}^∞ β^t { x_t′ R x_t + u_t′ Q u_t − βθ w_{t+1}′ w_{t+1} }    (6.170)
∑_{t=0}^∞ β^t { x_t′ (R − βθ K′K) x_t + u_t′ Q u_t }    (6.171)
subject to
x_{t+1} = (A + CK) x_t + B u_t    (6.172)
Once again, the expression for the optimal policy can be found here; we denote it by F̃
Nash Equilibrium
Clearly the F̃ we have obtained depends on K, which, in agent 2's problem, depended on an initial policy F
Holding all other parameters fixed, we can represent this relationship as a mapping Φ, where
F̃ = Φ(K(F, θ))
Now we turn to the stochastic case, where the sequence {wt } is treated as an iid sequence of random vectors
In this setting, we suppose that our agent is uncertain about the conditional probability distribution of wt+1
The agent takes the standard normal distribution N (0, I) as the baseline conditional distribution, while
admitting the possibility that other nearby distributions prevail
These alternative conditional distributions of wt+1 might depend nonlinearly on the history xs , s ≤ t
To implement this idea, we need a notion of what it means for one distribution to be near another one
Here we adopt a very useful measure of closeness for distributions known as the relative entropy, or
Kullback-Leibler divergence
For densities p, q, the Kullback-Leibler divergence of q from p is defined as
D_{KL}(p, q) := ∫ ln[ p(x) / q(x) ] p(x) dx
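As a sanity check on this definition, the following sketch compares the Gaussian closed form for D_KL with direct numerical integration (the closed form for univariate normals is a known result, not derived in this lecture; the distributions chosen are hypothetical):

```python
import numpy as np

# Compare the closed form for D_KL(p, q) between univariate Gaussians
# with direct numerical integration of ln(p/q) p
def normal_pdf(x, μ, σ):
    return np.exp(-(x - μ)**2 / (2 * σ**2)) / (σ * np.sqrt(2 * np.pi))

μp, σp = 0.0, 1.0    # p = N(0, 1), playing the role of the benchmark ϕ
μq, σq = 0.5, 1.2    # q = a nearby alternative distribution

# known closed form for univariate normals
kl_exact = np.log(σq / σp) + (σp**2 + (μp - μq)**2) / (2 * σq**2) - 0.5

# Riemann-sum approximation of the defining integral
x = np.linspace(-10, 10, 200_001)
p, q = normal_pdf(x, μp, σp), normal_pdf(x, μq, σq)
kl_num = np.sum(np.log(p / q) * p) * (x[1] - x[0])

print(kl_exact, kl_num)  # the two agree closely
```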
J(x) = min_u max_{ψ∈P} { x′Rx + u′Qu + β [ ∫ J(Ax + Bu + Cw) ψ(dw) − θ D_{KL}(ψ, ϕ) ] }    (6.173)
Here P represents the set of all densities on Rn and ϕ is the benchmark distribution N (0, I)
The distribution ψ is chosen as the least desirable conditional distribution in terms of next period outcomes,
while taking into account the penalty term θD_{KL}(ψ, ϕ)
This penalty term plays a role analogous to the one played by the deterministic penalty θw′ w in (6.154),
since it discourages large deviations from the benchmark
The maximization problem in (6.173) appears highly nontrivial: after all, we are maximizing over an infinite
dimensional space consisting of the entire set of densities
However, it turns out that the solution is tractable, and in fact also falls within the class of normal distributions
First, we note that J has the form J(x) = x′ P x + d for some positive definite matrix P and constant real
number d
Moreover, it turns out that if (I − θ−1 C ′ P C)−1 is nonsingular, then
max_{ψ∈P} { ∫ (Ax + Bu + Cw)′ P (Ax + Bu + Cw) ψ(dw) − θ D_{KL}(ψ, ϕ) }
= (Ax + Bu)′ D(P)(Ax + Bu) + κ(θ, P)    (6.174)
where
κ(θ, P) := θ ln[det(I − θ^{−1} C′PC)^{−1}]
and the maximizing (worst-case) distribution is
ψ = N( (θI − C′PC)^{−1} C′P(Ax + Bu),  (I − θ^{−1} C′PC)^{−1} )    (6.175)
Substituting the expression for the maximum into Bellman equation (6.173) and using J(x) = x′ P x + d
gives
x′Px + d = min_u { x′Rx + u′Qu + β (Ax + Bu)′ D(P)(Ax + Bu) + β [d + κ(θ, P)] }    (6.176)
Since constant terms do not affect minimizers, the solution is the same as (6.157), leading to
To solve this Bellman equation, we take P̂ to be the positive definite fixed point of B ◦ D
In addition, we take d̂ to be the real number solving d = β [d + κ(θ, P)], which is
d̂ := (β / (1 − β)) κ(θ, P)    (6.177)
The robust policy in this stochastic case is the minimizer in (6.176), which is once again u = −F̂ x for F̂
given by (6.158)
Substituting the robust policy into (6.175) we obtain the worst case shock distribution:
Before turning to implementation, we briefly outline how to compute several other quantities of interest
One thing we will be interested in doing is holding a policy fixed and computing the discounted loss associated with that policy
So let F be a given policy and let JF (x) be the associated loss, which, by analogy with (6.173), satisfies
J_F(x) = max_{ψ∈P} { x′(R + F′QF)x + β [ ∫ J_F((A − BF)x + Cw) ψ(dw) − θ D_{KL}(ψ, ϕ) ] }
Writing JF (x) = x′ PF x + dF and applying the same argument used to derive (6.174) we get
x′ P_F x + d_F = x′(R + F′QF)x + β [ x′(A − BF)′ D(P_F)(A − BF) x + d_F + κ(θ, P_F) ]
and
d_F := (β / (1 − β)) κ(θ, P_F) = (β / (1 − β)) θ ln[det(I − θ^{−1} C′ P_F C)^{−1}]    (6.178)
If you skip ahead to the appendix, you will be able to verify that −P_F is the solution to the Bellman equation
in agent 2's problem discussed above; we use this fact in our computations
6.16.6 Implementation
The QuantEcon.py package provides a class called RBLQ for implementation of robust LQ optimal control
The code can be found on GitHub
Here is a brief description of the methods of the class
• d_operator() and b_operator() implement D and B respectively
• robust_rule() and robust_rule_simple() both solve for the triple F̂ , K̂, P̂ , as described
in equations (6.158) – (6.159) and the surrounding discussion
– robust_rule() is more efficient
– robust_rule_simple() is more transparent and easier to follow
• K_to_F() and F_to_K() solve the decision problems of agent 1 and agent 2 respectively
• compute_deterministic_entropy() computes the left-hand side of (6.164)
• evaluate_F() computes the loss and entropy associated with a given policy (see this discussion)
6.16.7 Application
Let us consider a monopolist similar to this one, but now facing model uncertainty
The inverse demand function is pt = a0 − a1 yt + dt
where
d_{t+1} = ρ d_t + σ_d w_{t+1},   {w_t} iid ∼ N(0, 1)
r_t = p_t y_t − γ (y_{t+1} − y_t)^2 / 2 − c y_t
Its objective is to maximize expected discounted profits, or, equivalently, to minimize E ∑_{t=0}^∞ β^t (−r_t)
The standard normal distribution for wt is understood as the agent's baseline, with uncertainty parameterized
by θ
We compute value-entropy correspondences for two policies
1. The no concern for robustness policy F0 , which is the ordinary LQ loss minimizer
2. A moderate concern for robustness policy Fb, with θ = 0.002
The code for producing the graph shown above, with blue being for the robust policy, is as follows
import pandas as pd
import numpy as np
from scipy.linalg import eig
import matplotlib.pyplot as plt
import quantecon as qe
# == model parameters == #
a_0 = 100
a_1 = 0.5
ρ = 0.9
σ_d = 0.05
β = 0.95
c = 2
γ = 50.0
θ = 0.002
ac = (a_0 - c) / 2.0
# == Define LQ matrices == #
R = np.array([[0., ac, 0.],
              [ac, -a_1, 0.5],
              [0., 0.5, 0.]])
R = -R  # For minimization
Q = γ / 2

A = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., ρ]])
B = np.array([[0.], [1.], [0.]])
C = np.array([[0.], [0.], [σ_d]])
# -------------------------------------------------------------------------- #
# Functions
# -------------------------------------------------------------------------- #

def evaluate_policy(θ, F):
    """
    Given θ (scalar, dtype=float) and policy F (array_like), returns the
    value associated with that policy under the worst case path for {w_t},
    as well as the entropy level.
    """
    rlq = qe.robustlq.RBLQ(Q, R, A, B, C, β, θ)
    K_F, P_F, d_F, O_F, o_F = rlq.evaluate_F(F)
    x0 = np.array([[1.], [0.], [0.]])
    value = - x0.T @ P_F @ x0 - d_F
    entropy = x0.T @ O_F @ x0 + o_F
    return list(map(float, (value, entropy)))
def value_and_entropy(emax, F, bw, grid_size=1000):
    """
    Compute the value function and entropy levels for a θ path
    increasing until it reaches the specified target entropy value.

    Parameters
    ==========
    emax: scalar
        The target entropy value
    F: array_like
        The policy function to be evaluated
    bw: str
        A string specifying whether the implied shock path follows best
        or worst assumptions. The only acceptable values are 'best' and
        'worst'.

    Returns
    =======
    df: pd.DataFrame
        A pandas DataFrame containing the value function and entropy
        values up to the emax parameter. The columns are 'value' and
        'entropy'.
    """
    if bw == 'worst':
        θs = 1 / np.linspace(1e-8, 1000, grid_size)
    else:
        θs = -1 / np.linspace(1e-8, 1000, grid_size)

    df = pd.DataFrame(index=θs, columns=('value', 'entropy'))

    for θ in θs:
        df.loc[θ] = evaluate_policy(θ, F)
        if df.loc[θ, 'entropy'] >= emax:
            break

    df = df.dropna(how='any')
    return df
# -------------------------------------------------------------------------- #
# Main
# -------------------------------------------------------------------------- #
# == Compute the ordinary LQ policy F0 and a robust policy Fb == #
optimal_lq = qe.lqcontrol.LQ(Q, R, A, B, C, beta=β)
Po, F0, Do = optimal_lq.stationary_values()

robust_lq = qe.robustlq.RBLQ(Q, R, A, B, C, β, θ)
Fb, Kb, Pb = robust_lq.robust_rule()

emax = 1.6e6
fig, ax = plt.subplots()
ax.set_xlim(0, emax)
ax.set_ylabel("Value")
ax.set_xlabel("Entropy")
ax.grid()
class Curve:

    def __init__(self, x, y):
        self.x, self.y = x, y

    def __call__(self, z):
        return np.interp(z, self.x, self.y)

curves = []
plot_args = {'lw': 2, 'alpha': 0.7}

for F, c in ((F0, 'green'), (Fb, 'blue')):
    df_pair = (value_and_entropy(emax, F, 'best'),
               value_and_entropy(emax, F, 'worst'))

    curves = []
    for df in df_pair:
        # == Plot curves == #
        x, y = df['entropy'], df['value']
        x, y = (np.asarray(a, dtype='float') for a in (x, y))
        egrid = np.linspace(0, emax, 100)
        curve = Curve(x, y)
        ax.plot(egrid, curve(egrid), color=c, **plot_args)
        curves.append(curve)

    # == Color fill between curves == #
    ax.fill_between(egrid,
                    curves[0](egrid),
                    curves[1](egrid),
                    color=c, alpha=0.1)
plt.show()
Can you explain the different shape of the value-entropy correspondence for the robust policy?
6.16.8 Appendix
We sketch the proof only of the first claim in this section, which is that, for any given θ, K(F̂ , θ) = K̂,
where K̂ is as given in (6.159)
This is the content of the next lemma
Lemma. If P̂ is the fixed point of the map B ◦ D and F̂ is the robust policy as given in (6.158), then K(F̂ , θ) = K̂, where K̂ is as given in (6.159)
Proof: As a first step, observe that when F = F̂ , the Bellman equation associated with the LQ problem
(6.162) – (6.163) is
(revisit this discussion if you don't know where (6.180) comes from) and the optimal policy is
Suppose for a moment that −P̂ solves the Bellman equation (6.180)
In this case the policy becomes
Using the definition of D, we can rewrite the right-hand side more simply as
Although it involves a substantial amount of algebra, it can be shown that the latter is just P̂
(Hint: Use the fact that P̂ = B(D(P̂ )))
Contents
– Exercises
– Solutions
– Appendix: Algorithms
6.17.1 Overview
In this lecture we discuss a family of dynamic programming problems with the following features:
1. a discrete state space and discrete choices (actions)
2. an infinite horizon
3. discounted rewards
4. Markov state transitions
We call such problems discrete dynamic programs, or discrete DPs
Discrete DPs are the workhorses in much of modern quantitative economics, including
• monetary economics
• search and labor economics
• household savings and consumption theory
• investment theory
• asset pricing
• industrial organization, etc.
When a given model is not inherently discrete, it is common to replace it with a discretized version in order
to use discrete DP techniques
This lecture covers
• the theory of dynamic programming in a discrete setting, plus examples and applications
• a powerful set of routines for solving discrete DPs from the QuantEcon code library
Code
References
For background reading on dynamic programming and additional applications, see, for example,
• [LS18]
• [HLL96], section 3.5
• [Put05]
• [SLP89]
• [Rus96]
• [MF02]
• EDTC, chapter 5
Loosely speaking, a discrete DP is a maximization problem with an objective function of the form
    E ∑_{t=0}^∞ β^t r(st , at )    (6.181)
where
• st is the state variable
• at is the action
• β is a discount factor
• r(st , at ) is interpreted as a current reward when the state is st and the action chosen is at
Each pair (st , at ) pins down transition probabilities Q(st , at , st+1 ) for the next period state st+1
Thus, actions influence not only current rewards but also the future time path of the state
The essence of dynamic programming problems is to trade off current rewards vs favorable positioning of
the future state (modulo randomness)
Examples:
• consuming today vs saving and accumulating assets
• accepting a job offer today vs seeking a better one in the future
• exercising an option now vs waiting
Policies
The most fruitful way to think about solutions to discrete DP problems is to compare policies
In general, a policy is a randomized map from past actions and states to current action
In the setting formalized below, it suffices to consider so-called stationary Markov policies, which consider
only the current state
In particular, a stationary Markov policy is a map σ from states to actions
• at = σ(st ) indicates that at is the action to be taken in state st
It is known that, for any arbitrary policy, there exists a stationary Markov policy that dominates it at least
weakly
• See section 5.5 of [Put05] for discussion and proofs
In what follows, stationary Markov policies are referred to simply as policies
The aim is to find an optimal policy, in the sense of one that maximizes (6.181)
Let's now step through these ideas more carefully
Formal definition

Formally, a discrete dynamic program consists of the following components:

1. A finite set of states S = {0, . . . , n − 1}
2. A finite set of feasible actions A(s) for each state s ∈ S, and a corresponding set of feasible state-action pairs

    SA := {(s, a) | s ∈ S, a ∈ A(s)}

3. A reward function r : SA → R
4. A transition probability function Q : SA → ∆(S), where ∆(S) is the set of probability distributions over S
5. A discount factor β ∈ [0, 1)
Comments
• {st } ∼ Qσ means that the state is generated by stochastic matrix Qσ
• See this discussion on computing expectations of Markov chains for an explanation of the expression
in (6.182)
Notice that we're not really distinguishing between functions from S to R and vectors in Rn
This is natural because they are in one-to-one correspondence
Let vσ (s) denote the discounted sum of expected reward flows from policy σ when the initial state is s
To calculate this quantity we pass the expectation through the sum in (6.181) and use (6.182) to get
    vσ (s) = ∑_{t=0}^∞ β^t (Qσ^t rσ )(s)    (s ∈ S)
This function is called the policy value function for the policy σ
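Since the spectral radius of βQσ is less than one, the sum above also has the closed form vσ = (I − βQσ)⁻¹ rσ. The following is a minimal check of this identity on a made-up two-state policy (the numbers are purely illustrative):

```python
import numpy as np

β = 0.95
r_σ = np.array([1.0, 2.0])            # One-period rewards under policy σ
Q_σ = np.array([[0.9, 0.1],           # Transition matrix induced by σ
                [0.4, 0.6]])

# Closed form: v_σ = (I - β Q_σ)^{-1} r_σ
v_σ = np.linalg.solve(np.eye(2) - β * Q_σ, r_σ)

# Compare with a long truncated version of the sum Σ β^t Q_σ^t r_σ
v_approx = np.zeros(2)
P = np.eye(2)                         # P holds Q_σ^t
for t in range(2000):
    v_approx += β**t * P @ r_σ
    P = P @ Q_σ

assert np.allclose(v_σ, v_approx, atol=1e-6)
```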
The optimal value function, or simply value function, is the function v ∗ : S → R defined by
(We can use max rather than sup here because the domain is a finite set)
A policy σ ∈ Σ is called optimal if vσ (s) = v ∗ (s) for all s ∈ S
Given any w : S → R, a policy σ ∈ Σ is called w-greedy if
    σ(s) ∈ arg max_{a ∈ A(s)} { r(s, a) + β ∑_{s′ ∈ S} w(s′) Q(s, a, s′) }    (s ∈ S)
As discussed in detail below, optimal policies are precisely those that are v ∗ -greedy
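Computing a w-greedy policy is just a pointwise maximization over actions, which vectorizes naturally. A small sketch with illustrative arrays (n = 3 states, m = 2 actions, not from the lecture):

```python
import numpy as np

n, m, β = 3, 2, 0.95
r = np.array([[0.0, 1.0],
              [1.0, 0.0],
              [0.5, 0.5]])           # r[s, a]
Q = np.ones((n, m, n)) / n           # Q[s, a, s'], uniform for simplicity
w = np.array([1.0, 2.0, 3.0])        # Any function w : S → R

# vals[s, a] = r(s, a) + β Σ_{s'} w(s') Q(s, a, s')
vals = r + β * (Q @ w)
σ = vals.argmax(axis=1)              # A w-greedy policy (ties -> lowest index)
print(σ)                             # → [1 0 0]
```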
Two Operators
For each policy σ, define the operator Tσ by

    Tσ v = rσ + β Qσ v

The Bellman operator T is defined by

    (T v)(s) = max_{a ∈ A(s)} { r(s, a) + β ∑_{s′ ∈ S} v(s′) Q(s, a, s′) }    (s ∈ S)
Now that the theory has been set out, let's turn to solution methods
Code for solving discrete DPs is available in ddp.py from the QuantEcon.py code library
It implements the three most important solution methods for discrete dynamic programs, namely
• value function iteration
• policy function iteration
• modified policy function iteration
Let's briefly review these algorithms and their implementation
Perhaps the most familiar method for solving all manner of dynamic programs is value function iteration
This algorithm uses the fact that the Bellman operator T is a contraction mapping with fixed point v ∗
Hence, iterative application of T to any initial function v 0 : S → R converges to v ∗
The details of the algorithm can be found in the appendix
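As an illustration of the idea (not the DiscreteDP internals), here is a bare-bones value function iteration on a made-up two-state, two-action problem:

```python
import numpy as np

β = 0.9
r = np.array([[5.0, 10.0],
              [-1.0, 1.0]])              # r[s, a]
Q = np.array([[[0.5, 0.5], [0.0, 1.0]],  # Q[s, a, s']
              [[0.0, 1.0], [0.5, 0.5]]])

def T(v):
    "Bellman operator: (Tv)(s) = max_a { r(s,a) + β Σ_s' v(s') Q(s,a,s') }"
    return (r + β * (Q @ v)).max(axis=1)

v = np.zeros(2)
for i in range(1000):
    v_new = T(v)
    if np.max(np.abs(v_new - v)) < 1e-10:
        break
    v = v_new

assert np.allclose(T(v), v, atol=1e-8)   # v is (approximately) a fixed point
```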
This routine, also known as Howard's policy improvement algorithm, exploits more closely the particular structure of a discrete DP problem
Each iteration consists of
1. A policy evaluation step that computes the value vσ of a policy σ by solving the linear equation
v = Tσ v
2. A policy improvement step that computes a vσ -greedy policy
In the current setting policy iteration computes an exact optimal policy in finitely many iterations
• See theorem 10.2.6 of EDTC for a proof
The details of the algorithm can be found in the appendix
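The two steps above can be sketched as follows on a toy two-state problem (the arrays are illustrative only, not the DiscreteDP implementation):

```python
import numpy as np

β, n = 0.9, 2
r = np.array([[5.0, 10.0],
              [-1.0, 1.0]])              # r[s, a]
Q = np.array([[[0.5, 0.5], [0.0, 1.0]],  # Q[s, a, s']
              [[0.0, 1.0], [0.5, 0.5]]])

σ = np.zeros(n, dtype=int)               # Start from an arbitrary policy
while True:
    # == Policy evaluation: solve v = r_σ + β Q_σ v exactly == #
    r_σ = r[np.arange(n), σ]
    Q_σ = Q[np.arange(n), σ]
    v = np.linalg.solve(np.eye(n) - β * Q_σ, r_σ)
    # == Policy improvement: compute a v-greedy policy == #
    σ_new = (r + β * (Q @ v)).argmax(axis=1)
    if np.array_equal(σ_new, σ):
        break                            # Exact optimum reached
    σ = σ_new

assert np.allclose((r + β * (Q @ v)).max(axis=1), v)  # Bellman equation holds
```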
Modified policy iteration replaces the policy evaluation step in policy iteration with partial policy evaluation
The latter computes an approximation to the value of a policy σ by iterating Tσ for a specified number of
times
This approach can be useful when the state space is very large and the linear system in the policy evaluation
step of policy iteration is correspondingly difficult to solve
The details of the algorithm can be found in the appendix
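A sketch of the idea on the same kind of toy problem (illustrative arrays, not the DiscreteDP implementation): the inner loop applies Tσ a fixed number of times instead of solving the linear system exactly.

```python
import numpy as np

β, k, n = 0.9, 20, 2                     # k = number of partial evaluation steps
r = np.array([[5.0, 10.0],
              [-1.0, 1.0]])
Q = np.array([[[0.5, 0.5], [0.0, 1.0]],
              [[0.0, 1.0], [0.5, 0.5]]])

v = np.zeros(n)
for i in range(500):
    σ = (r + β * (Q @ v)).argmax(axis=1)          # A v-greedy policy
    r_σ, Q_σ = r[np.arange(n), σ], Q[np.arange(n), σ]
    v_new = v.copy()
    for _ in range(k):                            # Partial evaluation: iterate T_σ
        v_new = r_σ + β * Q_σ @ v_new
    if np.max(np.abs(v_new - v)) < 1e-10:
        break
    v = v_new

# At convergence v solves the Bellman equation (approximately)
assert np.allclose((r + β * (Q @ v)).max(axis=1), v, atol=1e-6)
```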
s′ = a + U where U ∼ U [0, . . . , B]
Discrete DP Representation
    Q(s, a, s′) := 1/(B + 1) if a ≤ s′ ≤ a + B, and 0 otherwise    (6.183)
This information will be used to create an instance of DiscreteDP by passing the following information
1. An n × m reward array R
2. An n × m × n transition probability array Q
3. A discount factor β
For R we set R[s, a] = u(s − a) if a ≤ s and −∞ otherwise
For Q we follow the rule in (6.183)
Note:
• The feasibility constraint is embedded into R by setting R[s, a] = −∞ for a ∉ A(s)
• Probability distributions for (s, a) with a ∉ A(s) can be arbitrary
The following code sets up these objects for us
import numpy as np
class SimpleOG:

    def __init__(self, B=10, M=5, α=0.5, β=0.9):
        """
        Set up R, Q and β, the three elements that define an instance of
        the DiscreteDP class.
        """
        self.B, self.M, self.α, self.β = B, M, α, β
        self.n = B + M + 1
        self.m = M + 1

        self.R = np.empty((self.n, self.m))
        self.Q = np.zeros((self.n, self.m, self.n))

        self.populate_Q()
        self.populate_R()

    def u(self, c):
        return c**self.α

    def populate_R(self):
        """
        Populate the R matrix, with R[s, a] = -np.inf for infeasible
        state-action pairs.
        """
        for s in range(self.n):
            for a in range(self.m):
                self.R[s, a] = self.u(s - a) if a <= s else -np.inf

    def populate_Q(self):
        """
        Populate the Q matrix by setting Q[s, a, s'] = 1 / (B + 1)
        for each s' in a, ..., a + B.
        """
        for a in range(self.m):
            self.Q[:, a, a:(a + self.B + 1)] = 1.0 / (self.B + 1)
import quantecon as qe

g = SimpleOG()  # Use default parameters

ddp = qe.markov.DiscreteDP(g.R, g.Q, g.β)
results = ddp.solve(method='policy_iteration')
dir(results)
(In IPython version 4.0 and above you can also type results. and hit the tab key)
The most important attributes are v, the value function, and σ, the optimal policy
results.v
results.sigma
array([0, 0, 0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 5, 5, 5, 5])
Since we've used policy iteration, these results will be exact unless we hit the iteration bound max_iter
Let's make sure this didn't happen
results.max_iter
250
results.num_iter
Another interesting object is results.mc, which is the controlled chain defined by Qσ∗ , where σ ∗ is the
optimal policy
In other words, it gives the dynamics of the state when the agent follows the optimal policy
Since this object is an instance of MarkovChain from QuantEcon.py (see this lecture for more discussion),
we can easily simulate it, compute its stationary distribution and so on
results.mc.stationary_distributions
If we look at the bar graph we can see the rightward shift in probability mass
Heres how we could set up these objects for the preceding example
B, M, α, β = 10, 5, 0.5, 0.9  # Parameters as before
n = B + M + 1

def u(c):
    return c**α

s_indices = []
a_indices = []
Q = []
R = []
b = 1.0 / (B + 1)
for s in range(n):
for a in range(min(M, s) + 1): # All feasible a at this s
s_indices.append(s)
a_indices.append(a)
q = np.zeros(n)
q[a:(a + B + 1)] = b # b on these values, otherwise 0
Q.append(q)
R.append(u(s - a))
For larger problems you might need to write this code more efficiently by vectorizing or using Numba
6.17.5 Exercises
In the dynamic programming lecture on stochastic optimal growth, we solve a benchmark model that has an analytical solution, in order to check that we can replicate it numerically
The exercise is to replicate this solution using DiscreteDP
6.17.6 Solutions
Setup
As in the lecture, we let f (k) = k α with α = 0.65, u(c) = log c, and β = 0.95
α = 0.65
f = lambda k: k**α
u = np.log
β = 0.95
Here we want to solve a finite state version of the continuous state model above
We discretize the state space into a grid of size grid_size=500, from 10−6 to grid_max=2
grid_max = 2
grid_size = 500
grid = np.linspace(1e-6, grid_max, grid_size)
We choose the action to be the amount of capital to save for the next period (the state is the capital stock at
the beginning of the period)
Thus the state indices and the action indices are both 0, . . . , grid_size-1
Action (indexed by) a is feasible at state (indexed by) s if and only if grid[a] < f(grid[s]) (zero consumption is not allowed because of the log utility)
Thus the Bellman equation is:

    v(k) = max_{0 < k′ < f(k)} { u(f(k) − k′) + β v(k′) }

where k′ denotes next period's capital
# State-action indices
C = f(grid).reshape(grid_size, 1) - grid.reshape(1, grid_size)  # Consumption
s_indices, a_indices = np.where(C > 0)
L = len(s_indices)
print(L)
print(s_indices)
print(a_indices)
118841
[ 0 1 1 ..., 499 499 499]
[ 0 0 1 ..., 389 390 391]
R = u(C[s_indices, a_indices])
(Degenerate) transition probability matrix Q (of shape (L, grid_size)), where we choose the scipy.sparse.lil_matrix format; any format will do, since internally it will be converted to the csr format:
from scipy import sparse

Q = sparse.lil_matrix((L, grid_size))
Q[np.arange(L), a_indices] = 1
(If you are familiar with the data structure of scipy.sparse.csr_matrix, the following is the most efficient way
to create the Q matrix in the current case)
# data = np.ones(L)
# indptr = np.arange(L+1)
# Q = sparse.csr_matrix((data, a_indices, indptr), shape=(L, grid_size))
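As a quick sanity check that the two constructions agree, one can build a tiny Q both ways and compare (sizes here are illustrative only):

```python
import numpy as np
from scipy import sparse

L, grid_size = 4, 6
a_indices = np.array([0, 2, 3, 5])        # Next-period state for each pair

# == lil_matrix construction, as above == #
Q1 = sparse.lil_matrix((L, grid_size))
Q1[np.arange(L), a_indices] = 1

# == Direct csr_matrix construction from (data, indices, indptr) == #
data = np.ones(L)
indptr = np.arange(L + 1)
Q2 = sparse.csr_matrix((data, a_indices, indptr), shape=(L, grid_size))

assert np.array_equal(Q1.toarray(), Q2.toarray())
```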
Notes
Here we intensively vectorized the operations on arrays to simplify the code
As noted, however, vectorization is memory intensive, and it can be prohibitively so for grids of large size
ddp = qe.markov.DiscreteDP(R, Q, β, s_indices, a_indices)
res = ddp.solve(method='policy_iteration')
v, σ, num_iter = res.v, res.sigma, res.num_iter
num_iter
10
Note that σ contains the indices of the optimal capital stocks to save for the next period. The following translates σ to the corresponding consumption vector

c = f(grid) - grid[σ]
ab = α * β
c1 = (np.log(1 - ab) + np.log(ab) * ab / (1 - ab)) / (1 - β)
c2 = α / (1 - ab)

def v_star(k):
    return c1 + c2 * np.log(k)

def c_star(k):
    return (1 - ab) * k**α
Let us compare the solution of the discrete model with that of the original continuous model
np.abs(v - v_star(grid)).max()
121.49819147053378
np.abs(v - v_star(grid))[1:].max()
0.012681735127500815
np.abs(c - c_star(grid)).max()
0.0038265231000100819
In fact, the optimal consumption obtained in the discrete version is not really monotone, but the decrements are quite small:
diff = np.diff(c)
(diff >= 0).all()
False
dec_ind = np.where(diff < 0)[0]
len(dec_ind)

174
np.abs(diff[dec_ind]).max()
0.0019618533397668392
True
Value iteration
ddp.epsilon = 1e-4
ddp.max_iter = 500
res1 = ddp.solve(method='value_iteration')
res1.num_iter
294
np.array_equal(σ, res1.sigma)
True
res2 = ddp.solve(method='modified_policy_iteration')
res2.num_iter
16
np.array_equal(σ, res2.sigma)
True
Speed comparison
%timeit ddp.solve(method='value_iteration')
%timeit ddp.solve(method='policy_iteration')
%timeit ddp.solve(method='modified_policy_iteration')
As is often the case, policy iteration and modified policy iteration are much faster than value iteration
Let us first visualize the convergence of the value iteration algorithm as in the lecture, where we use ddp.bellman_operator implemented as a method of DiscreteDP
plt.show()
Finally, let us work on Exercise 2, where we plot the trajectories of the capital stock for three different
discount factors, 0.9, 0.94, and 0.98, with initial condition k0 = 0.1
sample_size = 25
fig, ax = plt.subplots(figsize=(8,5))
ax.set_xlabel("time")
ax.set_ylabel("capital")
ax.set_ylim(0.10, 0.30)
ax.legend(loc='lower right')
plt.show()
This appendix covers the details of the solution algorithms implemented for DiscreteDP
We will make use of the following notions of approximate optimality:
• For ε > 0, v is called an ε-approximation of v ∗ if ∥v − v ∗ ∥ < ε
• A policy σ ∈ Σ is called ε-optimal if vσ is an ε-approximation of v ∗
Value Iteration
The DiscreteDP value iteration method implements value function iteration as follows
1. Choose any v 0 ∈ Rn , and specify ε > 0; set i = 0
2. Compute v i+1 = T v i
3. If ∥v i+1 − v i ∥ < [(1 − β)/(2β)]ε, then go to step 4; otherwise, set i = i + 1 and go to step 2
4. Compute a v i+1 -greedy policy σ, and return v i+1 and σ
Given ε > 0, the value iteration algorithm
• terminates in a finite number of iterations
• returns an ε/2-approximation of the optimal value function and an ε-optimal policy function (unless
iter_max is reached)
(While not explicit, in the actual implementation each algorithm is terminated if the number of iterations
reaches iter_max)
Policy Iteration
SEVEN

MULTIPLE AGENT MODELS

These lectures look at important economic models that also illustrate common equilibrium concepts.
Contents
7.1.1 Outline
In 1969, Thomas C. Schelling developed a simple but striking model of racial segregation [Sch69]
His model studies the dynamics of racially mixed neighborhoods
Like much of Schelling's work, the model shows how local interactions can lead to surprising aggregate structure
In particular, it shows that a relatively mild preference for neighbors of similar race can lead in aggregate to the collapse of mixed neighborhoods, and to high levels of segregation
In recognition of this and other research, Schelling was awarded the 2005 Nobel Prize in Economic Sciences
(joint with Robert Aumann)
In this lecture we (in fact you) will build and run a version of Schellings model
We will cover a variation of Schelling's model that is easy to program and captures the main idea
Set Up
Suppose we have two types of people: orange people and green people
For the purpose of this lecture, we will assume there are 250 of each type
These agents all live on a single unit square
The location of an agent is just a point (x, y), where 0 < x, y < 1
Preferences
We will say that an agent is happy if half or more of her 10 nearest neighbors are of the same type
Here nearest is in terms of Euclidean distance
An agent who is not happy is called unhappy
An important point here is that agents are not averse to living in mixed areas
They are perfectly happy if half their neighbors are of the other color
Behavior
7.1.3 Results
Let's have a look at the results we got when we coded and ran this model
As discussed above, agents are initially mixed randomly together
But after several cycles they become segregated into distinct regions
In this instance, the program terminated after 4 cycles through the set of agents, indicating that all agents
had reached a state of happiness
What is striking about the pictures is how rapidly racial integration breaks down
This is despite the fact that people in the model don't actually mind living mixed with the other type
Even with these preferences, the outcome is a high degree of segregation
7.1.4 Exercises
Exercise 1
* Data:
* Methods:
7.1.5 Solutions
Exercise 1
Heres one solution that does the job we want. If you feel like a further exercise you can probably speed up
some of the computations and then increase the number of agents.
from random import uniform, seed
from math import sqrt

seed(10)  # For reproducible random numbers

class Agent:

    def __init__(self, type):
        self.type = type
        self.draw_location()

    def draw_location(self):
        self.location = uniform(0, 1), uniform(0, 1)

    def get_distance(self, other):
        "Computes the euclidean distance between self and other agent."
        a = (self.location[0] - other.location[0])**2
        b = (self.location[1] - other.location[1])**2
        return sqrt(a + b)

    def happy(self, agents):
        "True if a sufficient number of nearest neighbors are of the same type."
        # == Pairs (d, agent), where d is the distance from agent to self == #
        distances = [(self.get_distance(agent), agent)
                     for agent in agents if self != agent]
        distances.sort()  # Sort from smallest to largest distance
        neighbors = [agent for d, agent in distances[:num_neighbors]]
        num_same_type = sum(self.type == agent.type for agent in neighbors)
        return num_same_type >= require_same_type

    def update(self, agents):
        "If not happy, then randomly choose new locations until happy."
        while not self.happy(agents):
            self.draw_location()

# == Main == #
num_of_type_0 = 250
num_of_type_1 = 250
num_neighbors = 10       # Number of agents regarded as neighbors
require_same_type = 5    # Want at least this many neighbors to be same type

# == Create a list of agents == #
agents = [Agent(0) for i in range(num_of_type_0)]
agents.extend(Agent(1) for i in range(num_of_type_1))
count = 1
# == Loop until none wishes to move == #
while True:
print('Entering loop ', count)
plot_distribution(agents, count)
count += 1
no_one_moved = True
for agent in agents:
old_location = agent.location
agent.update(agents)
if agent.location != old_location:
no_one_moved = False
if no_one_moved:
break
print('Converged, terminating.')
Entering loop 1
Entering loop 2
Entering loop 3
Entering loop 4
Converged, terminating.
Contents
– The Model
– Implementation
– Dynamics of an Individual Worker
– Endogenous Job Finding Rate
– Exercises
– Solutions
– Lake Model Solutions
7.2.1 Overview
Prerequisites
Before working through what follows, we recommend you read the lecture on finite Markov chains
You will also need some basic linear algebra and probability
Aggregate Variables
The value b(Et + Ut ) is the mass of new workers entering the labor force unemployed
The total stock of workers Nt = Et + Ut evolves as
This law tells us how total employment and unemployment evolve over time
Letting
    xt := (ut , et )′ = (Ut /Nt , Et /Nt )′
7.2.3 Implementation
import numpy as np

class LakeModel:
"""
Solves the lake model and computes dynamics of unemployment stocks and
rates.
Parameters:
------------
λ : scalar
The job finding rate for currently unemployed workers
α : scalar
The dismissal rate for currently employed workers
b : scalar
Entry rate into the labor force
d : scalar
Exit rate from the labor force
"""
def __init__(self, λ=0.283, α=0.013, b=0.0124, d=0.00822):
self._λ, self._α, self._b, self._d = λ, α, b, d
self.compute_derived_values()
    def compute_derived_values(self):
        # Unpack names to simplify expression
        λ, α, b, d = self._λ, self._α, self._b, self._d

        self._g = b - d
        self._A = np.array([[(1-d) * (1-λ) + b,   (1 - d) * α + b],
                            [        (1-d) * λ,   (1 - d) * (1 - α)]])

        self._A_hat = self._A / (1 + self._g)
@property
def g(self):
return self._g
@property
def A(self):
return self._A
@property
def A_hat(self):
return self._A_hat
@property
def λ(self):
return self._λ
    @λ.setter
    def λ(self, new_value):
        self._λ = new_value
        self.compute_derived_values()
@property
def α(self):
return self._α
@α.setter
def α(self, new_value):
self._α = new_value
self.compute_derived_values()
@property
def b(self):
return self._b
@b.setter
def b(self, new_value):
self._b = new_value
self.compute_derived_values()
@property
def d(self):
return self._d
@d.setter
def d(self, new_value):
self._d = new_value
self.compute_derived_values()
    def rate_steady_state(self, tol=1e-6):
        """
        Finds the steady state of the system x = A_hat x

        Returns
        --------
        xbar : steady state vector of employment and unemployment rates
        """
        x = 0.5 * np.ones(2)
        error = tol + 1
        while error > tol:
            new_x = self.A_hat @ x
            error = np.max(np.abs(new_x - x))
            x = new_x
        return x
    def simulate_stock_path(self, X0, T):
        """
        Simulates the sequence of employment and unemployment stocks

        Parameters
        ------------
        X0 : array
            Contains initial values (U0, E0)
        T : int
            Number of periods to simulate

        Returns
        ---------
        X : iterator
            Contains sequence of employment and unemployment stocks
        """
        X = np.atleast_1d(X0)  # Recast as array just in case
        for t in range(T):
            yield X
            X = self.A @ X
    def simulate_rate_path(self, x0, T):
        """
        Simulates the sequence of employment and unemployment rates

        Parameters
------------
x0 : array
            Contains initial values (u0, e0)
T : int
Number of periods to simulate
Returns
---------
x : iterator
Contains sequence of employment and unemployment rates
"""
x = np.atleast_1d(x0) # Recast as array just in case
for t in range(T):
yield x
x = self.A_hat @ x
As desired, if we create an instance and update a primitive like α, derived objects like A will also change
lm = LakeModel()
lm.α
0.013
lm.A
lm.α = 2
lm.A
Aggregate Dynamics
Lets run a simulation under the default parameters (see above) starting from X0 = (12, 138)
lm = LakeModel()
N_0 = 150      # Population
e_0 = 0.92     # Initial employment rate
u_0 = 1 - e_0  # Initial unemployment rate
T = 50         # Simulation length

U_0 = u_0 * N_0
E_0 = e_0 * N_0
X_0 = (U_0, E_0)

X_path = np.vstack(lm.simulate_stock_path(X_0, T))

fig, axes = plt.subplots(3, 1, figsize=(10, 8))
axes[0].plot(X_path[:, 0], lw=2)
axes[0].set_title('Unemployment')
axes[1].plot(X_path[:, 1], lw=2)
axes[1].set_title('Employment')
axes[2].plot(X_path.sum(1), lw=2)
axes[2].set_title('Labor force')
for ax in axes:
ax.grid()
plt.tight_layout()
plt.show()
The aggregates Et and Ut don't converge because their sum Et + Ut grows at rate g
On the other hand, the vector of employment and unemployment rates xt can be in a steady state x̄ if there
exists an x̄ such that
• x̄ = Âx̄
• the components satisfy ē + ū = 1
This equation tells us that a steady state level x̄ is an eigenvector of  associated with a unit eigenvalue
We also have xt → x̄ as t → ∞ provided that the remaining eigenvalue of Â has modulus less than 1
This is the case for our default parameters:
lm = LakeModel()
e, f = np.linalg.eigvals(lm.A_hat)
abs(e), abs(f)
(0.69530673783584618, 1.0)
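Equivalently, x̄ can be recovered directly as the eigenvector of Â associated with the unit eigenvalue, normalized so that its entries sum to one. A sketch, rebuilding Â from the default parameter values given above:

```python
import numpy as np

λ, α, b, d = 0.283, 0.013, 0.0124, 0.00822
g = b - d
A = np.array([[(1-d) * (1-λ) + b,   (1 - d) * α + b],
              [        (1-d) * λ,   (1 - d) * (1 - α)]])
A_hat = A / (1 + g)

# == Eigenvector of A_hat for the unit eigenvalue, normalized to sum to 1 == #
eigvals, eigvecs = np.linalg.eig(A_hat)
i = np.argmin(np.abs(eigvals - 1.0))
xbar = eigvecs[:, i].real
xbar = xbar / xbar.sum()

assert np.allclose(A_hat @ xbar, xbar)   # x̄ = Â x̄
assert np.isclose(xbar.sum(), 1.0)       # ē + ū = 1
```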
Lets look at the convergence of the unemployment and employment rate to steady state levels (dashed red
line)
lm = LakeModel()
e_0 = 0.92 # Initial employment rate
u_0 = 1 - e_0 # Initial unemployment rate
T = 50 # Simulation length
xbar = lm.rate_steady_state()
plt.tight_layout()
plt.show()
An individual worker's employment dynamics are governed by a finite state Markov process
The worker can be in one of two states:
• st = 0 means unemployed
• st = 1 means employed
Lets start off under the assumption that b = d = 0
The associated transition matrix is then
    P = [ 1 − λ      λ    ]
        [   α      1 − α  ]
Let ψt denote the marginal distribution over employment / unemployment states for the worker at time t
As usual, we regard it as a row vector
We know from an earlier discussion that ψt follows the law of motion
ψt+1 = ψt P
We also know from the lecture on finite Markov chains that if α ∈ (0, 1) and λ ∈ (0, 1), then P has a unique
stationary distribution, denoted here by ψ ∗
The unique stationary distribution satisfies
    ψ∗ [0] = α / (α + λ)
Not surprisingly, probability mass on the unemployment state increases with the dismissal rate and falls with the job finding rate
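To confirm the closed form, we can compute ψ∗ numerically by iterating ψt+1 = ψt P from an arbitrary initial distribution, using the default parameter values from above:

```python
import numpy as np

α, λ = 0.013, 0.283
P = np.array([[1 - λ,    λ],
              [    α, 1 - α]])

ψ = np.array([1.0, 0.0])          # Any initial distribution
for t in range(5000):
    ψ = ψ @ P                     # Law of motion ψ_{t+1} = ψ_t P

assert np.isclose(ψ[0], α / (α + λ))   # Matches the closed form
assert np.isclose(ψ.sum(), 1.0)
```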
Ergodicity
    s̄u,T := (1/T) ∑_{t=1}^T 𝟙{st = 0}

and

    s̄e,T := (1/T) ∑_{t=1}^T 𝟙{st = 1}

These are the fractions of time, up to date T, that the worker spends unemployed and employed, respectively
Convergence rate
How long does it take for time series sample averages to converge to cross sectional averages?
We can use QuantEcon.py's MarkovChain class to investigate this
Let's plot the path of the sample averages over 5,000 periods
from quantecon import MarkovChain

lm = LakeModel(d=0, b=0)
T = 5000 # Simulation length
α, λ = lm.α, lm.λ
P = [[1 - λ, λ],
[ α, 1 - α]]
mc = MarkovChain(P)
xbar = lm.rate_steady_state()
plt.tight_layout()
plt.show()
Reservation Wage
The most important thing to remember about the model is that optimal decisions are characterized by a
reservation wage w̄
• If the wage offer w in hand is greater than or equal to w̄, then the worker accepts
• Otherwise, the worker rejects
As we saw in our discussion of the model, the reservation wage depends on the wage offer distribution and
the parameters
• α, the separation rate
• β, the discount factor
• γ, the offer arrival rate
• c, unemployment compensation
Suppose that all workers inside a lake model behave according to the McCall search model
The exogenous probability of leaving employment remains α
But their optimal decision rules determine the probability λ of leaving unemployment
This is now
    λ = γ P{wt ≥ w̄} = γ ∑_{w′ ≥ w̄} p(w′)    (7.1)
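A small numeric illustration of (7.1), with a made-up discrete wage offer distribution (all values here are hypothetical):

```python
import numpy as np

γ = 0.7                                      # Offer arrival rate
w_vals = np.array([10.0, 12.0, 14.0, 16.0])  # Possible wage offers
p_vals = np.array([0.25, 0.25, 0.25, 0.25])  # Offer probabilities
w_bar = 13.0                                 # Reservation wage

# == λ = γ P{w >= w_bar} == #
λ = γ * p_vals[w_vals >= w_bar].sum()
print(λ)  # → 0.35
```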
Fiscal Policy
We can use the McCall search version of the Lake Model to find an optimal level of unemployment insurance
We assume that the government sets unemployment compensation c
The government imposes a lump sum tax τ sufficient to finance total unemployment payments
To attain a balanced budget at a steady state, taxes, the steady state unemployment rate u, and the unemployment compensation rate must satisfy
τ = uc
For a given level of unemployment benefit c, we can solve for a tax that balances the budget in the steady
state
τ = u(c, τ )c
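Solving τ = u(c, τ)c is a one-dimensional root-finding problem. Here is a sketch of the fixed-point logic with a stand-in u(c, τ) (the linear form below is purely hypothetical; the real mapping comes from the McCall model described next):

```python
import numpy as np
from scipy.optimize import brentq

def u_rate(c, τ):
    # Hypothetical stand-in: unemployment rises with post-tax compensation c - τ
    return 0.04 + 0.002 * (c - τ)

def budget_gap(τ, c):
    return τ - u_rate(c, τ) * c

c = 30.0
τ_star = brentq(budget_gap, 0.0, 0.9 * c, args=(c,))

assert np.isclose(τ_star, u_rate(c, τ_star) * c)  # Balanced budget
```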
W := e E[V | employed] + u U
where the notation V and U is as defined in the McCall search model lecture
The wage offer distribution will be a discretized version of the lognormal distribution LN (log(20), 1), as
shown in the next figure
We will make use of code we wrote in the McCall model lecture, embedded below for convenience
The first piece of code, repeated below, implements value function iteration
import numpy as np
from quantecon.distributions import BetaBinomial
from numba import jit
@jit
def u(c, σ):
if c > 0:
return (c**(1 - σ) - 1) / (1 - σ)
else:
return -10e6
class McCallModel:
"""
Stores the parameters and functions associated with a given model.
"""
def __init__(self,
α=0.2, # Job separation rate
β=0.98, # Discount rate
γ=0.7, # Job offer rate
c=6.0, # Unemployment compensation
σ=2.0, # Utility parameter
w_vec=None, # Possible wage values
                 p_vec=None):  # Probabilities over w_vec
        self.α, self.β, self.γ, self.c, self.σ = α, β, γ, c, σ

        # Add a default wage vector and probabilities over the vector using
        # the beta-binomial distribution
if w_vec is None:
n = 60 # number of possible outcomes for wage
self.w_vec = np.linspace(10, 20, n) # wages between 10 and 20
a, b = 600, 400 # shape parameters
dist = BetaBinomial(n-1, a, b)
self.p_vec = dist.pdf()
else:
self.w_vec = w_vec
self.p_vec = p_vec
@jit
def _update_bellman(α, β, γ, c, σ, w_vec, p_vec, V, V_new, U):
"""
A jitted function to update the Bellman equations. Note that V_new is
modified in place (i.e, modified by this function). The new value of U is
returned.
"""
    for w_idx, w in enumerate(w_vec):
        # w_idx indexes the vector of possible wages
        V_new[w_idx] = u(w, σ) + β * ((1 - α) * V[w_idx] + α * U)

    U_new = u(c, σ) + β * (1 - γ) * U + \
            β * γ * np.sum(np.maximum(U, V) * p_vec)

    return U_new
def solve_mccall_model(mcm, tol=1e-5, max_iter=2000):
    """
    Iterates to convergence on the Bellman equations

    Parameters
    ----------
    mcm : an instance of McCallModel
    tol : float
        error tolerance
    max_iter : int
        the maximum number of iterations
    """
    V = np.ones(len(mcm.w_vec))   # Initial guess of V
    V_new = np.empty_like(V)      # To store updates to V
    U = 1                         # Initial guess of U
    i = 0
    error = tol + 1

    while error > tol and i < max_iter:
        U_new = _update_bellman(mcm.α, mcm.β, mcm.γ, mcm.c,
                                mcm.σ, mcm.w_vec, mcm.p_vec, V, V_new, U)
        error_1 = np.max(np.abs(V_new - V))
        error_2 = np.abs(U_new - U)
        error = max(error_1, error_2)
        V[:] = V_new
        U = U_new
        i += 1

    return V, U
The second piece of code, also repeated from the McCall model lecture, is used to compute the reservation wage

def compute_reservation_wage(mcm, return_values=False):
    """
    Computes the reservation wage of an instance of the McCall model
    by finding the smallest w such that V(w) >= U.

    If V(w) > U for all w, then the reservation wage w_bar is set to
    the lowest wage in mcm.w_vec.
Parameters
----------
mcm : an instance of McCallModel
return_values : bool (optional, default=False)
Return the value functions as well
Returns
-------
w_bar : scalar
The reservation wage
"""
V, U = solve_mccall_model(mcm)
w_idx = np.searchsorted(V - U, 0)
if w_idx == len(V):
w_bar = np.inf
else:
w_bar = mcm.w_vec[w_idx]
    if not return_values:
        return w_bar
    else:
        return w_bar, V, U
Now lets compute and plot welfare, employment, unemployment, and tax revenue as a function of the
unemployment compensation rate
from scipy.stats import norm
from scipy.optimize import brentq

# == Some global variables that will stay constant == #
α = 0.013
α_q = (1 - (1 - α)**3)  # Quarterly (α is monthly)
b = 0.0124
d = 0.00822
β = 0.98
γ = 1.0
σ = 2.0

# == The default wage distribution: a discretized lognormal == #
log_wage_mean, wage_grid_size, max_wage = 20, 200, 170
w_vec = np.linspace(1e-8, max_wage, wage_grid_size + 1)
cdf = norm.cdf(np.log(w_vec), loc=np.log(log_wage_mean), scale=1)
pdf = cdf[1:] - cdf[:-1]
p_vec = pdf / pdf.sum()
w_vec = (w_vec[1:] + w_vec[:-1]) / 2

def compute_optimal_quantities(c, τ):
"""
Compute the reservation wage, job finding rate and value functions of the
workers given c and τ .
"""
    mcm = McCallModel(α=α_q,
                      β=β,
                      γ=γ,
                      c=c - τ,           # Post-tax compensation
                      σ=σ,
                      w_vec=w_vec - τ,   # Post-tax wages
                      p_vec=p_vec)

    w_bar, V, U = compute_reservation_wage(mcm, return_values=True)
    λ = γ * np.sum(p_vec[w_vec - τ > w_bar])
    return w_bar, λ, V, U
def compute_steady_state_quantities(c, τ):
"""
Compute the steady state unemployment rate given c and τ using optimal
quantities from the McCall model and computing corresponding steady state
quantities
"""
    w_bar, λ, V, U = compute_optimal_quantities(c, τ)

    # == Compute steady state employment and unemployment rates == #
    lm = LakeModel(α=α_q, λ=λ, b=b, d=d)
    x = lm.rate_steady_state()
    u, e = x

    # == Compute steady state welfare == #
    w = np.sum(V * p_vec * (w_vec - τ > w_bar)) / np.sum(p_vec * (w_vec - τ > w_bar))
    welfare = e * w + u * U

    return e, u, welfare
def find_balanced_budget_tax(c):
"""
Find tax level that will induce a balanced budget.
"""
    def steady_state_budget(t):
        e, u, w = compute_steady_state_quantities(c, t)
        return t - u * c

    τ = brentq(steady_state_budget, 0.0, 0.9 * c)
    return τ
# == Levels of unemployment insurance we wish to study == #
c_vec = np.linspace(5, 140, 60)

tax_vec = []
unempl_vec = []
empl_vec = []
welfare_vec = []
for c in c_vec:
t = find_balanced_budget_tax(c)
e_rate, u_rate, welfare = compute_steady_state_quantities(c, t)
tax_vec.append(t)
unempl_vec.append(u_rate)
empl_vec.append(e_rate)
welfare_vec.append(welfare)
plt.tight_layout()
plt.show()
The figure that the preceding code listing generates is shown below
7.2.6 Exercises
Exercise 1
Consider an economy with initial stock of workers N0 = 100 at the steady state level of employment in the
baseline parameterization
• α = 0.013
• λ = 0.283
• b = 0.0124
• d = 0.00822
Exercise 2
Consider an economy with initial stock of workers N0 = 100 at the steady state level of employment in the
baseline parameterization
Suppose that for 20 periods the birth rate was temporarily high (b = 0.025) and then returned to its original level
Plot the transition dynamics of the unemployment and employment stocks for 50 periods
Plot the transition dynamics for the rates
How long does the economy take to return to its original steady state?
7.2.7 Solutions
Exercise 1
We begin by constructing the class containing the default parameters and assigning the steady state values
to x0
lm = LakeModel()
x0 = lm.rate_steady_state()
print(f"Initial Steady State: {x0}")
N0 = 100
T = 50

# == New legislation reduces the job finding rate == #
lm.λ = 0.2

xbar = lm.rate_steady_state()  # New steady state
X_path = np.vstack(lm.simulate_stock_path(x0 * N0, T))
x_path = np.vstack(lm.simulate_rate_path(x0, T))

fig, axes = plt.subplots(3, 1, figsize=(10, 8))
axes[0].plot(X_path[:, 0])
axes[0].set_title('Unemployment')
axes[1].plot(X_path[:, 1])
axes[1].set_title('Employment')
axes[2].plot(X_path.sum(1))
axes[2].set_title('Labor force')
for ax in axes:
ax.grid()
plt.tight_layout()
plt.show()
We see that it takes 20 periods for the economy to converge to its new steady state levels
Exercise 2
This next exercise has the economy experiencing a boom in entrances to the labor market and then later
returning to the original levels
For 20 periods the economy has a new entry rate into the labor market
Lets start off at the baseline parameterization and record the steady state
lm = LakeModel()
x0 = lm.rate_steady_state()

N0 = 100
T = 50

b_hat = 0.025   # Temporarily high birth rate
T_hat = 20      # Number of periods at the high rate

lm.b = b_hat
X_path1 = np.vstack(lm.simulate_stock_path(x0 * N0, T_hat)) # simulate stocks
x_path1 = np.vstack(lm.simulate_rate_path(x0, T_hat)) # simulate rates
Now we reset b to the original value and then, using the state after 20 periods for the new initial conditions,
we simulate for the additional 30 periods
lm.b = 0.0124
X_path2 = np.vstack(lm.simulate_stock_path(X_path1[-1, :2], T-T_hat+1))  # Simulate stocks
x_path2 = np.vstack(lm.simulate_rate_path(x_path1[-1, :2], T-T_hat+1))   # Simulate rates
# == Combine the two simulation paths == #
X_path = np.vstack([X_path1, X_path2[1:]])
x_path = np.vstack([x_path1, x_path2[1:]])

fig, axes = plt.subplots(3, 1, figsize=(10, 8))
axes[0].plot(X_path[:, 0])
axes[0].set_title('Unemployment')
axes[1].plot(X_path[:, 1])
axes[1].set_title('Employment')
axes[2].plot(X_path.sum(1))
axes[2].set_title('Labor force')
for ax in axes:
ax.grid()
plt.tight_layout()
plt.show()
Contents
7.3.1 Overview
This 1971 paper is one of a small number of research articles that kicked off the rational expectations
revolution
We follow Lucas and Prescott by employing a setting that is readily Bellmanized (i.e., capable of being
formulated in terms of dynamic programming problems)
Because we use linear quadratic setups for demand and costs, we can adapt the LQ programming techniques
described in this lecture
We will learn about how a representative agent's problem differs from a planner's, and how a planning problem can be used to compute rational expectations quantities
We will also learn about how a rational expectations equilibrium can be characterized as a fixed point of a
mapping from a perceived law of motion to an actual law of motion
Equality between a perceived and an actual law of motion for endogenous market-wide objects captures in
a nutshell what the rational expectations equilibrium concept is all about
Finally, we will learn about the important Big K, little k trick, a modeling device widely used in macroeconomics
Except that for us
• Instead of Big K it will be Big Y
• Instead of little k it will be little y
This widely used method applies in contexts in which a representative firm or agent is a price taker operating
within a competitive equilibrium
We want to impose that
• The representative firm or individual takes aggregate Y as given when it chooses individual y, but . . .
• At the end of the day, Y = y, so that the representative firm is indeed representative
The Big Y, little y trick accomplishes these two goals by
• Taking Y as beyond control when posing the choice problem of who chooses y; but . . .
• Imposing Y = y after having solved the individual's optimization problem
Please watch for how this strategy is applied as the lecture unfolds
We begin by applying the Big Y, little y trick in a very simple static context
Consider a static model in which a collection of n firms produce a homogeneous good that is sold in a
competitive market
Each of these n firms sells output y

The price p of the good lies on the inverse demand curve

p = a0 − a1 Y (7.2)
where
• ai > 0 for i = 0, 1
• Y = ny is the market-wide level of output
Each firm has total cost function c(y) = c1 y + 0.5 c2 y², so, taking Y as given, its problem is

$$\max_{y} \left[ (a_0 - a_1 Y) y - c_1 y - 0.5\, c_2 y^2 \right] \qquad (7.3)$$

The first-order condition for this problem is

$$a_0 - a_1 Y - c_1 - c_2 y = 0 \qquad (7.4)$$
At this point, but not before, we substitute Y = ny into (7.4) to obtain the following linear equation

$$a_0 - a_1 n y - c_1 - c_2 y = 0 \qquad (7.5)$$

to be solved for the competitive equilibrium output y of the representative firm
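As a quick numerical check, we can solve the first-order condition (7.4) jointly with Y = ny; the parameter values below are illustrative assumptions, not taken from the lecture:

```python
# Illustrative parameter values (assumptions for this check only)
a0, a1, c1, c2, n = 100.0, 0.05, 1.0, 10.0, 50

# Substituting Y = n*y into a0 - a1*Y - c1 - c2*y = 0 gives a linear equation in y
y = (a0 - c1) / (a1 * n + c2)
Y = n * y

print(y, Y)
assert abs(a0 - a1 * Y - c1 - c2 * y) < 1e-12   # the first-order condition holds at the solution
```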
Our first illustration of a rational expectations equilibrium involves a market with n firms, each of which
seeks to maximize the discounted present value of profits in the face of adjustment costs
The adjustment costs induce the firms to make gradual adjustments, which in turn requires consideration of
future prices
Individual firms understand that, via the inverse demand curve, the price is determined by the amounts
supplied by other firms
Hence each firm wants to forecast future total industry supplies
In our context, a forecast is generated by a belief about the law of motion for the aggregate state
Rational expectations equilibrium prevails when this belief coincides with the actual law of motion generated
by production choices induced by this belief
We formulate a rational expectations equilibrium in terms of a fixed point of an operator that maps beliefs
into optimal beliefs
To illustrate, consider a collection of n firms producing a homogeneous good that is sold in a competitive
market.
Each of these n firms sells output yt
The price pt of the good lies on the inverse demand curve
pt = a0 − a1 Yt (7.6)
where
• ai > 0 for i = 0, 1
• Yt = nyt is the market-wide level of output
Each firm chooses an output plan to maximize the present value of profits

$$\sum_{t=0}^{\infty} \beta^t r_t \qquad (7.7)$$

where

$$r_t := p_t y_t - \frac{\gamma (y_{t+1} - y_t)^2}{2}, \qquad y_0 \text{ given} \qquad (7.8)$$
Regarding the parameters,
• β ∈ (0, 1) is a discount factor
• γ > 0 measures the cost of adjusting the rate of output
Regarding timing, the firm observes pt and yt when it chooses yt+1 at time t

To state the firm's optimization problem completely requires that we specify dynamics for all state variables

This includes ones that the firm cares about but does not control, like pt
We turn to this problem now
In view of (7.6), the firm's incentive to forecast the market price translates into an incentive to forecast aggregate output Yt
Aggregate output depends on the choices of other firms
We assume that n is such a large number that the output of any single firm has a negligible effect on aggregate
output
That justifies firms in regarding their forecasts of aggregate output as being unaffected by their own output
decisions
We suppose the firm believes that market-wide output Yt follows the law of motion

$$Y_{t+1} = H(Y_t) \qquad (7.9)$$

For now let's fix a particular belief H in (7.9) and investigate the firm's response to it
Let v be the optimal value function for the firm's problem given H
The value function satisfies the Bellman equation
$$v(y, Y) = \max_{y'} \left\{ a_0 y - a_1 y Y - \frac{\gamma (y' - y)^2}{2} + \beta v(y', H(Y)) \right\} \qquad (7.10)$$
Let's denote the firm's optimal policy function by h, so that

$$y_{t+1} = h(y_t, Y_t) \qquad (7.11)$$
where
$$h(y, Y) := \operatorname*{arg\,max}_{y'} \left\{ a_0 y - a_1 y Y - \frac{\gamma (y' - y)^2}{2} + \beta v(y', H(Y)) \right\} \qquad (7.12)$$
First-Order Characterization of h
In what follows it will be helpful to have a second characterization of h, based on first order conditions
The first-order necessary condition for choosing y′ is

$$-\gamma (y' - y) + \beta v_y(y', H(Y)) = 0 \qquad (7.13)$$
An important and useful envelope result of Benveniste-Scheinkman [BS79] implies that to differentiate v with respect to y we can naively differentiate the right side of (7.10), giving

$$v_y(y, Y) = a_0 - a_1 Y + \gamma (y' - y)$$

Substituting this equation into (7.13) gives the Euler equation

$$-\gamma (y_{t+1} - y_t) + \beta \left[ a_0 - a_1 Y_{t+1} + \gamma (y_{t+2} - y_{t+1}) \right] = 0 \qquad (7.14)$$
The firm optimally sets an output path that satisfies (7.14), taking (7.9) as given, and subject to
• the initial conditions for (y0 , Y0 )
• the terminal condition $\lim_{t \to \infty} \beta^t y_t v_y(y_t, Y_t) = 0$
This last condition is called the transversality condition, and acts as a first-order necessary condition at
infinity
The firm's decision rule solves the difference equation (7.14) subject to the given initial condition y0 and the transversality condition
Note that solving the Bellman equation (7.10) for v and then h in (7.12) yields a decision rule that automatically imposes both the Euler equation (7.14) and the transversality condition
Thus, when firms believe that the law of motion for market-wide output is (7.9), their optimizing behavior makes the actual law of motion be

$$Y_{t+1} = n\, h(Y_t / n, Y_t) \qquad (7.15)$$
A rational expectations equilibrium or recursive competitive equilibrium of the model with adjustment costs
is a decision rule h and an aggregate law of motion H such that
1. Given belief H, the map h is the firm's optimal policy function
2. The law of motion H satisfies H(Y ) = nh(Y /n, Y ) for all Y
Thus, a rational expectations equilibrium equates the perceived and actual laws of motion (7.9) and (7.15)
As we've seen, the firm's optimum problem induces a mapping Φ from a perceived law of motion H for market-wide output to an actual law of motion Φ(H)
The mapping Φ is the composition of two operations, taking a perceived law of motion into a decision rule
via (7.10)–(7.12), and a decision rule into an actual law via (7.15)
The H component of a rational expectations equilibrium is a fixed point of Φ
Now let's consider the problem of computing the rational expectations equilibrium
Misbehavior of Φ
Readers accustomed to dynamic programming arguments might try to address this problem by choosing
some guess H0 for the aggregate law of motion and then iterating with Φ
Unfortunately, the mapping Φ is not a contraction
In particular, there is no guarantee that direct iterations on Φ converge¹

¹ A literature that studies whether models populated with agents who learn can converge to rational expectations equilibria features iterations on a modification of the mapping Φ that can be approximated as γΦ + (1 − γ)I. Here I is the identity operator and γ ∈ (0, 1) is a relaxation parameter. See [MS89] and [EH01] for statements and applications of this approach to establish conditions under which collections of adaptive agents who use least squares learning converge to a rational expectations equilibrium.
Our plan of attack is to match the Euler equations of the market problem with those for a single-agent choice
problem
As we'll see, this planning problem can be solved by LQ control (linear regulator)
The optimal quantities from the planning problem are rational expectations equilibrium quantities
The rational expectations equilibrium price can be obtained as a shadow price in the planning problem
For convenience, in this section we set n = 1
We first compute a sum of consumer and producer surplus at time t
$$s(Y_t, Y_{t+1}) := \int_0^{Y_t} (a_0 - a_1 x) \, dx - \frac{\gamma (Y_{t+1} - Y_t)^2}{2} \qquad (7.16)$$
The first term is the area under the demand curve, while the second measures the social costs of changing
output
The planning problem is to choose a production plan {Yt } to maximize
$$\sum_{t=0}^{\infty} \beta^t s(Y_t, Y_{t+1})$$
The Bellman equation for the planning problem is

$$V(Y) = \max_{Y'} \left\{ a_0 Y - \frac{a_1}{2} Y^2 - \frac{\gamma (Y' - Y)^2}{2} + \beta V(Y') \right\} \qquad (7.17)$$
The associated first order condition is
−γ(Y ′ − Y ) + βV ′ (Y ′ ) = 0 (7.18)
Applying the same Benveniste-Scheinkman formula gives

$$V'(Y) = a_0 - a_1 Y + \gamma (Y' - Y)$$
Substituting this into equation (7.18) and rearranging leads to the Euler equation
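Spelling out the rearrangement: using the formula for V′ in (7.18) gives

$$-\gamma (Y_{t+1} - Y_t) + \beta \left[ a_0 - a_1 Y_{t+1} + \gamma (Y_{t+2} - Y_{t+1}) \right] = 0$$

and collecting terms yields

$$\beta a_0 + \gamma Y_t - \left[ \beta a_1 + \gamma (1 + \beta) \right] Y_{t+1} + \gamma \beta Y_{t+2} = 0 \qquad (7.19)$$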
As you are asked to show in the exercises, the fact that the planner's problem is an LQ problem implies an optimal policy, and hence aggregate law of motion, taking the form

Yt+1 = κ0 + κ1 Yt (7.20)
The firm's decision rule, in turn, takes the linear form

yt+1 = h0 + h1 yt + h2 Yt (7.21)
7.3.4 Exercises
Exercise 1
Express the solution of the firm's problem in the form (7.21) and give the values for each hj

If there were n identical competitive firms all behaving according to (7.21), what would (7.21) imply for the actual law of motion (7.9) for market supply?
Exercise 2
Consider the following κ0 , κ1 pairs as candidates for the aggregate law of motion component of a rational
expectations equilibrium (see (7.20))
Extending the program that you wrote for exercise 1, determine which if any satisfy the definition of a
rational expectations equilibrium
• (94.0886298678, 0.923409232937)
• (93.2119845412, 0.984323478873)
• (95.0818452486, 0.952459076301)
Describe an iterative algorithm that uses the program that you wrote for exercise 1 to compute a rational
expectations equilibrium
(You are not being asked actually to use the algorithm you are suggesting)
Exercise 3
Exercise 4
A monopolist faces the industry demand curve (7.6) and chooses {Yt} to maximize $\sum_{t=0}^{\infty} \beta^t r_t$ where

$$r_t = p_t Y_t - \frac{\gamma (Y_{t+1} - Y_t)^2}{2}$$
Formulate this problem as an LQ problem
Compute the optimal policy using the same parameters as the previous exercise
In particular, solve for the parameters in
Yt+1 = m0 + m1 Yt
Compare your results with the previous exercise. Comment.
7.3.5 Solutions
import numpy as np
import matplotlib.pyplot as plt
Exercise 1
To map a problem into a discounted optimal linear control problem, we need to define
• state vector xt and control vector ut
• matrices A, B, Q, R that define preferences and the law of motion for the state
For the state and control vectors we choose
$$x_t = \begin{bmatrix} y_t \\ Y_t \\ 1 \end{bmatrix}, \qquad u_t = y_{t+1} - y_t$$

For A, B, Q, R we set

$$A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \kappa_1 & \kappa_0 \\ 0 & 0 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad R = \begin{bmatrix} 0 & a_1/2 & -a_0/2 \\ a_1/2 & 0 & 0 \\ -a_0/2 & 0 & 0 \end{bmatrix}, \quad Q = \frac{\gamma}{2}$$
By multiplying out you can confirm that

$$x_t' R x_t + u_t' Q u_t = -r_t$$

Given the optimal policy ut = −F xt delivered by the LQ solver, the firm's decision rule is

$$y_{t+1} - y_t = -F_0 y_t - F_1 Y_t - F_2$$

so that, in the notation of (7.21),

$$h_0 = -F_2, \qquad h_1 = 1 - F_0, \qquad h_2 = -F_1$$
from quantecon import LQ

# == Model parameters == #
a0 = 100
a1 = 0.05
β = 0.95
γ = 10.0

# == Beliefs == #
κ0 = 95.5
κ1 = 0.95

# == Formulate the LQ problem == #
A = np.array([[1, 0, 0], [0, κ1, κ0], [0, 0, 1]])
B = np.array([[1], [0], [0]])
R = np.array([[0, a1/2, -a0/2], [a1/2, 0, 0], [-a0/2, 0, 0]])
Q = 0.5 * γ

# == Solve for the optimal policy == #
lq = LQ(Q, R, A, B, beta=β)
P, F, d = lq.stationary_values()
F = F.flatten()
out1 = f"F = [{F[0]:.3f}, {F[1]:.3f}, {F[2]:.3f}]"
h0, h1, h2 = -F[2], 1 - F[0], -F[1]
out2 = f"(h0, h1, h2) = ({h0:.3f}, {h1:.3f}, {h2:.3f})"
print(out1)
print(out2)
For the case n > 1, recall that Yt = nyt, which, combined with the previous equation, yields

$$Y_{t+1} = n \left( h_0 + h_1 \frac{Y_t}{n} + h_2 Y_t \right) = n h_0 + (h_1 + n h_2) Y_t$$
Exercise 2
To determine whether a κ0 , κ1 pair forms the aggregate law of motion component of a rational expectations
equilibrium, we can proceed as follows:
• Determine the corresponding firm law of motion yt+1 = h0 + h1 yt + h2 Yt

• Test whether the associated aggregate law Yt+1 = nh(Yt /n, Yt) evaluates to Yt+1 = κ0 + κ1 Yt

In the second step, since n = 1, we can use Yt = nyt = yt, so that Yt+1 = nh(Yt /n, Yt) becomes

$$Y_{t+1} = h(Y_t, Y_t) = h_0 + (h_1 + h_2) Y_t$$
The output tells us that the answer is pair (iii), which implies (h0 , h1 , h2 ) = (95.0819, 1.0000, −.0475)
(Notice we use np.allclose to test equality of floating point numbers, since exact equality is too strict)
Regarding the iterative algorithm, one could loop from a given (κ0 , κ1 ) pair to the associated firm law and
then to a new (κ0 , κ1 ) pair
This amounts to implementing the operator Φ described in the lecture
(There is in general no guarantee that this iterative process will converge to a rational expectations equilibrium)
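As a self-contained sketch of the operator Φ (a plain NumPy value-iteration stand-in for QuantEcon's LQ class, with n = 1; the iteration count is an assumption), we can check directly that the pair in (iii) is an approximate fixed point:

```python
import numpy as np

# Parameters from the lecture
a0, a1, β, γ = 100.0, 0.05, 0.95, 10.0

def solve_lq(A, B, R, Q, β, n_iter=2000):
    """Value iteration on the discounted Riccati equation; returns F in u = -F x."""
    P = np.zeros_like(A)
    for _ in range(n_iter):
        BPB = Q + β * (B.T @ P @ B)
        BPA = β * (B.T @ P @ A)
        P = R + β * (A.T @ P @ A) - BPA.T @ np.linalg.solve(BPB, BPA)
    return np.linalg.solve(Q + β * (B.T @ P @ B), β * (B.T @ P @ A)).flatten()

def Φ(κ0, κ1):
    """Map a perceived aggregate law Y' = κ0 + κ1*Y into the actual law (n = 1)."""
    A = np.array([[1.0, 0.0, 0.0], [0.0, κ1, κ0], [0.0, 0.0, 1.0]])
    B = np.array([[1.0], [0.0], [0.0]])
    R = np.array([[0.0, a1 / 2, -a0 / 2], [a1 / 2, 0.0, 0.0], [-a0 / 2, 0.0, 0.0]])
    Q = np.array([[γ / 2]])
    F = solve_lq(A, B, R, Q, β)
    h0, h1, h2 = -F[2], 1 - F[0], -F[1]
    return h0, h1 + h2   # actual (κ0, κ1), using Y = y when n = 1

# Candidate (iii) from the exercise
κ0, κ1 = 95.0818452486, 0.952459076301
k0, k1 = Φ(κ0, κ1)
print(k0, k1)
```

Iterating κ ← Φ(κ), possibly damped by a relaxation γΦ + (1 − γ)I as in the footnote above, is one version of the algorithm the exercise asks you to describe.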
Exercise 3
For the planning problem, we take the state and control to be xt = [Yt , 1]′ and ut = Yt+1 − Yt, so the optimal policy ut = −F xt implies

$$Y_{t+1} - Y_t = -F_0 Y_t - F_1$$

and we can obtain the implied aggregate law of motion via κ0 = −F1 and κ1 = 1 − F0
The Python code to solve this problem is below:

# == Formulate the planner's problem as an LQ problem == #
A = np.array([[1, 0], [0, 1]])
B = np.array([[1], [0]])
R = np.array([[a1 / 2, -a0 / 2], [-a0 / 2, 0]])
Q = γ / 2

lq = LQ(Q, R, A, B, beta=β)
P, F, d = lq.stationary_values()
F = F.flatten()
κ0, κ1 = -F[1], 1 - F[0]
print(κ0, κ1)
95.0818745921 0.952459062704
The output yields the same (κ0 , κ1 ) pair obtained as an equilibrium from the previous exercise
Exercise 4
The monopolist's LQ problem is almost identical to the planner's problem from the previous exercise, except that
$$R = \begin{bmatrix} a_1 & -a_0/2 \\ -a_0/2 & 0 \end{bmatrix}$$
R = np.array([[a1, -a0 / 2], [-a0 / 2, 0]])

lq = LQ(Q, R, A, B, beta=β)
P, F, d = lq.stationary_values()
F = F.flatten()
m0, m1 = -F[1], 1 - F[0]
print(m0, m1)

73.472944035 0.926527055965
We see that the law of motion for the monopolist is approximately Yt+1 = 73.4729 + 0.9265Yt
In the rational expectations case the law of motion was approximately Yt+1 = 95.0818 + 0.9525Yt
One way to compare these two laws of motion is by their fixed points, which give long run equilibrium
output in each case
For laws of the form Yt+1 = c0 + c1 Yt , the fixed point is c0 /(1 − c1 )
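Crunching the numbers with the approximate coefficients reported above:

```python
# Approximate laws of motion reported above
κ0, κ1 = 95.0818, 0.9525   # competitive (rational expectations) case
m0, m1 = 73.4729, 0.9265   # monopolist case

# The fixed point of Y' = c0 + c1 * Y is c0 / (1 - c1)
Y_competitive = κ0 / (1 - κ1)
Y_monopoly = m0 / (1 - m1)
print(Y_competitive, Y_monopoly)
```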
If you crunch the numbers, you will see that the monopolist adopts a lower long run quantity than obtained
by the competitive market, implying a higher market price
This is analogous to the elementary static-case results
7.4.1 Overview
7.4.2 Background
Two firms are the only producers of a good the demand for which is governed by a linear inverse demand
function
p = a0 − a1 (q1 + q2 ) (7.22)
Here p = pt is the price of the good, qi = qit is the output of firm i = 1, 2 at time t and a0 > 0, a1 > 0
In (7.22) and what follows,
• the time subscript is suppressed when possible to simplify notation
• x̂ denotes a next period value of variable x
Each firm recognizes that its output affects total output and therefore the market price
The one-period payoff function of firm i is price times quantity minus adjustment costs:

$$\pi_i = p q_i - \gamma (\hat q_i - q_i)^2, \qquad \gamma > 0 \qquad (7.23)$$
Firm i chooses a decision rule that sets next period quantity q̂i as a function fi of the current state (qi , q−i )
An essential aspect of a Markov perfect equilibrium is that each firm takes the decision rule of the other firm
as known and given
Given f−i , the Bellman equation of firm i is
$$v_i(q_i, q_{-i}) = \max_{\hat q_i} \left\{ \pi_i(q_i, q_{-i}, \hat q_i) + \beta v_i(\hat q_i, f_{-i}(q_{-i}, q_i)) \right\} \qquad (7.25)$$
Definition A Markov perfect equilibrium of the duopoly model is a pair of value functions (v1 , v2 ) and a
pair of policy functions (f1 , f2 ) such that, for each i ∈ {1, 2} and each possible state,
• The value function vi satisfies the Bellman equation (7.25)
• The maximizer on the right side of (7.25) is equal to fi (qi , q−i )
The adjective Markov denotes that the equilibrium decision rules depend only on the current values of the
state variables, not other parts of their histories
Perfect means complete, in the sense that the equilibrium is constructed by backward induction and hence
builds in optimizing behavior for each firm at all possible future states
• These include many states that will not be reached when we iterate forward on the pair of equilibrium
strategies fi starting from a given initial state
Computation
One strategy for computing a Markov perfect equilibrium is iterating to convergence on pairs of Bellman
equations and decision rules
In particular, let vij , fij be the value function and policy function for firm i at the j-th iteration
Imagine constructing the iterates
$$v_i^{j+1}(q_i, q_{-i}) = \max_{\hat q_i} \left\{ \pi_i(q_i, q_{-i}, \hat q_i) + \beta v_i^j(\hat q_i, f_{-i}(q_{-i}, q_i)) \right\} \qquad (7.26)$$
As we saw in the duopoly example, the study of Markov perfect equilibria in games with two players leads
us to an interrelated pair of Bellman equations
In linear quadratic dynamic games, these stacked Bellman equations become stacked Riccati equations with
a tractable mathematical structure
We'll lay out that structure in a general setup and then apply it to some simple problems
Player i minimizes

$$\sum_{t=t_0}^{t_1 - 1} \beta^{t - t_0} \left\{ x_t' R_i x_t + u_{it}' Q_i u_{it} + u_{-it}' S_i u_{-it} + 2 x_t' W_i u_{it} + 2 u_{-it}' M_i u_{it} \right\} \qquad (7.27)$$

while the state evolves according to

$$x_{t+1} = A x_t + B_1 u_{1t} + B_2 u_{2t} \qquad (7.28)$$
Here
• xt is an n × 1 state vector and uit is a ki × 1 vector of controls for player i
• Ri is n × n
• Si is k−i × k−i
• Qi is ki × ki
• Wi is n × ki
• Mi is k−i × ki
• A is n × n
• Bi is n × ki
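As a sketch, these conformability requirements can be checked with a small helper (a hypothetical function, not part of QuantEcon):

```python
import numpy as np

def check_shapes(A, B1, B2, R1, R2, Q1, Q2, S1, S2, W1, W2, M1, M2):
    """Verify the conformability conditions listed above for a two-player game."""
    n = A.shape[0]
    k1, k2 = B1.shape[1], B2.shape[1]
    assert A.shape == (n, n)
    assert B1.shape == (n, k1) and B2.shape == (n, k2)
    assert R1.shape == R2.shape == (n, n)                  # R_i is n x n
    assert Q1.shape == (k1, k1) and Q2.shape == (k2, k2)   # Q_i is k_i x k_i
    assert S1.shape == (k2, k2) and S2.shape == (k1, k1)   # S_i is k_{-i} x k_{-i}
    assert W1.shape == (n, k1) and W2.shape == (n, k2)     # W_i is n x k_i
    assert M1.shape == (k2, k1) and M2.shape == (k1, k2)   # M_i is k_{-i} x k_i
    return n, k1, k2

# Example: a duopoly-sized problem with n = 3 and scalar controls
n, k1, k2 = 3, 1, 1
dims = check_shapes(np.eye(n),
                    np.zeros((n, k1)), np.zeros((n, k2)),
                    np.zeros((n, n)), np.zeros((n, n)),
                    np.zeros((k1, k1)), np.zeros((k2, k2)),
                    np.zeros((k2, k2)), np.zeros((k1, k1)),
                    np.zeros((n, k1)), np.zeros((n, k2)),
                    np.zeros((k2, k1)), np.zeros((k1, k2)))
print(dims)
```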
Computing Equilibrium
If we take u2t = −F2t xt and substitute it into the payoff (7.27) and the transition law for the state, then player 1's problem becomes minimization of

$$\sum_{t=t_0}^{t_1 - 1} \beta^{t - t_0} \left\{ x_t' \Pi_{1t} x_t + u_{1t}' Q_1 u_{1t} + 2 u_{1t}' \Gamma_{1t} x_t \right\} \qquad (7.29)$$

subject to

$$x_{t+1} = \Lambda_{1t} x_t + B_1 u_{1t} \qquad (7.30)$$

where

• Λit := A − B−i F−it

• Πit := Ri + F′−it Si F−it

• Γit := W′i − M′i F−it

This is an LQ dynamic programming problem; the policy rule u1t = −F1t xt that solves it satisfies

$$F_{1t} = (Q_1 + \beta B_1' P_{1t+1} B_1)^{-1} (\beta B_1' P_{1t+1} \Lambda_{1t} + \Gamma_{1t}) \qquad (7.31)$$
where P1t solves the matrix Riccati difference equation
P1t = Π1t − (βB1′ P1t+1 Λ1t + Γ1t )′ (Q1 + βB1′ P1t+1 B1 )−1 (βB1′ P1t+1 Λ1t + Γ1t ) + βΛ′1t P1t+1 Λ1t
(7.32)
Similarly, the policy that solves player 2s problem is
F2t = (Q2 + βB2′ P2t+1 B2 )−1 (βB2′ P2t+1 Λ2t + Γ2t ) (7.33)
where P2t solves
P2t = Π2t − (βB2′ P2t+1 Λ2t + Γ2t )′ (Q2 + βB2′ P2t+1 B2 )−1 (βB2′ P2t+1 Λ2t + Γ2t ) + βΛ′2t P2t+1 Λ2t
(7.34)
Here in all cases t = t0 , . . . , t1 − 1 and the terminal conditions are Pit1 = 0
The solution procedure is to use equations (7.31), (7.32), (7.33), and (7.34), and work backwards from time
t1 − 1
Since we're working backwards, P1t+1 and P2t+1 are taken as given at each stage

Moreover, since

• some terms on the right hand side of (7.31) contain F2t

• some terms on the right hand side of (7.33) contain F1t

we need to solve these k1 + k2 equations simultaneously
Key insight
A key insight is that equations (7.31) and (7.33) are linear in F1t and F2t
After these equations have been solved, we can take Fit and solve for Pit in (7.32) and (7.34)
Infinite horizon
We often want to compute the solutions of such games for infinite horizons, in the hope that the decision
rules Fit settle down to be time invariant as t1 → +∞
In practice, we usually fix t1 and compute the equilibrium of an infinite horizon game by driving t0 → −∞
This is the approach we adopt in the next section
Implementation
We use the function nnash from QuantEcon.py that computes a Markov perfect equilibrium of the infinite
horizon linear quadratic dynamic game in the manner described above
7.4.4 Application
Let's use these procedures to treat some applications, starting with the duopoly model
A duopoly model
To map the duopoly model into coupled linear-quadratic dynamic programming problems, define the state
and controls as
$$x_t := \begin{bmatrix} 1 \\ q_{1t} \\ q_{2t} \end{bmatrix} \quad \text{and} \quad u_{it} := q_{i,t+1} - q_{it}, \quad i = 1, 2$$
If we write

$$x_t' R_i x_t + u_{it}' Q_i u_{it} = -\pi_{it}$$

where Q1 = Q2 = γ,

$$R_1 := \begin{bmatrix} 0 & -\frac{a_0}{2} & 0 \\ -\frac{a_0}{2} & a_1 & \frac{a_1}{2} \\ 0 & \frac{a_1}{2} & 0 \end{bmatrix} \quad \text{and} \quad R_2 := \begin{bmatrix} 0 & 0 & -\frac{a_0}{2} \\ 0 & 0 & \frac{a_1}{2} \\ -\frac{a_0}{2} & \frac{a_1}{2} & a_1 \end{bmatrix}$$

then we recover the one-period payoffs for the two firms in the duopoly model

The law of motion for the state xt is xt+1 = Axt + B1 u1t + B2 u2t, where A is the 3 × 3 identity matrix, B1 := [0 1 0]′ and B2 := [0 0 1]′
The optimal decision rule of firm i will take the form uit = −Fi xt, inducing the following closed loop system for the evolution of x in the Markov perfect equilibrium:

$$x_{t+1} = (A - B_1 F_1 - B_2 F_2) x_t \qquad (7.35)$$
Consider the previously presented duopoly model with parameter values of:
• a0 = 10
• a1 = 2
• β = 0.96
• γ = 12
From these we compute the infinite horizon MPE using the preceding code
"""
"""
import numpy as np
import quantecon as qe
# == Parameters == #
a0 = 10.0
a1 = 2.0
β = 0.96
γ = 12.0
# == In LQ form == #
A = np.eye(3)
B1 = np.array([[0.], [1.], [0.]])
B2 = np.array([[0.], [0.], [1.]])

R1 = [[      0., -a0 / 2,       0.],
      [-a0 / 2.,      a1,  a1 / 2.],
      [      0., a1 / 2.,       0.]]
R2 = [[      0.,       0., -a0 / 2],
      [      0.,       0., a1 / 2.],
      [-a0 / 2., a1 / 2.,      a1]]

Q1 = Q2 = γ
S1 = S2 = W1 = W2 = M1 = M2 = 0.0

# == Solve for the Markov perfect equilibrium == #
F1, F2, P1, P2 = qe.nnash(A, B1, B2, R1, R2, Q1, Q2, S1, S2, W1, W2, M1, M2, beta=β)

# == Display policies == #
print("Computed policies for firm 1 and firm 2:\n")
print(f"F1 = {F1}")
print(f"F2 = {F2}")
print("\n")
One way to see that F1 is indeed optimal for firm 1 taking F2 as given is to use QuantEcon.py's LQ class

In particular, let's take F2 as computed above, plug it into (7.29) and (7.30) to get firm 1's problem and solve it using LQ
We hope that the resulting policy will agree with F1 as computed above
Λ1 = A - B2 @ F2
lq1 = qe.LQ(Q1, R1, Λ1, B1, beta=β)
P1_ih, F1_ih, d = lq1.stationary_values()
F1_ih
This is close enough for rock and roll, as they say in the trade
Indeed, np.allclose agrees with our assessment
np.allclose(F1, F1_ih)
True
Dynamics
Let's now investigate the dynamics of price and output in this simple duopoly model under the MPE policies
Given our optimal policies F 1 and F 2, the state evolves according to (7.35)
AF = A - B1 @ F1 - B2 @ F2
n = 20
x = np.empty((3, n))
x[:, 0] = 1, 1, 1
for t in range(n-1):
x[:, t+1] = AF @ x[:, t]
q1 = x[1, :]
q2 = x[2, :]
q = q1 + q2 # Total output, MPE
p = a0 - a1 * q # Price, MPE
Note that the initial condition has been set to q10 = q20 = 1.0
The resulting figure looks as follows
To gain some perspective we can compare this to what happens in the monopoly case
The first panel in the next figure compares output of the monopolist and industry output under the MPE, as
a function of time
The second panel shows analogous curves for price
Here parameters are the same as above for both the MPE and monopoly solutions
The monopolist initial condition is q0 = 2.0 to mimic the industry initial condition q10 = q20 = 1.0 in the
MPE case
As expected, output is higher and prices are lower under duopoly than monopoly
7.4.5 Exercises
Exercise 1
Replicate the pair of figures showing the comparison of output and prices for the monopolist and duopoly
under MPE
Parameters are as in duopoly_mpe.py and you can use that code to compute MPE policies under duopoly
The optimal policy in the monopolist case can be computed using QuantEcon.py's LQ class
Exercise 2
Suppose that the market demands are governed by

$$S_t = D p_t + b$$

where

• St = [S1t S2t]′ and pt = [p1t p2t]′ is a vector of prices

• D is a 2 × 2 negative definite matrix and

• b is a vector of constants
Firm i maximizes the criterion

$$\lim_{T \to \infty} \frac{1}{T} \sum_{t=0}^{T} (p_{it} S_{it} - E_{it} - C_{it})$$
Decision rules for price and quantity take the form uit = −Fi xt
The Markov perfect equilibrium of Judd's model can be computed by filling in the matrices appropriately
The exercise is to calculate these matrices and compute the following figures
The first figure shows the dynamics of inventories for each firm when the parameters are
δ = 0.02
D = np.array([[-1, 0.5], [0.5, -1]])
b = np.array([25, 25])
c1 = c2 = np.array([1, -2, 1])
e1 = e2 = np.array([10, 10, 3])
7.4.6 Solutions
Exercise 1
First let's compute the duopoly MPE under the stated parameters
# == Parameters == #
a0 = 10.0
a1 = 2.0
β = 0.96
γ = 12.0
# == In LQ form == #
A = np.eye(3)
B1 = np.array([[0.], [1.], [0.]])
B2 = np.array([[0.], [0.], [1.]])
R1 = [[ 0., -a0/2, 0.],
[-a0 / 2., a1, a1 / 2.],
[ 0, a1 / 2., 0.]]
R2 = [[      0.,       0., -a0 / 2],
      [      0.,       0., a1 / 2.],
      [-a0 / 2., a1 / 2.,      a1]]

Q1 = Q2 = γ
S1 = S2 = W1 = W2 = M1 = M2 = 0.0

F1, F2, P1, P2 = qe.nnash(A, B1, B2, R1, R2, Q1, Q2, S1, S2, W1, W2, M1, M2, beta=β)
Now we evaluate the time path of industry output and prices given initial condition q10 = q20 = 1
AF = A - B1 @ F1 - B2 @ F2
n = 20
x = np.empty((3, n))
x[:, 0] = 1, 1, 1
for t in range(n-1):
x[:, t+1] = AF @ x[:, t]
q1 = x[1, :]
q2 = x[2, :]
q = q1 + q2 # Total output, MPE
p = a0 - a1 * q # Price, MPE
Next, let's compute the monopoly solution

For the monopolist, take the state and control as xt = qt − q̄ and ut = qt+1 − qt, where q̄ = a0 /(2a1) is the monopolist's stationary output, and set R = a1, Q = γ, and A = B = 1
R = a1
Q = γ
A = B = 1
lq_alt = qe.LQ(Q, R, A, B, beta=β)
P, F, d = lq_alt.stationary_values()
q_bar = a0 / (2.0 * a1)
qm = np.empty(n)
qm[0] = 2
x0 = qm[0] - q_bar
x = x0
for i in range(1, n):
x = A * x - B * F * x
qm[i] = float(x) + q_bar
pm = a0 - a1 * qm
fig, axes = plt.subplots(2, 1, figsize=(9, 9))

ax = axes[0]
ax.plot(qm, 'b-', lw=2, alpha=0.75, label='monopolist output')
ax.plot(q, 'g-', lw=2, alpha=0.75, label='MPE total output')
ax.set(ylabel="output", xlabel="time")
ax.legend(loc='upper left', frameon=0)

ax = axes[1]
ax.plot(pm, 'b-', lw=2, alpha=0.75, label='monopolist price')
ax.plot(p, 'g-', lw=2, alpha=0.75, label='MPE price')
ax.set(ylabel="price", xlabel="time")
ax.legend(loc='upper right', frameon=0)

plt.show()
Exercise 2
δ = 0.02
D = np.array([[-1, 0.5], [0.5, -1]])
b = np.array([25, 25])
c1 = c2 = np.array([1, -2, 1])
e1 = e2 = np.array([10, 10, 3])
δ_1 = 1 - δ

S1 = np.zeros((2, 2))
S2 = np.copy(S1)

W1 = np.array([[           0,         0],
               [           0,         0],
               [-0.5 * e1[1], b[0] / 2.]])
W2 = np.array([[           0,         0],
               [           0,         0],
               [-0.5 * e2[1], b[1] / 2.]])

# The remaining matrices A, B1, B2, R1, R2, Q1, Q2, M1, M2 are filled in from
# the payoff and transition specification of Judd's model; the equilibrium
# policies then follow from
# F1, F2, P1, P2 = qe.nnash(A, B1, B2, R1, R2, Q1, Q2, S1, S2, W1, W2, M1, M2)
Now let's look at the dynamics of inventories, and reproduce the graph corresponding to δ = 0.02
AF = A - B1 @ F1 - B2 @ F2
n = 25
x = np.empty((3, n))
x[:, 0] = 2, 0, 1
for t in range(n-1):
x[:, t+1] = AF @ x[:, t]
I1 = x[0, :]
I2 = x[1, :]
fig, ax = plt.subplots(figsize=(9, 5))
ax.plot(I1, 'b-', lw=2, alpha=0.75, label='inventories, firm 1')
ax.plot(I2, 'g-', lw=2, alpha=0.75, label='inventories, firm 2')
ax.set_title(rf'$\delta = {δ}$')
ax.legend()
plt.show()
7.5.1 Overview
Basic setup
Decisions of two agents affect the motion of a state vector that appears as an argument of payoff functions
of both agents
As described in Markov perfect equilibrium, when decision makers have no concerns about the robustness
of their decision rules to misspecifications of the state dynamics, a Markov perfect equilibrium can be
computed via backwards recursion on two sets of equations
• a pair of Bellman equations, one for each agent
• a pair of equations that express linear decision rules for each agent as functions of that agent's continuation value function as well as parameters of preferences and state transition matrices
This lecture shows how a similar equilibrium concept and similar computational procedures apply when we
impute concerns about robustness to both decision makers
A Markov perfect equilibrium with robust agents will be characterized by
• a pair of Bellman equations, one for each agent
• a pair of equations that express linear decision rules for each agent as functions of that agent's continuation value function as well as parameters of preferences and state transition matrices
• a pair of equations that express linear decision rules for worst-case shocks for each agent as functions of that agent's continuation value function as well as parameters of preferences and state transition matrices
Below, we'll construct a robust-firms version of the classic duopoly model with adjustment costs analyzed in Markov perfect equilibrium
As we saw in Markov perfect equilibrium, the study of Markov perfect equilibria in dynamic games with
two players leads us to an interrelated pair of Bellman equations
In linear quadratic dynamic games, these stacked Bellman equations become stacked Riccati equations with
a tractable mathematical structure
We consider a general linear quadratic regulator game with two players, each of whom fears model misspecification

We often call the players agents
The agents share a common baseline model for the transition dynamics of the state vector
• this is a counterpart of a rational expectations assumption of shared beliefs
But now one or more agents doubt that the baseline model is correctly specified
The agents express the possibility that their baseline specification is incorrect by adding a contribution Cvit
to the time t transition law for the state
• C is the usual volatility matrix that appears in stochastic versions of optimal linear regulator problems
• vit is a possibly history-dependent vector of distortions to the dynamics of the state that agent i uses
to represent misspecification of the original model
For convenience, well start with a finite horizon formulation, where t0 is the initial date and t1 is the common
terminal date
Player i takes a sequence {u−it } as given and chooses a sequence {uit } to minimize and {vit } to maximize
$$\sum_{t=t_0}^{t_1 - 1} \beta^{t - t_0} \left\{ x_t' R_i x_t + u_{it}' Q_i u_{it} + u_{-it}' S_i u_{-it} + 2 x_t' W_i u_{it} + 2 u_{-it}' M_i u_{it} - \theta_i v_{it}' v_{it} \right\} \qquad (7.36)$$

while thinking that the state evolves according to

$$x_{t+1} = A x_t + B_1 u_{1t} + B_2 u_{2t} + C v_{it} \qquad (7.37)$$
Here
• xt is an n × 1 state vector, uit is a ki × 1 vector of controls for player i, and
• vit is an h × 1 vector of distortions to the state dynamics that concern player i
• Ri is n × n
• Si is k−i × k−i
• Qi is ki × ki
• Wi is n × ki
• Mi is k−i × ki
• A is n × n
• Bi is n × ki
• C is n × h
• $\theta_i \in [\underline{\theta}_i, +\infty]$ is a scalar multiplier parameter of player i
If θi = +∞, player i completely trusts the baseline model
If θi < +∞, player i suspects that some other unspecified model actually governs the transition dynamics
The term $\theta_i v_{it}' v_{it}$ is a time t contribution to an entropy penalty that an (imaginary) loss-maximizing agent inside agent i's mind charges for distorting the law of motion in a way that harms agent i
• the imaginary loss-maximizing agent helps the loss-minimizing agent by helping him construct
bounds on the behavior of his decision rule over a large set of alternative models of state transition
dynamics
Computing Equilibrium
A robust Markov perfect equilibrium is a pair of sequences {F1t , F2t } and a pair of sequences {K1t , K2t }
over t = t0 , . . . , t1 − 1 that satisfy
• {F1t , K1t } solves player 1's robust decision problem, taking {F2t } as given, and
• {F2t , K2t } solves player 2's robust decision problem, taking {F1t } as given
If we substitute u2t = −F2t xt into (7.36) and (7.37), then player 1's problem becomes minimization-maximization of
$$\sum_{t=t_0}^{t_1 - 1} \beta^{t - t_0} \left\{ x_t' \Pi_{1t} x_t + u_{1t}' Q_1 u_{1t} + 2 u_{1t}' \Gamma_{1t} x_t - \theta_1 v_{1t}' v_{1t} \right\} \qquad (7.38)$$
subject to

$$x_{t+1} = \Lambda_{1t} x_t + B_1 u_{1t} + C v_{1t} \qquad (7.39)$$

where

• Λit := A − B−i F−it

• Πit := Ri + F′−it Si F−it

• Γit := W′i − M′i F−it
The matrix F1t in the policy rule u1t = −F1t xt that solves agent 1's problem satisfies the following equation, in which D1(P) := P + PC(θ1 I − C′PC)⁻¹C′P is the operator that adjusts a continuation value function for agent 1's fear of model misspecification
F1t = (Q1 + βB1′ D1 (P1t+1 )B1 )−1 (βB1′ D1 (P1t+1 )Λ1t + Γ1t ) (7.41)
P1t = Π1t − (βB1′ D1 (P1t+1 )Λ1t + Γ1t )′ (Q1 + βB1′ D1 (P1t+1 )B1 )−1 (βB1′ D1 (P1t+1 )Λ1t + Γ1t ) + βΛ′1t D1 (P1t+1 )Λ1t
(7.42)
F2t = (Q2 + βB2′ D2 (P2t+1 )B2 )−1 (βB2′ D2 (P2t+1 )Λ2t + Γ2t ) (7.43)
P2t = Π2t − (βB2′ D2 (P2t+1 )Λ2t + Γ2t )′ (Q2 + βB2′ D2 (P2t+1 )B2 )−1 (βB2′ D2 (P2t+1 )Λ2t + Γ2t ) + βΛ′2t D2 (P2t+1 )Λ2t
(7.44)
Here in all cases t = t0 , . . . , t1 − 1 and the terminal conditions are Pit1 = 0
The solution procedure is to use equations (7.41), (7.42), (7.43), and (7.44), and work backwards from time
t1 − 1
Since were working backwards, P1t+1 and P2t+1 are taken as given at each stage
Moreover, since
• some terms on the right hand side of (7.41) contain F2t
• some terms on the right hand side of (7.43) contain F1t
we need to solve these k1 + k2 equations simultaneously
Key insight
As in Markov perfect equilibrium, a key insight here is that equations (7.41) and (7.43) are linear in F1t and
F2t
After these equations have been solved, we can take Fit and solve for Pit in (7.42) and (7.44)
Notice how j's control law Fjt is a function of {Fis , s ≥ t, i ̸= j}

Thus, agent i's choice of {Fit ; t = t0 , . . . , t1 − 1} influences agent j's choice of control laws

However, in the Markov perfect equilibrium of this game, each agent is assumed to ignore the influence that his choice exerts on the other agent's choice
After these equations have been solved, we can also deduce associated sequences of worst-case shocks
Worst-case shocks

As we work backwards, we find that the worst-case shock that agent i fears is a linear function of the state, vit = Kit xt
Infinite horizon
We often want to compute the solutions of such games for infinite horizons, in the hope that the decision
rules Fit settle down to be time invariant as t1 → +∞
In practice, we usually fix t1 and compute the equilibrium of an infinite horizon game by driving t0 → −∞
This is the approach we adopt in the next section
Implementation
We use the function nnash_robust to compute a Markov perfect equilibrium of the infinite horizon linear quadratic dynamic game with robust players in the manner described above
7.5.3 Application
A duopoly model
Without concerns for robustness, the model is identical to the duopoly model from the Markov perfect
equilibrium lecture
To begin, we briefly review the structure of that model
Two firms are the only producers of a good the demand for which is governed by a linear inverse demand
function
p = a0 − a1 (q1 + q2 ) (7.45)
Here p = pt is the price of the good, qi = qit is the output of firm i = 1, 2 at time t and a0 > 0, a1 > 0
In (7.45) and what follows,
• the time subscript is suppressed when possible to simplify notation
• x̂ denotes a next period value of variable x
Each firm recognizes that its output affects total output and therefore the market price
The one-period payoff function of firm i is price times quantity minus adjustment costs:

$$\pi_i = p q_i - \gamma (\hat q_i - q_i)^2, \qquad \gamma > 0 \qquad (7.46)$$

Substituting the inverse demand curve (7.45) into (7.46) lets us express the one-period payoff as

$$\pi_i(q_i, q_{-i}, \hat q_i) = a_0 q_i - a_1 q_i^2 - a_1 q_i q_{-i} - \gamma (\hat q_i - q_i)^2$$
Firm i chooses a decision rule that sets next period quantity q̂i as a function fi of the current state (qi , q−i )
This completes our review of the duopoly model without concerns for robustness
Now we activate robustness concerns of both firms
To map a robust version of the duopoly model into coupled robust linear-quadratic dynamic programming
problems, we again define the state and controls as
$$x_t := \begin{bmatrix} 1 \\ q_{1t} \\ q_{2t} \end{bmatrix} \quad \text{and} \quad u_{it} := q_{i,t+1} - q_{it}, \quad i = 1, 2$$
If we write

$$x_t' R_i x_t + u_{it}' Q_i u_{it} = -\pi_{it}$$

where Q1 = Q2 = γ,

$$R_1 := \begin{bmatrix} 0 & -\frac{a_0}{2} & 0 \\ -\frac{a_0}{2} & a_1 & \frac{a_1}{2} \\ 0 & \frac{a_1}{2} & 0 \end{bmatrix} \quad \text{and} \quad R_2 := \begin{bmatrix} 0 & 0 & -\frac{a_0}{2} \\ 0 & 0 & \frac{a_1}{2} \\ -\frac{a_0}{2} & \frac{a_1}{2} & a_1 \end{bmatrix}$$
then we recover the one-period payoffs (7.46) for the two firms in the duopoly model
The law of motion for the state xt is xt+1 = Axt + B1 u1t + B2 u2t where
$$A := \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad B_1 := \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad B_2 := \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$$
A robust decision rule of firm i will take the form uit = −Fi xt, inducing the following closed loop system for the evolution of x in the Markov perfect equilibrium:

$$x_{t+1} = (A - B_1 F_1 - B_2 F_2) x_t$$
"""
"""
import numpy as np
import quantecon as qe
# == Parameters == #
a0 = 10.0
a1 = 2.0
β = 0.96
γ = 12.0
# == In LQ form == #
A = np.eye(3)
B1 = np.array([[0.], [1.], [0.]])
B2 = np.array([[0.], [0.], [1.]])

R1 = [[      0., -a0 / 2,       0.],
      [-a0 / 2.,      a1,  a1 / 2.],
      [      0., a1 / 2.,       0.]]
R2 = [[      0.,       0., -a0 / 2],
      [      0.,       0., a1 / 2.],
      [-a0 / 2., a1 / 2.,      a1]]

Q1 = Q2 = γ
S1 = S2 = W1 = W2 = M1 = M2 = 0.0

# == Solve for the Markov perfect equilibrium without robustness == #
F1, F2, P1, P2 = qe.nnash(A, B1, B2, R1, R2, Q1, Q2, S1, S2, W1, W2, M1, M2, beta=β)

# == Display policies == #
print("Computed policies for firm 1 and firm 2:\n")
print(f"F1 = {F1}")
print(f"F2 = {F2}")
print("\n")
We add robustness concerns to the Markov Perfect Equilibrium model by extending the function qe.nnash into a robust version that inserts the maximization operator D(P) into the backward induction

The MPE with robustness function is nnash_robust

The function's code is as follows
from scipy.linalg import solve
import matplotlib.pyplot as plt
def nnash_robust(A, C, B1, B2, R1, R2, Q1, Q2, S1, S2, W1, W2, M1, M2,
θ1, θ2, beta=1.0, tol=1e-8, max_iter=1000):
    r"""
Compute the limit of a Nash linear quadratic dynamic game with
robustness concern.
Parameters
----------
A : scalar(float) or array_like(float)
Corresponds to the MPE equations, should be of size (n, n)
C : scalar(float) or array_like(float)
As above, size (n, c), c is the size of w
B1 : scalar(float) or array_like(float)
As above, size (n, k_1)
B2 : scalar(float) or array_like(float)
As above, size (n, k_2)
R1 : scalar(float) or array_like(float)
As above, size (n, n)
R2 : scalar(float) or array_like(float)
As above, size (n, n)
Q1 : scalar(float) or array_like(float)
As above, size (k_1, k_1)
Q2 : scalar(float) or array_like(float)
As above, size (k_2, k_2)
S1 : scalar(float) or array_like(float)
As above, size (k_1, k_1)
S2 : scalar(float) or array_like(float)
As above, size (k_2, k_2)
W1 : scalar(float) or array_like(float)
As above, size (n, k_1)
W2 : scalar(float) or array_like(float)
As above, size (n, k_2)
M1 : scalar(float) or array_like(float)
    As above, size (k_2, k_1)
M2 : scalar(float) or array_like(float)
    As above, size (k_1, k_2)
θ1 : scalar(float)
    Robustness parameter of player 1
θ2 : scalar(float)
    Robustness parameter of player 2
beta : scalar(float), optional(default=1.0)
    Discount factor
tol : scalar(float), optional(default=1e-8)
    Tolerance for convergence of the iteration
max_iter : scalar(int), optional(default=1000)
    Maximum number of iterations allowed
Returns
-------
F1 : array_like, dtype=float, shape=(k_1, n)
Feedback law for agent 1
F2 : array_like, dtype=float, shape=(k_2, n)
Feedback law for agent 2
P1 : array_like, dtype=float, shape=(n, n)
The steady-state solution to the associated discrete matrix
Riccati equation for agent 1
P2 : array_like, dtype=float, shape=(n, n)
The steady-state solution to the associated discrete matrix
Riccati equation for agent 2
    """
    # == Unload parameters and make sure everything is a matrix == #
    params = A, C, B1, B2, R1, R2, Q1, Q2, S1, S2, W1, W2, M1, M2
    params = map(np.asmatrix, params)
    A, C, B1, B2, R1, R2, Q1, Q2, S1, S2, W1, W2, M1, M2 = params

    # == Multiply A, B1, B2 by sqrt(beta) to enforce discounting
    #    (the same device used in qe.nnash) == #
    A, B1, B2 = [np.sqrt(beta) * x for x in (A, B1, B2)]

    # == Initial values == #
    n = A.shape[0]
    k_1 = B1.shape[1]
    k_2 = B2.shape[1]

    v1 = np.eye(k_1)
    v2 = np.eye(k_2)
    P1 = np.eye(n) * 1e-5
    P2 = np.eye(n) * 1e-5
    F1 = np.random.randn(k_1, n)
    F2 = np.random.randn(k_2, n)

    for it in range(max_iter):
        # == Store the current rules to measure convergence == #
        F10 = F1
        F20 = F2

        I = np.eye(C.shape[1])

        # == D1(P1) == #
        # Note: the solve may fail if θ1 I - C'P1C is singular
        INV1 = solve(θ1 * I - C.T @ P1 @ C, I)
        D1P1 = P1 + P1 @ C @ INV1 @ C.T @ P1

        # == D2(P2) == #
        # Note: the solve may fail if θ2 I - C'P2C is singular
        INV2 = solve(θ2 * I - C.T @ P2 @ C, I)
        D2P2 = P2 + P2 @ C @ INV2 @ C.T @ P2

        # == Update the decision rules, with D_i(P_i) replacing P_i
        #    in the usual MPE formulas == #
        G2 = solve(B2.T @ D2P2 @ B2 + Q2, v2)
        G1 = solve(B1.T @ D1P1 @ B1 + Q1, v1)
        H2 = G2 @ B2.T @ D2P2
        H1 = G1 @ B1.T @ D1P1

        F1_left = v1 - (H1 @ B2 + G1 @ M1.T) @ (H2 @ B1 + G2 @ M2.T)
        F1_right = H1 @ A + G1 @ W1.T - \
            (H1 @ B2 + G1 @ M1.T) @ (H2 @ A + G2 @ W2.T)
        F1 = solve(F1_left, F1_right)
        F2 = H2 @ A + G2 @ W2.T - (H2 @ B1 + G2 @ M2.T) @ F1

        Λ1 = A - B2 @ F2
        Λ2 = A - B1 @ F1
        Π1 = R1 + F2.T @ S1 @ F2
        Π2 = R2 + F1.T @ S2 @ F1
        Γ1 = W1.T - M1.T @ F2
        Γ2 = W2.T - M2.T @ F1

        # == Compute P1 and P2 == #
        P1 = Π1 - (B1.T @ D1P1 @ Λ1 + Γ1).T @ F1 + \
            Λ1.T @ D1P1 @ Λ1
        P2 = Π2 - (B2.T @ D2P2 @ Λ2 + Γ2).T @ F2 + \
            Λ2.T @ D2P2 @ Λ2

        dd = np.max(np.abs(F10 - F1)) + np.max(np.abs(F20 - F2))
        if dd < tol:  # success!
            break

    else:
        raise ValueError(f'No convergence: Iteration limit of {max_iter} '
                         'reached in nnash_robust')

    return F1, F2, P1, P2
Some details
where

xt := [1, q1t, q2t]′   and   uit := qi,t+1 − qit,   i = 1, 2

and

R1 := [  0      −a0/2     0
       −a0/2     a1      a1/2
         0       a1/2     0   ] ,

R2 := [  0        0      −a0/2
         0        0       a1/2
       −a0/2     a1/2     a1  ] ,

Q1 = Q2 = γ,   S1 = S2 = 0,   W1 = W2 = 0,   M1 = M2 = 0
# == Parameters == #
a0 = 10.0
a1 = 2.0
β = 0.96
γ = 12.0
# == In LQ form == #
A = np.eye(3)
B1 = np.array([[0.], [1.], [0.]])
B2 = np.array([[0.], [0.], [1.]])
Q1 = Q2 = γ
S1 = S2 = W1 = W2 = M1 = M2 = 0.0
Consistency check
We first conduct a comparison test to check whether nnash_robust agrees with qe.nnash in the non-robust case, in which each θi ≈ +∞
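As a small self-contained sketch of why this limit works (the matrices P and C below are illustrative, not the lecture's values): the distortion operator D(P) = P + PC(θI − C′PC)⁻¹C′P collapses to P as θ → +∞, so the robust recursion reduces to the ordinary MPE recursion:

```python
import numpy as np

# Illustrative P and C (assumptions for this check)
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])
C = np.array([[0.1],
              [0.2]])

def D(P, θ):
    # D(P) = P + P C (θ I - C'PC)^{-1} C'P
    I = np.eye(C.shape[1])
    return P + P @ C @ np.linalg.solve(θ * I - C.T @ P @ C, C.T @ P)

# For very large θ, D(P) is indistinguishable from P
assert np.allclose(D(P, 1e12), P, atol=1e-9)
```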
We can see that the results are consistent across the two functions.
We want to compare the dynamics of price and output under the baseline MPE model with those under the robust decision rules associated with the robust MPE
This means that we simulate the state dynamics under the MPE equilibrium closed loop transition matrix

Ao = A − B1 F1 − B2 F2

where F1 and F2 are the firms' robust decision rules within the robust Markov perfect equilibrium
• by simulating under the baseline model transition dynamics and the robust MPE rules we are in effect assuming that the firms' concerns about misspecification of the baseline model do not materialize at the end of the day
• a short way of saying this is that misspecification fears are all just in the minds of the firms
• simulating under the baseline model is a common practice in the literature
• note that some assumption about the model that actually governs the data has to be made in order to
create a simulation
• later we will describe the (erroneous) beliefs of the two firms that justify their robust decisions as best
responses to transition laws that are distorted relative to the baseline model
After simulating xt under the baseline transition dynamics and robust decision rules Fi, i = 1, 2, we extract and plot industry output qt = q1t + q2t and price pt = a0 − a1 qt
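The mechanics of this simulation step can be sketched as follows (the decision rules F1 and F2 here are hypothetical placeholder values, not the equilibrium objects computed above):

```python
import numpy as np

# Hypothetical 1x3 decision rules, used only to illustrate the mechanics
a0, a1 = 10.0, 2.0
A = np.eye(3)
B1 = np.array([[0.], [1.], [0.]])
B2 = np.array([[0.], [0.], [1.]])
F1 = np.array([[-0.7, 0.4, 0.1]])   # placeholder rule for firm 1
F2 = np.array([[-0.7, 0.1, 0.4]])   # placeholder rule for firm 2

AO = A - B1 @ F1 - B2 @ F2          # closed loop transition matrix
n = 20
x = np.empty((3, n))
x[:, 0] = 1, 1, 1
for t in range(n - 1):
    x[:, t + 1] = AO @ x[:, t]

q = x[1, :] + x[2, :]               # industry output q1t + q2t
p = a0 - a1 * q                     # price from the inverse demand curve (7.45)
```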
Here we set the robustness and volatility matrix parameters as follows:
• θ1 = 0.02
• θ2 = 0.04
• C = [0, 0.01, 0.01]′
Because we have set θ1 < θ2 < +∞ we know that
• both firms fear that the baseline specification of the state transition dynamics is incorrect
• firm 1 fears misspecification more than firm 2
# == Robustness parameters and matrix == #
C = np.asmatrix([[0], [0.01], [0.01]])
θ1 = 0.02
θ2 = 0.04
n = 20
The following code prepares graphs that compare market-wide output q1t + q2t and the price of the good pt under equilibrium decision rules Fi, i = 1, 2 from an ordinary Markov perfect equilibrium and the decision rules under a Markov perfect equilibrium with robust firms with multiplier parameters θi, i = 1, 2 set as described above
Both industry output and price are under the transition dynamics associated with the baseline model; only
the decision rules Fi differ across the two equilibrium objects presented
fig, axes = plt.subplots(2, 1, figsize=(9, 9))

ax = axes[0]
ax.plot(q, 'g-', lw=2, alpha=0.75, label='MPE output')
ax.plot(qr, 'm-', lw=2, alpha=0.75, label='RMPE output')
ax.set(ylabel="output", xlabel="time", ylim=(2, 4))
ax.legend(loc='upper left', frameon=0)
ax = axes[1]
ax.plot(p, 'g-', lw=2, alpha=0.75, label='MPE price')
ax.plot(pr, 'm-', lw=2, alpha=0.75, label='RMPE price')
ax.set(ylabel="price", xlabel="time")
ax.legend(loc='upper right', frameon=0)
plt.show()
Under the dynamics associated with the baseline model, the price path is higher with the Markov perfect
equilibrium robust decision rules than it is with decision rules for the ordinary Markov perfect equilibrium
So is the industry output path
To dig a little beneath the forces driving these outcomes, we want to plot q1t and q2t in the Markov per-
fect equilibrium with robust firms and to compare them with corresponding objects in the Markov perfect
equilibrium without robust firms
fig, axes = plt.subplots(2, 1, figsize=(9, 9))
ax = axes[0]
ax.plot(q1, 'g-', lw=2, alpha=0.75, label='firm 1 MPE output')
ax.plot(qr1, 'b-', lw=2, alpha=0.75, label='firm 1 RMPE output')
ax.set(ylabel="output", xlabel="time", ylim=(1, 2))
ax = axes[1]
ax.plot(q2, 'g-', lw=2, alpha=0.75, label='firm 2 MPE output')
ax.plot(qr2, 'r-', lw=2, alpha=0.75, label='firm 2 RMPE output')
ax.set(ylabel="output", xlabel="time", ylim=(1, 2))
ax.legend(loc='upper left', frameon=0)
plt.show()
Evidently, firm 1's output path is substantially lower when firms are robust, while firm 2's output path is virtually the same as it would be in an ordinary Markov perfect equilibrium with no robust firms
Recall that we have set θ1 = .02 and θ2 = .04, so that firm 1 fears misspecification of the baseline model more than firm 2 does
Heterogeneous beliefs
As before, let Ao = A − B1 F1r − B2 F2r, where in a robust MPE, Fir is a robust decision rule for firm i
Worst-case forecasts of xt starting from t = 0 differ between the two firms
This means that worst-case forecasts of industry output q1t + q2t and price pt also differ between the two
firms
To find these worst-case beliefs, we compute the following three closed loop transition matrices
• Ao
• Ao + C K1
• Ao + C K2
where Ki is the worst-case shock feedback rule of firm i
We call the first transition law, namely, Ao, the baseline transition under firms' robust decision rules
We call the second and third worst-case transitions under robust decision rules for firms 1 and 2
From {xt } paths generated by each of these transition laws, we pull off associated price and total output
sequences
The following code plots them
# == Plot == #
fig, axes = plt.subplots(2, 1, figsize=(9, 9))

ax = axes[0]
ax.plot(qrp1, 'b--', lw=2, alpha=0.75,
        label='RMPE worst-case belief output player 1')
ax.plot(qrp2, 'r:', lw=2, alpha=0.75,
        label='RMPE worst-case belief output player 2')
ax.plot(qr, 'm-', lw=2, alpha=0.75, label='RMPE output')
ax.set(ylabel="output", xlabel="time", ylim=(2, 4))
ax.legend(loc='upper left', frameon=0)

ax = axes[1]
ax.plot(prp1, 'b--', lw=2, alpha=0.75,
        label='RMPE worst-case belief price player 1')
ax.plot(prp2, 'r:', lw=2, alpha=0.75,
        label='RMPE worst-case belief price player 2')
ax.plot(pr, 'm-', lw=2, alpha=0.75, label='RMPE price')
ax.set(ylabel="price", xlabel="time")
ax.legend(loc='upper right', frameon=0)
plt.show()
We see from the above graph that under robustness concerns, player 1 and player 2 have heterogeneous beliefs about total output and the good's price even though they share the same baseline model and information
• firm 1 thinks that total output will be higher and price lower than does firm 2
• this leads firm 1 to produce less than firm 2
These beliefs justify (or rationalize) the Markov perfect equilibrium robust decision rules
This means that the robust rules are the unique optimal rules (or best responses) to the indicated worst-case
transition dynamics
([HS08] discuss how this property of robust decision rules is connected to the concept of admissibility in
Bayesian statistical decision theory)
Contents
A little knowledge of geometric series goes a long way – Robert E. Lucas, Jr.
Asset pricing is all about covariances – Lars Peter Hansen
7.6.1 Overview
Let's look at some equations that we expect to hold for prices of assets under ex-dividend contracts (we will consider cum-dividend pricing in the exercises)
What happens if for some reason traders discount payouts differently depending on the state of the world?
Michael Harrison and David Kreps [HK79] and Lars Peter Hansen and Scott Richard [HR87] showed that in quite general settings the price of an ex-dividend asset obeys

pt = Et [mt+1 (dt+1 + pt+1)]    (7.50)

for some stochastic discount factor mt+1
Recall that, from the definition of a conditional covariance covt (xt+1, yt+1), we have

Et (xt+1 yt+1) = covt (xt+1, yt+1) + Et xt+1 Et yt+1
Aside from prices, another quantity of interest is the price-dividend ratio vt := pt /dt
Let's write down an expression that this ratio should satisfy
We can divide both sides of (7.50) by dt to get
vt = Et [ mt+1 (dt+1 / dt) (1 + vt+1) ]    (7.53)
Below we'll discuss the implications of this equation
What can we say about price dynamics on the basis of the models described above?
The answer to this question depends on
1. the process we specify for dividends
2. the stochastic discount factor and how it correlates with dividends
For now let's focus on the risk neutral case, where the stochastic discount factor is constant, and study how prices depend on the dividend process
The simplest case is risk neutral pricing in the face of a constant, non-random dividend stream dt = d > 0
Removing the expectation from (7.49) and iterating forward gives

pt = β(d + pt+1)
   = β(d + β(d + pt+2))
   ⋮
   = β(d + βd + β²d + ⋯ + β^(k−2) d + β^(k−1) pt+k)

If limk→∞ β^(k−1) pt+k = 0, the right hand side converges to

p̄ := βd / (1 − β)    (7.54)
This price is the equilibrium price in the constant dividend case
Indeed, simple algebra shows that setting pt = p̄ for all t satisfies the equilibrium condition pt = β(d+pt+1 )
Consider a growing, non-random dividend process dt+1 = gdt where 0 < gβ < 1
While prices are not usually constant when dividends grow over time, the price-dividend ratio might be
If we guess this, substituting vt = v into (7.53) as well as our other assumptions, we get v = βg(1 + v)
Since βg < 1, we have a unique positive solution:
v = βg / (1 − βg)

The price is then

pt = [βg / (1 − βg)] dt
If, in this example, we take g = 1 + κ and let ρ := 1/β − 1, then the price becomes
pt = [(1 + κ) / (ρ − κ)] dt
This is called the Gordon formula
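A quick numerical check that the Gordon form agrees with the derivation above (β and κ below are illustrative values):

```python
import numpy as np

β, κ = 0.96, 0.01      # illustrative discount factor and growth rate
g = 1 + κ
ρ = 1 / β - 1

v_direct = β * g / (1 - β * g)    # price-dividend ratio derived above
v_gordon = (1 + κ) / (ρ - κ)      # Gordon form
assert np.isclose(v_direct, v_gordon)
```

The equivalence follows algebraically from βg/(1 − βg) = g/(1/β − g) = (1 + κ)/(ρ − κ).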
Consider a dividend process whose growth rate is driven by a discrete state:

gt = g(Xt),   t = 1, 2, …

where

1. {Xt} is a finite Markov chain with state space S and transition probabilities

   P(x, y) := P{Xt+1 = y | Xt = x}    (x, y ∈ S)

2. g is a given function on S taking positive values
import numpy as np
import matplotlib.pyplot as plt
import quantecon as qe
Pricing
To obtain asset prices in this setting, let's adapt our analysis from the case of deterministic growth
In that case we found that v is constant
This encourages us to guess that, in the current case, vt is constant given the state Xt
In other words, we are looking for a fixed function v such that the price-dividend ratio satisfies vt = v(Xt )
We can substitute this guess into (7.53) to get
or
v(x) = β ∑y∈S K(x, y)(1 + v(y))   where   K(x, y) := g(y)P(x, y)    (7.56)
We can then think of (7.56) as n stacked equations, one for each state, and write it in matrix form as
v = βK(⊮ + v) (7.57)
Here
• v is understood to be the column vector (v(x1 ), . . . , v(xn ))′
• K is the matrix (K(xi , xj ))1≤i,j≤n
• ⊮ is a column vector of ones
When does (7.57) have a unique solution?
From the Neumann series lemma and Gelfand's formula, this will be the case if βK has spectral radius strictly less than one
In other words, we require that the eigenvalues of K be strictly less than β −1 in modulus
The solution is then

v = (I − βK)⁻¹ βK⊮    (7.58)
Code
from scipy.linalg import solve

# == Assumes the MarkovChain `mc` for {X_t}, its size n and the
#    discount factor β have been set up earlier == #
K = mc.P * np.exp(mc.state_values)
I = np.identity(n)
v = solve(I - β * K, β * K @ np.ones(n))
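Here is a fully self-contained sketch of the same computation with a hypothetical two-state chain (all numbers below are assumptions), checking both the spectral radius condition and the fixed point property v = βK(⊮ + v):

```python
import numpy as np

β = 0.96
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])                  # hypothetical transition matrix
g = np.array([1.01, 0.99])                  # hypothetical growth rate per state
K = P * g                                   # K(x, y) = g(y) P(x, y)

radius = max(abs(np.linalg.eigvals(β * K)))
assert radius < 1                           # a unique solution exists

v = np.linalg.solve(np.eye(2) - β * K, β * K @ np.ones(2))
assert np.allclose(v, β * K @ (np.ones(2) + v))   # v solves v = βK(1 + v)
```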
Now let's turn to the case where agents are risk averse
We'll price several distinct assets, including
• The price of an endowment stream
• A consol (a type of bond issued by the UK government in the 19th century)
• Call options on a consol
Let's start with a version of the celebrated asset pricing model of Robert E. Lucas, Jr. [Luc78]
As in [Luc78], suppose that the stochastic discount factor takes the form
mt+1 = β u′(ct+1) / u′(ct)    (7.59)
where u is a concave utility function and ct is time t consumption of a representative consumer
(A derivation of this expression is given in a later lecture)
Assume the existence of an endowment that follows (7.55)
The asset being priced is a claim on the endowment process
Following [Luc78], suppose further that in equilibrium, consumption is equal to the endowment, so that
dt = ct for all t
For utility, we'll assume the constant relative risk aversion (CRRA) specification
u(c) = c^(1−γ) / (1 − γ)   with γ > 0    (7.60)
When γ = 1 we let u(c) = ln c
Inserting the CRRA specification into (7.59) and using ct = dt gives
mt+1 = β (ct+1 / ct)^(−γ) = β gt+1^(−γ)    (7.61)
Substituting this into (7.53) gives the price-dividend ratio formula
v(Xt) = β Et [ g(Xt+1)^(1−γ) (1 + v(Xt+1)) ]
If we let

J(x, y) := g(y)^(1−γ) P(x, y)

then we can rewrite this in vector form as

v = βJ(⊮ + v)

Assuming that the spectral radius of J is strictly less than β⁻¹, this equation has the unique solution

v = (I − βJ)⁻¹ βJ⊮

We will define a function tree_price to solve for v given parameters stored in the class AssetPriceModel
class AssetPriceModel:
    """
    A class that stores the primitives of the asset pricing model.

    Parameters
    ----------
    β : scalar, float
        Discount factor
    mc : MarkovChain
        Contains the transition matrix and set of state values for the state
        process
    γ : scalar(float)
        Coefficient of risk aversion
    g : callable
        The function mapping states to growth rates
    """
    def __init__(self, β=0.96, mc=None, γ=2.0, g=np.exp):
        self.β, self.γ = β, γ
        self.g = g

        # == A default Markov chain for the state process (a discretized
        #    AR(1)) is used when mc is not supplied == #
        if mc is None:
            self.ρ = 0.9
            self.σ = 0.02
            self.mc = qe.tauchen(self.ρ, self.σ, n=25)
        else:
            self.mc = mc

        self.n = self.mc.P.shape[0]
def tree_price(ap):
    """
    Computes the price-dividend ratio of the Lucas tree.

    Parameters
    ----------
    ap: AssetPriceModel
        An instance of AssetPriceModel containing primitives

    Returns
    -------
    v : array_like(float)
    """
    # == Simplify names, set up matrices == #
    β, γ, P, y = ap.β, ap.γ, ap.mc.P, ap.mc.state_values
    J = P * ap.g(y)**(1 - γ)

    # == Compute v == #
    I = np.identity(ap.n)
    Ones = np.ones(ap.n)
    v = solve(I - β * J, β * J @ Ones)

    return v
Here's a plot of v as a function of the state for several values of γ, with a positively correlated Markov process and g(x) = exp(x)

γs = [1.2, 1.4, 1.6, 1.8, 2.0]
ap = AssetPriceModel()
states = ap.mc.state_values

fig, ax = plt.subplots(figsize=(12, 8))

for γ in γs:
    ap.γ = γ
    v = tree_price(ap)
    ax.plot(states, v, lw=2, alpha=0.6, label=rf"$\gamma = {γ}$")

ax.set(xlabel='state', ylabel='price-dividend ratio')
ax.legend(loc='upper right')
plt.show()
Special cases
In the special case γ = 1 we have J = P, since g^(1−γ) = 1; because P⊮ = ⊮, the unique solution is v = (β/(1 − β))⊮
Thus, with log preferences, the price-dividend ratio for a Lucas tree is constant
Alternatively, if γ = 0, then J = K and we recover the risk neutral solution (7.58)
This is as expected, since γ = 0 implies u(c) = c (and hence agents are risk neutral)
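The log-preference case is easy to verify numerically; a sketch with a hypothetical two-state chain (the matrix and growth rates are assumptions):

```python
import numpy as np
from numpy.linalg import solve

β, γ = 0.96, 1.0
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])                 # hypothetical transition matrix
g = np.exp(np.array([0.02, -0.02]))        # hypothetical growth rates

# Under log utility (γ = 1), J = P * g**(1 - γ) = P, so the
# price-dividend ratio equals β/(1 - β) in every state
J = P * g**(1 - γ)
v = solve(np.eye(2) - β * J, β * J @ np.ones(2))
assert np.allclose(v, β / (1 - β))
```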
A Risk-Free Consol
Consider a risk-free consol that promises a fixed coupon payment ζ > 0 each period
The price of this asset satisfies

pt = Et [ mt+1 (ζ + pt+1) ]

or, with the CRRA stochastic discount factor,

pt = Et [ β gt+1^(−γ) (ζ + pt+1) ]    (7.63)
Letting M (x, y) = P (x, y)g(y)−γ and rewriting in vector notation yields the solution
p = (I − βM )−1 βM ζ⊮ (7.64)
def consol_price(ap, ζ):
    """
    Computes price of a consol bond with payoff ζ

    Parameters
    ----------
    ap: AssetPriceModel
        An instance of AssetPriceModel containing primitives

    ζ : scalar(float)
        Coupon of the consol

    Returns
    -------
    p : array_like(float)
        Consol bond prices
    """
    # == Simplify names, set up matrices == #
    β, γ, P, y = ap.β, ap.γ, ap.mc.P, ap.mc.state_values
    M = P * ap.g(y)**(- γ)

    # == Compute price == #
    I = np.identity(ap.n)
    Ones = np.ones(ap.n)
    p = solve(I - β * M, β * ζ * M @ Ones)

    return p
Let's now price options of varying maturity that give the right to purchase a consol at a price pS
The first term on the right is the value of waiting, while the second is the value of exercising now
We can also write this as
w(x, pS) = max { β ∑y∈S P(x, y) g(y)^(−γ) w(y, pS),  p(x) − pS }    (7.65)
With M(x, y) = P(x, y)g(y)^(−γ) and w as the vector of values (w(xi, pS))ⁿᵢ₌₁, we can express (7.65) as the nonlinear vector equation

w = max{βM w, p − pS⊮}    (7.66)
To solve (7.66), form the operator T mapping vector w into vector T w via
T w = max{βM w, p − pS ⊮}
def call_option(ap, ζ, p_s, ϵ=1e-8):
    """
    Computes price of a call option on a consol bond.

    Parameters
    ----------
    ap: AssetPriceModel
        An instance of AssetPriceModel containing primitives

    ζ : scalar(float)
        Coupon of the consol

    p_s : scalar(float)
        Strike price

    ϵ : scalar(float), optional(default=1e-8)
        Tolerance for infinite horizon problem

    Returns
    -------
    w : array_like(float)
        Infinite horizon call option prices
    """
    # == Simplify names, set up matrices == #
    β, γ, P, y = ap.β, ap.γ, ap.mc.P, ap.mc.state_values
    M = P * ap.g(y)**(- γ)

    # == Compute option price by iterating on (7.66) == #
    p = consol_price(ap, ζ)
    w = np.zeros(ap.n)
    error = ϵ + 1
    while error > ϵ:
        # == Maximize across columns == #
        w_new = np.maximum(β * M @ w, p - p_s)
        # == Find maximal difference of each component and update == #
        error = np.amax(np.abs(w - w_new))
        w = w_new

    return w
ap = AssetPriceModel(β=0.9)
ζ = 1.0
strike_price = 40
x = ap.mc.state_values
p = consol_price(ap, ζ)
w = call_option(ap, ζ, strike_price)
As before, the stochastic discount factor is mt+1 = β gt+1^(−γ)

It follows that the reciprocal Rt⁻¹ of the gross risk-free interest rate Rt in state x is

Et mt+1 = β ∑y∈S P(x, y) g(y)^(−γ)
Other terms
Let mj be an n × 1 vector whose i-th component is the reciprocal of the j-period gross risk-free interest rate in state xi
Then m1 = βM⊮, and mj+1 = βM mj for j ≥ 1
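These definitions imply mj = βʲ Mʲ ⊮; a quick numerical check with hypothetical primitives (the chain and growth rates below are assumptions):

```python
import numpy as np

β, γ = 0.96, 2.0
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])                 # hypothetical transition matrix
g = np.exp(np.array([0.02, -0.02]))        # hypothetical growth rates
M = P * g**(-γ)

m = β * M @ np.ones(2)                     # m1: one-period reciprocal rates
for j in range(2):                         # extend out to three periods
    m = β * M @ m

# m3 should equal β^3 M^3 ⊮
assert np.allclose(m, β**3 * np.linalg.matrix_power(M, 3) @ np.ones(2))
```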
7.6.5 Exercises
Exercise 1
Exercise 2
n = 5
P = 0.0125 * np.ones((n, n))
P += np.diag(0.95 - 0.0125 * np.ones(5))
s = np.array([0.95, 0.975, 1.0, 1.025, 1.05])  # state values of the Markov chain
γ = 2.0
β = 0.94
Exercise 3
Let's consider finite horizon call options, which are more common than the infinite horizon variety
Finite horizon options obey functional equations closely related to (7.65)
A k period option expires after k periods
If we view today as date zero, a k period option gives the owner the right to exercise the option to purchase
the risk-free consol at the strike price pS at dates 0, 1, . . . , k − 1
The option expires at time k
Thus, for k = 1, 2, . . ., let w(x, k) be the value of a k-period option
It obeys

w(x, k) = max { β ∑y∈S P(x, y) g(y)^(−γ) w(y, k − 1),  p(x) − pS }

with w(x, 0) = 0 for all x, since the option is worthless once it has expired
7.6.6 Solutions
Exercise 1
pt = dt + βEt [pt+1 ]
Exercise 2
n = 5
P = 0.0125 * np.ones((n, n))
P += np.diag(0.95 - 0.0125 * np.ones(5))
s = np.array([0.95, 0.975, 1.0, 1.025, 1.05]) # state values
mc = qe.MarkovChain(P, state_values=s)
γ = 2.0
β = 0.94
ζ = 1.0
p_s = 150.0

apm = AssetPriceModel(β=β, mc=mc, γ=γ, g=lambda x: x)

tree_price(apm)
consol_price(apm, ζ)
call_option(apm, ζ, p_s)
fig, ax = plt.subplots()
ax.plot(s, consol_price(apm, ζ), label='consol')
ax.plot(s, call_option(apm, ζ, p_s), label='call option')
ax.legend()
plt.show()
Exercise 3
def finite_horizon_call_option(ap, ζ, p_s, k):
    """
    Computes the value of a k-period call option on the consol.
    """
    # == Simplify names, set up matrices == #
    β, γ, P, y = ap.β, ap.γ, ap.mc.P, ap.mc.state_values
    M = P * ap.g(y)**(- γ)

    # == Compute option price by backward induction from w(x, 0) = 0 == #
    p = consol_price(ap, ζ)
    w = np.zeros(ap.n)
    for i in range(k):
        # == Maximize across columns == #
        w = np.maximum(β * M @ w, p - p_s)

    return w
fig, ax = plt.subplots()
for k in [5, 25]:
    w = finite_horizon_call_option(apm, ζ, p_s, k)
    ax.plot(s, w, label=rf'$k = {k}$')
ax.legend()
plt.show()
Not surprisingly, the option has greater value with larger k. This is because the owner has a longer time
horizon over which he or she may exercise the option.
Contents
7.7.1 Overview
The elegant asset pricing model of Lucas [Luc78] attempts to answer the question of how to price a claim on a stream of payouts in an equilibrium setting with risk averse agents
While we mentioned some consequences of Lucas's model earlier, it is now time to work through the model more carefully and try to understand where the fundamental asset pricing equation comes from
A side benefit of studying Lucas's model is that it provides a beautiful illustration of model building in general and equilibrium pricing in competitive models in particular
Another difference from our first asset pricing lecture is that the state space and shock will be continuous rather than discrete
Lucas studied a pure exchange economy with a representative consumer (or household), where
• Pure exchange means that all endowments are exogenous
• Representative consumer means that either
– there is a single consumer (sometimes also referred to as a household), or
– all consumers have identical endowments and preferences
Either way, the assumption of a representative agent means that prices adjust to eradicate desires to trade
This makes it very easy to compute competitive equilibrium prices
Basic Setup
Assets
There is a single productive unit that costlessly generates a sequence of consumption goods {yt}∞t=0
We will assume that this endowment is Markovian, following the exogenous process

yt+1 = G(yt, zt+1)

where the shocks {zt} are iid with common distribution ϕ
7.7. Asset Pricing II: The Lucas Asset Pricing Model 893
QuantEcon.lectures-python3 PDF, Release 2018-Sep-29
Consumers
A representative consumer ranks consumption streams {ct } according to the time separable utility functional
E ∑∞t=0 β^t u(ct)    (7.67)
Here
• β ∈ (0, 1) is a fixed discount factor
• u is a strictly increasing, strictly concave, continuously differentiable period utility function
• E is a mathematical expectation
ct + πt+1 pt ≤ πt yt + πt pt
In fact the endowment process is Markovian, so that the only relevant information is the current state y ∈ R+
(dropping the time subscript)
This leads us to guess an equilibrium where price is a function p of y
Remarks on the solution method
• Since this is a competitive (read: price taking) model, the consumer will take this function p as given
• In this way we determine consumer behavior given p and then use equilibrium conditions to recover p
• This is the standard way to solve competitive equilibrium models
Using the assumption that price is a given function p of y, we write the value function and constraint as

v(π, y) = max over c, π′ of { u(c) + β ∫ v(π′, G(y, z))ϕ(dz) }

subject to

c + π′p(y) ≤ πy + πp(y)    (7.68)
We can invoke the fact that utility is increasing to claim equality in (7.68) and hence eliminate the constraint,
obtaining
v(π, y) = max over π′ of { u[π(y + p(y)) − π′p(y)] + β ∫ v(π′, G(y, z))ϕ(dz) }    (7.69)
The solution to this dynamic programming problem is an optimal policy expressing either π ′ or c as a
function of the state (π, y)
• Each one determines the other, since c(π, y) = π(y + p(y)) − π ′ (π, y)p(y)
Next steps
Equilibrium constraints
Since the consumption good is not storable, in equilibrium we must have ct = yt for all t
In addition, since there is one representative consumer (alternatively, since all consumers are identical), there
should be no trade in equilibrium
In particular, the representative consumer owns the whole tree in every period, so πt = 1 for all t
Prices must adjust to satisfy these two constraints
Now observe that the first order condition for (7.69) can be written as

u′(c)p(y) = β ∫ v1(π′, G(y, z))ϕ(dz)

where v1 is the derivative of v with respect to its first argument
Differentiating the right hand side of (7.69) with respect to π (the envelope condition) gives

v1(π, y) = u′(c)(y + p(y))

Next we impose the equilibrium constraints while combining the last two equations to get
p(y) = β ∫ [u′[G(y, z)] / u′(y)] [G(y, z) + p(G(y, z))] ϕ(dz)    (7.70)
pt = Et [ β (u′(ct+1) / u′(ct)) (yt+1 + pt+1) ]    (7.71)
Instead of solving for it directly we'll follow Lucas's indirect approach, first setting

f(y) := p(y)u′(y)    (7.72)

so that (7.70) becomes

f(y) = h(y) + β ∫ f[G(y, z)]ϕ(dz)    (7.73)

Here h(y) := β ∫ u′[G(y, z)]G(y, z)ϕ(dz) is a function that depends only on the primitives
Equation (7.73) is a functional equation in f
The plan is to solve out for f and convert back to p via (7.72)
To solve (7.73) well use a standard method: convert it to a fixed point problem
First we introduce the operator T mapping f into T f as defined by
(T f)(y) = h(y) + β ∫ f[G(y, z)]ϕ(dz)    (7.74)
The reason we do this is that a solution to (7.73) now corresponds to a function f ∗ satisfying (T f ∗ )(y) =
f ∗ (y) for all y
In other words, a solution is a fixed point of T
This means that we can use fixed point theory to obtain and compute the solution
An application of the triangle inequality and the fact that ϕ is a probability distribution gives

|(T f)(y) − (T g)(y)| ≤ β ∫ |f[G(y, z)] − g[G(y, z)]| ϕ(dz) ≤ β∥f − g∥

Since the right hand side is an upper bound, taking the sup over all y on the left hand side gives (7.75) with α := β
Computation – An Example
The preceding discussion tells us that we can compute f∗ by picking any arbitrary f ∈ cbℝ+ (the continuous bounded functions on ℝ+) and then iterating with T
The equilibrium price function p∗ can then be recovered by p∗(y) = f∗(y)/u′(y)
Let's try this when ln yt+1 = α ln yt + σϵt+1 where {ϵt} is iid and standard normal
Utility will take the isoelastic form u(c) = c^(1−γ)/(1 − γ), where γ > 0 is the coefficient of relative risk aversion
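Before turning to the full implementation below, here is a compact, self-contained sketch of the successive approximation scheme (all parameter values, the grid bounds and the number of Monte Carlo draws are illustrative assumptions):

```python
import numpy as np

np.random.seed(42)
β, γ, α, σ = 0.95, 2.0, 0.9, 0.1           # illustrative primitives
grid = np.linspace(0.1, 4.0, 40)            # grid for y (bounds are assumptions)
z = np.exp(σ * np.random.randn(400))        # Monte Carlo draws of the shock

# h(y) = β E u'(G(y, z)) G(y, z) = β E (y**α * z)**(1 - γ)
h = np.array([β * np.mean((y**α * z)**(1 - γ)) for y in grid])

f = np.zeros_like(grid)
for _ in range(600):
    # (T f)(y) = h(y) + β E f(G(y, z)), with f evaluated by interpolation
    Tf = h + β * np.array([np.mean(np.interp(y**α * z, grid, f))
                           for y in grid])
    if np.max(np.abs(Tf - f)) < 1e-10:
        f = Tf
        break
    f = Tf

price = f * grid**γ                         # p*(y) = f*(y) / u'(y)
```

Because T is a contraction with modulus β, the error shrinks geometrically, so the loop terminates well inside the iteration cap.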
Some code to implement the iterative computational procedure can be found in lucastree.py
We repeat it here for convenience
import numpy as np
from scipy.stats import lognorm
from scipy.integrate import fixed_quad
class LucasTree:
    """
Class to store parameters of a the Lucas tree model, a grid for the
iteration step and some other helpful bits and pieces.
Parameters
----------
γ : scalar(float)
The coefficient of risk aversion in the household's CRRA utility
function
β : scalar(float)
The household's discount factor
α : scalar(float)
The correlation coefficient in the shock process
σ : scalar(float)
    The volatility of the shock process
grid_size : int
    The number of points in the grid
Attributes
----------
γ, β, α, σ, grid_size : see Parameters
grid : ndarray
    Grid of points on which prices are evaluated
ϕ : scipy.stats.lognorm
The distribution for the shock process
Examples
--------
>>> tree = LucasTree(γ=2, β=0.95, α=0.90, σ=0.1)
>>> price_vals = solve_lucas_model(tree)
    """
    def __init__(self,
                 γ=2,
                 β=0.95,
                 α=0.90,
                 σ=0.1,
                 grid_size=100):

        self.γ, self.β, self.α, self.σ = γ, β, α, σ
        self.grid_size = grid_size

        # == Set the grid interval to contain most of the mass of the
        #    stationary distribution of the consumption process == #
        ssd = self.σ / np.sqrt(1 - self.α**2)
        grid_min, grid_max = np.exp(-4 * ssd), np.exp(4 * ssd)
        self.grid = np.linspace(grid_min, grid_max, grid_size)

        # == The distribution for the shock process == #
        self.ϕ = lognorm(σ)
        self.draws = self.ϕ.rvs(500)

        # == h(y) = β E[u'(G(y, z)) G(y, z)], computed by Monte Carlo == #
        self.h = np.empty(self.grid_size)
        for i, y in enumerate(self.grid):
            self.h[i] = β * np.mean((y**α * self.draws)**(1 - γ))
def lucas_operator(tree, f, Tf=None):
    """
    The approximate Lucas operator, which computes and returns the
    updated function Tf on the grid points.

    Parameters
    ----------
f : array_like(float)
A candidate function on R_+ represented as points on a grid
and should be flat NumPy array with len(f) = len(grid)
Tf : array_like(float)
Optional storage array for Tf
Returns
-------
Tf : array_like(float)
The updated function Tf
Notes
-----
The argument `Tf` is optional, but recommended. If it is passed
into this function, then we do not have to allocate any memory
for the array here. As this function is often called many times
in an iterative algorithm, this can save significant computation
time.
"""
    grid, h = tree.grid, tree.h
    α, β = tree.α, tree.β
    z_vec = tree.draws

    # == Turn f into a function via linear interpolation == #
    Af = lambda x: np.interp(x, grid, f)

    # == Apply the T operator to f using Monte Carlo integration == #
    if Tf is None:
        Tf = np.empty_like(f)
    for i, y in enumerate(grid):
        Tf[i] = h[i] + β * np.mean(Af(y**α * z_vec))

    return Tf
def solve_lucas_model(tree, tol=1e-6, max_iter=500):
    """
    Compute the equilibrium price function associated with Lucas
    tree `tree`.

    Parameters
    ----------
tree : An instance of LucasTree
Contains parameters
tol : float
error tolerance
max_iter : int
the maximum number of iterations
Returns
-------
price : array_like(float)
The prices at the grid points in the attribute `grid` of the object
"""
    # == Simplify notation == #
    grid, grid_size = tree.grid, tree.grid_size
    γ = tree.γ

    i = 0
    f = np.zeros(grid_size)  # Initial guess of f
    error = tol + 1

    while error > tol and i < max_iter:
        Tf = lucas_operator(tree, f)
        error = np.max(np.abs(Tf - f))
        f[:] = Tf
        i += 1

    # == Back out the price: p(y) = f(y) / u'(y) = f(y) * y**γ == #
    price = f * grid**γ

    return price
tree = LucasTree()
price_vals = solve_lucas_model(tree)
plt.figure(figsize=(12, 8))
plt.plot(tree.grid, price_vals, label='$p*(y)$')
plt.xlabel('$y$')
plt.ylabel('price')
plt.legend()
plt.show()
The price is increasing, even if we remove all serial correlation from the endowment process
The reason is that a larger current endowment reduces current marginal utility
The price must therefore rise to induce the household to consume the entire endowment (and hence satisfy
the resource constraint)
What happens with a more patient consumer?
Here the orange line corresponds to the previous parameters and the green line is price when β = 0.98
We see that when consumers are more patient the asset becomes more valuable, and the price of the Lucas
tree shifts up
Exercise 1 asks you to replicate this figure
7.7.3 Exercises
Exercise 1
7.7.4 Solutions
Exercise 1
Note that this code assumes you have run the lucastree.py script embedded above
fig, ax = plt.subplots(figsize=(10, 7))
ax.set_xlabel('$y$', fontsize=16)
ax.set_ylabel('price', fontsize=16)

for β in (.95, 0.98):
    tree = LucasTree(β=β)
    grid = tree.grid
    price_vals = solve_lucas_model(tree)
    ax.plot(grid, price_vals, lw=2, alpha=0.7, label=rf'$\beta = {β}$')

ax.legend(loc='upper left')
ax.set_xlim(min(grid), max(grid))
plt.show()
Contents
7.8.1 Overview
References
Prior to reading the following you might like to review our lectures on
• Markov chains
• Asset pricing with finite state space
Bubbles
The model simplifies by ignoring alterations in the distribution of wealth among investors having different
beliefs about the fundamentals that determine asset payouts
There is a fixed number A of shares of an asset
Each share entitles its owner to a stream of dividends {dt} governed by a Markov chain defined on a state space S = {0, 1}
The dividend obeys
dt = 0 if st = 0   and   dt = 1 if st = 1
The owner of a share at the beginning of time t is entitled to the dividend paid at time t
The owner of the share at the beginning of time t is also entitled to sell the share to another investor during
time t
Two types h = a, b of investors differ only in their beliefs about a Markov transition matrix P with typical
element
P (i, j) = P{st+1 = j | st = i}
The stationary (i.e., invariant) distributions of these two matrices can be calculated as follows:
import numpy as np
import quantecon as qe

qa = np.array([[1/2, 1/2], [2/3, 1/3]])    # Type a transition matrix
qb = np.array([[2/3, 1/3], [1/4, 3/4]])    # Type b transition matrix

mcA = qe.MarkovChain(qa)
mcB = qe.MarkovChain(qb)

mcA.stationary_distributions
mcB.stationary_distributions
Ownership Rights
An owner of the asset at the end of time t is entitled to the dividend at time t + 1 and also has the right to
sell the asset at time t + 1
Both types of investors are risk-neutral and both have the same fixed discount factor β ∈ (0, 1)
In our numerical example, we'll set β = .75, just as Harrison and Kreps did
We'll eventually study the consequences of two different assumptions about the number of shares A relative to the resources that our two types of investors can invest in the stock
1. Both types of investors have enough resources (either wealth or the capacity to borrow) so that they can purchase the entire available stock of the asset¹
2. No single type of investor has sufficient resources to purchase the entire stock
Case 1 is the case studied in Harrison and Kreps
In case 2, both types of investor always hold at least some of the asset
The above specifications of the perceived transition matrices Pa and Pb , taken directly from Harrison and
Kreps, build in stochastically alternating temporary optimism and pessimism
Remember that state 1 is the high dividend state
• In state 0, a type a agent is more optimistic about next period's dividend than a type b agent
• In state 1, a type b agent is more optimistic about next period's dividend
However, the stationary distributions πa = [.57  .43] and πb = [.43  .57] tell us that a type b person is more optimistic about the dividend process in the long run than is a type a person
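This long-run claim is easy to check directly from the left-eigenvector condition π = πP (a self-contained sketch that does not rely on the quantecon library):

```python
import numpy as np

qa = np.array([[1/2, 1/2], [2/3, 1/3]])    # Type a beliefs
qb = np.array([[2/3, 1/3], [1/4, 3/4]])    # Type b beliefs

def stationary(P):
    # Stationary distribution: left eigenvector of P for eigenvalue 1
    vals, vecs = np.linalg.eig(P.T)
    π = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return π / π.sum()

assert np.allclose(stationary(qa), [4/7, 3/7])   # ≈ [.57, .43]
assert np.allclose(stationary(qb), [3/7, 4/7])   # ≈ [.43, .57]
```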
Transition matrices for the temporarily optimistic and pessimistic investors are constructed as follows
¹ By assuming that both types of agent always have deep enough pockets to purchase all of the asset, the model takes wealth dynamics off the table. The Harrison-Kreps model generates high trading volume when the state changes either from 0 to 1 or from 1 to 0.
Temporarily optimistic investors (i.e., the investor with the most optimistic beliefs in each state) believe the transition matrix

Po := [ 1/2   1/2
        1/4   3/4 ]
Information
Investors know a price function mapping the state st at t into the equilibrium price p(st ) that prevails in that
state
This price function is endogenous and to be determined below
When investors choose whether to purchase or sell the asset at t, they also know st
Summary Table
The following table gives a summary of the findings obtained in the remainder of the lecture (you will be
asked to recreate the table in an exercise)
It records implications of Harrison and Kreps's specifications of Pa, Pb and β
st 0 1
pa 1.33 1.22
pb 1.45 1.91
po 1.85 2.08
pp 1 1
p̂a 1.85 1.69
p̂b 1.69 2.08
Here
• pa is the equilibrium price function under homogeneous beliefs Pa
• pb is the equilibrium price function under homogeneous beliefs Pb
• po is the equilibrium price function under heterogeneous beliefs with optimistic marginal investors
• pp is the equilibrium price function under heterogeneous beliefs with pessimistic marginal investors
• p̂a is the amount type a investors are willing to pay for the asset
• p̂b is the amount type b investors are willing to pay for the asset
We'll explain these values and how they are calculated one row at a time
Under homogeneous beliefs Ph, the price function satisfies ph = βPh(d + ph), where d = (0, 1)′ is the dividend vector, and the solution is

(ph(0), ph(1))′ = β[I − βPh]⁻¹ Ph (0, 1)′    (7.76)
The first two rows of the table report pa(s) and pb(s)
Here's a function that can be used to compute these values

import numpy as np
import scipy.linalg as la

def price_single_beliefs(transition, dividend_payoff, β=.75):
    """
    Solve for the asset price under homogeneous beliefs, via (7.76)
    """
    # First compute inner matrix, then solve for prices
    imat = np.eye(transition.shape[0]) - β * transition
    prices = β * la.solve(imat, transition @ dividend_payoff)

    return prices
These equilibrium prices under homogeneous beliefs are important benchmarks for the subsequent analysis
• ph (s) tells what investor h thinks is the fundamental value of the asset
• Here fundamental value means the expected discounted present value of future dividends
We will compare these fundamental values of the asset with equilibrium values when traders have different
beliefs
When the optimistic type of investor has sufficient resources to hold the entire stock of the asset, the equilibrium price satisfies

p̄(s) = β max {Pa(s, 0)p̄(0) + Pa(s, 1)(1 + p̄(1)), Pb(s, 0)p̄(0) + Pb(s, 1)(1 + p̄(1))}    (7.77)

for s = 0, 1
The marginal investor who prices the asset in state s is of type a if

Pa(s, 0)p̄(0) + Pa(s, 1)(1 + p̄(1)) > Pb(s, 0)p̄(0) + Pb(s, 1)(1 + p̄(1))

The marginal investor is of type b if

Pa(s, 0)p̄(0) + Pa(s, 1)(1 + p̄(1)) < Pb(s, 0)p̄(0) + Pb(s, 1)(1 + p̄(1))
Insufficient Funds
Outcomes differ when the more optimistic type of investor has insufficient wealth or insufficient ability to
borrow enough to hold the entire stock of the asset
In this case, the asset price must adjust to attract pessimistic investors
Instead of equation (7.77), the equilibrium price satisfies

p̌(s) = β min {Pa(s, 0)p̌(0) + Pa(s, 1)(1 + p̌(1)), Pb(s, 0)p̌(0) + Pb(s, 1)(1 + p̌(1))}    (7.79)

and the marginal investor who prices the asset is always the one that values it less highly than does the other
type
Now the marginal investor is always the (temporarily) pessimistic type
Notice from the table that the pessimistic price p̌ is lower than the homogeneous belief prices pa and
pb in both states
When pessimistic investors price the asset according to (7.79), optimistic investors think that the asset is
underpriced
If they could, optimistic investors would willingly borrow at the one-period gross interest rate β⁻¹ to purchase
more of the asset
Implicit constraints on leverage prohibit them from doing so
When optimistic investors price the asset as in equation (7.77), pessimistic investors think that the asset is
overpriced and would like to sell the asset short
Constraints on short sales prevent that
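The price function in (7.77) can be computed the same way, iterating the max operator to its fixed point. A sketch, again using the lecture's matrices and β = .75:

```python
import numpy as np

P_a = np.array([[1/2, 1/2], [2/3, 1/3]])
P_b = np.array([[2/3, 1/3], [1/4, 3/4]])
β = 0.75
d = np.array([0.0, 1.0])   # dividend is 1 in state 1, 0 in state 0

# Iterate p̄(s) = β max over belief types of expected payoff, as in (7.77)
p = np.zeros(2)
for _ in range(500):
    p = β * np.maximum(P_a @ (d + p), P_b @ (d + p))

print(np.round(p, 2))  # [1.85 2.08]
```

This reproduces the po row of the summary table.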
Further Interpretation
[Sch14] interprets the Harrison-Kreps model as a model of a bubble: a situation in which an asset price
exceeds what every investor thinks is merited by the asset's underlying dividend stream
Scheinkman stresses these features of the Harrison-Kreps model:
• Compared to the homogeneous beliefs setting leading to the pricing formula (7.76), high volume occurs
when the Harrison-Kreps pricing formula (7.77) prevails
Type a investors sell the entire stock of the asset to type b investors every time the state switches from st = 0
to st = 1
Type b investors sell the asset to type a investors every time the state switches from st = 1 to st = 0
Scheinkman takes this as a strength of the model because he observes high volume during famous bubbles
• If the supply of the asset is increased sufficiently either physically (more houses are built) or artificially
(ways are invented to short sell houses), bubbles end when the supply has grown enough to outstrip
optimistic investors' resources for purchasing the asset
• If optimistic investors finance purchases by borrowing, tightening leverage constraints can extinguish
a bubble
Scheinkman extracts insights about effects of financial regulations on bubbles
He emphasizes how limiting short sales and limiting leverage have opposite effects
7.8.4 Exercises
Exercise 1
Recreate the summary table using the functions we have built above
st 0 1
pa 1.33 1.22
pb 1.45 1.91
po 1.85 2.08
pp 1 1
p̂a 1.85 1.69
p̂b 1.69 2.08
You will first need to define the transition matrices and dividend payoff vector
7.8.5 Solutions
Exercise 1
First we will obtain equilibrium price vectors with homogeneous beliefs, including when all investors are
optimistic or pessimistic
p_a
====================
State 0: [ 1.33]
State 1: [ 1.22]
--------------------
p_b
====================
State 0: [ 1.45]
State 1: [ 1.91]
--------------------
p_optimistic
====================
State 0: [ 1.85]
State 1: [ 2.08]
--------------------
p_pessimistic
====================
State 0: [ 1.]
State 1: [ 1.]
--------------------
We will use the price_optimistic_beliefs function to find the price under heterogeneous beliefs
p_optimistic
====================
State 0: [ 1.85]
State 1: [ 2.08]
--------------------
p_hat_a
====================
State 0: [ 1.85]
State 1: [ 1.69]
--------------------
p_hat_b
====================
State 0: [ 1.69]
State 1: [ 2.08]
--------------------
Notice that the equilibrium price with heterogeneous beliefs is equal to the price under single beliefs with
optimistic investors - this is due to the marginal investor being the temporarily optimistic type
7.9.1 Overview
In this lecture we study a simplified version of an uncertainty traps model of Fajgelbaum, Schaal and
Taschereau-Dumouchel [FSTD15]
The model features self-reinforcing uncertainty that has big impacts on economic activity
In the model,
• Fundamentals vary stochastically and are not fully observable
• At any moment there are both active and inactive entrepreneurs; only active entrepreneurs produce
• Agents – active and inactive entrepreneurs – have beliefs about the fundamentals expressed as probability
distributions
• Greater uncertainty means greater dispersions of these distributions
• Entrepreneurs are risk averse and hence less inclined to be active when uncertainty is high
• The output of active entrepreneurs is observable, supplying a noisy signal that helps everyone inside
the model infer fundamentals
• Entrepreneurs update their beliefs about fundamentals using Bayes' Law, implemented via Kalman
filtering
Uncertainty traps emerge because:
• High uncertainty discourages entrepreneurs from becoming active
• A low level of participation – i.e., a smaller number of active entrepreneurs – diminishes the flow of
information about fundamentals
• Less information translates to higher uncertainty, further discouraging entrepreneurs from choosing
to be active, and so on
Uncertainty traps stem from a positive externality: high levels of aggregate economic activity generate valuable
information
The original model described in [FSTD15] has many interesting moving parts
Here we examine a simplified version that nonetheless captures many of the key ideas
Fundamentals
The evolution of the fundamental process {θt} is given by

θ′ = ρθ + σθ w′

where
Output
xm = θ + ϵm    where ϵm ∼ N(0, γx⁻¹)    (7.80)
With this notation and primes for next period values, we can write the updating of the mean and precision
via
µ′ = ρ (γµ + M γx X) / (γ + M γx)    (7.81)

γ′ = ( ρ² / (γ + M γx) + σθ² )⁻¹    (7.82)
These are standard Kalman filtering results applied to the current setting
Exercise 1 provides more details on how (7.81) and (7.82) are derived, and then asks you to fill in remaining
steps
The next figure plots the law of motion for the precision in (7.82) as a 45 degree diagram, with one curve
for each M ∈ {0, . . . , 6}
The other parameter values are ρ = 0.99, γx = 0.5, σθ = 0.5
Points where the curves hit the 45 degree lines are long run steady states for precision for different values of
M
Thus, if one of these values for M remains fixed, a corresponding steady state is the equilibrium level of
precision
• high values of M correspond to greater information about the fundamental, and hence more precision
in steady state
• low values of M correspond to less information and more uncertainty in steady state
In practice, as we'll see, the number of active firms fluctuates stochastically
Participation
Omitting time subscripts once more, entrepreneurs enter the market in the current period if

E[u(xm − Fm)] > c    (7.83)
Here
• the mathematical expectation of xm is based on (7.80) and beliefs N (µ, γ −1 ) for θ
• Fm is a stochastic but previsible fixed cost, independent across time and firms
• c is a constant reflecting opportunity costs
The statement that Fm is previsible means that it is realized at the start of the period and treated as a constant
in (7.83)
The utility function has the constant absolute risk aversion form
u(x) = (1/a)(1 − exp(−ax))    (7.84)
where a is a positive parameter
Combining (7.83) and (7.84), entrepreneur m participates in the market (or is said to be active) when
(1/a) {1 − E[exp(−a(θ + ϵm − Fm))]} > c
a
Using standard formulas for expectations of lognormal random variables, this is equivalent to the condition
ψ(µ, γ, Fm) := (1/a) ( 1 − exp( −aµ + aFm + a²(1/γ + 1/γx)/2 ) ) − c > 0    (7.85)
7.9.3 Implementation
We will build a class called UncertaintyTrapEcon that bundles the model parameters, the current value of θ and the current belief parameters µ and γ, together with
• methods to update θ, µ and γ, as well as to determine the number of active firms and their outputs
The updating methods follow the laws of motion for θ, µ and γ given above
The method to evaluate the number of active firms generates F1 , . . . , FM̄ and tests condition (7.85) for each
firm
The __init__ method encodes as default values the parameters we'll use in the simulations below
import numpy as np
class UncertaintyTrapEcon:
def __init__(self,
a=1.5, # Risk aversion
γ_x=0.5, # Production shock precision
ρ=0.99, # Correlation coefficient for θ
σ_θ=0.5, # Standard dev of θ shock
num_firms=100, # Number of firms
σ_F=1.5, # Standard dev of fixed costs
c=-420, # External opportunity cost
µ_init=0, # Initial value for µ
γ_init=4, # Initial value for γ
θ_init=0): # Initial value for θ
# == Record values == #
self.a, self.γ_x, self.ρ, self.σ_θ = a, γ_x, ρ, σ_θ
self.num_firms, self.σ_F, self.c, = num_firms, σ_F, c
self.σ_x = np.sqrt(1/γ_x)
# == Initialize states == #
        self.γ, self.µ, self.θ = γ_init, µ_init, θ_init

    def ψ(self, F):
        """
        Evaluate the entry payoff (7.85) at fixed cost F, given
        current beliefs (µ, γ).
        """
        temp1 = -self.a * (self.µ - F)
        temp2 = self.a**2 * (1 / self.γ + 1 / self.γ_x) / 2
        return (1 / self.a) * (1 - np.exp(temp1 + temp2)) - self.c

    def update_beliefs(self, X, M):
        """
        Update beliefs (µ, γ) via (7.81)-(7.82), given aggregates X and M.
        """
        # Simplify names
        γ_x, ρ, σ_θ = self.γ_x, self.ρ, self.σ_θ
        # Update µ, then γ (the µ update uses the current γ)
        self.µ = ρ * (self.γ * self.µ + M * γ_x * X) / (self.γ + M * γ_x)
        self.γ = 1 / (ρ**2 / (self.γ + M * γ_x) + σ_θ**2)

    def update_θ(self, w):
        """
        Update the fundamental state θ given shock w.
        """
        self.θ = self.ρ * self.θ + self.σ_θ * w
def gen_aggregates(self):
"""
Generate aggregates based on current beliefs (µ, γ). This
is a simulation step that depends on the draws for F.
"""
F_vals = self.σ_F * np.random.randn(self.num_firms)
M = np.sum(self.ψ(F_vals) > 0) # Counts number of active firms
if M > 0:
x_vals = self.θ + self.σ_x * np.random.randn(M)
X = x_vals.mean()
else:
X = 0
return X, M
In the results below we use this code to simulate time series for the major variables
7.9.4 Results
Let's look first at the dynamics of µ, which the agents use to track θ
We see that µ tracks θ well when there are sufficient firms in the market
However, there are times when µ tracks θ poorly due to insufficient information
These are episodes where the uncertainty traps take hold
During these episodes
• precision is low and uncertainty is high
• few firms are in the market
To get a clearer idea of the dynamics, let's look at all the main time series at once, for a given set of shocks
Notice how the traps only take hold after a sequence of bad draws for the fundamental
Thus, the model gives us a propagation mechanism that maps bad random draws into long downturns in
economic activity
7.9.5 Exercises
Exercise 1
Fill in the details behind (7.81) and (7.82) based on the following standard result (see, e.g., p. 24 of [YS05])
Fact Let x = (x1 , . . . , xM ) be a vector of IID draws from common distribution N (θ, 1/γx ) and let x̄ be the
sample mean. If γx is known and the prior for θ is N (µ, 1/γ), then the posterior distribution of θ given x is

N(µ0, 1/γ0)

where

µ0 = (µγ + M x̄γx) / (γ + M γx)    and    γ0 = γ + M γx
Exercise 2
Modulo randomness, replicate the simulation figures shown above, using the parameter values listed as
defaults in the __init__ method
7.9.6 Solutions
Exercise 1
This exercise asked you to validate the laws of motion for γ and µ given in the lecture, based on the stated
result about Bayesian updating in a scalar Gaussian setting. The stated result tells us that after observing
average output X of the M firms, our posterior beliefs will be
N (µ0 , 1/γ0 )
where
µ0 = (µγ + M Xγx) / (γ + M γx)    and    γ0 = γ + M γx
If we take a random variable θ with this distribution and then evaluate the distribution of ρθ + σθ w where w
is independent and standard normal, we get the expressions for µ′ and γ ′ given in the lecture.
Exercise 2
First let's replicate the plot that illustrates the law of motion for precision, which is
γt+1 = ( ρ² / (γt + M γx) + σθ² )⁻¹
Here M is the number of active firms. The next figure plots γt+1 against γt on a 45 degree diagram for
different values of M
import matplotlib.pyplot as plt

econ = UncertaintyTrapEcon()
ρ, σ_θ, γ_x = econ.ρ, econ.σ_θ, econ.γ_x # simplify names
γ = np.linspace(1e-10, 3, 200) # γ grid
fig, ax = plt.subplots(figsize=(9, 9))
ax.plot(γ, γ, 'k-') # 45 degree line
for M in range(7):
γ_next = 1 / (ρ**2 / (γ + M * γ_x) + σ_θ**2)
label_string = f"$M = {M}$"
ax.plot(γ, γ_next, lw=2, label=label_string)
ax.legend(loc='lower right', fontsize=14)
ax.set_xlabel(r'$\gamma$', fontsize=16)
ax.set_ylabel(r"$\gamma'$", fontsize=16)
ax.grid()
plt.show()
The points where the curves hit the 45 degree lines are the long run steady states corresponding to each M ,
if that value of M was to remain fixed. As the number of firms falls, so does the long run steady state of
precision
Next let's generate time series for beliefs and the aggregates – that is, the number of active firms and average
output
sim_length=2000
µ_vec = np.empty(sim_length)
θ_vec = np.empty(sim_length)
γ_vec = np.empty(sim_length)
X_vec = np.empty(sim_length)
M_vec = np.empty(sim_length)
µ_vec[0] = econ.µ
γ_vec[0] = econ.γ
θ_vec[0] = 0
w_shocks = np.random.randn(sim_length)
for t in range(sim_length-1):
X, M = econ.gen_aggregates()
X_vec[t] = X
M_vec[t] = M
econ.update_beliefs(X, M)
econ.update_θ(w_shocks[t])
µ_vec[t+1] = econ.µ
γ_vec[t+1] = econ.γ
θ_vec[t+1] = econ.θ
plt.show()
If you run the code above you'll get different plots, of course
Try experimenting with different parameters to see the effects on the time series
(It would also be interesting to experiment with non-Gaussian distributions for the shocks, but this is a big
exercise since it takes us outside the world of the standard Kalman filter)
7.10.1 Overview
In this lecture we describe the structure of a class of models that build on work by Truman Bewley [Bew77]
We begin by discussing an example of a Bewley model due to Rao Aiyagari
The model features
• Heterogeneous agents
• A single exogenous vehicle for borrowing and lending
• Limits on amounts individual agents may borrow
The Aiyagari model has been used to investigate many topics, including
• precautionary savings and the effect of liquidity constraints [Aiy94]
• risk sharing and asset pricing [HL96]
• the shape of the wealth distribution [BBZ15]
• etc., etc., etc.
References
Households
Households choose consumption and asset paths {ct, at} to maximize

E ∑_{t=0}^∞ β^t u(ct)

subject to

a_{t+1} + ct ≤ w zt + (1 + r) at,    ct ≥ 0,    and    at ≥ −B

where
• ct is current consumption
• at is assets
• zt is an exogenous component of labor income capturing stochastic unemployment risk, etc.
• w is a wage rate
• r is a net interest rate
• B is the maximum amount that the agent is allowed to borrow
The exogenous process {zt } follows a finite state Markov chain with given stochastic matrix P
The wage and interest rate are fixed over time
In this simple version of the model, households supply labor inelastically because they do not value leisure
7.10.3 Firms
Yt = A Kt^α N^(1−α)
where
• A and α are parameters with A > 0 and α ∈ (0, 1)
• Kt is aggregate capital
• N is total labor supply (which is constant in this simple version of the model)
The firm's problem is

max_{K,N} { A K^α N^(1−α) − (r + δ)K − wN }

The first-order condition with respect to capital is

r = Aα (N/K)^(1−α) − δ    (7.86)
Using this expression and the firm's first-order condition for labor, we can pin down the equilibrium wage
rate as a function of r as

w(r) = A(1 − α)(Aα/(r + δ))^(α/(1−α))    (7.87)
Equilibrium
7.10.4 Code
Let's look at how we might compute the above equilibrium in practice. To solve the household's dynamic
programming problem we'll use the DiscreteDP class from QuantEcon.py, which requires a reward array R
and a transition array Q, where
• R needs to be a matrix where R[s, a] is the reward at state s under action a
• Q needs to be a three dimensional array where Q[s, a, s'] is the probability of transitioning to
state s' when the current state is s and the current action is a
(For a detailed discussion of DiscreteDP see this lecture)
Here we take the state to be st := (at , zt ), where at is assets and zt is the shock
The action is the choice of next period asset level at+1
We use Numba to speed up the loops so we can update the matrices efficiently when the parameters change
The class also includes a default set of parameters that we'll adopt unless otherwise specified
import numpy as np
from numba import jit
class Household:
"""
This class takes the parameters that define a household asset accumulation
problem and computes the corresponding reward and transition matrices R
and Q required to generate an instance of DiscreteDP, and thereby solve
for the optimal policy.
"""
def __init__(self,
r=0.01, # interest rate
w=1.0, # wages
β=0.96, # discount factor
a_min=1e-10,
Π=[[0.9, 0.1], [0.1, 0.9]], # Markov chain
z_vals=[0.1, 1.0], # exogenous states
a_max=18,
a_size=200):
        # Store prices and grid parameters
        self.r, self.w, self.β = r, w, β
        self.a_min, self.a_max, self.a_size = a_min, a_max, a_size

        self.Π = np.asarray(Π)
        self.z_vals = np.asarray(z_vals)
        self.z_size = len(z_vals)

        self.a_vals = np.linspace(a_min, a_max, a_size)
        self.n = a_size * self.z_size

        # Build the arrays Q and R
        self.Q = np.zeros((self.n, a_size, self.n))
        self.build_Q()
        self.R = np.empty((self.n, a_size))
        self.build_R()

    def set_prices(self, r, w):
        """
        Reset prices and rebuild the reward array R.
        """
        self.r, self.w = r, w
        self.build_R()
def build_Q(self):
populate_Q(self.Q, self.a_size, self.z_size, self.Π)
def build_R(self):
self.R.fill(-np.inf)
populate_R(self.R, self.a_size, self.z_size, self.a_vals, self.z_vals,
,→ self.r, self.w)
@jit(nopython=True)
def populate_R(R, a_size, z_size, a_vals, z_vals, r, w):
n = a_size * z_size
for s_i in range(n):
a_i = s_i // z_size
z_i = s_i % z_size
a = a_vals[a_i]
z = z_vals[z_i]
for new_a_i in range(a_size):
a_new = a_vals[new_a_i]
c = w * z + (1 + r) * a - a_new
if c > 0:
R[s_i, new_a_i] = np.log(c) # Utility
@jit(nopython=True)
def populate_Q(Q, a_size, z_size, Π):
n = a_size * z_size
for s_i in range(n):
z_i = s_i % z_size
for a_i in range(a_size):
for next_z_i in range(z_size):
Q[s_i, a_i, a_i * z_size + next_z_i] = Π[z_i, next_z_i]
@jit(nopython=True)
def asset_marginal(s_probs, a_size, z_size):
a_probs = np.zeros(a_size)
for a_i in range(a_size):
for z_i in range(z_size):
a_probs[a_i] += s_probs[a_i * z_size + z_i]
return a_probs
As a first example of what we can do, let's compute and plot an optimal accumulation policy at fixed prices
import quantecon as qe
import matplotlib.pyplot as plt
from quantecon.markov import DiscreteDP
# Example prices
r = 0.03
w = 0.956
# Create an instance of Household
am = Household(a_max=20, r=r, w=w)

# Use the instance to build a discrete dynamic program and solve it
am_ddp = DiscreteDP(am.R, am.Q, am.β)
results = am_ddp.solve(method='policy_iteration')

# Simplify names
z_size, a_size = am.z_size, am.a_size
z_vals, a_vals = am.z_vals, am.a_vals
n = a_size * z_size
# Get all optimal actions across the set of a indices with z fixed in each row
a_star = np.empty((z_size, a_size))
for s_i in range(n):
a_i = s_i // z_size
z_i = s_i % z_size
a_star[z_i, a_i] = a_vals[results.sigma[s_i]]
plt.show()
The plot shows asset accumulation policies at different values of the exogenous state
Now we want to calculate the equilibrium
Let's do this visually as a first pass
The following code draws aggregate supply and demand curves
The intersection gives equilibrium interest rates and capital
A = 1.0
N = 1.0
α = 0.33
β = 0.96
δ = 0.05
def r_to_w(r):
"""
Equilibrium wages associated with a given interest rate r.
"""
return A * (1 - α) * (A * α / (r + δ))**(α / (1 - α))
def rd(K):
"""
Inverse demand curve for capital. The interest rate associated with a
given demand for capital K.
"""
return A * α * (N / K)**(1 - α) - δ
def prices_to_capital_stock(am, r):
    """
    Map prices to the induced level of capital stock.

    Parameters:
----------
am : Household
An instance of an aiyagari_household.Household
r : float
The interest rate
"""
w = r_to_w(r)
am.set_prices(r, w)
aiyagari_ddp = DiscreteDP(am.R, am.Q, β)
# Compute the optimal policy
results = aiyagari_ddp.solve(method='policy_iteration')
# Compute the stationary distribution
stationary_probs = results.mc.stationary_distributions[0]
# Extract the marginal distribution for assets
asset_probs = asset_marginal(stationary_probs, am.a_size, am.z_size)
# Return K
return np.sum(asset_probs * am.a_vals)
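As a quick consistency check on (7.86), the firm's first-order condition can be inverted to get the demand for capital at a given interest rate. A sketch, using the parameter values set above:

```python
A, N, α, δ = 1.0, 1.0, 0.33, 0.05  # as set above

def rd(K):
    # Interest rate implied by the firm's FOC (7.86) at capital stock K
    return A * α * (N / K)**(1 - α) - δ

def K_demand(r):
    # Invert (7.86): solve r = Aα(N/K)^(1-α) - δ for K
    return N * (A * α / (r + δ))**(1 / (1 - α))

K = K_demand(0.03)
print(round(rd(K), 6))  # 0.03
```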
plt.show()
Contents
– Overview
– Structure
– Equilibrium
– Computation
– Results
– Exercises
– Solutions
7.11.1 Overview
7.11.2 Structure
A small open economy is endowed with an exogenous stochastically fluctuating potential output stream {yt }
Potential output is realized only in periods in which the government honors its sovereign debt
The output good can be traded or consumed
The sequence {yt } is described by a Markov process with stochastic density kernel p(y, y ′ )
Households within the country are identical and rank stochastic consumption streams according to
E ∑_{t=0}^∞ β^t u(ct)    (7.88)
Here
• 0 < β < 1 is a time discount factor
• u is an increasing and strictly concave utility function
Consumption sequences enjoyed by households are affected by the government's decision to borrow or lend
internationally
The government is benevolent in the sense that its aim is to maximize (7.88)
The government is the only domestic actor with access to foreign credit
Because households are averse to consumption fluctuations, the government will try to smooth consumption
by borrowing from (and lending to) foreign creditors
Asset Markets
The only credit instrument available to the government is a one-period bond traded in international credit
markets
The bond market has the following features
• The bond matures in one period and is not state contingent
• A purchase of a bond with face value B ′ is a claim to B ′ units of the consumption good next period
• To purchase B ′ next period costs qB ′ now, or, what is equivalent
• For selling −B′ units of next period goods the seller earns −qB′ of today's goods
– if B ′ < 0, then −qB ′ units of the good are received in the current period, for a promise to repay
−B ′ units next period
– there is an equilibrium price function q(B ′ , y) that makes q depend on both B ′ and y
Earnings on the government portfolio are distributed (or, if negative, taxed) lump sum to households
When the government is not excluded from financial markets, the one-period national budget constraint is

c = y + B − q(B′, y)B′    (7.89)

Here and below, a prime denotes a next period value or a claim maturing next period
To rule out Ponzi schemes, we also require that B ≥ −Z in every period
• Z is chosen to be sufficiently large that the constraint never binds in equilibrium
Financial Markets
Foreign creditors
• are risk neutral
• know the domestic output stochastic process {yt } and observe yt , yt−1 , . . . , at time t
• can borrow or lend without limit in an international credit market at a constant international interest
rate r
• receive full payment if the government chooses to pay
• receive zero if the government defaults on its one-period debt due
When a government is expected to default next period with probability δ, the expected value of a promise to
pay one unit of consumption next period is 1 − δ.
Therefore, the discounted expected value of a promise to pay B next period is
q = (1 − δ) / (1 + r)    (7.90)
Next we turn to how the government in effect chooses the default probability δ
Government's decisions
If the government defaults in the current period, the economy enters a default state with two consequences
1. Output immediately falls to a lower level h(y), with 0 ≤ h(y) ≤ y
• it returns to y only after the country regains access to international credit markets
2. The country loses access to foreign credit markets
While in a state of default, the economy regains access to foreign credit in each subsequent period with
probability θ
7.11.3 Equilibrium
Informally, an equilibrium is a sequence of interest rates on its sovereign debt, a stochastic sequence of
government default decisions and an implied flow of household consumption such that
1. Consumption and assets satisfy the national budget constraint
2. The government maximizes household utility taking into account
• the resource constraint
• the effect of its choices on the price of bonds
• consequences of defaulting now for future net output and future borrowing and lending oppor-
tunities
3. The interest rate on the governments debt includes a risk-premium sufficient to make foreign creditors
expect on average to earn the constant risk-free international interest rate
To express these ideas more precisely, consider first the choices of the government, which
1. enters a period with initial assets B, or what is the same thing, initial debt to be repaid now of −B
2. observes current output y, and
3. chooses either
(a) to default, or
(b) to pay −B and set next period's debt due to −B′
In a recursive formulation,
• state variables for the government comprise the pair (B, y)
• v(B, y) is the optimum value of the governments problem when at the beginning of a period it faces
the choice of whether to honor or default
• vc (B, y) is the value of choosing to pay obligations falling due
• vd (y) is the value of choosing to default
vd (y) does not depend on B because, when access to credit is eventually regained, net foreign assets equal
0
The expected probability of default next period, given B′ and y, is

δ(B′, y) := ∫ ⊮{vc(B′, y′) < vd(y′)} p(y, y′)dy′    (7.91)
Given zero profits for foreign creditors in equilibrium, we can combine (7.90) and (7.91) to pin down the
bond price function:
q(B′, y) = (1 − δ(B′, y)) / (1 + r)    (7.92)
Definition of equilibrium
An equilibrium is
• a pricing function q(B ′ , y),
• a triple of value functions (vc (B, y), vd (y), v(B, y)),
• a decision rule telling the government when to default and when to pay as a function of the state
(B, y), and
• an asset accumulation rule that, conditional on choosing not to default, maps (B, y) into B ′
such that
• The three Bellman equations for (vc (B, y), vd (y), v(B, y)) are satisfied
• Given the price function q(B ′ , y), the default decision rule and the asset accumulation decision rule
attain the optimal value function v(B, y), and
• The price function q(B ′ , y) satisfies equation (7.92)
7.11.4 Computation
"""
"""
import numpy as np
import random
import quantecon as qe
from numba import jit
class Arellano_Economy:
"""
Arellano 2008 deals with a small open economy whose government
invests in foreign assets in order to smooth the consumption of
domestic households. Domestic households receive a stochastic
path of income.
Parameters
----------
β : float
Time discounting parameter
γ : float
Risk-aversion parameter
r : float
int lending rate
ρ : float
Persistence in the income process
η : float
Standard deviation of the income process
θ : float
Probability of re-entering financial markets in each period
ny : int
Number of points in y grid
nB : int
Number of points in B grid
tol : float
Error tolerance in iteration
maxit : int
Maximum number of iterations
"""
def __init__(self,
β=.953, # time discount rate
γ=2., # risk aversion
r=0.017, # international interest rate
ρ=.945, # persistence in output
η=0.025, # st dev of output shock
θ=0.282, # prob of regaining access
ny=21, # number of points in y grid
nB=251, # number of points in B grid
tol=1e-8, # error tolerance in iteration
maxit=10000):
# Save parameters
self.β, self.γ, self.r = β, γ, r
self.ρ, self.η, self.θ = ρ, η, θ
self.ny, self.nB = ny, nB
        # Create grids and discretize income process
        self.Bgrid = np.linspace(-.45, .45, nB)
        self.mc = qe.markov.tauchen(ρ, η, 0, 3, ny)
        self.ygrid = np.exp(self.mc.state_values)
        self.Py = self.mc.P

        # Output received while in default
        ymean = np.mean(self.ygrid)
        self.def_y = np.minimum(0.969 * ymean, self.ygrid)

        # Allocate memory
self.Vd = np.zeros(ny)
self.Vc = np.zeros((ny, nB))
self.V = np.zeros((ny, nB))
self.Q = np.ones((ny, nB)) * .95 # Initial guess for prices
self.default_prob = np.empty((ny, nB))
    def solve(self, tol=1e-8, maxit=10000):
        # Iteration setup
        it = 0
        dist = 10.

        # == Main loop == #
        while dist > tol and maxit > it:

            # Compute expectations used below
            EV, EVd, EVc = (self.Py @ v for v in (self.V, self.Vd, self.Vc))

            # Update value functions in place via the jitted inner loop
            _inner_loop(self.ygrid, self.def_y, self.Bgrid, self.Vd, self.Vc,
                        EVc, EVd, EV, self.Q, self.β, self.θ, self.γ)

            # Update prices
            Vd_compat = np.repeat(self.Vd, self.nB).reshape(self.ny, self.nB)
            default_states = Vd_compat > self.Vc
            self.default_prob[:, :] = self.Py @ default_states
            self.Q[:, :] = (1 - self.default_prob)/(1 + self.r)

            # Check for convergence
            V_upd = np.maximum(self.Vc, Vd_compat)
            dist = np.max(np.abs(V_upd - self.V))
            self.V[:, :] = V_upd

            it += 1
            if it % 25 == 0:
                print(f"Running iteration {it} with dist of {dist}")

        return None
def compute_savings_policy(self):
"""
Compute optimal savings B' conditional on not defaulting.
The policy is recorded as an index value in Bgrid.
"""
# Allocate memory
self.next_B_index = np.empty((self.ny, self.nB))
        EV = self.Py @ self.V
        _compute_savings_policy(self.ygrid, self.Bgrid, self.Q, EV,
                                self.γ, self.β, self.next_B_index)
    def simulate(self, T, y_init=None, B_init=None):
        """
        Simulate the y, B, q and default series for T periods.
        """
        # Find index of zero debt level in Bgrid
        zero_B_index = np.searchsorted(self.Bgrid, 0)

        if y_init is None:
# Set to index near the mean of the ygrid
y_init = np.searchsorted(self.ygrid, self.ygrid.mean())
if B_init is None:
B_init = zero_B_index
        # Simulate the income process and allocate memory for the paths
        y_sim_indices = self.mc.simulate_indices(T, init=y_init)
        B_sim_indices = np.empty(T, dtype=np.int64)
        B_sim_indices[0] = B_init
        q_sim = np.empty(T)
        in_default_series = np.zeros(T, dtype=np.int64)

        # Start off not in default
        in_default = False
for t in range(T-1):
yi, Bi = y_sim_indices[t], B_sim_indices[t]
if not in_default:
if self.Vc[yi, Bi] < self.Vd[yi]:
in_default = True
Bi_next = zero_B_index
else:
new_index = self.next_B_index[yi, Bi]
Bi_next = new_index
else:
in_default_series[t] = 1
Bi_next = zero_B_index
if random.uniform(0, 1) < self.θ:
in_default = False
B_sim_indices[t+1] = Bi_next
q_sim[t] = self.Q[yi, int(Bi_next)]
        # Map indices into values and return the simulated paths
        y_sim = self.ygrid[y_sim_indices]
        B_sim = self.Bgrid[B_sim_indices]
        return_vecs = (y_sim, B_sim, q_sim, in_default_series)
        return return_vecs
@jit(nopython=True)
def u(c, γ):
return c**(1-γ)/(1-γ)
@jit(nopython=True)
def _inner_loop(ygrid, def_y, Bgrid, Vd, Vc, EVc,
EVd, EV, qq, β, θ, γ):
"""
This is a numba version of the inner loop of the solve in the
Arellano class. It updates Vd and Vc in place.
"""
ny, nB = len(ygrid), len(Bgrid)
zero_ind = nB // 2 # Integer division
for iy in range(ny):
y = ygrid[iy] # Pull out current y
# Compute Vd
Vd[iy] = u(def_y[iy], γ) + \
β * (θ * EVc[iy, zero_ind] + (1 - θ) * EVd[iy])
# Compute Vc
for ib in range(nB):
B = Bgrid[ib] # Pull out current B
current_max = -1e14
for ib_next in range(nB):
c = max(y - qq[iy, ib_next] * Bgrid[ib_next] + B, 1e-14)
m = u(c, γ) + β * EV[iy, ib_next]
if m > current_max:
current_max = m
Vc[iy, ib] = current_max
return None
@jit(nopython=True)
def _compute_savings_policy(ygrid, Bgrid, Q, EV, γ, β, next_B_index):
# Compute best index in Bgrid given iy, ib
ny, nB = len(ygrid), len(Bgrid)
for iy in range(ny):
y = ygrid[iy]
for ib in range(nB):
B = Bgrid[ib]
current_max = -1e10
for ib_next in range(nB):
c = max(y - Q[iy, ib_next] * Bgrid[ib_next] + B, 1e-14)
m = u(c, γ) + β * EV[iy, ib_next]
if m > current_max:
current_max = m
current_max_index = ib_next
next_B_index[iy, ib] = current_max_index
return None
7.11.5 Results
We can use the results of the computation to study the default probability δ(B ′ , y) defined in (7.91)
The next plot shows these default probabilities over (B ′ , y) as a heat map
As anticipated, the probability that the government chooses to default in the following period increases with
indebtedness and falls with income
Next let's run a time series simulation of {yt}, {Bt} and q(Bt+1, yt)
The grey vertical bars correspond to periods when the economy is excluded from financial markets because
of a past default
One notable feature of the simulated data is the nonlinear response of interest rates
Periods of relative stability are followed by sharp spikes in the discount rate on government debt
7.11.6 Exercises
Exercise 1
To the extent that you can, replicate the figures shown above
• Use the parameter values listed as defaults in the __init__ method of the Arellano_Economy class
• The time series will of course vary depending on the shock draws
7.11.7 Solutions
# Create "Y High" and "Y Low" values as 5% devs from mean
high, low = np.mean(ae.ygrid) * 1.05, np.mean(ae.ygrid) * .95
iy_high, iy_low = (np.searchsorted(ae.ygrid, x) for x in (high, low))
# Create figure
fig, ax = plt.subplots(figsize=(10, 6.5))
xx, yy = ae.Bgrid, ae.ygrid
zz = ae.default_prob
hm = ax.pcolormesh(xx, yy, zz)
cax = fig.add_axes([.92, .1, .02, .8])
fig.colorbar(hm, cax=cax)
ax.axis([xx.min(), 0.05, yy.min(), yy.max()])
ax.set(xlabel="$B'$", ylabel="$y$", title="Probability of Default")
plt.show()
T = 250
y_vec, B_vec, q_vec, default_vec = ae.simulate(T)
plt.show()
Contents
7.12.1 Overview
In this lecture, we review the paper Globalization and Synchronization of Innovation Cycles by Kiminori
Matsuyama, Laura Gardini and Iryna Sushko
This model helps us understand several interesting stylized facts about the world economy
One of these is synchronized business cycles across different countries
Most existing models that generate synchronized business cycles do so by assumption, since they tie output
in each country to a common shock
They also fail to explain certain features of the data, such as the fact that the degree of synchronization tends
to increase with trade ties
By contrast, in the model we consider in this lecture, synchronization is both endogenous and increasing
with the extent of trade integration
In particular, as trade costs fall and international competition increases, innovation incentives become
aligned and countries synchronize their innovation cycles
Background
The model builds on work by Judd [Jud85], Deneckere and Judd [DJ92] and Helpman and Krugman
[HK85] by developing a two country model with trade and innovation
On the technical side, the paper introduces the concept of coupled oscillators to economic modeling
As we will see, coupled oscillators arise endogenously within the model
Below we review the model and replicate some of the results on synchronization of innovation across countries
Innovation Cycles
As discussed above, two countries produce and trade with each other
In each country, firms innovate, producing new varieties of goods and, in doing so, receiving temporary
monopoly power
Imitators follow and, after one period of monopoly, what had previously been new varieties now enter
competitive production
Firms have incentives to innovate and produce new goods when the mass of varieties of goods currently in
production is relatively low
In addition, there are strategic complementarities in the timing of innovation
Firms have incentives to innovate in the same period, so as to avoid competing with substitutes that are
competitively produced
This leads to temporal clustering in innovations in each country
After a burst of innovation, the mass of goods currently in production increases
However, goods also become obsolete, so that not all survive from period to period
This mechanism generates a cycle, where the mass of varieties increases through simultaneous innovation
and then falls through obsolescence
Synchronization
In the absence of trade, the timing of innovation cycles in each country is decoupled
This will be the case when trade costs are prohibitively high
If trade costs fall, then goods produced in each country penetrate each other's markets
As illustrated below, this leads to synchronization of business cycles across the two countries
7.12.3 Model
Here xk,t (ν) is the total amount of a differentiated good ν ∈ Ωt that is produced
The parameter σ > 1 is the direct partial elasticity of substitution between a pair of varieties and Ωt is the
set of varieties available in period t
We can split the varieties into those which are supplied competitively and those supplied monopolistically; that is, Ω_t = Ω^c_t + Ω^m_t
Prices
where
A_{j,t} := ∑_k ρ_{j,k} L_k / (P_{k,t})^{1−σ}    and    ρ_{j,k} := (τ_{j,k})^{1−σ} ≤ 1
It is assumed that τ1,1 = τ2,2 = 1 and τ1,2 = τ2,1 = τ for some τ > 1, so that
Monopolists will have the same marked-up price, so, for all ν ∈ Ω^m,

p_{j,t}(ν) = p^m_{j,t} := ψ / (1 − 1/σ)    and    D_{j,t} = y^m_{j,t} := α A_{j,t} (p^m_{j,t})^{−σ}
Define

θ := (p^c_{j,t} y^c_{j,t}) / (p^m_{j,t} y^m_{j,t}) = (1 − 1/σ)^{1−σ}
Using the preceding definitions and some algebra, the price indices can now be rewritten as

(P_{k,t} / ψ)^{1−σ} = M_{k,t} + ρ M_{j,t}    where    M_{j,t} := N^c_{j,t} + N^m_{j,t} / θ

The symbols N^c_{j,t} and N^m_{j,t} will denote the measures of Ω^c and Ω^m respectively
New Varieties
To introduce a new variety, a firm must hire f units of labor per variety in each country
Monopolist profits must be less than or equal to zero in expectation, so

N^m_{j,t} ≥ 0,    π^m_{j,t} := (p^m_{j,t} − ψ) y^m_{j,t} − f ≤ 0    and    π^m_{j,t} N^m_{j,t} = 0
Law of Motion
With δ as the exogenous probability that a variety survives from one period to the next, the dynamic equation for the measure
of firms becomes
N^c_{j,t+1} = δ(N^c_{j,t} + N^m_{j,t}) = δ(N^c_{j,t} + θ(M_{j,t} − N^c_{j,t}))
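This law of motion is simple enough to sketch directly in code. The function name and the choice to expose θ and δ as arguments are ours, for illustration only; the defaults match those of the MSGSync class below

```python
def next_nc(nc, m, θ=2.5, δ=0.7):
    """
    One step of the law of motion for the measure of competitively
    supplied varieties: N^c_{j,t+1} = δ(N^c_{j,t} + θ(M_{j,t} - N^c_{j,t}))
    """
    return δ * (nc + θ * (m - nc))

# When there is no innovation (M_{j,t} = N^c_{j,t}), the measure of
# varieties simply shrinks at rate δ through obsolescence
no_innovation = next_nc(0.5, 0.5)      # 0.7 * 0.5 = 0.35
with_innovation = next_nc(0.4, 0.6)    # 0.7 * (0.4 + 2.5 * 0.2) = 0.63
```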
Here
DLL := {(n1 , n2 ) ∈ R2+ |nj ≤ sj (ρ)}
DHH := {(n1 , n2 ) ∈ R2+ |nj ≥ hj (ρ)}
DHL := {(n1 , n2 ) ∈ R2+ |n1 ≥ s1 (ρ) and n2 ≤ h2 (n1 )}
DLH := {(n1 , n2 ) ∈ R2+ |n1 ≤ h1 (n2 ) and n2 ≥ s2 (ρ)}
while
s1(ρ) = 1 − s2(ρ) = min{ (s1 − ρ s2) / (1 − ρ), 1 }
7.12.4 Simulation
The computational burden of testing synchronization across many initial conditions is not trivial
In order to make our code fast, we will use just-in-time compiled functions that will get called and handled
by our class
These are the @jit statements that you see below (review this lecture if you don't recall how to use JIT
compilation)
Here's the main body of code
@jit(nopython=True)
def _hj(j, nk, s1, s2, θ, δ, ρ):
    """
    If we expand the implicit function for h_j(n_k) then we find that
    it is a quadratic. We know that h_j(n_k) > 0 so we can get its
    value by using the quadratic formula
    """
    # Find out whose h we are evaluating
    if j == 1:
        sj = s1
        sk = s2
    else:
        sj = s2
        sk = s1

    return root
@jit(nopython=True)
def DLL(n1, n2, s1_ρ, s2_ρ, s1, s2, θ, δ, ρ):
    "Determine whether (n1, n2) is in the set DLL"
    return (n1 <= s1_ρ) and (n2 <= s2_ρ)

@jit(nopython=True)
def DHH(n1, n2, s1_ρ, s2_ρ, s1, s2, θ, δ, ρ):
    "Determine whether (n1, n2) is in the set DHH"
    return (n1 >= _hj(1, n2, s1, s2, θ, δ, ρ)) and (n2 >= _hj(2, n1, s1, s2, θ, δ, ρ))
@jit(nopython=True)
def DHL(n1, n2, s1_ρ, s2_ρ, s1, s2, θ, δ, ρ):
    "Determine whether (n1, n2) is in the set DHL"
    return (n1 >= s1_ρ) and (n2 <= _hj(2, n1, s1, s2, θ, δ, ρ))

@jit(nopython=True)
def DLH(n1, n2, s1_ρ, s2_ρ, s1, s2, θ, δ, ρ):
    "Determine whether (n1, n2) is in the set DLH"
    return (n1 <= _hj(1, n2, s1, s2, θ, δ, ρ)) and (n2 >= s2_ρ)
@jit(nopython=True)
def one_step(n1, n2, s1_ρ, s2_ρ, s1, s2, θ, δ, ρ):
    """
    Takes a current value for (n_{1, t}, n_{2, t}) and returns the
    values (n_{1, t+1}, n_{2, t+1}) according to the law of motion.
    """
    # Depending on where we are, evaluate the right branch
    if DLL(n1, n2, s1_ρ, s2_ρ, s1, s2, θ, δ, ρ):
        n1_tp1 = δ * (θ * s1_ρ + (1 - θ) * n1)
        n2_tp1 = δ * (θ * s2_ρ + (1 - θ) * n2)
    elif DHH(n1, n2, s1_ρ, s2_ρ, s1, s2, θ, δ, ρ):
        n1_tp1 = δ * n1
        n2_tp1 = δ * n2
    elif DHL(n1, n2, s1_ρ, s2_ρ, s1, s2, θ, δ, ρ):
        n1_tp1 = δ * n1
        n2_tp1 = δ * (θ * _hj(2, n1, s1, s2, θ, δ, ρ) + (1 - θ) * n2)
    elif DLH(n1, n2, s1_ρ, s2_ρ, s1, s2, θ, δ, ρ):
        n1_tp1 = δ * (θ * _hj(1, n2, s1, s2, θ, δ, ρ) + (1 - θ) * n1)
        n2_tp1 = δ * n2

    return n1_tp1, n2_tp1
@jit(nopython=True)
def n_generator(n1_0, n2_0, s1_ρ, s2_ρ, s1, s2, θ, δ, ρ):
    """
    Given an initial condition, continues to yield new values of
    n1 and n2
    """
    n1_t, n2_t = n1_0, n2_0
    while True:
        n1_tp1, n2_tp1 = one_step(n1_t, n2_t, s1_ρ, s2_ρ, s1, s2, θ, δ, ρ)
        yield (n1_tp1, n2_tp1)
        n1_t, n2_t = n1_tp1, n2_tp1
@jit(nopython=True)
def _pers_till_sync(n1_0, n2_0, s1_ρ, s2_ρ, s1, s2, θ, δ, ρ, maxiter, npers):
    """
    Takes initial values and iterates forward to see whether
    the histories eventually end up in sync.

    If countries are symmetric then as soon as the two countries have the
    same measure of firms then they will be synchronized -- However, if
    they are not symmetric then it is possible they have the same measure
    of firms but are not yet synchronized. To address this, we check whether
    firms stay synchronized for `npers` periods with Euclidean norm

    Parameters
    ----------
    n1_0 : scalar(Float)
        Initial normalized measure of firms in country one
    n2_0 : scalar(Float)
        Initial normalized measure of firms in country two
    maxiter : scalar(Int)
        Maximum number of periods to simulate
    npers : scalar(Int)
        Number of periods we would like the countries to have the
        same measure for

    Returns
    -------
    synchronized : scalar(Bool)
        Did the two economies end up synchronized
    pers_2_sync : scalar(Int)
        The number of periods required until they synchronized
    """
    # Initialize the status of synchronization
    synchronized = False
    pers_2_sync = maxiter
    iters = 0

    # Initialize generator
    n_gen = n_generator(n1_0, n2_0, s1_ρ, s2_ρ, s1, s2, θ, δ, ρ)
@jit(nopython=True)
def _create_attraction_basis(s1_ρ, s2_ρ, s1, s2, θ, δ, ρ, maxiter, npers,
                             npts):
    return time_2_sync
class MSGSync:
    """
    The paper "Globalization and Synchronization of Innovation Cycles" presents
    a two country model with endogenous innovation cycles. Combines elements
    from Deneckere Judd (1985) and Helpman Krugman (1985) to allow for a
    model with trade that has firms who can introduce new varieties into
    the economy.

    Parameters
    ----------
    s1 : scalar(Float)
        Amount of total labor in country 1 relative to total worldwide labor
    θ : scalar(Float)
        A measure of how much more of the competitive variety is used in
        production of final goods
    δ : scalar(Float)
        Percentage of firms that are not exogenously destroyed every period
    ρ : scalar(Float)
        Measure of how expensive it is to trade between countries
    """
    def __init__(self, s1=0.5, θ=2.5, δ=0.7, ρ=0.2):
        # Store model parameters
        self.s1, self.θ, self.δ, self.ρ = s1, θ, δ, ρ

        # Labor shares sum to one, and s_2(ρ) = 1 - s_1(ρ)
        self.s2 = 1 - s1
        self.s1_ρ = self._calc_s1_ρ()
        self.s2_ρ = 1 - self.s1_ρ

    def _unpack_params(self):
        return self.s1, self.s2, self.θ, self.δ, self.ρ

    def _calc_s1_ρ(self):
        # Unpack params
        s1, s2, θ, δ, ρ = self._unpack_params()

        # s_1(ρ) = min(val, 1)
        val = (s1 - ρ * s2) / (1 - ρ)
        return min(val, 1)
        Parameters
        ----------
        n1_0 : scalar(Float)
            Initial normalized measure of firms in country one
        n2_0 : scalar(Float)
            Initial normalized measure of firms in country two
        T : scalar(Int)
            Number of periods to simulate

        Returns
        -------
        n1 : Array(Float64, ndim=1)
            A history of normalized measures of firms in country one
        n2 : Array(Float64, ndim=1)
            A history of normalized measures of firms in country two
        """
        # Unpack parameters
        s1, s2, θ, δ, ρ = self._unpack_params()
        s1_ρ, s2_ρ = self.s1_ρ, self.s2_ρ

        # Allocate space
        n1 = np.empty(T)
        n2 = np.empty(T)

        # Store in arrays
        n1[t] = n1_tp1
        n2[t] = n2_tp1

        return n1, n2
        If countries are symmetric then as soon as the two countries have the
        same measure of firms then they will be synchronized -- However, if
        they are not symmetric then it is possible they have the same measure
        of firms but are not yet synchronized. To address this, we check whether
        firms stay synchronized for `npers` periods with Euclidean norm

        Parameters
        ----------
        n1_0 : scalar(Float)
            Initial normalized measure of firms in country one
        n2_0 : scalar(Float)
            Initial normalized measure of firms in country two
        maxiter : scalar(Int)
            Maximum number of periods to simulate
        npers : scalar(Int)
            Number of periods we would like the countries to have the
            same measure for

        Returns
        -------
        synchronized : scalar(Bool)
            Did the two economies end up synchronized
        pers_2_sync : scalar(Int)
            The number of periods required until they synchronized
        """
        # Unpack parameters
        s1, s2, θ, δ, ρ = self._unpack_params()
        s1_ρ, s2_ρ = self.s1_ρ, self.s2_ρ

        """
        # Unpack parameters
        s1, s2, θ, δ, ρ = self._unpack_params()
        s1_ρ, s2_ρ = self.s1_ρ, self.s2_ρ

        return ab
return ab
We write a short function below that exploits the preceding code and plots two time series
Each time series gives the dynamics for the two countries
The time series share parameters but differ in their initial condition
Here's the function
ax.legend()
ax.set(title=title, ylim=(0.15, 0.8))
return ax
# Create figure
fig, ax = plt.subplots(2, 1, figsize=(10, 8))
fig.tight_layout()
plt.show()
In the first case, innovation in the two countries does not synchronize
In the second case different initial conditions are chosen, and the cycles become synchronized
Basin of Attraction
Next lets study the initial conditions that lead to synchronized cycles more systematically
We generate time series from a large collection of different initial conditions and mark those conditions with
different colors according to whether synchronization occurs or not
The next display shows exactly this for four different parameterizations (one for each subfigure)
Dark colors indicate synchronization, while light colors indicate failure to synchronize
7.12.5 Exercises
Exercise 1
Replicate the figure shown above by coloring initial conditions according to whether or not synchronization
occurs from those conditions
7.12.6 Solutions
return ab, cf
Interactive Version
Additionally, instead of just seeing 4 plots at once, we might want to be able to change ρ manually and see
how it affects the plot in real time. Below we use an interactive plot to do this
Note, interactive plotting requires the ipywidgets module to be installed and enabled
fig = interact(interact_attraction_basis,
ρ=(0.0, 1.0, 0.05),
maxiter=(50, 5000, 50),
npts=(25, 750, 25))
EIGHT
These lectures look at important concepts in time series that are used in economics
8.1.1 Overview
In this lecture we study covariance stationary linear stochastic processes, a class of models routinely used to
study economic and financial time series
This class has the advantage of being
1. simple enough to be described by an elegant and comprehensive theory
2. relatively broad in terms of the kinds of dynamics it can represent
We consider these models in both the time and frequency domain
ARMA Processes
We will focus much of our attention on linear covariance stationary models with a finite number of parameters
In particular, we will study stationary ARMA processes, which form a cornerstone of the standard theory of
time series analysis
Spectral Analysis
Other Reading
8.1.2 Introduction
Definitions
A real-valued stochastic process {Xt} is called covariance stationary if
1. its mean µ := EXt does not depend on t
2. for all k in Z, the k-th autocovariance γ(k) := E(Xt − µ)(Xt+k − µ) is finite and depends only on k
The function γ : Z → R is called the autocovariance function of the process
Throughout this lecture, we will work exclusively with zero-mean (i.e., µ = 0) covariance stationary processes
The zero-mean assumption costs nothing in terms of generality, since working with non-zero-mean processes involves no more than adding a constant
Perhaps the simplest class of covariance stationary processes is the class of white noise processes
A process {ϵt } is called a white noise process if
1. Eϵt = 0
2. γ(k) = σ² 1{k = 0} for some σ > 0
(Here 1{k = 0} is defined to be 1 if k = 0 and zero otherwise)
White noise processes play the role of building blocks for processes with more complicated dynamics
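As a quick numerical illustration (our own sketch, with an arbitrary seed and sample size), the sample autocovariances of simulated Gaussian white noise are close to σ² at lag zero and close to zero at other lags

```python
import numpy as np

σ, n = 2.0, 100_000
rng = np.random.default_rng(0)
ϵ = σ * rng.standard_normal(n)        # Gaussian white noise with std σ

# Sample autocovariances at lags 0 and 1
γ0_hat = np.mean(ϵ**2)                # close to σ² = 4
γ1_hat = np.mean(ϵ[1:] * ϵ[:-1])      # close to 0
```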
From the simple building block provided by white noise, we can construct a very flexible family of covariance stationary processes: the general linear processes

Xt = ∑_{j=0}^{∞} ψj ϵ_{t−j},    t ∈ Z    (8.1)
where
• {ϵt } is white noise
• {ψt} is a square summable sequence in R (that is, ∑_{t=0}^{∞} ψt² < ∞)
The sequence {ψt } is often called a linear filter
Equation (8.1) is said to present a moving average process or a moving average representation
With some manipulations it is possible to confirm that the autocovariance function for (8.1) is

γ(k) = σ² ∑_{j=0}^{∞} ψj ψ_{j+k}    (8.2)

By the Cauchy-Schwarz inequality, one can show that the sum in (8.2) converges
Evidently, γ(k) does not depend on t
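For example, an MA(1) process Xt = ϵt + θϵ_{t−1} is the general linear process with ψ = (1, θ, 0, 0, ...), and a direct evaluation of (8.2) (a sketch with illustrative values σ = 1, θ = 0.5) recovers the familiar MA(1) autocovariances γ(0) = σ²(1 + θ²), γ(1) = σ²θ and γ(k) = 0 for k ≥ 2

```python
import numpy as np

σ, θ = 1.0, 0.5
ψ = np.zeros(50)
ψ[0], ψ[1] = 1.0, θ          # MA(1): ψ = (1, θ, 0, 0, ...)

def γ(k):
    "Autocovariance from (8.2): γ(k) = σ² Σ_j ψ_j ψ_{j+k}"
    return σ**2 * np.sum(ψ[:len(ψ) - k] * ψ[k:])
```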
Wold's Decomposition
Remarkably, the class of general linear processes goes a long way towards describing the entire class of
zero-mean covariance stationary processes
In particular, Wold's decomposition theorem states that every zero-mean covariance stationary process {Xt}
can be written as
Xt = ∑_{j=0}^{∞} ψj ϵ_{t−j} + ηt
where
• {ϵt } is white noise
• {ψt } is square summable
• ηt can be expressed as a linear function of Xt−1 , Xt−2 , . . . and is perfectly predictable over arbitrarily
long horizons
For intuition and further discussion, see [Sar87], p. 286
AR and MA
γ(k) = ϕ^k σ² / (1 − ϕ²),    k = 0, 1, ...    (8.4)
The next figure plots an example of this function for ϕ = 0.8 and ϕ = −0.8 with σ = 1
import numpy as np
import matplotlib.pyplot as plt
num_rows, num_cols = 2, 1
ax.legend(loc='upper right')
ax.set(xlabel='time', xlim=(0, 15))
ax.hlines(0, 0, 15, linestyle='--', alpha=0.5)
plt.show()
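Formula (8.4) can be cross-checked against (8.2), since the AR(1) process has the moving average representation ψj = ϕ^j. Here is a quick numerical sketch of the check, truncating the infinite sum at an arbitrary length

```python
import numpy as np

ϕ, σ = 0.8, 1.0
ψ = ϕ ** np.arange(200)       # MA(∞) coefficients ψ_j = ϕ^j, truncated

def γ_ma(k):
    "Autocovariance via (8.2), with the infinite sum truncated"
    return σ**2 * np.sum(ψ[:len(ψ) - k] * ψ[k:])

def γ_ar1(k):
    "Closed form (8.4): γ(k) = ϕ^k σ² / (1 - ϕ²)"
    return ϕ**k * σ**2 / (1 - ϕ**2)
```

The truncation error is of order ϕ^400 here, which is far below floating point resolution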
Another very simple process is the MA(1) process (here MA means moving average)
Xt = ϵt + θϵt−1
ARMA Processes
A stochastic process {Xt } is called an autoregressive moving average process, or ARMA(p, q), if it can be
written as
L0 Xt − ϕ1 L1 Xt − · · · − ϕp Lp Xt = L0 ϵt + θ1 L1 ϵt + · · · + θq Lq ϵt (8.6)
If we let ϕ(z) and θ(z) be the polynomials
Autocovariance functions provide a great deal of information about covariance stationary processes
In fact, for zero-mean Gaussian processes, the autocovariance function characterizes the entire joint distribution
Even for non-Gaussian processes, it provides a significant amount of information
It turns out that there is an alternative representation of the autocovariance function of a covariance stationary
process, called the spectral density
At times, the spectral density is easier to derive, easier to manipulate, and provides additional intuition
Complex Numbers
Before discussing the spectral density, we invite you to recall the main properties of complex numbers (or
skip to the next section)
It can be helpful to remember that, in a formal sense, complex numbers are just points (x, y) ∈ R2 endowed
with a specific notion of multiplication
When (x, y) is regarded as a complex number, x is called the real part and y is called the imaginary part
The modulus or absolute value of a complex number z = (x, y) is just its Euclidean norm in R2 , but is
usually written as |z| instead of ∥z∥
The product of two complex numbers (x, y) and (u, v) is defined to be (xu − vy, xv + yu), while addition
is standard pointwise vector addition
When endowed with these notions of multiplication and addition, the set of complex numbers forms a field:
addition and multiplication play well together, just as they do in R
The complex number (x, y) is often written as x + iy, where i is called the imaginary unit, and is understood
to obey i2 = −1
The x + iy notation provides an easy way to remember the definition of multiplication given above, because,
proceeding naively,
Converted back to our first notation, this becomes (xu − vy, xv + yu) as promised
Complex numbers can be represented in the polar form reiω where
Spectral Densities
Let {Xt} be a covariance stationary process with autocovariance function γ satisfying ∑_k γ(k)² < ∞

The spectral density f of {Xt} is defined as the discrete time Fourier transform of its autocovariance function γ

f(ω) := ∑_{k∈Z} γ(k) e^{−iωk},    ω ∈ R
(Some authors normalize the expression on the right by constants such as 1/π; the convention chosen makes
little difference provided you are consistent)
Using the fact that γ is even, in the sense that γ(t) = γ(−t) for all t, we can show that
f(ω) = γ(0) + 2 ∑_{k≥1} γ(k) cos(ωk)    (8.9)
It is an exercise to show that the MA(1) process Xt = θϵt−1 + ϵt has spectral density
With a bit more effort, it's possible to show (see, e.g., p. 261 of [Sar87]) that the spectral density of the
AR(1) process Xt = ϕXt−1 + ϵt is
f(ω) = σ² / (1 − 2ϕ cos(ω) + ϕ²)    (8.11)
More generally, it can be shown that the spectral density of the ARMA process (8.5) is
f(ω) = |θ(e^{iω}) / ϕ(e^{iω})|² σ²    (8.12)
where
• σ is the standard deviation of the white noise process {ϵt }
• the polynomials ϕ(·) and θ(·) are as defined in (8.7)
The derivation of (8.12) uses the fact that convolutions become products under Fourier transformations
The proof is elegant and can be found in many places; see, for example, [Sar87], chapter 11, section 4
It's a nice exercise to verify that (8.10) and (8.11) are indeed special cases of (8.12)
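For the AR(1) case this verification amounts to noting that θ(z) = 1 and ϕ(z) = 1 − ϕz, so that (8.12) becomes σ² / |1 − ϕe^{iω}|², and |1 − ϕe^{iω}|² = 1 − 2ϕ cos(ω) + ϕ². A numerical sketch of the check:

```python
import numpy as np

ϕ, σ = 0.8, 1.0
ω = np.linspace(0, np.pi, 200)

# (8.12) specialized to AR(1): θ(z) = 1, ϕ(z) = 1 - ϕz, evaluated at z = e^{iω}
f_arma = σ**2 / np.abs(1 - ϕ * np.exp(1j * ω))**2

# (8.11) directly
f_ar1 = σ**2 / (1 - 2 * ϕ * np.cos(ω) + ϕ**2)
```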
Plotting (8.11) reveals the shape of the spectral density for the AR(1) model when ϕ takes the values 0.8 and
-0.8 respectively
ax.legend(loc='upper center')
ax.set(xlabel='frequency', xlim=(0, np.pi))
plt.show()
These spectral densities correspond to the autocovariance functions for the AR(1) process shown above
Informally, we think of the spectral density as being large at those ω ∈ [0, π] at which the autocovariance
function seems approximately to exhibit big damped cycles
To see the idea, let's consider why, in the lower panel of the preceding figure, the spectral density for the case
ϕ = −0.8 is large at ω = π
Recall that the spectral density can be expressed as
f(ω) = γ(0) + 2 ∑_{k≥1} γ(k) cos(ωk) = γ(0) + 2 ∑_{k≥1} (−0.8)^k cos(ωk)    (8.13)
When we evaluate this at ω = π, we get a large number because cos(πk) is large and positive when (−0.8)k
is positive, and large in absolute value and negative when (−0.8)k is negative
Hence the product is always large and positive, and hence the sum of the products on the right-hand side of
(8.13) is large
These ideas are illustrated in the next figure, which has k on the horizontal axis
ϕ = -0.8
times = list(range(16))
y1 = [ϕ**k / (1 - ϕ**2) for k in times]
y2 = [np.cos(np.pi * k) for k in times]
y3 = [a * b for a, b in zip(y1, y2)]
num_rows, num_cols = 3, 1
fig, axes = plt.subplots(num_rows, num_cols, figsize=(10, 8))
plt.subplots_adjust(hspace=0.25)
# Cycles at frequency π
ax = axes[1]
ax.plot(times, y2, 'bo-', alpha=0.6, label='$\cos(\pi k)$')
ax.legend(loc='upper right')
ax.set(xlim=(0, 15), yticks=(-1, 0, 1))
ax.hlines(0, 0, 15, linestyle='--', alpha=0.5)
# Product
ax = axes[2]
ax.stem(times, y3, label='$\gamma(k) \cos(\pi k)$')
ax.legend(loc='upper right')
ax.set(xlim=(0, 15), ylim=(-3, 3), yticks=(-1, 0, 1, 2, 3))
ax.hlines(0, 0, 15, linestyle='--', alpha=0.5)
ax.set_xlabel("k")
plt.show()
On the other hand, if we evaluate f (ω) at ω = π/3, then the cycles are not matched, the sequence
γ(k) cos(ωk) contains both positive and negative terms, and hence the sum of these terms is much smaller
ϕ = -0.8
times = list(range(16))
y1 = [ϕ**k / (1 - ϕ**2) for k in times]
y2 = [np.cos(np.pi * k/3) for k in times]
y3 = [a * b for a, b in zip(y1, y2)]
num_rows, num_cols = 3, 1
fig, axes = plt.subplots(num_rows, num_cols, figsize=(10, 8))
plt.subplots_adjust(hspace=0.25)
# Cycles at frequency π/3
ax = axes[1]
ax.plot(times, y2, 'bo-', alpha=0.6, label='$\cos(\pi k/3)$')
ax.legend(loc='upper right')
ax.set(xlim=(0, 15), yticks=(-1, 0, 1))
ax.hlines(0, 0, 15, linestyle='--', alpha=0.5)
# Product
ax = axes[2]
ax.stem(times, y3, label='$\gamma(k) \cos(\pi k/3)$')
ax.legend(loc='upper right')
ax.set(xlim=(0, 15), ylim=(-3, 3), yticks=(-1, 0, 1, 2, 3))
ax.hlines(0, 0, 15, linestyle='--', alpha=0.5)
ax.set_xlabel("$k$")
plt.show()
In summary, the spectral density is large at frequencies ω where the autocovariance function exhibits damped
cycles
We have just seen that the spectral density is useful in the sense that it provides a frequency-based perspective
on the autocovariance structure of a covariance stationary process
Another reason that the spectral density is useful is that it can be inverted to recover the autocovariance
function via the inverse Fourier transform
In particular, for all k ∈ Z, we have
γ(k) = (1/2π) ∫_{−π}^{π} f(ω) e^{iωk} dω    (8.14)
This is convenient in situations where the spectral density is easier to calculate and manipulate than the
autocovariance function
(For example, the expression (8.12) for the ARMA spectral density is much easier to work with than the
expression for the ARMA autocovariance)
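As a sanity check (our own sketch, with illustrative parameters), we can recover the AR(1) autocovariances in (8.4) by approximating the integral in (8.14) with a Riemann sum over a fine grid of frequencies

```python
import numpy as np

ϕ, σ, N = 0.8, 1.0, 20_000
ω = np.linspace(-np.pi, np.pi, N, endpoint=False)
dω = 2 * np.pi / N
f = σ**2 / (1 - 2 * ϕ * np.cos(ω) + ϕ**2)    # AR(1) spectral density (8.11)

def γ_inv(k):
    "Approximate (8.14): γ(k) = (1/2π) ∫ f(ω) e^{iωk} dω"
    return (f * np.exp(1j * ω * k)).sum().real * dω / (2 * np.pi)
```

Since the integrand is smooth and periodic, the Riemann sum over one full period is extremely accurate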
Mathematical Theory
This section is loosely based on [Sar87], p. 249-253, and included for those who
• would like a bit more insight into spectral densities
• and have at least some background in Hilbert space theory
Others should feel free to skip to the next section; none of this material is necessary to progress to computation
Recall that every separable Hilbert space H has a countable orthonormal basis {hk }
The nice thing about such a basis is that every f ∈ H satisfies
f = ∑_k αk hk    where    αk := ⟨f, hk⟩    (8.15)
Summarizing these results, we say that any separable Hilbert space is isometrically isomorphic to ℓ2
In essence, this says that each separable Hilbert space we consider is just a different way of looking at the
fundamental space ℓ2
With this in mind, let's specialize to a setting where
• γ ∈ ℓ2 is the autocovariance function of a covariance stationary process, and f is the spectral density
• H = L2, where L2 is the set of square summable functions on the interval [−π, π], with inner product ⟨g, h⟩ = ∫_{−π}^{π} g(ω)h(ω) dω
• {hk} = the orthonormal basis for L2 given by the set of trigonometric functions

hk(ω) = e^{iωk} / √(2π),    k ∈ Z, ω ∈ [−π, π]
Using the definition of T from above and the fact that f is even, we now have

Tγ = ∑_{k∈Z} γ(k) e^{iωk} / √(2π) = (1/√(2π)) f(ω)    (8.16)
In other words, apart from a scalar multiple, the spectral density is just a transformation of γ ∈ ℓ2 under a
certain linear isometry, i.e., a different way to view γ
In particular, it is an expansion of the autocovariance function with respect to the trigonometric basis functions in L2
As discussed above, the Fourier coefficients of T γ are given by the sequence γ, and, in particular, γ(k) =
⟨T γ, hk ⟩
Transforming this inner product into its integral expression and using (8.16) gives (8.14), justifying our
earlier expression for the inverse transform
8.1.4 Implementation
Most code for working with covariance stationary models deals with ARMA models
Python code for studying ARMA models can be found in the tsa submodule of statsmodels
Since this code doesn't quite cover our needs, particularly vis-à-vis spectral analysis, we've put together the
module arma.py, which is part of the QuantEcon.py package
The module provides functions for mapping ARMA(p, q) models into their
1. impulse response function
2. simulated time series
3. autocovariance function
4. spectral density
Application
Lets use this code to replicate the plots on pages 68–69 of [LS18]
Here are some functions to generate the plots
def plot_impulse_response(arma, ax=None):
if ax is None:
ax = plt.gca()
yi = arma.impulse_response()
ax.stem(list(range(len(yi))), yi)
ax.set(xlim=(-0.5), ylim=(min(yi)-0.1, max(yi)+0.1),
title='Impulse response', xlabel='time', ylabel='response')
return ax
def quad_plot(arma):
"""
Plots the impulse response, spectral_density, autocovariance,
and one realization of the process.
"""
num_rows, num_cols = 2, 2
fig, axes = plt.subplots(num_rows, num_cols, figsize=(12, 8))
plot_functions = [plot_impulse_response,
plot_spectral_density,
plot_autocovariance,
plot_simulation]
for plot_func, ax in zip(plot_functions, axes.flatten()):
plot_func(arma, ax)
plt.tight_layout()
plt.show()
ϕ = 0.5
θ = 0, -0.8
arma = qe.ARMA(ϕ, θ)
quad_plot(arma)
Explanation
The call
arma = ARMA(ϕ, θ, σ)
creates an instance arma that represents the ARMA(p, q) model
Xt = ϕXt−1 + ϵt + θϵt−1
The two numerical packages most useful for working with ARMA models are scipy.signal and
numpy.fft
The package scipy.signal expects the parameters to be passed into its functions in a manner consistent
with the alternative ARMA notation (8.8)
For example, the impulse response sequence {ψt} discussed above can be obtained using scipy.signal.dimpulse, and the function call should be of the form
times, ψ = dimpulse((ma_poly, ar_poly, 1), n=impulse_length)
where ma_poly and ar_poly correspond to the polynomials in (8.7); that is,
• ma_poly is the vector (1, θ1 , θ2 , . . . , θq )
• ar_poly is the vector (1, −ϕ1 , −ϕ2 , . . . , −ϕp )
To this end, we also maintain the arrays ma_poly and ar_poly as instance data, with their values computed automatically from the values of phi and theta supplied by the user
If the user decides to change the value of either theta or phi ex-post by assignments such as arma.phi
= (0.5, 0.2) or arma.theta = (0, -0.1)
then ma_poly and ar_poly should update automatically to reflect these new parameters
This is achieved in our implementation by using descriptors
As discussed above, for ARMA processes the spectral density has a simple representation that is relatively
easy to calculate
Given this fact, the easiest way to obtain the autocovariance function is to recover it from the spectral density
via the inverse Fourier transform
Here we use NumPy's Fourier transform package np.fft, which wraps a standard Fortran-based package called
FFTPACK
A look at the np.fft documentation shows that the inverse transform np.fft.ifft takes a given sequence
A0 , A1 , . . . , An−1 and returns the sequence a0 , a1 , . . . , an−1 defined by
ak = (1/n) ∑_{t=0}^{n−1} At e^{ik2πt/n}
Thus, if we set At = f (ωt ), where f is the spectral density and ωt := 2πt/n, then
ak = (1/n) ∑_{t=0}^{n−1} f(ωt) e^{iωt k} = (1/2π) (2π/n) ∑_{t=0}^{n−1} f(ωt) e^{iωt k},    ωt := 2πt/n
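In other words, sampling the spectral density at the frequencies ωt and applying np.fft.ifft recovers the autocovariances γ(k) for small k, up to a negligible aliasing term. A minimal sketch for the AR(1) case, with illustrative parameter values of our own:

```python
import numpy as np

ϕ, σ, n = 0.5, 1.0, 1024
ωs = 2 * np.pi * np.arange(n) / n
f = σ**2 / (1 - 2 * ϕ * np.cos(ωs) + ϕ**2)   # AR(1) spectral density (8.11)

# The inverse DFT of the sampled density approximates γ(k) for small k
γ_hat = np.fft.ifft(f).real
```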
Contents
• Estimation of Spectra
– Overview
– Periodograms
– Smoothing
– Exercises
– Solutions
8.2.1 Overview
In a previous lecture we covered some fundamental properties of covariance stationary linear stochastic
processes
One objective for that lecture was to introduce spectral densities, a standard and very useful technique for
analyzing such processes
In this lecture we turn to the problem of estimating spectral densities and other related quantities from data
Estimates of the spectral density are computed using what is known as a periodogram, which in turn is
computed via the famous fast Fourier transform
Once the basic technique has been explained, we will apply it to the analysis of several key macroeconomic
time series
For supplementary reading, see [Sar87] or [CC08].
8.2.2 Periodograms
Recall that the spectral density f of a covariance stationary process with autocovariance function γ can be
written
f(ω) = γ(0) + 2 ∑_{k≥1} γ(k) cos(ωk),    ω ∈ R
Now consider the problem of estimating the spectral density of a given time series, when γ is unknown
In particular, let X0 , . . . , Xn−1 be n consecutive observations of a single time series that is assumed to be
covariance stationary
The most common estimator of the spectral density of this process is the periodogram of X0 , . . . , Xn−1 ,
which is defined as
I(ω) := (1/n) |∑_{t=0}^{n−1} Xt e^{itω}|²,    ω ∈ R    (8.17)
It is straightforward to show that the function I is even and 2π-periodic (i.e., I(ω) = I(−ω) and I(ω+2π) =
I(ω) for all ω ∈ R)
From these two results, you will be able to verify that the values of I on [0, π] determine the values of I on
all of R
The next section helps to explain the connection between the periodogram and the spectral density
Interpretation
To interpret the periodogram, it is convenient to focus on its values at the Fourier frequencies
ωj := 2πj/n,    j = 0, ..., n − 1
In what sense is I(ωj ) an estimate of f (ωj )?
The answer is straightforward, although it does involve some algebra
With a bit of effort one can show that, for any integer j > 0,
∑_{t=0}^{n−1} e^{itωj} = ∑_{t=0}^{n−1} exp{i 2πj t/n} = 0
Letting X̄ denote the sample mean n^{−1} ∑_{t=0}^{n−1} Xt, we then have

nI(ωj) = |∑_{t=0}^{n−1} (Xt − X̄) e^{itωj}|² = ∑_{t=0}^{n−1} (Xt − X̄) e^{itωj} ∑_{r=0}^{n−1} (Xr − X̄) e^{−irωj}

Expanding the product and collecting terms by the lag k = t − r, this becomes

nI(ωj) = ∑_{t=0}^{n−1} (Xt − X̄)² + 2 ∑_{k=1}^{n−1} ∑_{t=k}^{n−1} (Xt − X̄)(X_{t−k} − X̄) cos(ωj k)
Now let
γ̂(k) := (1/n) ∑_{t=k}^{n−1} (Xt − X̄)(X_{t−k} − X̄),    k = 0, 1, ..., n − 1
This is the sample autocovariance function, the natural plug-in estimator of the autocovariance function γ
(Plug-in estimator is an informal term for an estimator found by replacing expectations with sample means)
With this notation, we can now write
I(ωj) = γ̂(0) + 2 ∑_{k=1}^{n−1} γ̂(k) cos(ωj k)
Recalling our expression for f given above, we see that I(ωj ) is just a sample analog of f (ωj )
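This identity is easy to confirm numerically. The sketch below (our own, using simulated data with an arbitrary seed) computes I(ωj) both via the FFT and via the sample autocovariance representation; for j > 0 the two agree exactly

```python
import numpy as np

rng = np.random.default_rng(42)
n = 64
X = rng.standard_normal(n)

# Periodogram via the FFT: I(ω_j) = |Σ_t X_t e^{itω_j}|² / n
I_fft = np.abs(np.fft.fft(X))**2 / n

# The same object via the sample autocovariances
Xbar = X.mean()
γ_hat = np.array([np.sum((X[k:] - Xbar) * (X[:n - k] - Xbar)) / n
                  for k in range(n)])
ωs = 2 * np.pi * np.arange(n) / n
I_acov = np.array([γ_hat[0] + 2 * np.sum(γ_hat[1:] * np.cos(ω * np.arange(1, n)))
                   for ω in ωs])
```

(The j = 0 entries differ, since demeaning only washes out at nonzero Fourier frequencies)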
Calculation
Aj := ∑_{t=0}^{n−1} at exp{i 2π tj/n},    j = 0, ..., n − 1
With numpy.fft.fft imported as fft and a0 , . . . , an−1 stored in NumPy array a, the function call
fft(a) returns the values A0 , . . . , An−1 as a NumPy array
It follows that, when the data X0 , . . . , Xn−1 are stored in array X, the values I(ωj ) at the Fourier frequencies,
which are given by
(1/n) |∑_{t=0}^{n−1} Xt exp{i 2π tj/n}|²,    j = 0, ..., n − 1
where {ϵt } is white noise with unit variance, and compares the periodogram to the actual spectral density
n = 40                     # Data size
ϕ, θ = 0.5, (0, -0.8)      # AR and MA parameters
lp = ARMA(ϕ, θ)
X = lp.simulation(ts_length=n)
fig, ax = plt.subplots()
x, y = periodogram(X)
ax.plot(x, y, 'b-', lw=2, alpha=0.5, label='periodogram')
x_sd, y_sd = lp.spectral_density(two_pi=False, res=120)
ax.plot(x_sd, y_sd, 'r-', lw=2, alpha=0.8, label='spectral density')
ax.legend()
plt.show()
This estimate looks rather disappointing, but the data size is only 40, so perhaps it's not surprising that the
estimate is poor
However, if we try again with n = 1200 the outcome is not much better
The periodogram is far too irregular relative to the underlying spectral density
This brings us to our next topic
8.2.3 Smoothing
In other words, the value I(ωj ) is replaced with a weighted average of the adjacent values
∑
p
IS (ωj ) := w(ℓ)I(ωj+ℓ ) (8.19)
ℓ=−p
where the weights w(−p), . . . , w(p) are a sequence of 2p + 1 nonnegative values summing to one
In general, larger values of p indicate more smoothing; more on this below
The next figure shows the kind of sequence typically used
Note the smaller weights towards the edges and larger weights in the center, so that more distant values from
I(ωj ) have less weight than closer ones in the sum (8.19)
import numpy as np
def hanning_window(M):
w = [0.5 - 0.5 * np.cos(2 * np.pi * n/(M-1)) for n in range(M)]
return w
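Once the weights are normalized to sum to one, the smoothing step (8.19) is just a convolution. The following is a minimal sketch of the idea (not the estspec.py implementation itself, which also takes care of endpoints); the window function is repeated so the block is self-contained

```python
import numpy as np

def hanning_window(M):
    w = [0.5 - 0.5 * np.cos(2 * np.pi * n / (M - 1)) for n in range(M)]
    return w

def smooth_simple(I_vals, window_len=7):
    "Replace each I(ω_j) by a weighted average of adjacent values, as in (8.19)"
    w = np.array(hanning_window(window_len))
    w = w / w.sum()                   # weights must sum to one
    return np.convolve(I_vals, w, mode='valid')

# Smoothing a constant sequence leaves it unchanged, since the weights sum to one
smoothed = smooth_simple(np.ones(50))
```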
Our next step is to provide code that will not only estimate the periodogram but also provide smoothing as
required
Such functions have been written in estspec.py and are available once you've installed QuantEcon.py
The GitHub listing displays three functions, smooth(), periodogram(), ar_periodogram(). We
will discuss the first two here and the third one below
The periodogram() function returns a periodogram, optionally smoothed via the smooth() function
Regarding the smooth() function, since smoothing adds a nontrivial amount of computation, we have
applied a fairly terse array-centric method based around np.convolve
Readers are left either to explore or simply to use this code according to their interests
The next three figures each show smoothed and unsmoothed periodograms, as well as the population or true
spectral density
(The model is the same as before; see equation (8.18). There are 400 observations)
From top figure to bottom, the window length is varied from small to large
In looking at the figure, we can see that for this model and data size, the window length chosen in the middle
figure provides the best fit
Relative to this value, the first window length provides insufficient smoothing, while the third gives too
much smoothing
Of course in real estimation problems the true spectral density is not visible and the choice of appropriate
smoothing will have to be made based on judgement/priors or some other theory
In the code listing we showed three functions from the file estspec.py
The third function in the file (ar_periodogram()) adds a pre-processing step to periodogram smoothing
First we describe the basic idea, and after that we give the code
The essential idea is to
1. Transform the data in order to make estimation of the spectral density more efficient
2. Compute the periodogram associated with the transformed data
3. Reverse the effect of the transformation on the periodogram, so that it now estimates the spectral
density of the original process
Step 1 is called pre-filtering or pre-whitening, while step 3 is called recoloring
The first step is called pre-whitening because the transformation is usually designed to turn the data into
something closer to white noise
Why would this be desirable in terms of spectral density estimation?
The reason is that we are smoothing our estimated periodogram based on estimated values at nearby points;
recall (8.19)
The underlying assumption that makes this a good idea is that the true spectral density is relatively regular:
the value of I(ω) is close to that of I(ω′) when ω is close to ω′
This will not be true in all cases, but it is certainly true for white noise
For white noise, I is as regular as possible: it is a constant function
In this case, values of I(ω′) at points ω′ near to ω provide the maximum possible amount of information
about the value I(ω)
Another way to put this is that if I is relatively constant, then we can use a large amount of smoothing
without introducing too much bias
Let's examine this idea more carefully in a particular setting where the data are assumed to be generated by
an AR(1) process
(More general ARMA settings can be handled using similar techniques to those described below)
Suppose in particular that {X_t} is covariance stationary and AR(1), with

$$
X_{t+1} = \mu + \phi X_t + \epsilon_{t+1} \qquad (8.20)
$$

where μ and ϕ ∈ (−1, 1) are unknown parameters and {ϵ_t} is white noise
It follows that if we regress Xt+1 on Xt and an intercept, the residuals will approximate white noise
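As a sketch of this regression step (the helper name prewhiten_ar1 is ours), one can estimate ϕ̂ by least squares and return the residuals:

```python
import numpy as np

def prewhiten_ar1(X):
    """Regress X_{t+1} on X_t and an intercept by least squares;
    return the slope estimate phi_hat and the residual series."""
    X = np.asarray(X)
    Y, Z = X[1:], X[:-1]
    regressors = np.column_stack((np.ones(len(Z)), Z))   # columns [1, X_t]
    (c_hat, phi_hat), *_ = np.linalg.lstsq(regressors, Y, rcond=None)
    resids = Y - c_hat - phi_hat * Z
    return phi_hat, resids
```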
Let
• g be the spectral density of {ϵ_t} (a constant function, as discussed above)
• I_0 be the periodogram estimated from the residuals (an estimate of g)
• f be the spectral density of {X_t} (the object we are trying to estimate)
In view of an earlier result we obtained while discussing ARMA processes, f and g are related by
$$
f(\omega) = \left| \frac{1}{1 - \phi e^{i\omega}} \right|^2 g(\omega) \qquad (8.21)
$$
This suggests that the recoloring step, which constructs an estimate I of f from I0 , should set
$$
I(\omega) = \left| \frac{1}{1 - \hat\phi e^{i\omega}} \right|^2 I_0(\omega)
$$
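A sketch of this recoloring step (the function name is ours; ϕ̂ comes from the regression just described):

```python
import numpy as np

def recolor(omegas, I0, phi_hat):
    """Scale the whitened periodogram I0 at frequencies omegas by the
    squared gain |1 / (1 - phi_hat * e^{i omega})|^2, as in (8.21)."""
    return I0 / np.abs(1 - phi_hat * np.exp(1j * omegas))**2
```

With phi_hat = 0 there is no AR component and the periodogram is returned unchanged.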
In all cases, periodograms are fit with the Hamming window and a window length of 65
Overall, the fit of the AR smoothed periodogram is much better, in the sense of being closer to the true
spectral density
8.2.4 Exercises
Exercise 1
Exercise 2
Exercise 3
To be written. The exercise will be to use the code from this lecture to download FRED data and generate
periodograms for different kinds of macroeconomic data.
8.2.5 Solutions
Exercise 1
## Data
n = 400
ϕ = 0.5
θ = 0, -0.8
lp = ARMA(ϕ, θ)
X = lp.simulation(ts_length=n)
x, y = periodogram(X)
ax[i].plot(x, y, 'b-', lw=2, alpha=0.5, label='periodogram')
ax[i].legend()
ax[i].set_title(f'window length = {wl}')
plt.show()
Exercise 2
lp = ARMA(-0.9)
wl = 65
for i in range(3):
X = lp.simulation(ts_length=150)
ax[i].set_xlim(0, np.pi)
ax[i].legend(loc='upper left')
plt.show()
Contents
• Additive Functionals
– Overview
– A Particular Additive Functional
– Dynamics
– Code
8.3.1 Overview
This lecture focuses on a particular type of additive functional: a scalar process {y_t}_{t=0}^∞ whose increments
are driven by a Gaussian vector autoregression
It is simple to construct, simulate, and analyze
This additive functional consists of two components, the first of which is a first-order vector autoregres-
sion (VAR)
Here
• xt is an n × 1 vector,
• A is an n × n stable matrix (all eigenvalues lie within the open unit circle),
• zt+1 ∼ N (0, I) is an m × 1 i.i.d. shock,
• B is an n × m matrix, and
• x0 ∼ N (µ0 , Σ0 ) is a random initial condition for x
The second component is an equation that expresses increments of {y_t}_{t=0}^∞ as linear functions of
• a scalar constant ν,
• the vector xt , and
• the same Gaussian vector zt+1 that appears in the VAR (8.22)
In particular,
One way to represent the overall dynamics is to use a linear state space system
To do this, we set up state and observation vectors
$$
\hat x_t = \begin{bmatrix} 1 \\ x_t \\ y_t \end{bmatrix}
\quad \text{and} \quad
\hat y_t = \begin{bmatrix} x_t \\ y_t \end{bmatrix}
$$
8.3.3 Dynamics
Simulation
"""
@authors: Chase Coleman, Balint Szoke, Tom Sargent
"""
import numpy as np
import scipy as sp
import scipy.linalg as la
import quantecon as qe
import matplotlib.pyplot as plt
from scipy.stats import norm, lognorm
class AMF_LSS_VAR:
"""
This class transforms an additive (multiplicative)
functional into a QuantEcon linear state space system.
"""
# Set F
if not np.any(F):
self.F = np.zeros((self.nk, 1))
else:
self.F = F
# Set ν
if not np.any(ν):
self.ν = np.zeros((self.nm, 1))
elif type(ν) == float:
self.ν = np.asarray([[ν]])
elif len(ν.shape) == 1:
self.ν = np.expand_dims(ν, 1)
else:
self.ν = ν
if self.ν.shape[0] != self.D.shape[0]:
raise ValueError("The dimension of ν is inconsistent with D!")
def construct_ss(self):
"""
This creates the state space representation that can be passed
into the quantecon LSS class.
"""
# Pull out useful info
nx, nk, nm = self.nx, self.nk, self.nm
A, B, D, F, ν = self.A, self.B, self.D, self.F, self.ν
if self.add_decomp:
ν, H, g = self.add_decomp
else:
ν, H, g = self.additive_decomp()
# Auxiliary blocks with 0's and 1's to fill out the lss matrices
nx0c = np.zeros((nx, 1))
nx0r = np.zeros(nx)
nx1 = np.ones(nx)
nk0 = np.zeros(nk)
ny0c = np.zeros((nm, 1))
ny0r = np.zeros(nm)
ny1m = np.eye(nm)
ny0m = np.zeros((nm, nm))
nyx0m = np.zeros_like(D)
return lss
def additive_decomp(self):
"""
Return values for the martingale decomposition
- ν : unconditional mean difference in Y
- H : coefficient for the (linear) martingale component (κ_a)
- g : coefficient for the stationary component g(x)
- Y_0 : it should be the function of X_0 (for now set it to 0.0)
"""
I = np.identity(self.nx)
A_res = la.solve(I - self.A, I)
g = self.D @ A_res
H = self.F + self.D @ A_res @ self.B
return self.ν, H, g
def multiplicative_decomp(self):
"""
Return values for the multiplicative decomposition (Example 5.4.4.)
- ν_tilde : eigenvalue
- H : vector for the Jensen term
"""
ν, H, g = self.additive_decomp()
ν_tilde = ν + (.5)*np.expand_dims(np.diag(H @ H.T), 1)
return ν_tilde, H, g
llh = self.loglikelihood_path(x, y)
return llh[-1]
"""
# Pull out right sizes so we know how to increment
nx, nk, nm = self.nx, self.nk, self.nm
add_figs = []
for ii in range(nm):
return add_figs
"""
# Pull out right sizes so we know how to increment
nx, nk, nm = self.nx, self.nk, self.nm
# Matrices for the multiplicative decomposition
ν_tilde, H, g = self.multiplicative_decomp()
mult_figs = []
for ii in range(nm):
li, ui = npaths*(ii), npaths*(ii+1)
LI, UI = 2*(ii), 2*(ii+1)
mult_figs.append(self.plot_given_paths(T, ypath_mult[li:ui, :], mpath_mult[li:ui, :],
                                       spath_mult[li:ui, :], tpath_mult[li:ui, :],
                                       mbounds_mult[LI:UI, :], sbounds_mult[LI:UI, :],
                                       1, show_trend=show_trend))
mult_figs[ii].suptitle(f'Multiplicative decomposition of $y_{ii+1}$', fontsize=14)
return mult_figs
for ii in range(nm):
li, ui = ii*2, (ii+1)*2
Mdist = lognorm(np.asscalar(np.sqrt(yvar[nx+nm+ii, nx+nm+ii])),
                scale=np.asscalar(np.exp(ymeans[nx+nm+ii] -
                                         t * 0.5 * np.expand_dims(np.diag(H @ H.T), 1)[ii])))
mart_figs = []
for ii in range(nm):
li, ui = npaths*(ii), npaths*(ii+1)
LI, UI = 2*(ii), 2*(ii+1)
mart_figs.append(self.plot_martingale_paths(T, mpath_mult[li:ui, :],
                                            mbounds_mult[LI:UI, :],
                                            horline=1))
mart_figs[ii].suptitle(f'Martingale components for many paths of $y_{ii+1}$', fontsize=14)
return mart_figs
# Allocate space
trange = np.arange(T)
# Create figure
fig, ax = plt.subplots(2, 2, sharey=True, figsize=(15, 8))
return fig
# Create figure
fig, ax = plt.subplots(1, 1, figsize=(10, 6))
return fig
For now, we just plot yt and xt , postponing until later a description of exactly how we compute them
# A matrix should be n x n
A = np.array([[ϕ_1, ϕ_2, ϕ_3, ϕ_4],
[ 1, 0, 0, 0],
[ 0, 1, 0, 0],
[ 0, 0, 1, 0]])
# B matrix should be n x k
B = np.array([[σ, 0, 0, 0]]).T
D = np.array([1, 0, 0, 0]) @ A
F = np.array([1, 0, 0, 0]) @ B
T = 150
x, y = amf.lss.simulate(T)
Decomposition
Hansen and Sargent [HS17] describe how to construct a decomposition of an additive functional into four
parts:
• a constant inherited from initial values x0 and y0
• a linear trend
• a martingale
• an (asymptotically) stationary component
To attain this decomposition for the particular class of additive functionals defined by (8.22) and (8.23), we
first construct the matrices H := F + D(I − A)^{-1}B and g := D(I − A)^{-1}
At this stage you should pause and verify that y_{t+1} − y_t satisfies (8.23)
It is convenient for us to introduce the following notation:
• τt = νt , a linear, deterministic trend
• m_t = ∑_{j=1}^{t} H z_j, a martingale with time t + 1 increment H z_{t+1}
• s_t = g x_t, an (asymptotically) stationary component
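These components can be checked numerically. In the scalar case, summing the increments gives the arrangement y_t = y_0 + tν + m_t + g x_0 − g x_t; the parameter values below are purely illustrative.

```python
import numpy as np

np.random.seed(0)
A, B, D, F, ν = 0.8, 1.0, 0.5, 0.2, 0.01   # illustrative scalar parameters
g = D / (1 - A)           # g = D (I - A)^{-1} in the scalar case
H = F + g * B             # martingale increment coefficient

T = 200
x = np.zeros(T + 1)
y = np.zeros(T + 1)
m = np.zeros(T + 1)
for t in range(T):
    z = np.random.randn()
    x[t + 1] = A * x[t] + B * z           # the VAR (8.22)
    y[t + 1] = y[t] + ν + D * x[t] + F * z  # the increment equation (8.23)
    m[t + 1] = m[t] + H * z               # the martingale component

t_grid = np.arange(T + 1)
recon = t_grid * ν + m + g * x[0] - g * x   # trend + martingale + stationary parts
assert np.allclose(y, recon)                # the decomposition holds path by path
```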
We want to characterize and simulate components τt , mt , st of the decomposition.
A convenient way to do this is to construct an appropriate instance of a linear state space system by using
LinearStateSpace from QuantEcon.py
This will allow us to use the routines in LinearStateSpace to study dynamics
To start, observe that, under the dynamics in (8.22) and (8.23) and with the definitions just given,
$$
\begin{bmatrix} 1 \\ t+1 \\ x_{t+1} \\ y_{t+1} \\ m_{t+1} \end{bmatrix} =
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 \\
0 & 0 & A & 0 & 0 \\
\nu & 0 & D' & 1 & 0 \\
0 & 0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} 1 \\ t \\ x_t \\ y_t \\ m_t \end{bmatrix} +
\begin{bmatrix} 0 \\ 0 \\ B \\ F' \\ H' \end{bmatrix} z_{t+1}
$$

and

$$
\begin{bmatrix} x_t \\ y_t \\ \tau_t \\ m_t \\ s_t \end{bmatrix} =
\begin{bmatrix}
0 & 0 & I & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & \nu & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 \\
0 & 0 & -g & 0 & 0
\end{bmatrix}
\begin{bmatrix} 1 \\ t \\ x_t \\ y_t \\ m_t \end{bmatrix}
$$

With

$$
\tilde x_t := \begin{bmatrix} 1 \\ t \\ x_t \\ y_t \\ m_t \end{bmatrix}
\quad \text{and} \quad
\tilde y_t := \begin{bmatrix} x_t \\ y_t \\ \tau_t \\ m_t \\ s_t \end{bmatrix}
$$
we can write this as the linear state space system
x̃t+1 = Ãx̃t + B̃zt+1
ỹt = D̃x̃t
By picking out components of ỹt , we can track all variables of interest
8.3.4 Code
The class AMF_LSS_VAR mentioned above does all that we need in order to study our additive functional
In fact AMF_LSS_VAR does more, as we shall explain below
(A hint that it does more is the name of the class – here AMF stands for additive and multiplicative functional
– the code will do things for multiplicative functionals too)
Let's use this code (embedded above) to explore the example process described above
If you run again the code that simulated that example, followed by the method call
amf.plot_additive(T)
plt.show()
When we plot multiple realizations of a component in the 2nd, 3rd, and 4th panels, we also plot population
95% probability coverage sets computed using the LinearStateSpace class
We have chosen to simulate many paths, all starting from the same nonrandom initial conditions x0 , y0 (you
can tell this from the shape of the 95% probability coverage shaded areas)
Notice tell-tale signs of these probability coverage shaded areas
• the purple one for the martingale component m_t grows with √t
• the green one for the stationary component st converges to a constant band
or
$$
\frac{M_t}{M_0} = \exp(\tilde\nu t) \left(\frac{\widetilde M_t}{\widetilde M_0}\right) \left(\frac{\tilde e(X_0)}{\tilde e(x_t)}\right)
$$

where

$$
\tilde\nu = \nu + \frac{H \cdot H}{2}, \qquad
\widetilde M_t = \exp\left(\sum_{j=1}^{t}\left(H \cdot z_j - \frac{H \cdot H}{2}\right)\right), \qquad
\widetilde M_0 = 1
$$

and

$$
\tilde e(x) = \exp[g(x)] = \exp\left[D'(I - A)^{-1} x\right]
$$
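As a quick sanity check (scalar case, with an illustrative value of H), the martingale part M̃_t has unconditional mean 1 under the true model, since each factor exp(H z_j − H·H/2) has mean exactly one:

```python
import numpy as np

np.random.seed(1)
H = 0.2                    # illustrative scalar H
T, n_paths = 10, 200_000
z = np.random.randn(n_paths, T)
# M̃_T = exp(sum_j (H z_j - H·H/2)); each factor has mean exactly 1
M_T = np.exp((H * z - H**2 / 2).sum(axis=1))
print(M_T.mean())          # close to 1 in a large sample
```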
amf.plot_multiplicative(T)
plt.show()
As before, when we plotted multiple realizations of a component in the 2nd, 3rd, and 4th panels, we also
plotted population 95% confidence bands computed using the LinearStateSpace class
Comparing this figure and the last also helps show how geometric growth differs from arithmetic growth
np.random.seed(10021987)
amf.plot_martingales(12000)
plt.show()
Contents
• Multiplicative Functionals
– Overview
– A Log-Likelihood Process
– Benefits from Reduced Aggregate Fluctuations
8.4.1 Overview
• A version of Robert E. Lucas's [Luc03] and Thomas Tallarini's [Tal00] approaches to measuring the
benefits of moderating aggregate fluctuations
$$
\log L_t(\theta) = -\frac{1}{2} \sum_{j=1}^{t} (y_j - y_{j-1} - D x_{j-1})' (FF')^{-1} (y_j - y_{j-1} - D x_{j-1})
- \frac{t}{2} \log \det(FF') - \frac{kt}{2} \log(2\pi)
$$
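In the scalar case the formula reduces to summing Gaussian log densities of the increment residuals. A minimal sketch (the helper name is ours; the drift ν is set to zero, matching the formula above):

```python
import numpy as np

def loglik_path_scalar(x, y, D, F):
    """Cumulative log likelihood of the increments y_j - y_{j-1},
    which are N(D x_{j-1}, F^2) conditional on x (scalar case, ν = 0)."""
    x, y = np.asarray(x), np.asarray(y)
    resid = (y[1:] - y[:-1]) - D * x[:-1]
    ll = -0.5 * resid**2 / F**2 - 0.5 * np.log(F**2) - 0.5 * np.log(2 * np.pi)
    return np.cumsum(ll)
```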
Let's consider the case of a scalar process in which A, B, D, F are scalars and z_{t+1} is a scalar stochastic
process
We let θo denote the true values of θ, meaning the values that generate the data
For the purposes of this exercise, set θo = (A, B, D, F ) = (0.8, 1, 0.5, 0.2)
Set x0 = y0 = 0
Well do this by formulating the additive functional as a linear state space model and putting the LinearStateS-
pace class to work
"""
"""
import numpy as np
import scipy as sp
import scipy.linalg as la
import quantecon as qe
import matplotlib.pyplot as plt
from scipy.stats import lognorm
class AMF_LSS_VAR:
"""
This class is written to transform a scalar additive functional
into a linear state space system.
"""
def __init__(self, A, B, D, F=0.0, ν=0.0):
# Unpack required elements
self.A, self.B, self.D, self.F, self.ν = A, B, D, F, ν
def construct_ss(self):
"""
This creates the state space representation that can be
,→passed
into the quantecon LSS class.
"""
# Pull out useful info
A, B, D, F, ν = self.A, self.B, self.D, self.F, self.ν
nx, nk, nm = 1, 1, 1
if self.add_decomp:
ν, H, g = self.add_decomp
else:
ν, H, g = self.additive_decomp()
return lss
def additive_decomp(self):
"""
Return values for the martingale decomposition (Proposition 4.3.3.)
return self.ν, H, g
def multiplicative_decomp(self):
"""
Return values for the multiplicative decomposition (Example 5.4.4.)
- ν_tilde : eigenvalue
- H : vector for the Jensen term
"""
ν, H, g = self.additive_decomp()
ν_tilde = ν + (.5) * H**2
return ν_tilde, H, g
return llh[-1]
return x, y
# Allocate space
storeX = np.empty((I, T))
storeY = np.empty((I, T))
for i in range(I):
# Do specific simulation
x, y = simulate_xy(amf, T)
Now that we have these functions in our toolkit, let's apply them to run some simulations
In particular, let's use our program to generate I = 5000 sample paths of length T = 150, labeled {x_t^i, y_t^i}_{t=0}^T
for i = 1, ..., I
Then we compute averages (1/I) ∑_i x_t^i and (1/I) ∑_i y_t^i across the sample paths and compare them with the
population means of x_t and y_t
Here goes
A, B, D, F = [0.8, 1.0, 0.5, 0.2]
amf = AMF_LSS_VAR(A, B, D, F=F)
T = 150
I = 5000
ax[1].set_xlim((0, T))
ax[1].legend(loc=0)
plt.show()
Simulating log-likelihoods
# Allocate space
LLit = np.empty((I, T-1))
for i in range(I):
LLit[i, :] = amf.loglikelihood_path(Xit[i, :], Yit[i, :])
return LLit
fig, ax = plt.subplots()
ax.hist(LLT)
ax.vlines(LLmean_t, ymin=0, ymax=I//3, color="k", linestyle="--", alpha=0.6)
plt.title(r"Distribution of $\frac{1}{T} \log L_{T} \mid \theta_0$")
plt.show()
Notice that the log likelihood is almost always nonnegative, implying that Lt is typically bigger than 1
Recall that the likelihood function is a pdf (probability density function) and not a probability measure, so
it can take values larger than 1
In the current case, the conditional variance of ∆y_{t+1}, which equals FF′ = 0.04, is so small that the
maximum value of the pdf is approximately 2 (see the figure below)
This implies that approximately 75% of the time (a bit more than one sigma deviation), we should expect
the increment of the log likelihood to be nonnegative
Lets see this in a simulation
normdist = sp.stats.norm(0, F)
mult = 1.175
print(f'The pdf at +/- {mult} sigma takes the value: {normdist.pdf(mult * F)}')
print(f'Probability of dL being larger than 1 is approx: {normdist.cdf(mult * F) - normdist.cdf(-mult * F)}')
Now consider an alternative parameter vector θ_1 = [A, B, D, F] = [0.9, 1.0, 0.55, 0.25]
We want to compute {log L_t | θ_1}_{t=1}^T
The x_t, y_t inputs to this program should be exactly the same sample paths {x_t^i, y_t^i}_{t=0}^T that we computed
above
This is because we want to generate data under the θo probability model but evaluate the likelihood under
the θ1 model
So our task is to use our program to simulate I = 5000 paths of {log L_t^i | θ_1}_{t=1}^T
• For each path, compute (1/T) log L_T^i
• Then compute (1/I) ∑_{i=1}^{I} (1/T) log L_T^i
We want to compare these objects with each other and with the analogous objects that we computed above
Then we want to interpret outcomes
A function that we constructed can handle these tasks
The only innovation is that we must create an alternative model to feed in
We will creatively call the new model amf2
We make three graphs
fig, ax = plt.subplots()
ax.hist(LLT2)
ax.vlines(LLmean_t2, ymin=0, ymax=1400, color="k", linestyle="--", alpha=0.6)
Lets see a histogram of the log-likelihoods under the true and the alternative model (same sample paths)
plt.show()
Now we'll plot the histogram of the difference in log likelihood ratio
ax.hist(LLT_diff, bins=50)
plt.title(r"$\frac{1}{T}\left[\log (L_T^i \mid \theta_0) - \log (L_T^i \mid \theta_1)\right]$")
plt.show()
Interpretation
These histograms of log likelihood ratios illustrate important features of likelihood ratio tests as tools for
discriminating between statistical models
• The log likelihood is higher on average under the true model – obviously a very useful property
• Nevertheless, for a positive fraction of realizations, the log likelihood is higher for the incorrect than
for the true model
• In these instances, a likelihood ratio test mistakenly selects the wrong model
• These mechanics underlie the statistical theory of mistake probabilities associated with model selec-
tion tests based on likelihood ratios
(In a subsequent lecture, we'll use some of the code prepared in this lecture to illustrate mistake probabilities)
where
Here {z_{t+1}}_{t=0}^∞ is an i.i.d. sequence of N(0, I) random vectors
where
$$
U = \exp(-\delta) \left[I - \exp(-\delta) A'\right]^{-1} D
$$

and

$$
u = \frac{\exp(-\delta)}{1 - \exp(-\delta)}\, \nu
+ \frac{(1-\gamma)}{2}\, \frac{\exp(-\delta)}{1 - \exp(-\delta)}
\left| D'\left[I - \exp(-\delta) A\right]^{-1} B + F \right|^2 ,
$$
$$
\frac{c_t}{c_0} = \exp(\tilde\nu t) \left(\frac{\widetilde M_t}{\widetilde M_0}\right) \left(\frac{\tilde e(x_0)}{\tilde e(x_t)}\right) \qquad (8.26)
$$

where M̃_t/M̃_0 is a likelihood ratio process and M̃_0 = 1
At this point, as an exercise, we ask the reader please to verify the following formulas for ν̃ and ẽ(x_t) as
functions of A, B, D, F:

$$
\tilde\nu = \nu + \frac{H \cdot H}{2}
$$

and

$$
\tilde e(x) = \exp[g(x)] = \exp\left[D'(I - A)^{-1} x\right]
$$
In particular, we want to simulate 5000 sample paths of length T = 1000 for the case in which x is a scalar
and [A, B, D, F ] = [0.8, 0.001, 1.0, 0.01] and ν = 0.005
After accomplishing this, we want to display a histogram of M̃_T^i for T = 1000
Here is code that accomplishes these tasks
def simulate_martingale_components(amf, T=1000, I=5000):
# Get the multiplicative decomposition
ν, H, g = amf.multiplicative_decomp()
# Allocate space
add_mart_comp = np.empty((I, T))
# Build model
amf_2 = AMF_LSS_VAR(0.8, 0.001, 1.0, 0.01,.005)
Comments
• The preceding min, mean, and max of the cross-section of the date T realizations of the multiplicative
martingale component of ct indicate that the sample mean is close to its population mean of 1
– This outcome prevails for all values of the horizon T
• The cross-section distribution of the multiplicative martingale component of c at date T approximates
a log normal distribution well
• The histogram of the additive martingale component of log ct at date T approximates a normal distri-
bution well
Here's a histogram of the additive martingale component
plt.show()
The likelihood ratio process {M̃_t}_{t=0}^∞ can be represented as

$$
\widetilde M_t = \exp\left(\sum_{j=1}^{t}\left(H \cdot z_j - \frac{H \cdot H}{2}\right)\right), \qquad \widetilde M_0 = 1,
$$
where H = [F + B ′ (I − A′ )−1 D]
It follows that log M̃_t ∼ N(−t(H·H)/2, t(H·H)) and that consequently M̃_t is log normal
# The distribution
mdist = lognorm(np.sqrt(t * H2), scale=np.exp(-t * H2 / 2))
x = np.linspace(xmin, xmax, npts)
pdf = mdist.pdf(x)
return x, pdf
# The distribution
lmdist = norm(-t * H2 / 2, np.sqrt(t * H2))
x = np.linspace(xmin, xmax, npts)
pdf = lmdist.pdf(x)
return x, pdf
plt.tight_layout()
plt.show()
These probability density functions illustrate a peculiar property of likelihood ratio processes:
• With respect to the true model probabilities, they have mathematical expectations equal to 1 for all
t≥0
• They almost surely converge to zero
Suppose in the tradition of a strand of macroeconomics (for example Tallarini [Tal00], [Luc03]) we want to
estimate the welfare benefits from removing random fluctuations around trend growth
We shall compute how much initial consumption c0 a representative consumer who ranks consumption
streams according to (8.25) would be willing to sacrifice to enjoy the consumption stream
$$
\frac{c_t}{c_0} = \exp(\tilde\nu t)
$$
rather than the stream described by equation (8.26)
We want to compute the implied percentage reduction in c0 that the representative consumer would accept
To accomplish this, we write a function that computes the coefficients U and u for the original values of
A, B, D, F, ν, but also for the case that A, B, D, F = [0, 0, 0, 0] and ν = ν̃
Heres our code
resolv = 1 / (1 - np.exp(-δ) * A)
vect = F + D * resolv * B
U_det = 0
u_det = (np.exp(-δ) / (1 - np.exp(-δ))) * ν_tilde
# Get coeffs
U_r, u_r, U_d, u_d = Uu(amf_2, δ, γ)
We look for the ratio (c_0^r − c_0^d)/c_0^r that makes log V_0^r − log V_0^d = 0
1.0809878812017448
We find that the consumer would be willing to take a percentage reduction of initial consumption equal to
around 1.081
Contents
8.5.1 Overview
In an earlier lecture Linear Quadratic Dynamic Programming Problems we have studied how to solve a
special class of dynamic optimization and prediction problems by applying the method of dynamic pro-
gramming. In this class of problems
Let L be the lag operator, so that for any sequence {x_t} we have L x_t = x_{t−1}
More generally, let L^k x_t = x_{t−k} with L^0 x_t = x_t and

$$
d(L) = d_0 + d_1 L + \ldots + d_m L^m
$$
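To fix notation, applying d(L) to a concrete sequence can be sketched as follows (the function name is ours; dates before t = m would require the presample values y_{−1}, …, y_{−m}):

```python
import numpy as np

def apply_lag_poly(d, y):
    """Return d(L) y_t = d_0 y_t + d_1 y_{t-1} + ... + d_m y_{t-m}
    for t = m, ..., len(y) - 1."""
    d = np.asarray(d, dtype=float)
    y = np.asarray(y, dtype=float)
    m = len(d) - 1
    # y[t - m:t + 1][::-1] lines up [y_t, y_{t-1}, ..., y_{t-m}] with d
    return np.array([d @ y[t - m:t + 1][::-1] for t in range(m, len(y))])
```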
$$
\max_{\{y_t\}} \ \lim_{N \to \infty} \sum_{t=0}^{N} \beta^t \left\{ a_t y_t - \frac{1}{2} h y_t^2 - \frac{1}{2} \left[d(L) y_t\right]^2 \right\} , \qquad (8.27)
$$
where
Example
The formulation of the LQ problem given above is broad enough to encompass many useful models
As a simple illustration, recall that in LQ Dynamic Programming Problems we consider a monopolist facing
stochastic demand shocks and adjustment costs
Let's consider a deterministic version of this problem, where the monopolist maximizes the discounted sum

$$
\sum_{t=0}^{\infty} \beta^t \pi_t
$$
and
$$
J = \sum_{t=0}^{N} \beta^t [d(L) y_t][d(L) y_t]
= \sum_{t=0}^{N} \beta^t \left(d_0 y_t + d_1 y_{t-1} + \cdots + d_m y_{t-m}\right) \left(d_0 y_t + d_1 y_{t-1} + \cdots + d_m y_{t-m}\right)
$$
$$
\frac{\partial J}{\partial y_t} = 2\beta^t d_0 d(L) y_t + 2\beta^{t+1} d_1 d(L) y_{t+1} + \cdots + 2\beta^{t+m} d_m d(L) y_{t+m}
= 2\beta^t \left(d_0 + d_1 \beta L^{-1} + d_2 \beta^2 L^{-2} + \cdots + d_m \beta^m L^{-m}\right) d(L) y_t
$$

so that

$$
\frac{\partial J}{\partial y_t} = 2\beta^t\, d(\beta L^{-1})\, d(L)\, y_t \qquad (8.28)
$$
Differentiating J with respect to yt for t = N − m + 1, . . . , N gives
$$
\frac{\partial J}{\partial y_N} = 2\beta^N d_0 d(L) y_N
$$

$$
\frac{\partial J}{\partial y_{N-1}} = 2\beta^{N-1} \left[d_0 + \beta d_1 L^{-1}\right] d(L) y_{N-1} \qquad (8.29)
$$

$$
\vdots
$$

$$
\frac{\partial J}{\partial y_{N-m+1}} = 2\beta^{N-m+1} \left[d_0 + \beta L^{-1} d_1 + \cdots + \beta^{m-1} L^{-m+1} d_{m-1}\right] d(L) y_{N-m+1}
$$
With these preliminaries under our belts, we are ready to differentiate (8.27)
Differentiating (8.27) with respect to yt for t = 0, . . . , N − m gives the Euler equations
$$
\left[h + d(\beta L^{-1})\, d(L)\right] y_t = a_t, \qquad t = 0, 1, \ldots, N - m \qquad (8.30)
$$
The system of equations (8.30) form a 2 × m order linear difference equation that must hold for the values
of t indicated.
Differentiating (8.27) with respect to yt for t = N − m + 1, . . . , N gives the terminal conditions
Matrix Methods
Lets look at how linear algebra can be used to tackle and shed light on the finite horizon LQ control problem
$$
\left[h + d(\beta L^{-1})\, d(L)\right] y_t = a_t, \qquad t = 0, 1, \ldots, N - 1
$$

$$
\beta^N \left[a_N - h\, y_N - d_0\, d(L) y_N\right] = 0 \qquad (8.32)
$$
where d(L) = d0 + d1 L
These equations are to be solved for y0 , y1 , . . . , yN as functions of a0 , a1 , . . . , aN and y−1
Let
$$
\begin{bmatrix}
(\phi_0 - d_1^2) & \phi_1 & 0 & 0 & \ldots & \ldots & 0 \\
\beta \phi_1 & \phi_0 & \phi_1 & 0 & \ldots & \ldots & 0 \\
0 & \beta \phi_1 & \phi_0 & \phi_1 & \ldots & \ldots & 0 \\
\vdots & \vdots & \vdots & \ddots & \ddots & \ddots & \vdots \\
0 & \ldots & \ldots & \ldots & \beta \phi_1 & \phi_0 & \phi_1 \\
0 & \ldots & \ldots & \ldots & 0 & \beta \phi_1 & \phi_0
\end{bmatrix}
\begin{bmatrix} y_N \\ y_{N-1} \\ y_{N-2} \\ \vdots \\ y_1 \\ y_0 \end{bmatrix}
=
\begin{bmatrix} a_N \\ a_{N-1} \\ a_{N-2} \\ \vdots \\ a_1 \\ a_0 - \phi_1 y_{-1} \end{bmatrix}
\qquad (8.33)
$$
or
W ȳ = ā (8.34)
ȳ = W −1 ā (8.35)
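For m = 1 (so that d(L) = d_0 + d_1 L), W can be built and (8.35) solved directly. The values below are illustrative; ϕ_0 = h + d_0² + βd_1² and ϕ_1 = d_0 d_1 come from our expansion of h + d(βz^{-1})d(z), matching the banded pattern in (8.33).

```python
import numpy as np

# Illustrative parameters for d(L) = d0 + d1 L
d0, d1, h, β = 1.0, 0.5, 1.0, 0.95
ϕ0 = h + d0**2 + β * d1**2    # coefficient on y_t (expansion of h + d(βz^{-1})d(z))
ϕ1 = d0 * d1                  # coefficient on y_{t-1}

N = 5
W = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    W[i, i] = ϕ0
    if i + 1 <= N:
        W[i, i + 1] = ϕ1       # superdiagonal
        W[i + 1, i] = β * ϕ1   # subdiagonal
W[0, 0] = ϕ0 - d1**2           # terminal-condition adjustment in (8.33)

y_m1 = 0.0                     # given initial value y_{-1}
a = np.ones(N + 1)             # forcing sequence stacked as [a_N, ..., a_0]
a[-1] -= ϕ1 * y_m1
ybar = np.linalg.solve(W, a)   # ybar = [y_N, y_{N-1}, ..., y_0], as in (8.35)
```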
An Alternative Representation
An alternative way to express the solution to (8.33) or (8.34) is in so called feedback-feedforward form
The idea here is to find a solution expressing yt as a function of past ys and current and future as
To achieve this solution, one can use an LU decomposition of W
There always exists a decomposition of W of the form W = LU where
• L is an (N + 1) × (N + 1) lower triangular matrix
• U is an (N + 1) × (N + 1) upper triangular matrix.
The factorization can be normalized so that the diagonal elements of U are unity
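This normalization can be sketched with scipy.linalg.lu; since scipy puts the unit diagonal on L, we move the scaling so that U has unit diagonal instead. The small matrix W below is a stand-in, not the W from (8.33).

```python
import numpy as np
from scipy.linalg import lu

W = np.array([[2.0, 1.0, 0.0],
              [0.5, 2.0, 1.0],
              [0.0, 0.5, 2.0]])   # a small stand-in matrix

P, L, U = lu(W)                   # scipy convention: W = P @ L @ U, L unit-diagonal
Dg = np.diag(np.diag(U))
U_unit = np.linalg.solve(Dg, U)   # rescale rows so U has unit diagonal
L_scaled = L @ Dg                 # absorb the scaling into the lower factor
assert np.allclose(P @ L_scaled @ U_unit, W)
```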
Using the LU representation in (8.35), we obtain
$$
U \bar y = L^{-1} \bar a \qquad (8.36)
$$

where L^{-1}_{ij} is the (i, j) element of L^{-1} and U_{ij} is the (i, j) element of U
Note how the left side for a given t involves yt and one lagged value yt−1 while the right side involves all
future values of the forcing process at , at+1 , . . . , aN
We briefly indicate how this approach extends to the problem with m > 1
Assume that β = 1 and let Dm+1 be the (m+1)×(m+1) symmetric matrix whose elements are determined
from the following formula:
where M is (m + 1) × m and
$$
M_{ij} =
\begin{cases}
D_{i-j,\, m+1} & \text{for } i > j \\
0 & \text{for } i \le j
\end{cases}
$$
$$
U \bar y = L^{-1} \bar a \qquad (8.37)
$$

or

$$
\sum_{j=0}^{t} U_{-t+N+1,\, -t+N+j+1}\; y_{t-j} = \sum_{j=0}^{N-t} L^{-1}_{-t+N+1,\, -t+N+1-j}\; \bar a_{t+j}, \qquad t = 0, 1, \ldots, N
$$

where L^{-1}_{t,s} is the element in the (t, s) position of L^{-1}, and similarly for U
The left side of equation (8.37) is the feedback part of the optimal control law for yt , while the right-hand
side is the feedforward part
We note that there is a different control law for each t
Thus, in the finite horizon case, the optimal control law is time dependent
It is natural to suspect that as N → ∞, (8.37) becomes equivalent to the solution of our infinite horizon
problem, which below we shall show can be expressed as
This suspicion is true under general conditions that we shall study later
For now, we note that by creating the matrix W for large N and factoring it into the LU form, good
approximations to c(L) and c(βL−1 )−1 can be obtained
For the infinite horizon problem, we propose to discover first-order necessary conditions by taking the limits
of (8.30) and (8.31) as N → ∞
This approach is valid, and the limits of (8.30) and (8.31) as N approaches infinity are first-order necessary
conditions for a maximum
However, for the infinite horizon problem with β < 1, the limits of (8.30) and (8.31) are, in general, not
sufficient for a maximum
That is, the limits of (8.31) do not provide enough information uniquely to determine the solution of the
Euler equation (8.30) that maximizes (8.27)
As we shall see below, a side condition on the path of yt that together with (8.30) is sufficient for an optimum
is
$$
\sum_{t=0}^{\infty} \beta^t h y_t^2 < \infty \qquad (8.38)
$$
All paths that satisfy the Euler equations, except the one that we shall select below, violate this condition
and, therefore, evidently lead to (much) lower values of (8.27) than does the optimal path selected by the
solution procedure below
Consider the characteristic equation associated with the Euler equation
where z0 is a constant
In (8.40), we substitute (z − z_j) = −z_j(1 − (1/z_j) z) and (z − βz_j^{-1}) = z(1 − (β/z_j) z^{-1}) for j = 1, …, m to get

$$
h + d(\beta z^{-1}) d(z) = (-1)^m (z_0 z_1 \cdots z_m) \left(1 - \frac{1}{z_1} z\right) \cdots \left(1 - \frac{1}{z_m} z\right) \left(1 - \frac{\beta}{z_1} z^{-1}\right) \cdots \left(1 - \frac{\beta}{z_m} z^{-1}\right)
$$
Now define c(z) = ∑_{j=0}^{m} c_j z^j as

$$
c(z) = \left[(-1)^m z_0 z_1 \cdots z_m\right]^{1/2} \left(1 - \frac{z}{z_1}\right) \left(1 - \frac{z}{z_2}\right) \cdots \left(1 - \frac{z}{z_m}\right) \qquad (8.41)
$$
Notice that (8.40) can be written
c(z) = c0 (1 − λ1 z) . . . (1 − λm z) (8.43)
where

$$
c_0 = \left[(-1)^m z_0 z_1 \cdots z_m\right]^{1/2}; \qquad \lambda_j = \frac{1}{z_j},\ j = 1, \ldots, m
$$
Since |z_j| > √β for j = 1, …, m it follows that |λ_j| < 1/√β for j = 1, …, m
Using (8.43), we can express the factorization (8.42) as
In sum, we have constructed a factorization (8.42) of the characteristic polynomial for the Euler equation in
which the zeros of c(z) exceed β 1/2 in modulus, and the zeros of c (βz −1 ) are less than β 1/2 in modulus
Using (8.42), we now write the Euler equation as
c(βL−1 ) c (L) yt = at
The unique solution of the Euler equation that satisfies condition (8.38) is
$$
(1 - \lambda_1 L) \cdots (1 - \lambda_m L)\, y_t = \frac{c_0^{-2}\, a_t}{(1 - \beta \lambda_1 L^{-1}) \cdots (1 - \beta \lambda_m L^{-1})} \qquad (8.45)
$$
Using partial fractions, we can write the characteristic polynomial on the right side of (8.45) as
$$
\frac{1}{(1 - \beta \lambda_1 L^{-1}) \cdots (1 - \beta \lambda_m L^{-1})} = \sum_{j=1}^{m} \frac{A_j}{1 - \lambda_j \beta L^{-1}}
\qquad \text{where} \qquad
A_j := \frac{c_0^{-2}}{\prod_{i \neq j} \left(1 - \frac{\lambda_i}{\lambda_j}\right)}
$$
or
$$
(1 - \lambda_1 L) \cdots (1 - \lambda_m L)\, y_t = \sum_{j=1}^{m} A_j \sum_{k=0}^{\infty} (\lambda_j \beta)^k a_{t+k} \qquad (8.46)
$$
Equation (8.46) expresses the optimum sequence for yt in terms of m lagged ys, and m weighted infinite
geometric sums of future at s
Furthermore, (8.46) is the unique solution of the Euler equation that satisfies the initial conditions and
condition (8.38)
In effect, condition (8.38) compels us to solve the unstable roots of h + d(βz −1 )d(z) forward (see [Sar87])
The step of factoring the polynomial h + d(βz^{-1}) d(z) into c(βz^{-1}) c(z), where the zeros of c(z) all have
modulus exceeding √β, is central to solving the problem
We note two features of the solution (8.46)
• Since |λ_j| < 1/√β for all j, it follows that |λ_j β| < √β
• The assumption that {a_t} is of exponential order less than 1/√β is sufficient to guarantee that the
geometric sums of future a_t's on the right side of (8.46) converge
We immediately see that those sums will converge under the weaker condition that {at } is of exponential
order less than ϕ−1 where ϕ = max {βλi , i = 1, . . . , m}
Note that with a_t identically zero, (8.46) implies that in general |y_t| eventually grows exponentially at a rate
given by max_i |λ_i|
The condition max_i |λ_i| < 1/√β guarantees that condition (8.38) is satisfied
In fact, max_i |λ_i| < 1/√β is a necessary condition for (8.38) to hold
Were (8.38) not satisfied, the objective function would diverge to −∞, implying that the yt path could not
be optimal
For example, with a_t = 0 for all t, it is easy to describe a naive (nonoptimal) policy for {y_t, t ≥ 0} that
gives a finite value of (8.27)
We can simply let yt = 0 for t ≥ 0
This policy involves at most m nonzero values of hyt2 and [d(L)yt ]2 , and so yields a finite value of (8.27)
Therefore it is easy to dominate a path that violates (8.38)
It is worthwhile focusing on a special case of the LQ problems above: the undiscounted problem that
emerges when β = 1
where
$$
c(z) = c_0 (1 - \lambda_1 z) \ldots (1 - \lambda_m z)
$$

$$
c_0 = \left[(-1)^m z_0 z_1 \ldots z_m\right]^{1/2}
$$

$$
|\lambda_j| < 1 \ \text{for} \ j = 1, \ldots, m
$$

$$
\lambda_j = \frac{1}{z_j} \ \text{for} \ j = 1, \ldots, m
$$

$$
z_0 = \text{constant}
$$
The solution of the problem becomes
$$
(1 - \lambda_1 L) \cdots (1 - \lambda_m L)\, y_t = \sum_{j=1}^{m} A_j \sum_{k=0}^{\infty} \lambda_j^k a_{t+k}
$$
Discounted problems can always be converted into undiscounted problems via a simple transformation
Consider problem (8.27) with 0 < β < 1
Define the transformed variables

$$
\tilde a_t = \beta^{t/2} a_t, \qquad \tilde y_t = \beta^{t/2} y_t, \qquad \tilde d_j = \beta^{j/2} d_j \ \text{for} \ j = 0, \ldots, m \qquad (8.47)
$$
$$
\lim_{N \to \infty} \sum_{t=0}^{N} \left\{ \tilde a_t \tilde y_t - \frac{1}{2} h \tilde y_t^2 - \frac{1}{2} \left[\tilde d(L)\, \tilde y_t\right]^2 \right\} \qquad (8.48)
$$

which is to be maximized over sequences {ỹ_t, t = 0, …} subject to ỹ_{−1}, …, ỹ_{−m} given and {ã_t, t =
1, …} a known bounded sequence
The Euler equation for this problem is [h + d˜(L−1 ) d˜(L)] ỹt = ãt
The solution is
$$
(1 - \tilde\lambda_1 L) \cdots (1 - \tilde\lambda_m L)\, \tilde y_t = \sum_{j=1}^{m} \tilde A_j \sum_{k=0}^{\infty} \tilde\lambda_j^k \tilde a_{t+k}
$$
or
$$
\tilde y_t = \tilde f_1 \tilde y_{t-1} + \cdots + \tilde f_m \tilde y_{t-m} + \sum_{j=1}^{m} \tilde A_j \sum_{k=0}^{\infty} \tilde\lambda_j^k \tilde a_{t+k}, \qquad (8.49)
$$
We leave it to the reader to show that (8.49) implies the equivalent form of the solution
$$
y_t = f_1 y_{t-1} + \cdots + f_m y_{t-m} + \sum_{j=1}^{m} A_j \sum_{k=0}^{\infty} (\lambda_j \beta)^k a_{t+k}
$$
where f_j := f̃_j β^{−j/2} and A_j = Ã_j    (8.50)
The transformations (8.47) and the inverse formulas (8.50) allow us to solve a discounted problem by first
solving a related undiscounted problem
8.5.6 Implementation
Code that computes solutions to the LQ problem using the methods described above can be found in file
control_and_filter.py
Here's how it looks
"""
"""
import numpy as np
import scipy.stats as spst
import scipy.linalg as la
class LQFilter:
Parameters
----------
d : list or numpy.array (1-D or a 2-D column vector)
The order of the coefficients: [d_0, d_1, ..., d_m]
h : scalar
self.h = h
self.d = np.asarray(d)
self.m = self.d.shape[0] - 1
self.y_m = np.asarray(y_m)
if self.m == self.y_m.shape[0]:
self.y_m = self.y_m.reshape(self.m, 1)
else:
raise ValueError(f"y_m must be of length m = {self.m:d}")
#---------------------------------------------
# Define the coefficients of up front
#---------------------------------------------
= np.zeros(2 * self.m + 1)
for i in range(- self.m, self.m + 1):
[self.m - i] = np.sum(np.diag(self.d.reshape(self.m + 1, 1) @ \
self.d.reshape(1, self.m +
,→1), k=-i))
#-----------------------------------------------------
# If r is given calculate the vector _r
#-----------------------------------------------------
if r is None:
pass
else:
self.r = np.asarray(r)
self.k = self.r.shape[0] - 1
_r = np.zeros(2 * self.k + 1)
for i in range(- self.k, self.k + 1):
_r[self.k - i] = np.sum(np.diag(self.r.reshape(self.k + 1, 1)
,→ @ \
self.r.reshape(1, self.k +
,→ 1), k=-i))
if h_eps is None:
self._r = _r
else:
_r[self.k] = _r[self.k] + h_eps
self._r = _r
#-----------------------------------------------------
# If β is given, define the transformed variables
#-----------------------------------------------------
if β is None:
self.β = 1
else:
self.β = β
self.d = self.β **(np.arange(self.m + 1)/2) * self.d
self.y_m = self.y_m * (self.β **(- np.arange(1, self.m + 1)/2)).
,→reshape(self.m, 1)
m = self.m
d = self.d
W = np.zeros((N + 1, N + 1))
W_m = np.zeros((N + 1, m))
#---------------------------------------
# Terminal conditions
#---------------------------------------
for j in range(m):
for i in range(j + 1, m + 1):
M[i, j] = D_m1[i - j - 1, m]
#----------------------------------------------
# Euler equations for t = 0, 1, ..., N-(m+1)
#----------------------------------------------
= self.
for i in range(m):
W_m[N - i, :(m - i)] = [(m + 1 + i):]
return W, W_m
def roots_of_characteristic(self):
"""
This function calculates z_0 and the 2m roots of the characteristic
,→equation
associated with the Euler equation (1.7)
Note:
------
numpy.poly1d(roots, True) defines a polynomial using its roots that
,→ can be
evaluated at any point. If x_1, x_2, ... , x_m are the roots then
p(x) = (x - x_1)(x - x_2)...(x - x_m)
"""
m = self.m
= self.
λ = 1 / z_1_to_m
def coeffs_of_c(self):
'''
This function computes the coefficients {c_j, j = 0, 1, ..., m} for
c(z) = sum_{j = 0}^{m} c_j z^j
return c_coeffs[::-1]
def solution(self):
"""
This function calculates {λ_j, j=1,...,m} and {A_j, j=1,...,m}
of the expression (1.15)
"""
λ = self.roots_of_characteristic()[2]
c_0 = self.coeffs_of_c()[-1]
A = np.zeros(self.m, dtype=complex)
for j in range(self.m):
denom = 1 - λ/λ[j]
A[j] = c_0**(-2) / np.prod(denom[np.arange(self.m) != j])
return λ, A
for i in range(N):
for j in range(N):
if abs(i-j) <= self.k:
V[i, j] = _r[self.k + abs(i-j)]
return V
return d.rvs()
N = np.asarray(a_hist).shape[0] - 1
a_hist = np.asarray(a_hist).reshape(N + 1, 1)
V = self.construct_V(N + 1)
return Ea_hist
Note:
------
scipy.linalg.lu normalizes L, U so that L has unit diagonal elements
To make things cosistent with the lecture, we need an auxiliary
,→diagonal
N = np.asarray(a_hist).shape[0] - 1
W, W_m = self.construct_W_and_Wm(N)
L, U = la.lu(W, permute_l=True)
D = np.diag(1 / np.diag(U))
U = D @ U
L = L @ np.diag(1 / np.diag(D))
J = np.fliplr(np.eye(N + 1))
a_hist = J @ np.asarray(a_hist).reshape(N + 1, 1)
#--------------------------------------------
# Transform the a sequence if β is given
#--------------------------------------------
if self.β != 1:
a_hist = a_hist * (self.β **(np.arange(N + 1) / 2))[::-1].
,→reshape(N + 1, 1)
#--------------------------------------------
# Transform the optimal sequence back if β is given
#--------------------------------------------
if self.β != 1:
y_hist = y_hist * (self.β **(- np.arange(-self.m, N + 1)/2)).
,→reshape(N + 1 + self.m, 1)
Example

The function plot_simulation (shown here only in part) builds an LQFilter with d = γ * np.asarray([1, -1]), computes the optimal path of y, and plots it for a given γ

    d = γ * np.asarray([1, -1])
    y_m = np.asarray(y_m).reshape(m, 1)
    # ... (construction of the LQFilter instance and of the figure omitted)
    ax.legend()
    ax.grid()
    plt.show()

plot_simulation()

plot_simulation(γ=5)

And here's γ = 10

plot_simulation(γ=10)
8.5.7 Exercises
Exercise 1
or

$$
\tilde y_t = \tilde f_1 \tilde y_{t-1} + \cdots + \tilde f_m \tilde y_{t-m} + \sum_{j=1}^m \tilde A_j \sum_{k=0}^\infty \tilde\lambda_j^k \tilde a_{t+k} \tag{8.51}
$$

Here

• $h + \tilde d(z^{-1})\, \tilde d(z) = \tilde c(z^{-1})\, \tilde c(z)$,

where the $\tilde z_j$ are the zeros of $h + \tilde d(z^{-1})\, \tilde d(z)$
Prove that (8.51) implies that the solution for yt in feedback form is
$$
y_t = f_1 y_{t-1} + \ldots + f_m y_{t-m} + \sum_{j=1}^m A_j \sum_{k=0}^\infty \beta^k \lambda_j^k a_{t+k}
$$
Exercise 2
Exercise 3
$$
\lim_{N \to \infty} \sum_{t=0}^N -\,\frac{1}{2}\, [(1 - 2L) y_t]^2 ,
$$
Exercise 4
$$
\lim_{N \to \infty} \sum_{t=0}^N \left\{ (.0000001)\, y_t^2 - \frac{1}{2} [(1 - 2L) y_t]^2 \right\}
$$
subject to y−1 given. Prove that the solution yt = 2yt−1 violates condition (8.38), and so is not optimal
Prove that the optimal solution is approximately yt = .5yt−1
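The claim in Exercise 4 can be spot-checked numerically. Restricting attention to stable feedback rules $y_t = \rho y_{t-1}$ with $|\rho| < 1$ (the rules consistent with condition (8.38)) and summing the geometric series gives the objective value, per unit of $y_{-1}^2$, as $\bigl(.0000001\,\rho^2 - \tfrac{1}{2}(\rho - 2)^2\bigr)/(1 - \rho^2)$; a grid search puts the maximizer at $\rho \approx .5$. This is a sketch of the verification, not the proof the exercise asks for:

```python
# Grid search over stable feedback rules y_t = ρ y_{t-1} for Exercise 4.
# For y_{-1} given, summing the geometric series yields the objective
# (per unit of y_{-1}**2):
eps = 0.0000001

def value(ρ):
    return (eps * ρ**2 - 0.5 * (ρ - 2)**2) / (1 - ρ**2)

grid = [i / 1000 for i in range(-990, 991)]
best = max(grid, key=value)

# The rule y_t = 2 y_{t-1} sets (1 - 2L) y_t = 0 exactly, but the path
# explodes and violates the boundedness condition (8.38), so it is
# inadmissible; among admissible rules the optimum is near ρ = 0.5
```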
8.6 Classical Filtering With Linear Algebra

Contents
8.6.1 Overview
This is a sequel to the earlier lecture Classical Control with Linear Algebra

That lecture used linear algebra – in particular, the LU decomposition – to formulate and solve a class of linear-quadratic optimal control problems

In this lecture, we'll be using a closely related decomposition, the Cholesky decomposition, to solve linear prediction and filtering problems
We exploit the useful fact that there is an intimate connection between two superficially different classes of
problems:
• deterministic linear-quadratic (LQ) optimal control problems
• linear least squares prediction and filtering problems
The first class of problems involves no randomness, while the second is all about randomness
Nevertheless, essentially the same mathematics solves both types of problems
This connection, which is often termed duality, is present whether one uses classical or recursive solution
procedures
In fact we saw duality at work earlier when we formulated control and prediction problems recursively in
lectures LQ dynamic programming problems, A first look at the Kalman filter, and The permanent income
model
A useful consequence of duality is that
• With every LQ control problem there is implicitly affiliated a linear least squares prediction or filtering
problem
• With every linear least squares prediction or filtering problem there is implicitly affiliated an LQ control problem
An understanding of these connections has repeatedly proved useful in cracking interesting applied problems
For example, Sargent [Sar87] [chs. IX, XIV] and Hansen and Sargent [HS80] formulated and solved control
and filtering problems using z-transform methods
In this lecture we investigate these ideas using mostly elementary linear algebra
References
Yt = d(L)ut (8.52)
where $d(L) = \sum_{j=0}^m d_j L^j$, and $u_t$ is a serially uncorrelated stationary random process satisfying

$$
\begin{aligned}
E u_t &= 0 \\
E u_t u_s &= \begin{cases} 1 & \text{if } t = s \\ 0 & \text{otherwise} \end{cases}
\end{aligned} \tag{8.53}
$$
Xt = Yt + εt (8.54)
where εt is a serially uncorrelated stationary random process with Eεt = 0 and Eεt εs = 0 for all distinct t
and s
We also assume that Eεt us = 0 for all t and s
The linear least squares prediction problem is to find the $L^2$ random variable $\hat X_{t+j}$ among linear combinations of $\{X_t, X_{t-1}, \ldots\}$ that minimizes $E(\hat X_{t+j} - X_{t+j})^2$

That is, the problem is to find a $\gamma_j(L) = \sum_{k=0}^\infty \gamma_{jk} L^k$ such that $\sum_{k=0}^\infty |\gamma_{jk}|^2 < \infty$ and $E[\gamma_j(L) X_t - X_{t+j}]^2$ is minimized
The linear least squares filtering problem is to find a $b(L) = \sum_{j=0}^\infty b_j L^j$ such that $\sum_{j=0}^\infty |b_j|^2 < \infty$ and $E[b(L) X_t - Y_t]^2$ is minimized
Problem formulation
CX (τ ) = EXt Xt−τ
CY (τ ) = EYt Yt−τ τ = 0, ±1, ±2, . . . (8.55)
CY,X (τ ) = EYt Xt−τ
The covariance and cross covariance generating functions are defined as

$$
g_X(z) = \sum_{\tau=-\infty}^\infty C_X(\tau)\, z^\tau
$$

$$
g_Y(z) = \sum_{\tau=-\infty}^\infty C_Y(\tau)\, z^\tau \tag{8.56}
$$

$$
g_{YX}(z) = \sum_{\tau=-\infty}^\infty C_{YX}(\tau)\, z^\tau
$$
$$
\begin{aligned}
g_Y(z) &= d(z)\, d(z^{-1}) \\
g_X(z) &= d(z)\, d(z^{-1}) + h \\
g_{YX}(z) &= d(z)\, d(z^{-1})
\end{aligned} \tag{8.58}
$$
The key step in obtaining solutions to our problems is to factor the covariance generating function gX (z) of
X
The solutions of our problems are given by formulas due to Wiener and Kolmogorov
These formulas utilize the Wold moving average representation of the Xt process,
Xt = c (L) ηt (8.59)
where $c(L) = \sum_{j=0}^m c_j L^j$, with
Therefore, we have already shown constructively how to factor the covariance generating function $g_X(z) = d(z)\, d(z^{-1}) + h$
We now introduce the annihilation operator:

$$
\left[ \sum_{j=-\infty}^\infty f_j L^j \right]_+ \equiv \sum_{j=0}^\infty f_j L^j \tag{8.63}
$$

In terms of the annihilation operator, the Wiener-Kolmogorov formula for the predictor is

$$
\gamma_j(L) = \left[ \frac{c(L)}{L^j} \right]_+ c(L)^{-1} \tag{8.64}
$$
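For a finite-order $c(L)$, the annihilation operator in (8.64) simply drops the coefficients on negative powers of $L$: dividing by $L^j$ shifts every coefficient $j$ places toward negative powers, so $[c(L)/L^j]_+$ keeps $c_j, c_{j+1}, \ldots, c_m$. A minimal sketch computing the numerator of the $j$-step-ahead predictor (the coefficients of $c$ are illustrative):

```python
# [c(L)/L^j]_+ for a finite-order polynomial: dividing by L^j shifts each
# coefficient j places toward negative powers, and the annihilation
# operator discards the negative ones, leaving [c_j, ..., c_m] on L^0, L^1, ...

def annihilate_shift(c, j):
    """Coefficients of [c(L) / L^j]_+ , lowest power first."""
    return c[j:]

c = [1.0, 0.5, 0.25]           # c(L) = 1 + 0.5 L + 0.25 L^2 (illustrative)

num1 = annihilate_shift(c, 1)  # 0.5 + 0.25 L
num2 = annihilate_shift(c, 2)  # 0.25
num3 = annihilate_shift(c, 3)  # empty: nothing is predictable 3 steps out
```

The full predictor $\gamma_j(L)$ in (8.64) then multiplies these coefficients by $c(L)^{-1}$, i.e. runs them through the inverse of the Wold polynomial.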
We have defined the solution of the filtering problem as Ê[Yt | Xt , Xt−1 , . . .] = b(L)Xt
The Wiener-Kolmogorov formula for $b(L)$ is

$$
b(L) = \left( \frac{g_{YX}(L)}{c(L^{-1})} \right)_+ c(L)^{-1}
$$

or

$$
b(L) = \left[ \frac{d(L)\, d(L^{-1})}{c(L^{-1})} \right]_+ c(L)^{-1} \tag{8.65}
$$
Formulas (8.64) and (8.65) are discussed in detail in [Whi83] and [Sar87]
The interested reader can find there several examples of the use of these formulas in economics. Some classic examples using these formulas are due to [Mut60]
As an example of the usefulness of formula (8.65), we let Xt be a stochastic process with Wold moving
average representation
Xt = c(L)ηt
where $E\eta_t^2 = 1$, and $c_0 \eta_t = X_t - \hat E[X_t \mid X_{t-1}, \ldots]$, $c(L) = \sum_{j=0}^m c_j L^j$
Suppose that at time t, we wish to predict a geometric sum of future Xs, namely
$$
y_t \equiv \sum_{j=0}^\infty \delta^j X_{t+j} = \frac{1}{1 - \delta L^{-1}}\, X_t
$$
$$
b(L) = \left[ \frac{c(L)}{1 - \delta L^{-1}} \right]_+ c(L)^{-1} \tag{8.66}
$$
In order to evaluate the term in the annihilation operator, we use the following result from [HS80]
Proposition Let

• $g(z) = \sum_{j=0}^\infty g_j z^j$ where $\sum_{j=0}^\infty |g_j|^2 < +\infty$

• $h(z^{-1}) = (1 - \delta_1 z^{-1}) \ldots (1 - \delta_n z^{-1})$, where $|\delta_j| < 1$, for $j = 1, \ldots, n$
Then

$$
\left[ \frac{g(z)}{h(z^{-1})} \right]_+ = \frac{g(z)}{h(z^{-1})} - \sum_{j=1}^n \frac{\delta_j\, g(\delta_j)}{\prod_{k=1, k \neq j}^n (\delta_j - \delta_k)} \left( \frac{1}{z - \delta_j} \right) \tag{8.67}
$$

and, alternatively,

$$
\left[ \frac{g(z)}{h(z^{-1})} \right]_+ = \sum_{j=1}^n B_j \left( \frac{z\, g(z) - \delta_j\, g(\delta_j)}{z - \delta_j} \right) \tag{8.68}
$$

where $B_j = 1 \big/ \prod_{k=1, k \neq j}^n (1 - \delta_k / \delta_j)$
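The proposition can be checked numerically for $n = 1$. With the illustrative choices $g(z) = 1 + 2z + 3z^2$ and $\delta = .5$, the left side of (8.68) follows from expanding $1/(1 - \delta z^{-1}) = \sum_k \delta^k z^{-k}$ and discarding negative powers, while the right side is the polynomial $(z g(z) - \delta g(\delta))/(z - \delta)$:

```python
# Check [g(z) / (1 - δ z^{-1})]_+ = (z g(z) - δ g(δ)) / (z - δ) for n = 1
g = [1.0, 2.0, 3.0]          # g(z) = 1 + 2z + 3z^2 (illustrative)
δ = 0.5

# Left side: the coefficient on z^m of g(z) * Σ_k δ^k z^{-k}, for m >= 0,
# is Σ_{j >= m} g_j δ^(j - m)
lhs = [sum(g[j] * δ**(j - m) for j in range(m, len(g)))
       for m in range(len(g))]

# Right side: the numerator z*g(z) - δ*g(δ) vanishes at z = δ, so synthetic
# division by (z - δ) leaves a polynomial (coefficients highest power first)
gδ = sum(gj * δ**j for j, gj in enumerate(g))
num = g[::-1] + [0.0]        # z*g(z): [3, 2, 1, 0]
num[-1] -= δ * gδ            # subtract δ g(δ) from the constant term

quot, rem = [], 0.0
for coef in num:             # Horner / synthetic division at z = δ
    rem = coef + δ * rem
    quot.append(rem)
rem = quot.pop()             # remainder, ~0 here
rhs = quot[::-1]             # lowest power first

match = abs(rem) < 1e-12 and \
    all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```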
Applying formula (8.68) of the proposition to evaluating (8.66) with $g(z) = c(z)$ and $h(z^{-1}) = 1 - \delta z^{-1}$ gives

$$
b(L) = \left[ \frac{L\, c(L) - \delta\, c(\delta)}{L - \delta} \right] c(L)^{-1}
$$

or

$$
b(L) = \frac{1 - \delta\, c(\delta)\, L^{-1} c(L)^{-1}}{1 - \delta L^{-1}}
$$

Thus, we have

$$
\hat E \left[ \sum_{j=0}^\infty \delta^j X_{t+j} \,\Big|\, X_t, X_{t-1}, \ldots \right] = \left[ \frac{1 - \delta\, c(\delta)\, L^{-1} c(L)^{-1}}{1 - \delta L^{-1}} \right] X_t \tag{8.69}
$$
This formula is useful in solving stochastic versions of problem 1 of lecture Classical Control with Linear
Algebra in which the randomness emerges because {at } is a stochastic process
The problem is to maximize

$$
E_0 \lim_{N \to \infty} \sum_{t=0}^N \beta^t \left[ a_t y_t - \frac{1}{2} h y_t^2 - \frac{1}{2} [d(L) y_t]^2 \right] \tag{8.70}
$$
where Et is mathematical expectation conditioned on information known at t, and where {at } is a covariance
stationary stochastic process with Wold moving average representation
at = c(L) ηt
where

$$
c(L) = \sum_{j=0}^{\tilde n} c_j L^j
$$
and
ηt = at − Ê[at |at−1 , . . .]
The problem is to maximize (8.70) with respect to a contingency plan expressing yt as a function of infor-
mation known at t, which is assumed to be (yt−1 , yt−2 , . . . , at , at−1 , . . .)
The solution of this problem can be achieved in two steps
First, ignoring the uncertainty, we can solve the problem assuming that {at } is a known sequence
The solution is, from above,
$$
(1 - \lambda_1 L) \ldots (1 - \lambda_m L)\, y_t = \sum_{j=1}^m A_j \sum_{k=0}^\infty (\lambda_j \beta)^k a_{t+k} \tag{8.71}
$$
Second, the solution of the problem under uncertainty is obtained by replacing the terms on the right-hand
side of the above expressions with their linear least squares predictors.
Using (8.69) and (8.71), we have the following solution
$$
(1 - \lambda_1 L) \ldots (1 - \lambda_m L)\, y_t = \sum_{j=1}^m A_j \left[ \frac{1 - \beta \lambda_j\, c(\beta \lambda_j)\, L^{-1} c(L)^{-1}}{1 - \beta \lambda_j L^{-1}} \right] a_t
$$
Let (x1 , x2 , . . . , xT )′ = x be a T × 1 vector of random variables with mean Ex = 0 and covariance matrix
Exx′ = V
Here V is a T × T positive definite matrix
We shall regard the random variables as being ordered in time, so that xt is thought of as the value of some
economic variable at time t
For example, xt could be generated by the random process described by the Wold representation presented
in equation (8.59)
In this case, $V_{ij}$ is given by the coefficient on $z^{|i-j|}$ in the expansion of $g_x(z) = d(z)\, d(z^{-1}) + h$, which equals $h + \sum_{k=0}^\infty d_k d_{k+|i-j|}$
We shall be interested in constructing $j$ step ahead linear least squares predictors of the form

$$
\hat E \left[ x_T \mid x_{T-j}, x_{T-j+1}, \ldots, x_1 \right]
$$

The key tool is the Cholesky factorization of $V$: there is a lower triangular matrix $L$ such that

$$
V = L^{-1} (L^{-1})'
$$

or

$$
L V L' = I
$$
$$
\begin{aligned}
L_{11} x_1 &= \varepsilon_1 \\
L_{21} x_1 + L_{22} x_2 &= \varepsilon_2 \\
&\ \ \vdots \\
L_{T1} x_1 + \cdots + L_{TT} x_T &= \varepsilon_T
\end{aligned} \tag{8.72}
$$

or

$$
\sum_{j=0}^{t-1} L_{t,t-j}\, x_{t-j} = \varepsilon_t, \quad t = 1, 2, \ldots, T \tag{8.73}
$$
j=0
We also have

$$
x_t = \sum_{j=0}^{t-1} L^{-1}_{t,t-j}\, \varepsilon_{t-j} . \tag{8.74}
$$
Notice from (8.74) that xt is in the space spanned by εt , εt−1 , . . . , ε1 , and from (8.73) that εt is in the space
spanned by xt , xt−1 , . . . , x1
Therefore, we have that for $t - 1 \geq m \geq 1$

$$
x_t = \sum_{j=0}^{m-1} L^{-1}_{t,t-j}\, \varepsilon_{t-j} + \sum_{j=m}^{t-1} L^{-1}_{t,t-j}\, \varepsilon_{t-j} \tag{8.76}
$$

Representation (8.76) is an orthogonal decomposition of $x_t$ into a part $\sum_{j=m}^{t-1} L^{-1}_{t,t-j}\, \varepsilon_{t-j}$ that lies in the space spanned by $[x_{t-m}, x_{t-m+1}, \ldots, x_1]$, and an orthogonal component not in this space
Implementation
Code that computes solutions to LQ control and filtering problems using the methods described here and in
Classical Control with Linear Algebra can be found in the file control_and_filter.py
Here's how it looks
"""

"""
import numpy as np
import scipy.stats as spst
import scipy.linalg as la


class LQFilter:

    def __init__(self, d, h, y_m, r=None, h_eps=None, β=None):
        """
        Parameters
        ----------
        d : list or numpy.array (1-D or a 2-D column vector)
            The order of the coefficients: [d_0, d_1, ..., d_m]
        h : scalar
            Parameter of the objective function (corresponding to the
            quadratic term)
        y_m : list or numpy.array (1-D or a 2-D column vector)
            Initial conditions for y
        r : list or numpy.array (1-D or a 2-D column vector)
            The order of the coefficients: [r_0, r_1, ..., r_k]
            (optional, if not defined -> deterministic problem)
        β : scalar
            Discount factor (optional, default value is one)
        """
        self.h = h
        self.d = np.asarray(d)
        self.m = self.d.shape[0] - 1

        self.y_m = np.asarray(y_m)

        if self.m == self.y_m.shape[0]:
            self.y_m = self.y_m.reshape(self.m, 1)
        else:
            raise ValueError(f"y_m must be of length m = {self.m:d}")

        #---------------------------------------------
        # Define the coefficients of ϕ up front
        #---------------------------------------------
        ϕ = np.zeros(2 * self.m + 1)
        for i in range(-self.m, self.m + 1):
            ϕ[self.m - i] = np.sum(np.diag(self.d.reshape(self.m + 1, 1) @
                                           self.d.reshape(1, self.m + 1),
                                           k=-i))
        ϕ[self.m] = ϕ[self.m] + self.h
        self.ϕ = ϕ

        #-----------------------------------------------------
        # If r is given, calculate the vector ϕ_r
        #-----------------------------------------------------
        if r is None:
            pass
        else:
            self.r = np.asarray(r)
            self.k = self.r.shape[0] - 1
            ϕ_r = np.zeros(2 * self.k + 1)

            for i in range(-self.k, self.k + 1):
                ϕ_r[self.k - i] = np.sum(np.diag(self.r.reshape(self.k + 1, 1) @
                                                 self.r.reshape(1, self.k + 1),
                                                 k=-i))
            if h_eps is None:
                self.ϕ_r = ϕ_r
            else:
                ϕ_r[self.k] = ϕ_r[self.k] + h_eps
                self.ϕ_r = ϕ_r

        #-----------------------------------------------------
        # If β is given, define the transformed variables
        #-----------------------------------------------------
        if β is None:
            self.β = 1
        else:
            self.β = β
            self.d = self.β**(np.arange(self.m + 1)/2) * self.d
            self.y_m = self.y_m * \
                (self.β**(-np.arange(1, self.m + 1)/2)).reshape(self.m, 1)

    def construct_W_and_Wm(self, N):
        m = self.m
        d = self.d

        W = np.zeros((N + 1, N + 1))
        W_m = np.zeros((N + 1, m))

        #---------------------------------------
        # Terminal conditions
        #---------------------------------------
        # ... (construction of the matrices D_m1 and M omitted)
        for j in range(m):
            for i in range(j + 1, m + 1):
                M[i, j] = D_m1[i - j - 1, m]

        #----------------------------------------------
        # Euler equations for t = 0, 1, ..., N-(m+1)
        #----------------------------------------------
        ϕ = self.ϕ
        # ... (rows of W built from ϕ omitted)
        for i in range(m):
            W_m[N - i, :(m - i)] = ϕ[(m + 1 + i):]

        return W, W_m

    def roots_of_characteristic(self):
        """
        This function calculates z_0 and the 2m roots of the characteristic
        equation

        Note:
        ------
        numpy.poly1d(roots, True) defines a polynomial using its roots that
        can be evaluated at any point. If x_1, x_2, ... , x_m are the roots
        then
            p(x) = (x - x_1)(x - x_2)...(x - x_m)
        """
        m = self.m
        ϕ = self.ϕ

        # ... (computation of z_1_to_m and z_0 from the roots of ϕ omitted)
        λ = 1 / z_1_to_m

        return z_1_to_m, z_0, λ

    def coeffs_of_c(self):
        '''
        This function computes the coefficients {c_j, j = 0, 1, ..., m} for
            c(z) = sum_{j = 0}^{m} c_j z^j
        '''
        # ... (computation of c_coeffs from z_0 and z_1_to_m omitted)
        return c_coeffs[::-1]

    def solution(self):
        """
        This function calculates {λ_j, j=1,...,m} and {A_j, j=1,...,m}
        of the expression (1.15)
        """
        λ = self.roots_of_characteristic()[2]
        c_0 = self.coeffs_of_c()[-1]

        A = np.zeros(self.m, dtype=complex)
        for j in range(self.m):
            denom = 1 - λ/λ[j]
            A[j] = c_0**(-2) / np.prod(denom[np.arange(self.m) != j])

        return λ, A

    def construct_V(self, N):
        '''
        This function constructs the covariance matrix for x^N (see section 6)
        for a given period N
        '''
        V = np.zeros((N, N))
        ϕ_r = self.ϕ_r

        for i in range(N):
            for j in range(N):
                if abs(i-j) <= self.k:
                    V[i, j] = ϕ_r[self.k + abs(i-j)]

        return V

    def simulate_a(self, N):
        # ... (draw of a multivariate normal d with covariance
        # self.construct_V(N + 1) omitted)
        return d.rvs()

    def predict(self, a_hist, t):
        N = np.asarray(a_hist).shape[0] - 1
        a_hist = np.asarray(a_hist).reshape(N + 1, 1)
        V = self.construct_V(N + 1)
        # ... (projection of a_hist on its first t + 1 components omitted)
        return Ea_hist

    def optimal_y(self, a_hist, t=None):
        """
        Note:
        ------
        scipy.linalg.lu normalizes L, U so that L has unit diagonal elements
        To make things consistent with the lecture, we need an auxiliary
        diagonal matrix D to renormalize L and U
        """
        N = np.asarray(a_hist).shape[0] - 1
        W, W_m = self.construct_W_and_Wm(N)

        L, U = la.lu(W, permute_l=True)
        D = np.diag(1 / np.diag(U))
        U = D @ U
        L = L @ np.diag(1 / np.diag(D))

        J = np.fliplr(np.eye(N + 1))
        a_hist = J @ np.asarray(a_hist).reshape(N + 1, 1)

        #--------------------------------------------
        # Transform the a sequence if β is given
        #--------------------------------------------
        if self.β != 1:
            a_hist = a_hist * \
                (self.β**(np.arange(N + 1) / 2))[::-1].reshape(N + 1, 1)

        # ... (solution of the triangular systems for y_hist omitted)

        #--------------------------------------------
        # Transform the optimal sequence back if β is given
        #--------------------------------------------
        if self.β != 1:
            y_hist = y_hist * \
                (self.β**(-np.arange(-self.m, N + 1)/2)).reshape(N + 1 + self.m, 1)

        return y_hist
Example 1
xt = (1 − 2L)εt
where εt is a serially uncorrelated random process with mean zero and variance unity
We want to use the Wiener-Kolmogorov formula (8.64) to compute the linear least squares forecasts $\hat E[x_{t+j} \mid x_t, x_{t-1}, \ldots]$, for $j = 1, 2$
We can do everything we want by setting d = r, generating an instance of LQFilter, then invoking pertinent
methods of LQFilter
m = 1
y_m = np.asarray([.0]).reshape(m, 1)
d = np.asarray([1, -2])
r = np.asarray([1, -2])
h = 0.0
example = LQFilter(d, h, y_m, r=d)
example.coeffs_of_c()
example.roots_of_characteristic()
Now let's form the covariance matrix of a time series vector of length $N$ and put it in $V$

Then we'll take a Cholesky decomposition of $V = L^{-1} (L^{-1})' = L_i L_i'$ and use it to form the vector of moving average representations $x = L_i \varepsilon$ and the vector of autoregressive representations $L x = \varepsilon$
V = example.construct_V(N=5)
print(V)
[[ 5. -2. 0. 0. 0.]
[-2. 5. -2. 0. 0.]
[ 0. -2. 5. -2. 0.]
[ 0. 0. -2. 5. -2.]
[ 0. 0. 0. -2. 5.]]
Notice how the lower rows of the moving average representations are converging to the appropriate infinite
history Wold representation
Li = np.linalg.cholesky(V)
print(Li)
[[ 2.23606798 0. 0. 0. 0. ]
[-0.89442719 2.04939015 0. 0. 0. ]
[ 0. -0.97590007 2.01186954 0. 0. ]
[ 0. 0. -0.99410024 2.00293902 0. ]
[ 0. 0. 0. -0.99853265 2.000733 ]]
Notice how the lower rows of the autoregressive representations are converging to the appropriate infinite
history autoregressive representation
L = np.linalg.inv(Li)
print(L)
[[ 0.4472136 0. 0. 0. 0. ]
[ 0.19518001 0.48795004 0. 0. 0. ]
[ 0.09467621 0.23669053 0.49705012 0. 0. ]
[ 0.04698977 0.11747443 0.2466963 0.49926632 0. ]
[ 0.02345182 0.05862954 0.12312203 0.24917554 0.49981682]]
Remark Let $\pi(z) = \sum_{j=0}^m \pi_j z^j$ and let $z_1, \ldots, z_k$ be the zeros of $\pi(z)$ that are inside the unit circle, $k < m$

Then define

$$
\theta(z) = \pi(z) \left( \frac{(z_1 z - 1)}{(z - z_1)} \right) \left( \frac{(z_2 z - 1)}{(z - z_2)} \right) \cdots \left( \frac{(z_k z - 1)}{(z - z_k)} \right)
$$

It can be verified that $\theta(z)\, \theta(z^{-1}) = \pi(z)\, \pi(z^{-1})$ and that the zeros of $\theta(z)$ are not inside the unit circle
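The Remark can be illustrated with $\pi(z) = 1 - 2z$, whose zero $z_1 = .5$ lies inside the unit circle. Flipping it gives $\theta(z) = \pi(z)(z_1 z - 1)/(z - z_1) = 2 - z$, whose zero is at $2$, outside the unit circle, while the covariance generating function is unchanged. A numeric sketch:

```python
import numpy as np

# π(z) = 1 - 2z has zero z1 = 0.5 inside the unit circle
π = np.poly1d([-2.0, 1.0])            # coefficients highest power first
z1 = 0.5

# θ(z) = π(z) (z1*z - 1)/(z - z1).  Since 1 - 2z = -2(z - 0.5), the factor
# (z - z1) cancels, leaving θ(z) = 2 - z
θ = np.poly1d([-1.0, 2.0])

# Same covariance generating function: compare π(z)π(z^{-1}) and θ(z)θ(z^{-1})
# at points on the unit circle, where z^{-1} = conj(z)
zs = np.exp(1j * np.linspace(0, 2 * np.pi, 7, endpoint=False))
gπ = π(zs) * π(np.conj(zs))
gθ = θ(zs) * θ(np.conj(zs))
same_cgf = np.allclose(gπ, gθ)

zero_outside = abs(θ.r[0]) > 1        # θ's zero is at z = 2
```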
Example 2

Here we consider a process with Wold representation $X_t = (1 - \sqrt 2 L^2)\, \varepsilon_t$, whose covariance matrix $V$ is constructed below
example.roots_of_characteristic()
V = example.construct_V(N=8)
print(V)
[[ 3.          0.         -1.41421356  0.          0.          0.          0.          0.        ]
 [ 0.          3.          0.         -1.41421356  0.          0.          0.          0.        ]
 [-1.41421356  0.          3.          0.         -1.41421356  0.          0.          0.        ]
 [ 0.         -1.41421356  0.          3.          0.         -1.41421356  0.          0.        ]
 [ 0.          0.         -1.41421356  0.          3.          0.         -1.41421356  0.        ]
 [ 0.          0.          0.         -1.41421356  0.          3.          0.         -1.41421356]
 [ 0.          0.          0.          0.         -1.41421356  0.          3.          0.        ]
 [ 0.          0.          0.          0.          0.         -1.41421356  0.          3.        ]]
Li = np.linalg.cholesky(V)
print(Li[-3:, :])
[[ 0.          0.          0.         -0.9258201   0.          1.46385011  0.          0.        ]
 [ 0.          0.          0.          0.         -0.96609178  0.          1.43759058  0.        ]
 [ 0.          0.          0.          0.          0.         -0.96609178  0.          1.43759058]]
L = np.linalg.inv(Li)
print(L)
[[ 0.57735027  0.          0.          0.          0.          0.          0.          0.        ]
 [ 0.          0.57735027  0.          0.          0.          0.          0.          0.        ]
 [ 0.3086067   0.          0.65465367  0.          0.          0.          0.          0.        ]
 [ 0.          0.3086067   0.          0.65465367  0.          0.          0.          0.        ]
 [ 0.19518001  0.          0.41403934  0.          0.68313005  0.          0.          0.        ]
 [ 0.          0.19518001  0.          0.41403934  0.          0.68313005  0.          0.        ]
 [ 0.13116517  0.          0.27824334  0.          0.45907809  0.          0.69560834  0.        ]
 [ 0.          0.13116517  0.          0.27824334  0.          0.45907809  0.          0.69560834]]
Prediction
It immediately follows from the orthogonality principle of least squares (see [AP91] or [Sar87] [ch. X])
that
$$
\begin{aligned}
\hat E[x_t \mid x_{t-m}, x_{t-m+1}, \ldots, x_1] &= \sum_{j=m}^{t-1} L^{-1}_{t,t-j}\, \varepsilon_{t-j} \\
&= [L^{-1}_{t,1}\ L^{-1}_{t,2}\ \ldots\ L^{-1}_{t,t-m}\ 0\ 0 \ldots 0]\, L\, x
\end{aligned} \tag{8.77}
$$
This can be interpreted as a finite-dimensional version of the Wiener-Kolmogorov m-step ahead prediction
formula
We can use (8.77) to represent the linear least squares projection of the vector x conditioned on the first s
observations [xs , xs−1 . . . , x1 ]
We have

$$
\hat E[x \mid x_s, x_{s-1}, \ldots, x_1] = L^{-1} \begin{bmatrix} I_s & 0 \\ 0 & 0_{(t-s)} \end{bmatrix} L\, x \tag{8.78}
$$
This formula will be convenient in representing the solution of control problems under uncertainty
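Formula (8.78) can be verified against a direct least squares projection. Using the tridiagonal covariance matrix from Example 1, project each $x_t$ on the first $s$ observations two ways: via $L^{-1}\,\mathrm{diag}(I_s, 0)\, L x$ and via the regression coefficients $\mathrm{Cov}(x, x_{1:s})\, V_{1:s,1:s}^{-1}$. (The particular $V$ and realization $x$ are illustrative.)

```python
import numpy as np

T, s = 5, 2

# Tridiagonal covariance matrix from Example 1 (variance 5, lag-1 cov -2)
V = 5.0 * np.eye(T)
for i in range(T - 1):
    V[i, i + 1] = V[i + 1, i] = -2.0

Li = np.linalg.cholesky(V)     # V = Li Li', so L = Li^{-1} gives L V L' = I
L = np.linalg.inv(Li)

x = np.array([1.0, -0.5, 2.0, 0.3, -1.2])   # an illustrative realization

# (8.78): Ê[x | x_s, ..., x_1] = L^{-1} [[I_s, 0], [0, 0]] L x
mask = np.zeros((T, T))
mask[:s, :s] = np.eye(s)
proj_chol = Li @ mask @ L @ x

# Direct least squares projection of each x_t on (x_1, ..., x_s)
proj_reg = V[:, :s] @ np.linalg.solve(V[:s, :s], x[:s])

agree = np.allclose(proj_chol, proj_reg)
```

The two agree because $\varepsilon_{1:s} = (Lx)_{1:s}$ spans the same space as $x_{1:s}$ ($L$ is lower triangular), and the $\varepsilon$'s have identity covariance.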
Equation (8.74) can be recognized as a finite-dimensional version of a moving average representation

Equation (8.73) can be viewed as a finite-dimensional version of an autoregressive representation
Notice that even if the xt process is covariance stationary, so that V is such that Vij depends only on |i − j|,
the coefficients in the moving average representation are time-dependent, there being a different moving
average for each t
If xt is a covariance stationary process, the last row of L−1 converges to the coefficients in the Wold moving
average representation for {xt } as T → ∞
Further, if $x_t$ is covariance stationary, for fixed $k$ and $j > 0$, $L^{-1}_{T,T-j}$ converges to $L^{-1}_{T-k,T-k-j}$ as $T \to \infty$
That is, the bottom rows of L−1 converge to each other and to the Wold moving average coefficients as
T →∞
This last observation gives one simple and widely-used practical way of forming a finite T approximation
to a Wold moving average representation
First, form the covariance matrix $E x x' = V$, then obtain the Cholesky decomposition $L^{-1} (L^{-1})'$ of $V$, which can be accomplished quickly on a computer

The last row of $L^{-1}$ gives the approximate Wold moving average coefficients
This method can readily be generalized to multivariate systems.
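Applying the recipe to Example 1's process: $d = r = [1, -2]$ with $h = 0$ gives $g_X(z) = (1 - 2z)(1 - 2z^{-1})$, so $V$ is the tridiagonal $(5, -2)$ matrix, and the invertible Wold polynomial is the flipped $c(L) = 2 - L$. The last row of the Cholesky factor indeed converges to these coefficients:

```python
import numpy as np

T = 50

# Covariance of X_t = (1 - 2L) u_t: variance 5, first-order covariance -2
V = 5.0 * np.eye(T)
for i in range(T - 1):
    V[i, i + 1] = V[i + 1, i] = -2.0

Li = np.linalg.cholesky(V)     # V = Li Li'

# The nonzero tail of the last row approximates the Wold coefficients of
# the invertible representation c(L) = 2 - L, i.e. (c_1, c_0) = (-1, 2)
tail = Li[-1, -2:]
```

This is the same convergence visible in the printed `Li` matrix of Example 1, where the last rows are already close to $(-1, 2)$.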
We saw in the lecture Classical Control with Linear Algebra that the solution of this problem under certainty
could be represented in feedback-feedforward form
$$
U \bar y = L^{-1} \bar a + K \begin{bmatrix} y_{-1} \\ \vdots \\ y_{-m} \end{bmatrix}
$$
8.6.5 Exercises
Exercise 1
Let $Y_t = (1 - 2L) u_t$ where $u_t$ is a mean zero white noise with $E u_t^2 = 1$. Let

$$
X_t = Y_t + \varepsilon_t
$$

where $\varepsilon_t$ is a serially uncorrelated white noise with $E \varepsilon_t^2 = 9$, and $E \varepsilon_t u_s = 0$ for all $t$ and $s$
Find the Wold moving average representation for Xt
Find a formula for the $A_{1j}$'s in

$$
\hat E\left[ X_{t+1} \mid X_t, X_{t-1}, \ldots \right] = \sum_{j=0}^\infty A_{1j} X_{t-j}
$$
Exercise 2
(Multivariable Prediction) Let $Y_t$ be an $(n \times 1)$ vector stochastic process with moving average representation

$$
Y_t = D(L) U_t
$$

where $D(L) = \sum_{j=0}^m D_j L^j$, $D_j$ an $n \times n$ matrix, $U_t$ an $(n \times 1)$ vector white noise with $E U_t = 0$ for all $t$, $E U_t U_s' = 0$ for all $s \neq t$, and $E U_t U_t' = I$ for all $t$
Let εt be an n × 1 vector white noise with mean 0 and contemporaneous covariance matrix H, where H is
a positive definite matrix
Let Xt = Yt + εt
Define the covariograms as $C_X(\tau) = E X_t X'_{t-\tau}$, $C_Y(\tau) = E Y_t Y'_{t-\tau}$, $C_{YX}(\tau) = E Y_t X'_{t-\tau}$
Then define the matrix covariance generating function, as in (8.56), only interpret all the objects in (8.56) as matrices
Show that the covariance generating functions are given by
$$
\begin{aligned}
g_Y(z) &= D(z) D(z^{-1})' \\
g_X(z) &= D(z) D(z^{-1})' + H \\
g_{YX}(z) &= D(z) D(z^{-1})'
\end{aligned}
$$
$$
D(z) D(z^{-1})' + H = C(z) C(z^{-1})', \quad C(z) = \sum_{j=0}^m C_j z^j
$$
where the zeros of |C(z)| do not lie inside the unit circle
A vector Wold moving average representation of Xt is then
Xt = C(L)ηt
If C(L) is invertible, i.e., if the zeros of det C(z) lie strictly outside the unit circle, then this formula can be
written
$$
\hat E \left[ X_{t+j} \mid X_t, X_{t-1}, \ldots \right] = \left[ \frac{C(L)}{L^j} \right]_+ C(L)^{-1} X_t
$$
CHAPTER NINE

DYNAMIC PROGRAMMING SQUARED
Here we look at models in which a value function for one Bellman equation has as an argument the value
function for another Bellman equation
Contents
9.1.1 Overview
Previous lectures including LQ dynamic programming, rational expectations equilibrium, and Markov per-
fect equilibrium lectures have studied decision problems that are recursive in what we can call natural state
variables, such as
• stocks of capital (fiscal, financial and human)
• wealth
• information that helps forecast future prices and quantities that impinge on future payoffs
Optimal decision rules are functions of the natural state variables in problems that are recursive in the natural
state variables
In this lecture, we describe problems that are not recursive in the natural state variables
Kydland and Prescott [KP77], [Pre77] and Calvo [Cal78] gave examples of such decision problems
These problems have the following features
• Time t ≥ 0 actions of decision makers called followers depend on time s ≥ t decisions of another
decision maker called a Stackelberg leader
• At time t = 0, the Stackelberg leader chooses his actions for all times s ≥ 0
• In choosing actions for all times at time 0, the Stackelberg leader can be said to commit to a plan
• The Stackelberg leader has distinct optimal decision rules at time t = 0, on the one hand, and at times t ≥ 1, on the other hand
• The Stackelberg leader's decision rules for t = 0 and t ≥ 1 have distinct state variables
• Variables that encode history dependence appear in optimal decision rules of the Stackelberg leader at times t ≥ 1
• These properties of the Stackelberg leader's decision rules are symptoms of the time inconsistency of optimal government plans
An example of a time inconsistent optimal rule is that of
• a large agent (e.g., a government) that confronts a competitive market composed of many small private agents, and in which
• private agents' decisions at each date are influenced by their forecasts of the large agent's future actions
The rational expectations equilibrium concept plays an essential role

A rational expectations restriction implies that when it chooses its future actions, the Stackelberg leader also chooses the followers' expectations about those actions

The Stackelberg leader understands and exploits that situation

In a rational expectations equilibrium, the Stackelberg leader's time t actions confirm private agents' forecasts of those actions

The requirement to confirm prior followers' forecasts puts constraints on the Stackelberg leader's time t decisions that prevent its problem from being recursive in natural state variables

These additional constraints make the Stackelberg leader's decision rule at t depend on the entire history of the natural state variables from time 0 to time t
This lecture displays these principles within the tractable framework of linear quadratic problems
It is based on chapter 19 of [LS18]
We use the optimal linear regulator (a.k.a. the linear-quadratic dynamic programming problem described
in LQ Dynamic Programming problems) to solve a linear quadratic version of what is known as a dynamic
Stackelberg problem
For now we refer to the Stackelberg leader as the government and the Stackelberg follower as the representative agent or private sector

Soon we'll give an application with another interpretation of these two decision makers
Let zt be an nz × 1 vector of natural state variables
Let xt be an nx × 1 vector of endogenous forward-looking variables that are physically free to jump at t
Let ut be a vector of government instruments
The zt vector is inherited physically from the past
But xt is inherited as a consequence of decisions made by the Stackelberg planner at time t = 0
Included in xt might be prices and quantities that adjust instantaneously to clear markets at time t
Let $y_t = \begin{bmatrix} z_t \\ x_t \end{bmatrix}$
Define the government's one-period loss function¹
r(y, u) = y ′ Ry + u′ Qu (9.1)
Subject to an initial condition for $z_0$, but not for $x_0$, a government wants to maximize

$$
- \sum_{t=0}^\infty \beta^t r(y_t, u_t) \tag{9.2}
$$

The government makes policy in light of the model

$$
\begin{bmatrix} I & 0 \\ G_{21} & G_{22} \end{bmatrix} \begin{bmatrix} z_{t+1} \\ x_{t+1} \end{bmatrix} = \begin{bmatrix} \hat A_{11} & \hat A_{12} \\ \hat A_{21} & \hat A_{22} \end{bmatrix} \begin{bmatrix} z_t \\ x_t \end{bmatrix} + \hat B u_t \tag{9.3}
$$
We assume that the matrix on the left is invertible, so that we can multiply both sides of (9.3) by its inverse
to obtain
$$
\begin{bmatrix} z_{t+1} \\ x_{t+1} \end{bmatrix} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix} z_t \\ x_t \end{bmatrix} + B u_t \tag{9.4}
$$
or

$$
y_{t+1} = A y_t + B u_t \tag{9.5}
$$
¹ The problem assumes that there are no cross products between states and controls in the return function. A simple transformation converts a problem whose return function has cross products into an equivalent problem that has no cross products. For example, see [HS08] (chapter 4, pp. 72-73).
The private sector's behavior is summarized by the second block of equations of (9.4) or (9.5)

These equations typically include the first-order conditions of private agents' optimization problem (i.e., their Euler equations)

These Euler equations summarize the forward-looking aspect of private agents' behavior and express how their time t decisions depend on government actions at times s ≥ t

When combined with a stability condition to be imposed below, the Euler equations summarize the private sector's best response to the sequence of actions by the government.
The government maximizes (9.2) by choosing sequences $\{u_t, x_t, z_{t+1}\}_{t=0}^\infty$ subject to (9.5) and an initial condition for $z_0$
Note that we have an initial condition for z0 but not for x0
x0 is among the variables to be chosen at time 0 by the Stackelberg leader
The government uses its understanding of the responses restricted by (9.5) to manipulate private sector
actions
To indicate the features of the Stackelberg leader's problem that make $x_t$ a vector of forward-looking variables, write the second block of system (9.3) as

$$
x_t = \phi_1 z_t + \phi_2 z_{t+1} + \phi_3 u_t + \phi_0 x_{t+1} \tag{9.6}
$$

where $\phi_0 = \hat A_{22}^{-1} G_{22}$.
$$
x_t = \sum_{j=0}^\infty \phi_0^j \left[ \phi_1 z_{t+j} + \phi_2 z_{t+j+1} + \phi_3 u_{t+j} \right] . \tag{9.7}
$$
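Equation (9.7) comes from iterating the forward-looking equation on $x_t$. A scalar sketch verifies that the backward recursion $x_t = \phi_1 z_t + \phi_2 z_{t+1} + \phi_3 u_t + \phi_0 x_{t+1}$, started from a terminal $x_T = 0$, reproduces the truncated geometric sum in (9.7) (all parameter values and sequences are illustrative):

```python
# Scalar illustration of solving x_t forward, with |ϕ0| < 1
ϕ0, ϕ1, ϕ2, ϕ3 = 0.5, 1.0, -0.3, 0.2
T = 30
z = [0.9**t for t in range(T + 2)]      # illustrative exogenous paths
u = [0.8**t for t in range(T + 1)]

# Backward recursion from the terminal condition x_T = 0
x = [0.0] * (T + 1)
for t in range(T - 1, -1, -1):
    x[t] = ϕ1 * z[t] + ϕ2 * z[t + 1] + ϕ3 * u[t] + ϕ0 * x[t + 1]

# Truncated version of (9.7) at t = 0
x0_sum = sum(ϕ0**j * (ϕ1 * z[j] + ϕ2 * z[j + 1] + ϕ3 * u[j])
             for j in range(T))
match = abs(x[0] - x0_sum) < 1e-12
```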
In choosing ut for t ≥ 1 at time 0, the government takes into account how future z and u affect earlier x
through equation (9.7).
The lecture on history dependent policies analyzes an example about Ramsey taxation in which, as is typical
in such problems, the last nx equations of (9.4) or (9.5) constitute implementability constraints that are
formed by the Euler equations of a competitive fringe or private sector
A certainty equivalence principle allows us to work with a nonstochastic model (see LQ dynamic programming)
That is, we would attain the same decision rule if we were to replace xt+1 with the forecast Et xt+1 and to
add a shock process Cϵt+1 to the right side of (9.5), where ϵt+1 is an IID random vector with mean zero and
identity covariance matrix
Let st denote the history of any variable s from 0 to t
[MS85], [HR85], [PL92], [Sar87], [Pea92], and others have all studied versions of the following problem:
Problem S: The Stackelberg problem is to maximize (9.2) by choosing an x0 and a sequence of decision
rules, the time t component of which maps a time t history of the natural state z t into a time t decision ut of
the Stackelberg leader
The Stackelberg leader chooses this sequence of decision rules once and for all at time t = 0
Another way to say this is that he commits to this sequence of decision rules at time 0
The maximization is subject to a given initial condition for z0
But x0 is among the objects to be chosen by the Stackelberg leader
The optimal decision rule is history dependent, meaning that ut depends not only on zt but at t ≥ 1 also on
lags of z
History dependence has two sources: (a) the government's ability to commit² to a sequence of rules at time 0 as in the lecture on history dependent policies, and (b) the forward-looking behavior of the private sector embedded in the second block of equations (9.4) as exhibited by (9.7)
Two Subproblems
Subproblem 1
$$
v(y_0) = \max_{(\vec y_1, \vec u_0) \in \Omega(y_0)}\; - \sum_{t=0}^\infty \beta^t r(y_t, u_t) \tag{9.8}
$$
Subproblem 2

$$
w(z_0) = \max_{x_0} v(y_0) \tag{9.9}
$$
Subproblem 1
$$
v(y) = \max_{u, y^*} \left\{ -r(y, u) + \beta v(y^*) \right\} \tag{9.10}
$$

subject to

$$
y^* = A y + B u \tag{9.11}
$$
which, as in the linear regulator lecture, gives rise to the algebraic matrix Riccati equation

$$
P = R + \beta A' P A - \beta^2 A' P B (Q + \beta B' P B)^{-1} B' P A
$$

and the optimal decision rule

$$
u_t = -F y_t , \tag{9.14}
$$

where $F = \beta (Q + \beta B' P B)^{-1} B' P A$
Subproblem 2
where

$$
P = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix}
$$

The first-order condition for maximizing $v(y_0)$ with respect to $x_0$ is

$$
-2 P_{21} z_0 - 2 P_{22} x_0 = 0 ,
$$

which implies

$$
x_0 = -P_{22}^{-1} P_{21} z_0 . \tag{9.16}
$$
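The choice of $x_0$ in (9.16) can be checked numerically: for a positive definite $P$ and given $z_0$, $x_0 = -P_{22}^{-1} P_{21} z_0$ minimizes $y_0' P y_0$, and therefore maximizes $v(y_0) = -y_0' P y_0$. A sketch with illustrative dimensions and a randomly generated $P$:

```python
import numpy as np

rng = np.random.default_rng(42)

nz, nx = 2, 2
M = rng.standard_normal((nz + nx, nz + nx))
P = M @ M.T + (nz + nx) * np.eye(nz + nx)   # symmetric positive definite

P21 = P[nz:, :nz]
P22 = P[nz:, nz:]

z0 = np.array([1.0, -2.0])
x0 = -np.linalg.solve(P22, P21 @ z0)        # formula (9.16)

def loss(x):
    y = np.concatenate([z0, x])
    return y @ P @ y

# First-order condition: gradient w.r.t. x is 2(P21 z0 + P22 x0) = 0
foc = np.allclose(P21 @ z0 + P22 @ x0, 0.0)

# x0 should (weakly) beat any perturbation, since loss is convex in x
best = all(loss(x0) <= loss(x0 + 0.1 * rng.standard_normal(nx))
           for _ in range(20))
```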
Summary
We have seen that for t ≥ 0 the optimal decision rule for the Stackelberg leader has the form
ut = −F yt
or
ut = f11 zt + f12 xt
where for t ≥ 1, xt is effectively a state variable, albeit not a natural one, inherited from the past

This means that for t ≥ 1, xt is not a function of zt only (though it is at t = 0) and that xt exerts an independent influence on ut
The situation is different at t = 0
For t = 0, the optimal choice of x_0 = -P_{22}^{-1} P_{21} z_0 described in equation (9.16) implies that
$$
u_0 = \left( f_{11} - f_{12} P_{22}^{-1} P_{21} \right) z_0 \tag{9.17}
$$
The history dependence of the government's plan can be expressed in the dynamics of Lagrange multipliers
µx on the last nx equations of (9.3) or (9.4)
These multipliers measure the cost today of honoring past government promises about current and future
settings of u
We shall soon show that as a result of optimally choosing x0 , it is appropriate to initialize the multipliers to
zero at time t = 0
This is true because at t = 0, there are no past promises about u to honor
But the multipliers µ_x take nonzero values thereafter, reflecting future costs to the government of confirming the private sector's earlier expectations about its time t actions.
From the linear regulator lecture, the formula µ_t = P y_t for the vector of shadow prices on the transition equations is partitioned as
$$
\mu_t = \begin{bmatrix} \mu_{zt} \\ \mu_{xt} \end{bmatrix}
$$
Optimal choice of x_0 implies
$$
\mu_{x0} = 0 \tag{9.19}
$$
As an example, this section studies the equilibrium of an industry with a large firm that acts as a Stackelberg
leader with respect to a competitive fringe
Sometimes the large firm is called the monopolist even though there are actually many firms in the industry
The industry produces a single nonstorable homogeneous good, the quantity of which is chosen in the
previous period
One large firm produces Qt and a representative firm in a competitive fringe produces qt
The representative firm in the competitive fringe acts as a price taker and chooses sequentially
The large firm commits to a policy at time 0, taking into account its ability to manipulate the price sequence, both directly through the effects of its quantity choices on prices, and indirectly through the responses of the competitive fringe to its forecasts of prices.³
The costs of production are C_t = e Q_t + .5 g Q_t² + .5 c (Q_{t+1} - Q_t)² for the large firm and σ_t = d q_t + .5 h q_t² + .5 c (q_{t+1} - q_t)² for the competitive firm, where d > 0, e > 0, c > 0, g > 0, h > 0 are cost parameters.
There is a linear inverse demand curve
$$
p_t = A_0 - A_1 (Q_t + \bar q_t) + v_t, \tag{9.20}
$$
where the demand shock v_t obeys
$$
v_{t+1} = \rho v_t + C_\epsilon \check\epsilon_{t+1}, \tag{9.21}
$$
with |ρ| < 1 and \check\epsilon_{t+1} an IID sequence of random variables with mean zero and variance 1.
In (9.20), q̄_t is equilibrium output of the representative competitive firm. In equilibrium, q̄_t = q_t, but we must distinguish between q_t and q̄_t in posing the optimum problem of a competitive firm.
³ [HS08] (chapter 16) uses this model as a laboratory to illustrate an equilibrium concept featuring robustness in which at least one of the agents has doubts about the stochastic specification of the demand shock process.
The representative firm in the competitive fringe maximizes
$$
E_0 \sum_{t=0}^{\infty} \beta^t \left\{ p_t q_t - \sigma_t \right\}, \quad \beta \in (0,1) \tag{9.22}
$$
Its first-order necessary condition is an Euler equation, (9.23), that holds for t ≥ 0.
We appeal to a certainty equivalence principle to justify working with a non-stochastic version of (9.23)
formed by dropping the expectation operator and the random term ϵ̌t+1 from (9.21)
We use a method of [Sar79] and [Tow83].⁴
We shift (9.20) forward one period, replace conditional expectations with realized values, use (9.20) to substitute for p_{t+1} in (9.23), and set q_t = q̄_t and i_t = ī_t for all t ≥ 0 to get
$$
\bar i_t = \beta \bar i_{t+1} - c^{-1} \beta h \bar q_{t+1} + c^{-1} \beta (A_0 - d) - c^{-1} \beta A_1 \bar q_{t+1} - c^{-1} \beta A_1 Q_{t+1} + c^{-1} \beta v_{t+1} \tag{9.24}
$$
Given sufficiently stable sequences {Q_t, v_t}, we could solve (9.24) together with ī_t = q̄_{t+1} - q̄_t to express the competitive fringe's output sequence as a function of the (tail of the) monopolist's output sequence.
(This would be a version of representation (9.7))
It is this feature that makes the monopolist's problem fail to be recursive in the natural state variables q, Q.
The monopolist arrives at period t > 0 facing the constraint that it must confirm the expectations about its time t decision upon which the competitive fringe based its decisions at dates before t.
The monopolist views the sequence of the competitive firm's Euler equations as constraints on its own opportunities.
They are implementability constraints on the monopolist's choices.
⁴ They used this method to compute a rational expectations competitive equilibrium. Their key step was to eliminate price and output by substituting from the inverse demand curve and the production function into the firm's first-order conditions to get a difference equation in capital.
Including the implementability constraints, we can represent the constraints in terms of the transition law facing the monopolist:
$$
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
A_0 - d & 1 & -A_1 & -A_1 - h & c
\end{bmatrix}
\begin{bmatrix} 1 \\ v_{t+1} \\ Q_{t+1} \\ \bar q_{t+1} \\ \bar i_{t+1} \end{bmatrix}
=
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & \rho & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & \frac{c}{\beta}
\end{bmatrix}
\begin{bmatrix} 1 \\ v_t \\ Q_t \\ \bar q_t \\ \bar i_t \end{bmatrix}
+
\begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \\ 0 \end{bmatrix} u_t \tag{9.25}
$$
Although we have included the competitive fringe's choice variable ī_t as a component of the state y_t in the monopolist's transition law (9.26), it is actually a jump variable.
Nevertheless, the analysis above implies that the solution of the large firm's problem is encoded in the Riccati equation associated with (9.26) as the transition law.
Let's decode it.
To match our general setup, we partition y_t as y_t' = [z_t'  x_t'], where z_t' = [1  v_t  Q_t  q̄_t] and x_t = ī_t.
The monopolist's problem is
$$
\max_{\{u_t, p_{t+1}, Q_{t+1}, \bar q_{t+1}, \bar i_t\}} \sum_{t=0}^{\infty} \beta^t \left\{ p_t Q_t - C_t \right\}
$$
subject to the given initial condition for z_0, equations (9.20) and (9.24), and ī_t = q̄_{t+1} - q̄_t, as well as the laws of motion of the natural state variables z.
Notice that the monopolist in effect chooses the price sequence, as well as the quantity sequence of the
competitive fringe, albeit subject to the restrictions imposed by the behavior of consumers, as summarized
by the demand curve (9.20) and the implementability constraint (9.24) that describes the best responses of
firms in the competitive fringe
By substituting (9.20) into the above objective function, the monopolist's problem can be expressed as
$$
\max_{\{u_t\}} \sum_{t=0}^{\infty} \beta^t \left\{ (A_0 - A_1 (\bar q_t + Q_t) + v_t) Q_t - e Q_t - .5 g Q_t^2 - .5 c u_t^2 \right\} \tag{9.27}
$$
subject to (9.26)
This can be written
$$
\max_{\{u_t\}} - \sum_{t=0}^{\infty} \beta^t \left\{ y_t' R y_t + u_t' Q u_t \right\} \tag{9.28}
$$
Under the Stackelberg plan, u_t = -F y_t, which implies that the evolution of y under the Stackelberg plan is
$$
\bar y_{t+1} = (A - BF) \bar y_t \tag{9.29}
$$
where \bar y_t = [1  v_t  Q_t  q̄_t  ī_t]'.
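Simulating a closed-loop law of motion like (9.29) is a one-line iteration. A minimal sketch, using small hypothetical stand-in matrices rather than the A, B, F of this model:

```python
import numpy as np

def simulate_closed_loop(A, B, F, y0, T):
    """Iterate y_{t+1} = (A - B F) y_t for T periods and return the path."""
    A_cl = A - B @ F                      # closed-loop transition matrix
    path = np.empty((T + 1, len(y0)))
    path[0] = y0
    for t in range(T):
        path[t + 1] = A_cl @ path[t]
    return path

# Hypothetical 2-state example with a stable closed loop
A = np.array([[1.0, 0.1],
              [0.0, 1.2]])
B = np.array([[0.0],
              [1.0]])
F = np.array([[0.0, 0.7]])                # u_t = -F y_t
path = simulate_closed_loop(A, B, F, np.array([1.0, 1.0]), 50)
```

Here A - BF has eigenvalues 1 and 0.5, so the second state decays geometrically while the first (a constant-like component) settles to a limit.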
We now make use of a Big K, little k trick (see rational expectations equilibrium) to formulate a recursive version of a follower's problem cast in terms of an ordinary Bellman equation.
The individual firm faces {pt } as a price taker and believes
$$
p_t = a_0 - a_1 Q_t - a_1 \bar q_t + v_t \equiv E_p \bar y_t \tag{9.30}
$$
The firm's state then evolves according to
$$
\begin{bmatrix} \bar y_{t+1} \\ q_{t+1} \end{bmatrix}
=
\begin{bmatrix} A - BF & 0 \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} \bar y_t \\ q_t \end{bmatrix}
+
\begin{bmatrix} 0 \\ 1 \end{bmatrix} i_t \tag{9.31}
$$
subject to q_0 given, the law of motion (9.29) and the price function (9.30), and where the costs are still σ_t = d q_t + .5 h q_t² + .5 c (q_{t+1} - q_t)².
The representative firm's problem is a linear-quadratic dynamic programming problem with matrices A_s, B_s, Q_s, R_s that can be constructed easily from the above information.
The representative firm's decision rule can be represented as
$$
i_t = -F_s
\begin{bmatrix} 1 \\ v_t \\ Q_t \\ \bar q_t \\ \bar i_t \\ q_t \end{bmatrix} \tag{9.32}
$$
Now let's stare at the decision rule (9.32) for i_t, apply Big K, little k logic again, and ask what we want in order to verify a recursive representation of a representative follower's choice problem.

• We want decision rule (9.32) to have the property that i_t = ī_t when we evaluate it at q_t = q̄_t.

We inherit these desires from a Big K, little k logic. Here we apply Big K, little k logic in two parts to make the representative firm be representative after solving the representative firm's optimization problem:

• We want q_t = q̄_t
• We want i_t = ī_t
Numerical example

The Stackelberg plan prescribes u_t = -F y_t for t ≥ 0 and an optimal initial jump variable
$$
x_0 \equiv \bar i_0 = \begin{bmatrix} 31.08 & 0.29 & -0.15 & -0.56 \end{bmatrix}
\begin{bmatrix} 1 \\ v_0 \\ Q_0 \\ \bar q_0 \end{bmatrix}
$$
For this example, starting from z_0 = [1  v_0  Q_0  q̄_0] = [1  0  25  46], the monopolist chooses to set ī_0 = 1.43.
That choice implies that
• ī_1 = 0.25, and
• z_1 = [1  v_1  Q_1  q̄_1] = [1  0  21.83  47.43]

A monopolist who started from the initial conditions z̃_0 = z_1 would set ī_0 = 1.10 instead of 0.25, as called for under the original optimal plan.
The preceding little calculation reflects the time inconsistency of the monopolist's optimal plan.
The recursive representation of the decision rule for a representative fringe firm is
$$
i_t = \begin{bmatrix} 0 & 0 & 0 & .34 & 1 & -.34 \end{bmatrix}
\begin{bmatrix} 1 \\ v_t \\ Q_t \\ \bar q_t \\ \bar i_t \\ q_t \end{bmatrix},
$$
which we have computed by solving the appropriate linear-quadratic dynamic programming problem described above.
Notice that, as expected, i_t = ī_t when we evaluate this decision rule at q_t = q̄_t.
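That cancellation can be checked mechanically. The sketch below hard-codes the fringe coefficients reported above and evaluates the rule at a hypothetical state with q_t = q̄_t; the remaining entries of the state are made-up values:

```python
import numpy as np

# Fringe decision-rule coefficients from the text; the state ordering
# is (1, v_t, Q_t, qbar_t, ibar_t, q_t)
fs = np.array([0., 0., 0., .34, 1., -.34])

def fringe_i(v, Q, qbar, ibar, q):
    """Evaluate the fringe firm's recursive decision rule for i_t."""
    return fs @ np.array([1., v, Q, qbar, ibar, q])

# Hypothetical state values; evaluate the rule at q_t = qbar_t
qbar, ibar = 47.0, 0.25
i = fringe_i(v=0., Q=22., qbar=qbar, ibar=ibar, q=qbar)
# The .34*qbar and -.34*q terms cancel, leaving i_t = ibar_t
```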
Please see Ramsey Plans, Time Inconsistency, Sustainable Plans for a Stackelberg plan computed using methods described here.
This lecture is our first encounter with a class of problems in which optimal decision rules are history dependent.
We shall encounter other examples in lectures optimal taxation with state-contingent debt and optimal taxation without state-contingent debt.
Many more examples of such problems are described in chapters 20-24 of [LS18]
9.1.8 Exercises
Exercise 1
There is no uncertainty
For t ≥ 0, a monetary authority sets the growth of (the log of) money according to
mt+1 = mt + ut (9.33)
mt − pt = −α(pt+1 − pt ) (9.34)
The monetary authority's objective is to maximize
$$
- \sum_{t=0}^{\infty} .95^t \left[ (p_t - \bar p)^2 + u_t^2 + .00001 m_t^2 \right] \tag{9.35}
$$
b. Please briefly interpret this problem as one where the monetary authority wants to stabilize the price level,
subject to costs of adjusting the money supply and some implementability constraints. (We include the term
.00001m2t for purely technical reasons that you need not discuss.)
c. Please write and run a Python program to find the optimal sequence $\{u_t\}_{t=0}^\infty$
Exercise 2
A consumer maximizes
$$
\sum_{t=0}^{\infty} \beta^t \left\{ -.5 (b - c_t)^2 \right\} \tag{9.36}
$$
subject to a sequence of budget constraints (9.37),
where
• a_t is the household's holdings of an asset at the beginning of t
• r > 0 is a constant net interest rate satisfying β(1 + r) < 1
• y_t is the consumer's endowment at t
The consumer's plan for (c_t, a_{t+1}) has to obey the boundary condition $\sum_{t=0}^\infty \beta^t a_t^2 < +\infty$.
Assume that y0 , a0 are given initial conditions and that yt obeys
yt = ρyt−1 , t ≥ 1, (9.38)
The government maximizes
$$
\sum_{t=0}^{\infty} \beta^t \left\{ -.5 (c_t - b)^2 - \tau_t^2 \right\} \tag{9.39}
$$
over the sequences {c_t, τ_t} subject to the implementability constraints in (9.37) for t ≥ 0 and
$$
\lambda_t = \beta (1 + r) \lambda_{t+1} \tag{9.40}
$$
for t ≥ 0, where λ_t ≡ (b - c_t).
a. Argue that (9.40) is the Euler equation for a consumer who maximizes (9.36) subject to (9.37), taking
{τt } as a given sequence
b. Formulate the planner's problem as a Stackelberg problem
c. For β = .95, b = 30, β(1 + r) = .95, formulate an artificial optimal linear regulator problem and use it
to solve the Stackelberg problem
d. Give a recursive representation of the Stackelberg plan for τt
9.2.1 Overview
This lecture describes a linear-quadratic version of a model that Guillermo Calvo [Cal78] used to illustrate
the time inconsistency of optimal government plans
Like Chang [Cha98], we use the model as a laboratory in which to explore consequences of different timing
protocols for government decision making
The model focuses attention on intertemporal tradeoffs between
• welfare benefits that anticipated deflation generates by increasing a representative agent's liquidity as measured by his or her real money balances, and
• costs associated with distorting taxes that must be used to withdraw money from the economy in order
to generate anticipated deflation
The model features
• rational expectations
• costly government actions at all dates t ≥ 1 that increase household utilities at dates before t
• two Bellman equations, one that expresses the private sector's expectation of future inflation as a function of current and future government actions, another that describes the value function of a planner
A theme of this lecture is that timing protocols affect outcomes
We'll use ideas from papers by Cagan [Cag56], Calvo [Cal78], Stokey [Sto89], [Sto91], Chari and Kehoe [CK90], Chang [Cha98], and Abreu [Abr88], as well as from chapter 19 of [LS18].
In addition, we'll use ideas from linear-quadratic dynamic programming described in Linear Quadratic Control as applied to Ramsey problems in Stackelberg problems.
In particular, we have specified the model in a way that allows us to use linear-quadratic dynamic program-
ming to compute an optimal government plan under a timing protocol in which a government chooses an
infinite sequence of money supply growth rates once and for all at time 0
There is no uncertainty
Let:
• pt be the log of the price level
• mt be the log of nominal money balances
• θt = pt+1 − pt be the rate of inflation between t and t + 1
• µt = mt+1 − mt be the rate of growth of nominal balances
The demand for real balances is governed by a perfect foresight version of the Cagan [Cag56] demand function:
$$
m_t - p_t = -\alpha \theta_t, \quad \alpha > 0 \tag{9.41}
$$
for t ≥ 0.
Equation (9.41) asserts that the demand for real balances is inversely related to the public's expected rate of inflation, which here equals the actual rate of inflation.
(When there is no uncertainty, an assumption of rational expectations simplifies to perfect foresight)
(See [Sar77] for a rational expectations version of the model when there is uncertainty)
Subtracting the demand function at time t from the demand function at t + 1 gives:
$$
\mu_t - \theta_t = -\alpha \theta_{t+1} + \alpha \theta_t
$$
or
$$
\theta_t = \frac{\alpha}{1+\alpha} \theta_{t+1} + \frac{1}{1+\alpha} \mu_t \tag{9.42}
$$
Because α > 0, $0 < \frac{\alpha}{1+\alpha} < 1$.
Definition: For a scalar x_t, let L² be the space of sequences {x_t} satisfying
$$
\sum_{t=0}^{\infty} x_t^2 < +\infty
$$
Solving (9.42) forward, imposing that the inflation sequence stay in L², gives
$$
\theta_t = \frac{1}{1+\alpha} \sum_{j=0}^{\infty} \left( \frac{\alpha}{1+\alpha} \right)^j \mu_{t+j} \tag{9.43}
$$
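Equation (9.43) can be spot-checked numerically: the geometric weights sum to one, so a constant money growth rate µ delivers θ_t = µ. A small sketch with an assumed value of α:

```python
import numpy as np

def θ_forward(µ_path, α):
    """θ_0 from a truncated version of the forward sum in (9.43)."""
    j = np.arange(len(µ_path))
    weights = (α / (1 + α))**j / (1 + α)
    return weights @ µ_path

α = 2.0                        # assumed parameter value
µ_path = np.full(500, 0.03)    # constant money growth of 3%
θ0 = θ_forward(µ_path, α)      # weights sum to 1, so θ_0 = 0.03
```

Truncating the sum at 500 terms is harmless here because (α/(1+α))^500 is negligible.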
Insight: In the spirit of Chang [Cha98], note that equations (9.41) and (9.43) show that θt intermediates
how choices of µt+j , j = 0, 1, . . . impinge on time t real balances mt − pt = −αθt
We shall use this insight to help us simplify and analyze government policy problems
That future rates of money creation influence earlier rates of inflation creates optimal government policy
problems in which timing protocols matter
We can rewrite the model as:
$$
\begin{bmatrix} 1 \\ \theta_{t+1} \end{bmatrix}
=
\begin{bmatrix} 1 & 0 \\ 0 & \frac{1+\alpha}{\alpha} \end{bmatrix}
\begin{bmatrix} 1 \\ \theta_t \end{bmatrix}
+
\begin{bmatrix} 0 \\ -\frac{1}{\alpha} \end{bmatrix} \mu_t
$$
or
$$
x_{t+1} = A x_t + B \mu_t \tag{9.44}
$$
We write the model in the state-space form (9.44) even though θ0 is to be determined and so is not an initial
condition as it ordinarily would be in the state-space model described in Linear Quadratic Control.
We write the model in the form (9.44) because we want to apply an approach described in Stackelberg
problems
Assume that a representative household's utility of real balances at time t is:
$$
U(m_t - p_t) = a_0 + a_1 (m_t - p_t) - \frac{a_2}{2} (m_t - p_t)^2, \quad a_0 > 0, a_1 > 0, a_2 > 0 \tag{9.45}
$$
The bliss level of real balances is then $\frac{a_1}{a_2}$.
The money demand function (9.41) and the utility function (9.45) imply that the bliss level of real balances is attained when:
$$
\theta_t = \theta^* = -\frac{a_1}{a_2 \alpha}
$$
Below, we introduce the discount factor β ∈ (0, 1) that a representative household and a benevolent government both use to discount future utilities.
(If we set parameters so that θ* = log(β), then we can regard a recommendation to set θ_t = θ* as a "poor man's Friedman rule" that attains Milton Friedman's optimal quantity of money.)
Via equation (9.43), a government plan µ⃗ = {µ_t} leads to an equilibrium sequence of inflation outcomes θ⃗ = {θ_t}.
We assume that social costs $\frac{c}{2} \mu_t^2$ are incurred at t when the government changes the stock of nominal money balances at rate µ_t.
Therefore, the one-period welfare function of a benevolent government is:
$$
-s(\theta_t, \mu_t) \equiv -r(x_t, \mu_t)
= \begin{bmatrix} 1 \\ \theta_t \end{bmatrix}'
\begin{bmatrix} a_0 & -\frac{a_1 \alpha}{2} \\ -\frac{a_1 \alpha}{2} & -\frac{a_2 \alpha^2}{2} \end{bmatrix}
\begin{bmatrix} 1 \\ \theta_t \end{bmatrix}
- \frac{c}{2} \mu_t^2
= -x_t' R x_t - Q \mu_t^2 \tag{9.46}
$$
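The quadratic form in (9.46) can be verified against the utility function (9.45) directly, since real balances equal -αθ_t by (9.41). A small check with assumed parameter values:

```python
import numpy as np

# Assumed parameter values, for illustration only
a0, a1, a2, α, c = 2.0, 1.0, 0.5, 1.5, 0.1

# R and Q of the LQ representation in (9.46)
R = -np.array([[a0, -a1 * α / 2],
               [-a1 * α / 2, -a2 * α**2 / 2]])
Q = c / 2

def welfare_lq(θ, µ):
    """-x'Rx - Qµ² with x = (1, θ)."""
    x = np.array([1.0, θ])
    return -x @ R @ x - Q * µ**2

def welfare_direct(θ, µ):
    """U(-αθ) - (c/2)µ², with U from (9.45) and m - p = -αθ from (9.41)."""
    m = -α * θ
    return a0 + a1 * m - a2 * m**2 / 2 - c * µ**2 / 2
```

Evaluating both functions at any (θ, µ) pair returns the same number, confirming that the matrix in (9.46) encodes (9.45).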
The government's time 0 value is
$$
v_0 = -\sum_{t=0}^{\infty} \beta^t r(x_t, \mu_t) = -\sum_{t=0}^{\infty} \beta^t s(\theta_t, \mu_t) \tag{9.47}
$$
9.2.3 Structure
The following structure is induced by private agents' behavior, as summarized by the demand function for money (9.41) that leads to equation (9.43), which tells how future settings of µ affect the current value of θ.
Equation (9.43) maps a policy sequence of money growth rates µ⃗ = {µ_t} ∈ L² into an inflation sequence θ⃗ = {θ_t} ∈ L².
The value v_t of a government plan obeys the recursion
$$
v_t = s(\theta_t, \mu_t) + \beta v_{t+1} \tag{9.48}
$$
Criterion function (9.47) and constraint system (9.44) exhibit the following structure:
• Setting µ_t ≠ 0 imposes costs $\frac{c}{2} \mu_t^2$ at time t and at no other times; but
• The money growth rate µ_t affects the representative household's one-period utilities at all dates s = 0, 1, ..., t
That settings of µ at one date affect household utilities at earlier dates sets the stage for the emergence of a
time-inconsistent optimal government plan under a Ramsey (also called a Stackelberg) timing protocol
The four models differ with respect to timing protocols, constraints on government policy, and government policy makers' beliefs about how their decisions affect private agents' beliefs about future government decisions.
The models are
• A single Ramsey planner chooses a sequence $\{\mu_t\}_{t=0}^\infty$ once and for all at time 0
$$
\Omega(x_0) = \left\{ (\vec x_1, \vec \mu_0) : x_{t+1} = A x_t + B \mu_t, \ \forall t \geq 0 \right\}
$$
Subproblem 1
subject to:
x′ = Ax + Bµ
As in Stackelberg problems, we map this problem into a linear-quadratic control problem and then carefully
use the optimal value function associated with it
Guessing that J(x) = -x'Px and substituting into the Bellman equation gives rise to the algebraic matrix Riccati equation
$$
P = R + \beta A'PA - \beta^2 A'PB (Q + \beta B'PB)^{-1} B'PA
$$
and a decision rule
$$
\mu_t = -F x_t
$$
where
$$
F = \beta (Q + \beta B'PB)^{-1} B'PA
$$
Subproblem 2
The first-order condition with respect to θ_0 is
$$
-2 P_{21} - 2 P_{22} \theta_0 = 0
$$
which implies
$$
\theta_0^* = -\frac{P_{21}}{P_{22}}
$$
The Ramsey plan sets the initial inflation rate θ_0 = θ_0*.
The Ramsey plan can be represented recursively as
$$
\mu_t = b_0 + b_1 \theta_t \qquad \theta_{t+1} = d_0 + d_1 \theta_t \tag{9.49}
$$
⃗ = {µt }∞
It can verified that if we substitute a plan µ t=0 that satisfies these equations into equation (9.43), we
obtain the same sequence θ⃗ generated by system (9.49)
Thus, our construction of a Ramsey plan guarantees that promised inflation equals actual inflation
The inflation rate θt that appears in system (9.49) and equation (9.43) plays three roles simultaneously:
• In equation (9.43), θt is the actual rate of inflation between t and t + 1
• In equations (9.42) and (9.43), θ_t is also the public's expected rate of inflation between t and t + 1
• In system (9.49), θt is a promised rate of inflation chosen by the Ramsey planner at time 0
Time Inconsistency
As discussed in Stackelberg problems and Optimal taxation with state-contingent debt, a continuation Ramsey plan is not a Ramsey plan.
This is a concise way of characterizing the time inconsistency of a Ramsey plan.
The time inconsistency of a Ramsey plan has motivated other models of government decision making that
alter either
• the timing protocol and/or
• assumptions about how government decision makers think their decisions affect private agents beliefs
about future government decisions
Summary: We have introduced the constrained-to-a-constant µ government in order to highlight the role of
time-variation of µt in generating time inconsistency of a Ramsey plan
We now change the timing protocol by considering a government that chooses µ_t and expects all future governments to set µ_{t+j} = µ̄.
This assumption mirrors an assumption made in a different setting, Markov Perfect Equilibrium.
Further, the government at t believes that µ̄ is unaffected by its choice of µ_t.
The time t rate of inflation is then:
$$
\theta_t = \frac{\alpha}{1+\alpha} \bar\mu + \frac{1}{1+\alpha} \mu_t
$$
The time t government policy maker then chooses µt to maximize:
$$
W = U(-\alpha \theta_t) - \frac{c}{2} \mu_t^2 + \beta V(\bar\mu)
$$
where V (µ̄) is the time 0 value v0 of recursion (9.48) under a money supply growth rate that is forever
constant at µ̄
Substituting for U and θ_t gives:
$$
W = a_0 + a_1 \left( -\frac{\alpha^2}{1+\alpha} \bar\mu - \frac{\alpha}{1+\alpha} \mu_t \right)
- \frac{a_2}{2} \left( -\frac{\alpha^2}{1+\alpha} \bar\mu - \frac{\alpha}{1+\alpha} \mu_t \right)^2
- \frac{c}{2} \mu_t^2 + \beta V(\bar\mu)
$$
The first-order necessary condition for µ_t is then:
$$
-\frac{\alpha}{1+\alpha} a_1 - a_2 \left( -\frac{\alpha^2}{1+\alpha} \bar\mu - \frac{\alpha}{1+\alpha} \mu_t \right) \left( -\frac{\alpha}{1+\alpha} \right) - c \mu_t = 0
$$
Rearranging, we get:
$$
\mu_t = \frac{-a_1}{\frac{1+\alpha}{\alpha} c + \frac{\alpha}{1+\alpha} a_2}
- \frac{\alpha^2 a_2}{\left[ \frac{1+\alpha}{\alpha} c + \frac{\alpha}{1+\alpha} a_2 \right] (1+\alpha)} \bar\mu
$$
In light of results presented in the previous section, this can be simplified to:
$$
\bar\mu = -\frac{\alpha a_1}{\alpha^2 a_2 + (1+\alpha) c}
$$
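The simplification amounts to imposing µ_t = µ̄ and solving for the fixed point of the best-response formula above, which is easy to verify numerically. A sketch with assumed parameter values:

```python
# Assumed parameter values, for illustration only
a1, a2, α, c = 1.0, 0.5, 1.5, 0.1

def best_response(µ_bar):
    """Government's optimal µ_t when all future governments set µ_bar."""
    D = (1 + α) / α * c + α / (1 + α) * a2
    return -a1 / D - α**2 * a2 / (D * (1 + α)) * µ_bar

# Closed-form MPE money growth rate from the text
µ_mpe = -α * a1 / (α**2 * a2 + (1 + α) * c)

gap = abs(best_response(µ_mpe) - µ_mpe)   # µ_mpe is a fixed point, so gap ≈ 0
```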
Below we compute sequences {θ_t, µ_t} under a Ramsey plan and compare these with the constant levels of θ and µ in (a) a Markov perfect equilibrium and (b) a Ramsey plan in which the planner is restricted to choose µ_t = µ̌ for all t ≥ 0.
We denote the Ramsey sequence as θ^R, µ^R and the MPE values as θ^{MPE}, µ^{MPE}.
The bliss level of inflation is denoted by θ*.
First we will create a class ChangLQ that solves the models and stores their values
import numpy as np
from quantecon import LQ
import matplotlib.pyplot as plt
class ChangLQ:
    """
    Class to solve the LQ Chang model
    """
    def __init__(self, α, α0, α1, α2, β, c, T=1000, θ_n=200):
        # Record parameters
        self.α, self.α0, self.α1 = α, α0, α1
        self.α2, self.β, self.c = α2, β, c
        self.T, self.θ_n = T, θ_n

        # LQ matrices from (9.44) and (9.46)
        R = -np.array([[α0, -α1 * α / 2],
                       [-α1 * α / 2, -α2 * α**2 / 2]])
        Q = -np.array([[-c / 2]])
        A = np.array([[1, 0], [0, (1 + α) / α]])
        B = np.array([[0], [-1 / α]])

        # Solve Subproblem 1
        lq = LQ(Q, R, A, B, beta=β)
        self.P, self.F, self.d = lq.stationary_values()

        # Solve Subproblem 2
        self.θ_R = -self.P[0, 1] / self.P[1, 1]

        # Simulate the Ramsey plan; the state is x_t = (1, θ_t)
        θ_series = np.ones((2, T))
        µ_series = np.zeros(T)
        J_series = np.zeros(T)
        θ_series[1, 0] = self.θ_R
        µ_series[0] = (-self.F @ θ_series[:, 0]).item()
        J_series[0] = -θ_series[:, 0] @ self.P @ θ_series[:, 0]
        for i in range(1, T):
            θ_series[:, i] = (A - B @ self.F) @ θ_series[:, i-1]
            µ_series[i] = (-self.F @ θ_series[:, i]).item()
            J_series[i] = -θ_series[:, i] @ self.P @ θ_series[:, i]

        self.J_series = J_series
        self.µ_series = µ_series
        self.θ_series = θ_series

        # Bliss, MPE, and restricted (constant-µ) inflation rates;
        # recall that θ = µ when µ is held constant forever
        self.θ_B = -α1 / (α2 * α)
        self.θ_MPE = -α * α1 / (α**2 * α2 + (1 + α) * c)
        self.µ_check = -α * α1 / (α**2 * α2 + c)

        def V_const(µ):
            # Value of a plan with µ_t = µ (hence θ_t = µ) forever
            U = α0 - α1 * α * µ - α2 * (α * µ)**2 / 2
            return (U - c * µ**2 / 2) / (1 - β)

        self.J_check = V_const(self.µ_check)
        self.J_MPE = V_const(self.θ_MPE)

        # Grids over θ, with value and policy functions on them
        pts = [self.θ_B, self.θ_MPE, self.θ_R, 0]
        θ_range = max(pts) - min(pts)
        self.θ_LB = min(pts) - 0.05 * θ_range
        self.θ_UB = max(pts) + 0.05 * θ_range
        θ_space = np.linspace(self.θ_LB, self.θ_UB, θ_n)
        X = np.column_stack([np.ones(θ_n), θ_space])    # states (1, θ)
        J_space = -np.einsum('ij,jk,ik->i', X, self.P, X)
        µ_space = -X @ self.F[0]
        θ_prime = X @ (A - B @ self.F)[1]               # continuation θ'(θ)
        check_space = V_const(θ_space)

        J_LB = min(J_space)
        J_UB = max(J_space)
        J_range = J_UB - J_LB
        self.J_LB = J_LB - 0.05 * J_range
        self.J_UB = J_UB + 0.05 * J_range
        self.J_range = J_range
        self.J_space = J_space
        self.θ_space = θ_space
        self.µ_space = µ_space
        self.θ_prime = θ_prime
        self.check_space = check_space
The following code generates a figure that plots the value function from the Ramsey planner's problem, which is maximized at θ_0^R.
The figure also shows the limiting value θ_∞^R to which the inflation rate θ_t converges under the Ramsey plan, and compares it to the MPE value and the bliss value.
def plot_value_function(clq):
    """
    Method to plot the value function over the relevant range of θ
    """
    fig, ax = plt.subplots()
    ax.set_xlim([clq.θ_LB, clq.θ_UB])
    ax.set_ylim([clq.J_LB, clq.J_UB])

    # Plot the Ramsey value function J(θ)
    ax.plot(clq.θ_space, clq.J_space, lw=2)

    # Mark the notable inflation rates
    t1 = clq.θ_space[np.argmax(clq.J_space)]
    tR = clq.θ_series[1, -1]
    θ_points = [t1, tR, clq.θ_B, clq.θ_MPE]
    labels = [r"$\theta_0^R$", r"$\theta_\infty^R$",
              r"$\theta^*$", r"$\theta^{MPE}$"]
    for θ, label in zip(θ_points, labels):
        ax.axvline(θ, lw=1, ls='--', alpha=0.6)
        ax.text(θ, clq.J_LB + 0.02 * clq.J_range, label, fontsize=14)

    ax.set_xlabel(r"$\theta$", fontsize=18)
    plt.show()
plot_value_function(clq)
The next code generates a figure that plots the value function from the Ramsey planner's problem as well as that for a Ramsey planner that must choose a constant µ (that in turn equals an implied constant θ).
def compare_ramsey_check(clq):
    """
    Method to compare values of Ramsey and Check
    """
    fig, ax = plt.subplots()
    ax.plot(clq.θ_space, clq.J_space, lw=2, label=r"$J(\theta)$")
    ax.plot(clq.θ_space, clq.check_space,
            lw=2, label=r"$V^\check(\theta)$")
    plt.xlabel(r"$\theta$", fontsize=18)
    plt.legend(fontsize=14, loc='upper left')

    # Mark θ_0^R and the restricted planner's constant θ
    θ_points = [clq.θ_space[np.argmax(clq.J_space)],
                clq.µ_check]
    labels = [r"$\theta_0^R$", r"$\theta^\check$"]
    for θ, label in zip(θ_points, labels):
        ax.axvline(θ, lw=1, ls='--', alpha=0.6)
        ax.text(θ, np.min(clq.check_space), label, fontsize=14)
    plt.show()
compare_ramsey_check(clq)
The next code generates figures that plot the policy functions for a continuation Ramsey planner
The left figure shows the choice of θ′ chosen by a continuation Ramsey planner who inherits θ
The right figure plots a continuation Ramsey planner's choice of µ as a function of an inherited θ.
def plot_policy_functions(clq):
    """
    Method to plot the policy functions over the relevant range of θ
    """
    fig, axes = plt.subplots(1, 2, figsize=(12, 4))

    # Left panel: continuation θ'(θ) against the 45-degree line
    ax = axes[0]
    ax.set_ylim([clq.θ_LB, clq.θ_UB])
    ax.plot(clq.θ_space, clq.θ_prime,
            label=r"$\theta'(\theta)$", lw=2)
    x = np.linspace(clq.θ_LB, clq.θ_UB, 5)
    ax.plot(x, x, 'k--', lw=2, alpha=0.7)
    ax.set_ylabel(r"$\theta'$", fontsize=18)
    ax.legend(fontsize=14)

    # Mark θ_0^R and the limit point θ_∞^R
    θ_points = [clq.θ_space[np.argmax(clq.J_space)],
                clq.θ_series[1, -1]]
    for θ in θ_points:
        ax.axvline(θ, lw=1, ls='--', alpha=0.6)

    # Right panel: the continuation planner's choice µ(θ)
    ax = axes[1]
    µ_min = min(clq.µ_space)
    µ_max = max(clq.µ_space)
    µ_range = µ_max - µ_min
    ax.set_ylim([µ_min - 0.05 * µ_range, µ_max + 0.05 * µ_range])
    ax.plot(clq.θ_space, clq.µ_space, lw=2)
    ax.set_ylabel(r"$\mu(\theta)$", fontsize=18)

    for ax in axes:
        ax.set_xlabel(r"$\theta$", fontsize=18)
        ax.set_xlim([clq.θ_LB, clq.θ_UB])
    plt.show()
plot_policy_functions(clq)
The following code generates a figure that plots sequences of µ and θ in the Ramsey plan and compares these
to the constant levels in a MPE and in a Ramsey plan with a government restricted to set µt to a constant for
all t
plot_ramsey_MPE(clq)
In settings in which governments choose sequentially, many economists regard a time inconsistent plan as implausible because of the incentives to deviate that arise along the plan.
A way to summarize this defect in a Ramsey plan is to say that it is not credible, because incentives for policy makers to deviate from it endure.
For that reason, the Markov perfect equilibrium concept attracts many economists
• A Markov perfect equilibrium plan is constructed to ensure that government policy makers who choose sequentially do not want to deviate from it
This no-incentive-to-deviate property is what makes the Markov perfect equilibrium concept attractive.
Research by Abreu [Abr88], Chari and Kehoe [CK90], and Stokey [Sto89], [Sto91] discovered conditions under which a Ramsey plan can be rescued from the complaint that it is not credible.
They accomplished this by expanding the description of a plan to include expectations about adverse consequences of deviating from it that can serve to deter deviations.
We turn to such theories of sustainable plans next
Here $\vec\mu^A = \{\mu_j^A\}_{j=0}^\infty$ is an alternative government plan to be described below.
The government's one-period return function s(θ, µ) described in equation (9.46) above has the property that for all θ,
$$
s(\theta, 0) \geq s(\theta, \mu)
$$
This inequality implies that whenever the policy calls for the government to set µ ̸= 0, the government could
raise its one-period return by setting µ = 0
Disappointing private sector expectations in that way would increase the government's current payoff but would have adverse consequences for subsequent government payoffs, because the private sector would alter its expectations about future settings of µ.
The temporary gain constitutes the government's temptation to deviate from a plan.
If the government at t is to resist the temptation to raise its current payoff, it is only because it forecasts adverse consequences that its setting of µ_t would bring for subsequent government payoffs via alterations in the private sector's expectations.
A plan $\vec\mu^A$ is said to be self-enforcing if

• the consequence of disappointing private agents' expectations at time j is to restart the plan at time j + 1
• that consequence is sufficiently adverse that it deters all deviations from the plan

More precisely, a government plan $\vec\mu^A$ is self-enforcing if
$$
v_j^A = s(\theta_j^A, \mu_j^A) + \beta v_{j+1}^A
\geq s(\theta_j^A, 0) + \beta v_0^A \equiv v_j^{A,D}, \quad j \geq 0 \tag{9.50}
$$
(Here it is useful to recall that setting µ = 0 is the maximizing choice for the government's one-period return function.)
The first line tells the consequences of confirming private agents' expectations, while the second line tells the consequences of disappointing private agents' expectations.
A consequence of the definition is that a self-enforcing plan is credible
Self-enforcing plans can be used to construct other credible plans, ones with better values
A sufficient condition for a plan $\vec\mu$ with values $v_j$ to be credible or sustainable is that
$$
v_j \geq s(\theta_j, 0) + \beta v_0^A, \quad j \geq 0
$$
where $\vec\mu^A$ is a self-enforcing plan.
A key step in constructing a credible plan is first constructing a self-enforcing plan that has a low time 0 value.
The idea is to use the self-enforcing plan as a continuation plan whenever the government's choice at time t fails to confirm private agents' expectations.
We shall use a construction featured in [Abr88] to construct a self-enforcing plan with low time 0 value.
[Abr88] invented a way to create a self-enforcing plan with low initial value.
Imitating his idea, we can construct a self-enforcing plan $\vec\mu$ with a low time 0 value to the government by insisting that the government set µ_t to a value yielding low one-period utilities to the household for a long time, after which the government forever does things yielding high one-period utilities.
• low one-period utilities early are a stick
• high one-period utilities later are a carrot
Consider a plan $\vec\mu^A$ in which the government sets $\mu_t^A = \bar\mu$ (a high positive number) for $T_A$ periods, and then reverts to the Ramsey plan.
Denote this sequence by $\{\mu_t^A\}_{t=0}^\infty$.
The value of this plan at time 0 is
$$
v_0^A = \sum_{t=0}^{T_A - 1} \beta^t s(\theta_t^A, \mu_t^A) + \beta^{T_A} J(\theta_0^R)
$$
clq.V_A = np.zeros(T)
for t in range(T):
    # Assuming U_A[j] stores the discounted payoff β**j * s_j, summing
    # the tail and dividing by β**t gives the continuation value at t
    clq.V_A[t] = np.sum(U_A[t:]) / clq.β**t
plt.tight_layout()
plt.show()
abreu_plan(clq)
To confirm that the plan $\vec\mu^A$ is self-enforcing, the right panel plots an object that we call $v_t^{A,D}$, defined in the second line of equation (9.50) above.
$v_t^{A,D}$ is the value at t of deviating from the self-enforcing plan $\vec\mu^A$ by setting µ_t = 0 and then restarting the plan at $v_0^A$ at t + 1.
Notice that $v_t^A > v_t^{A,D}$.
This confirms that $\vec\mu^A$ is a self-enforcing plan.
We can also verify the inequalities required for $\vec\mu^A$ to be self-enforcing numerically as follows
True
check_ramsey(clq)
True
We can represent a sustainable plan recursively by taking the continuation value vt as a state variable
We form the following 3-tuple of functions:
$$
\hat\mu_t = \nu_\mu(v_t), \qquad \theta_t = \nu_\theta(v_t), \qquad v_{t+1} = \nu_v(v_t, \mu_t) \tag{9.51}
$$
The third equation of (9.51) updates the continuation value in a way that depends on whether the government at t confirms private agents' expectations by setting µ_t equal to the recommended value $\hat\mu_t$, or whether it disappoints those expectations.
clq.J_series[0]
6.67918822960449
clq.J_check
6.6767295246748981
clq.J_MPE
6.6634358869951074
We have also computed sustainable plans for a government or sequence of governments that chooses sequentially.
These include
• a self-enforcing plan that gives a low initial value v0
• a better plan – possibly one that attains values associated with a Ramsey plan – that is not self-enforcing
The theory deployed in this lecture is an application of what we nickname dynamic programming squared
The nickname refers to the fact that a value satisfying one Bellman equation is itself an argument in a second
Bellman equation
Thus, our models have involved two Bellman equations:
• equation (9.42) expresses how θ_t depends on µ_t and θ_{t+1}
9.3.1 Overview
The treatment given here closely follows this manuscript, prepared by Thomas J. Sargent and Francois R.
Velde
We cover only the key features of the problem in this lecture, leaving you to refer to that source for additional
results and intuition
Model Features
We begin by outlining the key assumptions regarding technology, households and the government sector
Technology
Households
Consider a representative household who chooses a path {ℓ_t, c_t} for labor and consumption to maximize
$$
-E \frac{1}{2} \sum_{t=0}^{\infty} \beta^t \left[ (c_t - b_t)^2 + \ell_t^2 \right] \tag{9.52}
$$
subject to the budget constraint
$$
E \sum_{t=0}^{\infty} \beta^t p_t^0 \left[ d_t + (1 - \tau_t) \ell_t + s_t - c_t \right] = 0 \tag{9.53}
$$
Here
• β is a discount factor in (0, 1)
• $p_t^0$ is a scaled Arrow-Debreu price at time 0 of history-contingent goods at time t
Government
The government imposes a linear tax on labor income, fully committing to a stochastic path of tax rates at
time zero
The government also issues state-contingent debt
Given government tax and borrowing plans, we can construct a competitive equilibrium with distorting
government taxes
Among all such competitive equilibria, the Ramsey plan is the one that maximizes the welfare of the repre-
sentative consumer
Exogenous Variables
Endowments, government expenditure, the preference shock process bt , and promised coupon payments on
initial government debt st are all exogenous, and given by
• dt = Sd xt
• g t = Sg x t
• bt = Sb xt
• s t = Ss x t
The matrices Sd , Sg , Sb , Ss are primitives and {xt } is an exogenous stochastic process taking values in Rk
We consider two specifications for {xt }
1. Discrete case: {xt } is a discrete state Markov chain with transition matrix P
2. VAR case: {x_t} obeys $x_{t+1} = A x_t + C w_{t+1}$ where {w_t} is independent zero-mean Gaussian with identity covariance matrix
Feasibility
ct + gt = dt + ℓt (9.54)
Where $p_t^0$ is again a scaled Arrow-Debreu price, the time zero government budget constraint is
$$
E \sum_{t=0}^{\infty} \beta^t p_t^0 (s_t + g_t - \tau_t \ell_t) = 0 \tag{9.55}
$$
Equilibrium
An equilibrium is a feasible allocation {ℓt , ct }, a sequence of prices {p0t }, and a tax system {τt } such that
1. The allocation {ℓt , ct } is optimal for the household given {p0t } and {τt }
2. The government's budget constraint (9.55) is satisfied
The Ramsey problem is to choose the equilibrium $\{\ell_t, c_t, \tau_t, p_t^0\}$ that maximizes the household's welfare.
If $\{\ell_t, c_t, \tau_t, p_t^0\}$ solves the Ramsey problem, then $\{\tau_t\}$ is called the Ramsey plan.
The solution procedure we adopt is
1. Use the first-order conditions from the household problem to pin down prices and allocations given
{τt }
2. Use these expressions to rewrite the government budget constraint (9.55) in terms of exogenous variables and allocations
3. Maximize the household's objective function (9.52) subject to the constraint constructed in step 2 and the feasibility constraint (9.54)
The solution to this maximization problem pins down all quantities of interest
Solution
Step one is to obtain the first-order conditions for the household's problem, taking taxes and prices as given
Letting µ be the Lagrange multiplier on (9.53), the first-order conditions are p_t^0 = (b_t − c_t)/µ and ℓ_t = (b_t − c_t)(1 − τ_t)

Rearranging and normalizing at µ = b_0 − c_0, we can write these conditions as

p_t^0 = (b_t − c_t)/(b_0 − c_0)   and   τ_t = 1 − ℓ_t/(b_t − c_t)    (9.56)
Substituting (9.56) into the government's budget constraint (9.55) yields
E ∑_{t=0}^∞ β^t [ (b_t − c_t)(s_t + g_t − ℓ_t) + ℓ_t^2 ] = 0    (9.57)
The Ramsey problem now amounts to maximizing (9.52) subject to (9.57) and (9.54)
The associated Lagrangian is

L = E ∑_{t=0}^∞ β^t { −(1/2)[(c_t − b_t)^2 + ℓ_t^2] + λ[(b_t − c_t)(ℓ_t − s_t − g_t) − ℓ_t^2] + µ_t [d_t + ℓ_t − c_t − g_t] }    (9.58)

The first-order conditions associated with c_t and ℓ_t are, respectively,

(b_t − c_t) − λ(ℓ_t − s_t − g_t) = µ_t

and

ℓ_t − λ[(b_t − c_t) − 2ℓ_t] = µ_t
Combining these last two equalities with (9.54) and working through the algebra, one can show that

ℓ_t = ℓ̄_t − ν m_t   and   c_t = c̄_t − ν m_t    (9.59)

where
• ν := λ/(1 + 2λ)
• ℓ̄_t := (b_t − d_t + g_t)/2
• c̄_t := (b_t + d_t − g_t)/2
• m_t := (b_t − d_t − s_t)/2
Apart from ν, all of these quantities are expressed in terms of exogenous variables
To solve for ν, we can use the government's budget constraint again
The term inside the brackets in (9.57) is (b_t − c_t)(s_t + g_t) − (b_t − c_t)ℓ_t + ℓ_t^2

Using (9.59), the definitions above and the fact that ℓ̄ = b − c̄, this term can be rewritten as

(b_t − c̄_t)(g_t + s_t) + 2m_t^2 (ν^2 − ν)

Reinserting into (9.57), we get

E { ∑_{t=0}^∞ β^t (b_t − c̄_t)(g_t + s_t) } + (ν^2 − ν) E { ∑_{t=0}^∞ β^t 2m_t^2 } = 0    (9.60)

Defining

b0 := E { ∑_{t=0}^∞ β^t (b_t − c̄_t)(g_t + s_t) }   and   a0 := E { ∑_{t=0}^∞ β^t 2m_t^2 }    (9.61)

we can solve the quadratic equation

b0 + a0 (ν^2 − ν) = 0

for ν
Provided that 4b0 < a0 , there is a unique solution ν ∈ (0, 1/2), and a unique corresponding λ > 0
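Concretely, ν is the smaller root of the quadratic above. The following is a minimal sketch of this step; the function name `solve_nu` and the numerical values are ours, not part of the lecture code:

```python
import numpy as np

def solve_nu(a0, b0):
    # Solve b0 + a0 * (ν² − ν) = 0 for the root in (0, 1/2)
    disc = 1 - 4 * b0 / a0          # discriminant; requires 4 * b0 < a0
    if disc < 0:
        raise ValueError("need 4 * b0 < a0")
    return 0.5 * (1 - np.sqrt(disc))

ν = solve_nu(a0=1.0, b0=0.1875)     # equals 0.25 for these illustrative values
```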
Let's work out how to compute the mathematical expectations in (9.61)

For the first one, the random variable (b_t − c̄_t)(g_t + s_t) inside the summation can be expressed as

(1/2) x_t′ (S_b − S_d + S_g)′ (S_g + S_s) x_t

For the second expectation in (9.61), the random variable 2m_t^2 can be written as

(1/2) x_t′ (S_b − S_d − S_s)′ (S_b − S_d − S_s) x_t
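Both identities can be verified numerically. The matrices and state vector below are illustrative placeholders, not values from the lecture:

```python
import numpy as np

# Illustrative primitives (hypothetical values, k = 2)
Sb = np.array([[0.0, 2.0]])
Sd = np.array([[0.5, 0.0]])
Sg = np.array([[1.0, 0.0]])
Ss = np.array([[0.0, 0.0]])
x = np.array([0.3, 1.0])

b, d, g, s = [(M @ x).item() for M in (Sb, Sd, Sg, Ss)]
c_bar = (b + d - g) / 2
m = (b - d - s) / 2

# First identity: (b - c̄)(g + s) = ½ x'(Sb - Sd + Sg)'(Sg + Ss) x
lhs1 = (b - c_bar) * (g + s)
rhs1 = 0.5 * x @ (Sb - Sd + Sg).T @ (Sg + Ss) @ x

# Second identity: 2m² = ½ x'(Sb - Sd - Ss)'(Sb - Sd - Ss) x
lhs2 = 2 * m**2
rhs2 = 0.5 * x @ (Sb - Sd - Ss).T @ (Sb - Sd - Ss) @ x
```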
It follows that both objects of interest are special cases of the expression
q(x_0) = E ∑_{t=0}^∞ β^t x_t′ H x_t    (9.62)

where H is a conformable matrix. In the VAR case, this expression can be evaluated using the function var_quadratic_sum appearing in the code below.
Next suppose that {xt } is the discrete Markov process described above
Suppose further that each xt takes values in the state space {x1 , . . . , xN } ⊂ Rk
Let h : Rk → R be a given function, and suppose that we wish to evaluate
q(x_0) = E ∑_{t=0}^∞ β^t h(x_t)   given x_0 = x_j

Passing the expectation through the sum gives

q(x_0) = ∑_{t=0}^∞ β^t (P^t h)[j]    (9.63)
Here
• P t is the t-th power of the transition matrix P
• h is, with some abuse of notation, the vector (h(x1 ), . . . , h(xN ))
• (P t h)[j] indicates the j-th element of P t h
It can be shown that (9.63) is in fact equal to the j-th element of the vector (I − βP )−1 h
This last fact is applied in the calculations below
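A quick numerical check of this fact, using an arbitrary small chain (the values of β, P and h below are placeholders):

```python
import numpy as np

β = 0.95
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])              # a 2-state transition matrix
h = np.array([1.0, 2.0])                # (h(x1), h(x2))

# Closed form: (I - βP)^{-1} h
q = np.linalg.solve(np.eye(2) - β * P, h)

# Truncated series: ∑_t β^t P^t h
total, Pt_h = np.zeros(2), h.copy()
for t in range(2000):
    total += β**t * Pt_h
    Pt_h = P @ Pt_h                     # advances P^t h to P^{t+1} h

# q and total agree up to (tiny) truncation error
```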
Other Variables
We are interested in tracking several other variables besides the ones described above.
To prepare the way for this, we define
p_{t+j}^t = (b_{t+j} − c_{t+j}) / (b_t − c_t)

as the scaled Arrow-Debreu time t price of a history contingent claim on one unit of consumption at time t + j
These are the prices that would prevail at time t if markets were reopened at time t
These prices are constituents of the present value of government obligations outstanding at time t, which
can be expressed as
B_t := E_t ∑_{j=0}^∞ β^j p_{t+j}^t (τ_{t+j} ℓ_{t+j} − g_{t+j})    (9.64)
Using our expression for prices and the Ramsey plan, we can also write Bt as
B_t = E_t ∑_{j=0}^∞ β^j [ (b_{t+j} − c_{t+j})(ℓ_{t+j} − g_{t+j}) − ℓ_{t+j}^2 ] / (b_t − c_t)
and

R_t^{−1} = β E_t p_{t+1}^t    (9.65)

Define

π_{t+1} := B_{t+1} − R_t [B_t − (τ_t ℓ_t − g_t)]    (9.66)

A Martingale

Consider also the cumulation of π_t

Π_t := ∑_{s=0}^t π_s
Thus, πt+1 is the excess payout on the actual portfolio of state contingent government debt relative to an
alternative portfolio sufficient to finance Bt + gt − τt ℓt and consisting entirely of risk-free one-period bonds
Use expressions (9.65) and (9.66) to obtain

π_{t+1} = B_{t+1} − (1 / (β E_t p_{t+1}^t)) [ β E_t p_{t+1}^t B_{t+1} ]
or

π_{t+1} = B_{t+1} − Ẽ_t B_{t+1}

where Ẽ_t is the conditional mathematical expectation taken with respect to a one-step transition density that has been formed by multiplying the original transition density with the likelihood ratio

m_{t+1}^t = p_{t+1}^t / (E_t p_{t+1}^t)
It follows from the definition of Ẽ_t that Ẽ_t π_{t+1} = 0, which asserts that {π_{t+1}} is a martingale difference sequence under the distorted probability measure, and that {Π_t} is a martingale under the distorted probability measure
In the tax-smoothing model of Robert Barro [Bar79], government debt is a random walk
In the current model, government debt {Bt } is not a random walk, but the excess payoff {Πt } on it is
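The distorted-measure property is easy to verify numerically. The probabilities, prices, and next-period debt values below are arbitrary placeholders:

```python
import numpy as np

π = np.array([0.3, 0.7])     # transition probabilities from today's state
p = np.array([1.2, 0.9])     # one-step scaled Arrow prices p^t_{t+1}(s')
B1 = np.array([2.0, 1.5])    # next-period debt values B_{t+1}(s')

m = p / (π @ p)              # likelihood ratio m^t_{t+1}
E_tilde_B1 = π @ (m * B1)    # distorted expectation Ẽ_t B_{t+1}

# B_{t+1} - Ẽ_t B_{t+1} has mean zero under the distorted measure
excess = B1 - E_tilde_B1
distorted_mean = π @ (m * excess)
```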
9.3.3 Implementation
import sys
import numpy as np
from numpy import sqrt, eye, zeros, cumsum
from numpy.random import randn
import scipy.linalg
import matplotlib.pyplot as plt
from collections import namedtuple
from quantecon import nullspace, mc_sample_path, var_quadratic_sum
def compute_paths(T, econ):
    """
    Compute simulated time paths for the exogenous and endogenous variables.

    Parameters
    ===========
    T: int
        Length of the simulation

    econ: a namedtuple of type 'Economy', containing the model primitives
Returns
========
path: a namedtuple of type 'Path', containing
g - Govt spending
d - Endowment
b - Utility shift parameter
s - Coupon payment on existing debt
c - Consumption
l - Labor
p - Price
τ - Tax rate
rvn - Revenue
B - Govt debt
R - Risk free gross return
π - One-period risk-free interest rate
Π - Cumulative rate of return, adjusted
ξ - Adjustment factor for Π
"""
# == Simplify names == #
β, Sg, Sd, Sb, Ss = econ.β, econ.Sg, econ.Sd, econ.Sb, econ.Ss
if econ.discrete:
P, x_vals = econ.proc
else:
A, C = econ.proc
    # == Compute a0 and b0, the two expectations in (9.61) == #
    if econ.discrete:
        # Discrete case: F = (I - βP)^{-1} is applied to the vector H of
        # state-by-state values, as in (9.63); a0 is computed analogously
        b0 = 0.5 * (F @ H.T)[0]
        a0, b0 = float(a0), float(b0)
    else:
        # VAR case: evaluate (9.62) directly
        H = Sm.T @ Sm
        a0 = 0.5 * var_quadratic_sum(A, C, H, β, x0)
        H = (Sb - Sd + Sg).T @ (Sg + Ss)
        b0 = 0.5 * var_quadratic_sum(A, C, H, β, x0)
return path
def gen_fig_1(path):
"""
The parameter is the path namedtuple returned by compute_paths(). See
the docstring of that function for details.
"""
T = len(path.c)
# == Prepare axes == #
num_rows, num_cols = 2, 2
fig, axes = plt.subplots(num_rows, num_cols, figsize=(14, 10))
plt.subplots_adjust(hspace=0.4)
for i in range(num_rows):
for j in range(num_cols):
axes[i, j].grid()
axes[i, j].set_xlabel('Time')
bbox = (0., 1.02, 1., .102)
legend_args = {'bbox_to_anchor': bbox, 'loc': 3, 'mode': 'expand'}
p_args = {'lw': 2, 'alpha': 0.7}
plt.show()
def gen_fig_2(path):
"""
The parameter is the path namedtuple returned by compute_paths(). See
the docstring of that function for details.
"""
T = len(path.c)
# == Prepare axes == #
num_rows, num_cols = 2, 1
fig, axes = plt.subplots(num_rows, num_cols, figsize=(10, 10))
plt.subplots_adjust(hspace=0.5)
bbox = (0., 1.02, 1., .102)
bbox = (0., 1.02, 1., .102)
legend_args = {'bbox_to_anchor': bbox, 'loc': 3, 'mode': 'expand'}
p_args = {'lw': 2, 'alpha': 0.7}
plt.show()
The function var_quadratic_sum imported from quantecon is for computing the value of (9.62) when the exogenous process {xt } is of the VAR type described above
Below the definition of the function, you will see definitions of two namedtuple objects, Economy and
Path
The first is used to collect all the parameters and primitives of a given LQ economy, while the second collects
output of the computations
In Python, a namedtuple is a popular data type from the collections module of the standard library
that replicates the functionality of a tuple, but also allows you to assign a name to each tuple element
These elements can then be referenced via dotted attribute notation; see, for example, the use of path in the functions gen_fig_1() and gen_fig_2()
The benefits of using namedtuples:
• Keeps content organized by meaning
• Helps reduce the number of global variables
Other than that, our code is long but relatively straightforward
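As a minimal illustration of the idea (the `Point` type below is ours, not from the lecture code):

```python
from collections import namedtuple

# A small two-field record, analogous in spirit to Economy and Path
Point = namedtuple('Point', ['x', 'y'])
p = Point(x=1.0, y=2.0)

# Fields are accessed by name rather than by position
print(p.x + p.y)          # 3.0
```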
9.3.4 Examples
# == Parameters == #
β = 1 / 1.05
ρ, mg = .7, .35
A = eye(2)
A[0, :] = ρ, mg * (1-ρ)
C = np.zeros((2, 1))
C[0, 0] = np.sqrt(1 - ρ**2) * mg / 10
Sg = np.array((1, 0)).reshape(1, 2)
Sd = np.array((0, 0)).reshape(1, 2)
Sb = np.array((0, 2.135)).reshape(1, 2)
Ss = np.array((0, 0)).reshape(1, 2)

economy = Economy(β=β, Sg=Sg, Sd=Sd, Sb=Sb, Ss=Ss,
                  discrete=False, proc=(A, C))

T = 50
path = compute_paths(T, economy)
gen_fig_1(path)
gen_fig_2(path)
Our second example adopts a discrete Markov specification for the exogenous process
# == Parameters == #
β = 1 / 1.05
P = np.array([[0.8, 0.2, 0.0],
[0.0, 0.5, 0.5],
[0.0, 0.0, 1.0]])
Sg = np.array((1, 0, 0, 0, 0)).reshape(1, 5)
Sd = np.array((0, 1, 0, 0, 0)).reshape(1, 5)
Sb = np.array((0, 0, 1, 0, 0)).reshape(1, 5)
Ss = np.array((0, 0, 0, 1, 0)).reshape(1, 5)
T = 15
path = compute_paths(T, economy)
gen_fig_1(path)
gen_fig_2(path)
9.3.5 Exercises
Exercise 1
9.3.6 Solutions
Exercise 1
# == Parameters == #
β = 1 / 1.05
ρ, mg = .95, .35
A = np.array([[0, 0, 0, ρ, mg*(1-ρ)],
[1, 0, 0, 0, 0],
[0, 1, 0, 0, 0],
[0, 0, 1, 0, 0],
[0, 0, 0, 0, 1]])
C = np.zeros((5, 1))
C[0, 0] = np.sqrt(1 - ρ**2) * mg / 8
Sg = np.array((1, 0, 0, 0, 0)).reshape(1, 5)
Sd = np.array((0, 0, 0, 0, 0)).reshape(1, 5)
Sb = np.array((0, 0, 0, 0, 2.135)).reshape(1, 5)  # Chosen st. (Sc + Sg) @ x0 = 1
Ss = np.array((0, 0, 0, 0, 0)).reshape(1, 5)

economy = Economy(β=β, Sg=Sg, Sd=Sd, Sb=Sb, Ss=Ss,
                  discrete=False, proc=(A, C))

T = 50
path = compute_paths(T, economy)
gen_fig_1(path)
gen_fig_2(path)
9.4 Optimal Taxation with State-Contingent Debt
9.4.1 Overview
This lecture describes a celebrated model of optimal fiscal policy by Robert E. Lucas, Jr., and Nancy Stokey
[LS83]
The model revisits classic issues about how to pay for a war
Here a war means a more or less temporary surge in an exogenous government expenditure process
The model features
• a government that must finance an exogenous stream of government expenditures with either
– a flat rate tax on labor, or
– purchases and sales from a full array of Arrow state-contingent securities
• a representative household that values consumption and leisure
• a linear production function mapping labor into a single good
• a Ramsey planner who at time t = 0 chooses a plan for taxes and trades of Arrow securities for all
t≥0
After first presenting the model in a space of sequences, we shall represent it recursively in terms of two
Bellman equations formulated along lines that we encountered in Dynamic Stackelberg models
As in Dynamic Stackelberg models, to apply dynamic programming we shall define the state vector artfully
In particular, we shall include forward-looking variables that summarize optimal responses of private agents
to a Ramsey plan
See Optimal taxation for an analysis within a linear-quadratic setting
For t ≥ 0, a history st = [st , st−1 , . . . , s0 ] of an exogenous state st has joint probability density πt (st )
We begin by assuming that government purchases gt (st ) at time t ≥ 0 depend on st
Let ct (st ), ℓt (st ), and nt (st ) denote consumption, leisure, and labor supply, respectively, at history st and
date t
A representative household is endowed with one unit of time that can be divided between leisure ℓt and labor nt:

n_t(s^t) + ℓ_t(s^t) = 1    (9.68)

Output equals n_t(s^t) and can be divided between consumption c_t(s^t) and g_t(s^t):

c_t(s^t) + g_t(s^t) = n_t(s^t)    (9.69)
A representative household ranks consumption and leisure streams according to

∑_{t=0}^∞ ∑_{s^t} β^t π_t(s^t) u[c_t(s^t), ℓ_t(s^t)]    (9.70)

where the utility function u is increasing, strictly concave, and three times continuously differentiable in both arguments
The technology pins down a pre-tax wage rate to unity for all t, st
The government imposes a flat-rate tax τt (st ) on labor income at time t, history st
There are complete markets in one-period Arrow securities
One unit of an Arrow security issued at time t at history st and promising to pay one unit of time t + 1
consumption in state st+1 costs pt+1 (st+1 |st )
The government issues one-period Arrow securities each period
The government has a sequence of budget constraints whose time t ≥ 0 component is
g_t(s^t) = τ_t(s^t) n_t(s^t) + ∑_{s_{t+1}} p_{t+1}(s_{t+1}|s^t) b_{t+1}(s_{t+1}|s^t) − b_t(s_t|s^{t−1})    (9.71)
where
• pt+1 (st+1 |st ) is a competitive equilibrium price of one unit of consumption at date t + 1 in state st+1
at date t and history st
• bt (st |st−1 ) is government debt falling due at time t, history st .
Government debt b0 (s0 ) is an exogenous initial condition
The representative household has a sequence of budget constraints whose time t ≥ 0 component is
c_t(s^t) + ∑_{s_{t+1}} p_{t+1}(s_{t+1}|s^t) b_{t+1}(s_{t+1}|s^t) = [1 − τ_t(s^t)] n_t(s^t) + b_t(s_t|s^{t−1})   ∀t ≥ 0    (9.72)
A price system is a sequence of Arrow security prices {p_{t+1}(s_{t+1}|s^t)}_{t=0}^∞
The household faces the price system as a price-taker and takes the government policy as given
The household faces the price system as a price-taker and takes the government policy as given
The household chooses {c_t(s^t), ℓ_t(s^t)}_{t=0}^∞ to maximize (9.70) subject to (9.72) and (9.68) for all t, s^t
A competitive equilibrium with distorting taxes is a feasible allocation, a price system, and a government
policy such that
• Given the price system and the government policy, the allocation solves the household's optimization problem
• Given the allocation, government policy, and price system, the government's budget constraint is satisfied for all t, s^t
Note: There are many competitive equilibria with distorting taxes
They are indexed by different government policies
The Ramsey problem or optimal taxation problem is to choose a competitive equilibrium with distorting
taxes that maximizes (9.70)
We find it convenient sometimes to work with the Arrow-Debreu price system that is implied by a sequence
of Arrow securities prices
Let qt0 (st ) be the price at time 0, measured in time 0 consumption goods, of one unit of consumption at time
t, history st
The following recursion relates Arrow-Debreu prices {q_t^0(s^t)}_{t=0}^∞ to Arrow securities prices {p_{t+1}(s_{t+1}|s^t)}_{t=0}^∞

q_{t+1}^0(s^{t+1}) = p_{t+1}(s_{t+1}|s^t) q_t^0(s^t)   with   q_0^0(s^0) = 1    (9.73)
Arrow-Debreu prices are useful when we want to compress a sequence of budget constraints into a single
intertemporal budget constraint, as we shall find it convenient to do below
Primal Approach
We apply a popular approach to solving a Ramsey problem, called the primal approach
The idea is to use first-order conditions for household optimization to eliminate taxes and prices in favor of
quantities, then pose an optimization problem cast entirely in terms of quantities
After Ramsey quantities have been found, taxes and prices can then be unwound from the allocation
The primal approach uses four steps:

1. Obtain first-order conditions of the household's problem and solve them for {q_t^0(s^t), τ_t(s^t)}_{t=0}^∞ as functions of the allocation {c_t(s^t), n_t(s^t)}_{t=0}^∞

2. Substitute these expressions for taxes and prices in terms of the allocation into the household's present-value budget constraint

• This intertemporal constraint involves only the allocation and is regarded as an implementability constraint

3. Find the allocation that maximizes the utility of the representative household (9.70) subject to the feasibility constraints (9.68) and (9.69) and the implementability condition derived in step 2

4. Use the allocation from step 3 together with the formulas from step 1 to recover the associated taxes and prices
By sequential substitution of one one-period budget constraint (9.72) into another, we can obtain the household's present-value budget constraint:

∑_{t=0}^∞ ∑_{s^t} q_t^0(s^t) c_t(s^t) = ∑_{t=0}^∞ ∑_{s^t} q_t^0(s^t)[1 − τ_t(s^t)] n_t(s^t) + b_0    (9.74)

The household's first-order conditions imply

(1 − τ_t(s^t)) = u_ℓ(s^t) / u_c(s^t)    (9.75)
and
p_{t+1}(s_{t+1}|s^t) = β π(s_{t+1}|s^t) ( u_c(s^{t+1}) / u_c(s^t) )    (9.76)
where π(st+1 |st ) is the probability distribution of st+1 conditional on history st
Equation (9.76) implies that the Arrow-Debreu price system satisfies
q_t^0(s^t) = β^t π_t(s^t) u_c(s^t) / u_c(s^0)    (9.77)
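As a sanity check, prices built recursively from (9.73) using the Arrow prices (9.76) coincide with the direct formula (9.77). The marginal utilities and conditional probabilities below are made-up values along one history:

```python
import numpy as np

β = 0.96
uc = np.array([1.0, 0.8, 1.1, 0.9])      # uc(s^t) along a history, t = 0..3
π_cond = np.array([0.5, 0.4, 0.6])       # π(s_{t+1} | s^t)
π_hist = np.concatenate(([1.0], np.cumprod(π_cond)))   # πt(s^t)

# (9.77): Arrow-Debreu prices computed directly
q_direct = β**np.arange(4) * π_hist * uc / uc[0]

# (9.73): the same prices built recursively from Arrow prices (9.76)
q_rec = np.ones(4)
for t in range(3):
    p = β * π_cond[t] * uc[t + 1] / uc[t]
    q_rec[t + 1] = p * q_rec[t]
```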
Using the first-order conditions (9.75) and (9.76) to eliminate taxes and prices from (9.74), we derive the
implementability condition
∑_{t=0}^∞ ∑_{s^t} β^t π_t(s^t) [u_c(s^t) c_t(s^t) − u_ℓ(s^t) n_t(s^t)] − u_c(s^0) b_0 = 0.    (9.78)

The Ramsey problem is to choose an allocation that maximizes

∑_{t=0}^∞ ∑_{s^t} β^t π_t(s^t) u[c_t(s^t), 1 − n_t(s^t)]    (9.79)

subject to (9.78)
Solution Details
First define a pseudo utility function

V[c_t(s^t), n_t(s^t), Φ] = u[c_t(s^t), 1 − n_t(s^t)] + Φ [u_c(s^t) c_t(s^t) − u_ℓ(s^t) n_t(s^t)]    (9.80)

where Φ is a Lagrange multiplier on the implementability condition (9.78). Next form the Lagrangian

J = ∑_{t=0}^∞ ∑_{s^t} β^t π_t(s^t) { V[c_t(s^t), n_t(s^t), Φ] + θ_t(s^t) [n_t(s^t) − c_t(s^t) − g_t(s^t)] } − Φ u_c(0) b_0    (9.81)
where {θt (st ); ∀st }t≥0 is a sequence of Lagrange multipliers on the feasibility conditions (9.69)
Given an initial government debt b0 , we want to maximize J with respect to {ct (st ), nt (st ); ∀st }t≥0 and to
minimize with respect to {θ(st ); ∀st }t≥0
The first-order conditions for the Ramsey problem for periods t ≥ 1 and t = 0, respectively, are
c_t(s^t):  (1 + Φ) u_c(s^t) + Φ [u_cc(s^t) c_t(s^t) − u_ℓc(s^t) n_t(s^t)] − θ_t(s^t) = 0,  t ≥ 1
n_t(s^t):  −(1 + Φ) u_ℓ(s^t) − Φ [u_cℓ(s^t) c_t(s^t) − u_ℓℓ(s^t) n_t(s^t)] + θ_t(s^t) = 0,  t ≥ 1
    (9.82)

and

c_0(s^0, b_0):  (1 + Φ) u_c(s^0, b_0) + Φ [u_cc(s^0, b_0) c_0(s^0, b_0) − u_ℓc(s^0, b_0) n_0(s^0, b_0)] − θ_0(s^0, b_0) − Φ u_cc(s^0, b_0) b_0 = 0
n_0(s^0, b_0):  −(1 + Φ) u_ℓ(s^0, b_0) − Φ [u_cℓ(s^0, b_0) c_0(s^0, b_0) − u_ℓℓ(s^0, b_0) n_0(s^0, b_0)] + θ_0(s^0, b_0) + Φ u_cℓ(s^0, b_0) b_0 = 0
    (9.83)
Eliminating θ_t(s^t) from the two equations in (9.82) and using the feasibility condition n = c + g yields, for t ≥ 1,

(1 + Φ) u_c(c, 1 − c − g) + Φ [c u_cc(c, 1 − c − g) − (c + g) u_ℓc(c, 1 − c − g)]
  = (1 + Φ) u_ℓ(c, 1 − c − g) + Φ [c u_cℓ(c, 1 − c − g) − (c + g) u_ℓℓ(c, 1 − c − g)]    (9.84)

while eliminating θ_0(s^0, b_0) from the two equations in (9.83) yields

(1 + Φ) u_c(c, 1 − c − g) + Φ [c u_cc(c, 1 − c − g) − (c + g) u_ℓc(c, 1 − c − g)]
  = (1 + Φ) u_ℓ(c, 1 − c − g) + Φ [c u_cℓ(c, 1 − c − g) − (c + g) u_ℓℓ(c, 1 − c − g)] + Φ(u_cc − u_cℓ) b_0    (9.85)
Notice that a counterpart to b0 does not appear in (9.84), so c does not depend on it for t ≥ 1
But things are different for time t = 0
An analogous argument for the t = 0 equations (9.83) leads to one equation that can be solved for c0 as a
function of the pair (g(s0 ), b0 )
These outcomes mean that the following statement would be true even when government purchases are
history-dependent functions gt (st ) of the history of st
Proposition: If government purchases are equal after two histories s^t and s̃^τ for t, τ ≥ 0, i.e., if

g_t(s^t) = g_τ(s̃^τ) = g

then it follows from (9.84) that the Ramsey choices of consumption and leisure, (c_t(s^t), ℓ_t(s^t)) and (c_τ(s̃^τ), ℓ_τ(s̃^τ)), are identical
The proposition asserts that the optimal allocation is a function of the currently realized quantity of govern-
ment purchases g only and does not depend on the specific history that preceded that realization of g
Further Specialization

At this point, it is useful to specialize the model in the following ways. We assume that s is governed by a finite-state Markov chain with states s ∈ [1, . . . , S] and transition matrix Π

Also, assume that government purchases g are an exact time-invariant function g(s) of s
We maintain these assumptions throughout the remainder of this lecture
Determining Φ
We complete the Ramsey plan by computing the Lagrange multiplier Φ on the implementability constraint
(9.78)
Government budget balance restricts Φ via the following line of reasoning
The household's first-order conditions imply

(1 − τ_t(s^t)) = u_ℓ(s^t) / u_c(s^t)    (9.86)

and

p_{t+1}(s_{t+1}|s^t) = β Π(s_{t+1}|s^t) ( u_c(s^{t+1}) / u_c(s^t) )    (9.87)
Substituting from (9.86), (9.87), and the feasibility condition (9.69) into the recursive version (9.72) of the
household budget constraint gives
u_c(s^t)[n_t(s^t) − g_t(s^t)] + β ∑_{s_{t+1}} Π(s_{t+1}|s^t) u_c(s^{t+1}) b_{t+1}(s_{t+1}|s^t) = u_ℓ(s^t) n_t(s^t) + u_c(s^t) b_t(s_t|s^{t−1})    (9.88)

Define x_t(s^t) = u_c(s^t) b_t(s_t|s^{t−1}). Because the allocation depends only on the current Markov state for t ≥ 1, (9.88) can be written as

u_c(s)[n(s) − g(s)] + β ∑_{s′} Π(s′|s) x′(s′) = u_ℓ(s) n(s) + x(s)    (9.89)
where s′ denotes a next period value of s and x′ (s′ ) denotes a next period value of x
Equation (9.89) is easy to solve for x(s) for s = 1, . . . , S
If we let ⃗n, ⃗g , ⃗x denote S × 1 vectors whose ith elements are the respective n, g, and x values when s = i,
and let Π be the transition matrix for the Markov state s, then we can express (9.89) as the matrix equation

⃗u_c(⃗n − ⃗g) + βΠ⃗x = ⃗u_ℓ⃗n + ⃗x    (9.90)

whose solution is

⃗x = (I − βΠ)^{−1} [⃗u_c(⃗n − ⃗g) − ⃗u_ℓ⃗n]    (9.91)

In these equations, by ⃗u_c⃗n, for example, we mean element-by-element multiplication of the two vectors.

After solving for ⃗x, we can find b(s_t|s^{t−1}) in Markov state s_t = s from b(s) = x(s)/u_c(s), or the matrix equation

⃗b = ⃗x / ⃗u_c    (9.92)

where division here means element-by-element division of the respective components of the S × 1 vectors ⃗x and ⃗u_c
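These formulas are easy to check numerically. The vectors below are arbitrary placeholders, not calibrated values:

```python
import numpy as np

β = 0.9
Π = np.array([[0.8, 0.2],
              [0.3, 0.7]])
# Placeholder state-by-state values for n, g and the marginal utilities
n = np.array([0.6, 0.7])
g = np.array([0.1, 0.2])
uc = np.array([1.2, 1.0])
ul = np.array([0.5, 0.6])

# (9.91): x = (I - βΠ)^{-1} [uc(n - g) - ul n]
x = np.linalg.solve(np.eye(2) - β * Π, uc * (n - g) - ul * n)

# (9.92): b = x / uc, element by element
b = x / uc
```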
Here is a computational algorithm:
1. Start with a guess for the value for Φ, then use the first-order conditions and the feasibility conditions
to compute c(st ), n(st ) for s ∈ [1, . . . , S] and c0 (s0 , b0 ) and n0 (s0 , b0 ), given Φ
• these are 2(S + 1) equations in 2(S + 1) unknowns
2. Solve the S equations (9.91) for the S elements of ⃗x
• these depend on Φ
3. Find a Φ that satisfies

u_{c,0} b_0 = u_{c,0}(n_0 − g_0) − u_{ℓ,0} n_0 + β ∑_{s=1}^S Π(s|s_0) x(s)    (9.93)
by gradually raising Φ if the left side of (9.93) exceeds the right side and lowering Φ if the left side is
less than the right side
4. After computing a Ramsey allocation, recover the flat tax rate on labor from (9.75) and the implied
one-period Arrow securities prices from (9.76)
In summary, when gt is a time invariant function of a Markov state st , a Ramsey plan can be constructed by
solving 3S + 3 equations in S components each of ⃗c, ⃗n, and ⃗x together with n0 , c0 , and Φ
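Step 3 of the algorithm is a one-dimensional search. A minimal bisection sketch, where `budget_residual` stands in for the left side of (9.93) minus the right side (which the surrounding code would recompute at each Φ):

```python
def find_Φ(budget_residual, Φ_lo=0.0, Φ_hi=1.0, tol=1e-10):
    """Bisect on Φ, raising it when the residual is positive (left side
    of (9.93) exceeds the right side) and lowering it otherwise.
    Assumes the residual is continuous and changes sign on [Φ_lo, Φ_hi]."""
    while Φ_hi - Φ_lo > tol:
        Φ = 0.5 * (Φ_lo + Φ_hi)
        if budget_residual(Φ) > 0:
            Φ_lo = Φ
        else:
            Φ_hi = Φ
    return 0.5 * (Φ_lo + Φ_hi)

# Toy residual with root at Φ = 0.3, for illustration only
Φ_star = find_Φ(lambda Φ: 0.3 - Φ)
```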
Time Inconsistency
A time t, history st Ramsey plan is a Ramsey plan that starts from initial conditions st , bt (st |st−1 )
A time t, history st continuation of a time 0, state 0 Ramsey plan is not a time t, history st Ramsey plan
This means that a Ramsey plan is not time consistent
Another way to say the same thing is that a Ramsey plan is time inconsistent
The reason is that a continuation Ramsey plan takes uct bt (st |st−1 ) as given, not bt (st |st−1 )
We shall discuss this more below
In our calculations below and in a subsequent lecture based on an extension of the Lucas-Stokey model by
Aiyagari, Marcet, Sargent, and Seppälä (2002) [AMSS02], we shall modify the one-period utility function
assumed above.
(We adopted the preceding utility specification because it was the one used in the original [LS83] paper)
We will modify their specification by instead assuming that the representative agent has utility function
c1−σ n1+γ
u(c, n) = −
1−σ 1+γ
where σ > 0, γ > 0
We continue to assume that
ct + gt = nt
With these understandings, equations (9.84) and (9.85) simplify in the case of the CRRA utility function.
They become

(1 + Φ)[u_c(c) + u_n(c + g)] + Φ[c u_cc(c) + (c + g) u_nn(c + g)] = 0    (9.94)

and

(1 + Φ)[u_c(c_0) + u_n(c_0 + g_0)] + Φ[c_0 u_cc(c_0) + (c_0 + g_0) u_nn(c_0 + g_0)] − Φ u_cc(c_0) b_0 = 0    (9.95)
In equation (9.94), it is understood that c and g are each functions of the Markov state s.
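Given Φ and g, the time t ≥ 1 condition (9.94) can be solved for c with a one-dimensional root finder. A sketch under the CRRA specification above; the parameter values g = 0.1 and Φ = 0.05 are illustrative:

```python
import numpy as np
from scipy.optimize import brentq

σ, γ = 2.0, 2.0

def foc_residual(c, g, Φ):
    # (9.94) with u(c, n) = c**(1-σ)/(1-σ) - n**(1+γ)/(1+γ) and n = c + g
    n = c + g
    uc, un = c**(-σ), -n**γ
    c_ucc = -σ * c**(-σ)            # c * u_cc(c)
    n_unn = -γ * n**γ               # (c + g) * u_nn(c + g)
    return (1 + Φ) * (uc + un) + Φ * (c_ucc + n_unn)

# Consumption in a state with g = 0.1, given Φ = 0.05
c = brentq(foc_residual, 1e-6, 10.0, args=(0.1, 0.05))
```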
In addition, the time t = 0 budget constraint is satisfied at c0 and initial government debt b0 :
b_0 + g_0 = τ_0(c_0 + g_0) + b̄ / R_0    (9.96)
where R0 is the gross interest rate for the Markov state s0 that is assumed to prevail at time t = 0 and τ0 is
the time t = 0 tax rate.
In equation (9.96), it is understood that

τ_0 = 1 − u_{ℓ,0} / u_{c,0}

and

R_0^{−1} = β ∑_{s=1}^S Π(s|s_0) ( u_c(s) / u_{c,0} )
Sequence Implementation
import numpy as np
from scipy.optimize import root
from quantecon import MarkovChain
class SequentialAllocation:
'''
Class that takes CESutility or BGPutility object as input returns
planner's allocation as a function of the multiplier on the
implementability constraint µ.
'''
def find_first_best(self):
'''
Find the first best allocation
'''
model = self.model
S, Θ, G = self.S, self.Θ, self.G
Uc, Un = model.Uc, model.Un
def res(z):
c = z[:S]
n = z[S:]
return np.hstack([Θ * Uc(c, n) + Un(c, n), Θ * n - c - G])
res = root(res, 0.5 * np.ones(2 * S))
if not res.success:
raise Exception('Could not find first best')
self.cFB = res.x[:S]
self.nFB = res.x[S:]
def FOC(z):
c = z[:S]
n = z[S:2 * S]
Ξ = z[2 * S:]
return np.hstack([Uc(c, n) - µ * (Ucc(c, n) * c + Uc(c, n)) - Ξ,  # FOC of c
Un(c, n) - µ * (Unn(c, n) * n + Un(c, n)) + \
Θ * Ξ, # FOC of n
Θ * n - c - G])
# Find the root of the first-order conditions
res = root(FOC, self.zFB)
if not res.success:
    raise Exception('Could not find LS allocation.')
z = res.x
c, n, Ξ = z[:S], z[S:2 * S], z[2 * S:]
# Compute x
I = Uc(c, n) * c + Un(c, n) * n
x = np.linalg.solve(np.eye(S) - self.β * self.π, I)
return c, n, x, Ξ
def time0_allocation(self, B_, s_0):
    '''
    Finds the optimal allocation given initial government debt B_ and
    state s_0
    '''
    model, π, Θ, G, β = self.model, self.π, self.Θ, self.G, self.β
    Uc, Ucc, Un, Unn = model.Uc, model.Ucc, model.Un, model.Unn

    # First-order conditions of the planner's time 0 problem
    def FOC(z):
        µ, c, n, Ξ = z
        xprime = self.time1_allocation(µ)[2]
        return np.hstack([Uc(c, n) * (c - B_) + Un(c, n) * n + β * π[s_0] @ xprime,
Uc(c, n) - µ * (Ucc(c, n) *
(c - B_) + Uc(c, n)) - Ξ,
Un(c, n) - µ * (Unn(c, n) * n +
Un(c, n)) + Θ[s_0] * Ξ,
(Θ * n - c - G)[s_0]])
# Find root
res = root(FOC, np.array(
[0, self.cFB[s_0], self.nFB[s_0], self.ΞFB[s_0]]))
if not res.success:
raise Exception('Could not find time 0 LS allocation.')
return res.x
if sHist is None:
sHist = self.mc.simulate(T, s_0)
# Time 0
µ, cHist[0], nHist[0], _ = self.time0_allocation(B_, s_0)
THist[0] = self.T(cHist[0], nHist[0])[s_0]
Bhist[0] = B_
µHist[0] = µ
# Time 1 onward
for t in range(1, T):
c, n, x, Ξ = self.time1_allocation(µ)
T = self.T(c, n)
u_c = Uc(c, n)
s = sHist[t]
Eu_c = π[sHist[t - 1]] @ u_c
cHist[t], nHist[t], Bhist[t], THist[t] = c[s], n[s], x[s] / \
u_c[s], T[s]
RHist[t - 1] = Uc(cHist[t - 1], nHist[t - 1]) / (β * Eu_c)
µHist[t] = µ
Recursive Formulation of the Ramsey Problem

x_t(s^t) = u_c(s^t) b_t(s_t|s^{t−1}) in equation (9.88) appears to be a purely forward-looking variable
But x_t(s^t) is also a natural candidate for a state variable in a recursive formulation of the Ramsey problem
Intertemporal Delegation
To express a Ramsey plan recursively, we imagine that a time 0 Ramsey planner is followed by a sequence
of continuation Ramsey planners at times t = 1, 2, . . .
A continuation Ramsey planner has a different objective function and faces different constraints than a
Ramsey planner
A key step in representing a Ramsey plan recursively is to regard the marginal utility scaled government debts x_t(s^t) = u_c(s^t) b_t(s_t|s^{t−1}) as predetermined quantities that continuation Ramsey planners at times t ≥ 1 are obligated to attain
After st has been realized at time t ≥ 1, the state variables confronting the time t continuation Ramsey
planner are (xt , st )
• Let V (x, s) be the value of a continuation Ramsey plan at xt = x, st = s for t ≥ 1
• Let W (b, s) be the value of a Ramsey plan at time 0 at b0 = b and s0 = s
We work backwards by presenting a Bellman equation for V (x, s) first, then a Bellman equation for W (b, s)
V(x, s) = max_{n, {x′(s′)}} { u(n − g(s), 1 − n) + β ∑_{s′∈S} Π(s′|s) V(x′, s′) }    (9.97)
where maximization over n and the S elements of x′ (s′ ) is subject to the single implementability constraint
for t ≥ 1
x = u_c(n − g(s)) − u_ℓ n + β ∑_{s′∈S} Π(s′|s) x′(s′)    (9.98)
Associated with the value function V(x, s) are S + 1 time-invariant policy functions

n_t = f(x_t, s_t),  t ≥ 1
x_{t+1}(s_{t+1}) = h(s_{t+1}; x_t, s_t),  s_{t+1} ∈ S,  t ≥ 1
    (9.99)
The value of the Ramsey plan at time 0 satisfies

W(b_0, s_0) = max_{n_0, {x′(s_1)}} { u(n_0 − g_0, 1 − n_0) + β ∑_{s_1∈S} Π(s_1|s_0) V(x′(s_1), s_1) }    (9.100)
where maximization over n0 and the S elements of x′ (s1 ) is subject to the time 0 implementability constraint
u_{c,0} b_0 = u_{c,0}(n_0 − g_0) − u_{ℓ,0} n_0 + β ∑_{s_1∈S} Π(s_1|s_0) x′(s_1)    (9.101)
Associated with the value function W(b_0, s_0) are time 0 policy functions

n_0 = f_0(b_0, s_0)
x_1(s_1) = h_0(s_1; b_0, s_0)
    (9.102)
Notice the appearance of state variables (b0 , s0 ) in the time 0 policy functions for the Ramsey planner as
compared to (xt , st ) in the policy functions (9.99) for the time t ≥ 1 continuation Ramsey planners
The value function V(x_t, s_t) of the time t continuation Ramsey planner equals E_t ∑_{τ=t}^∞ β^{τ−t} u(c_τ, ℓ_τ), where the consumption and leisure processes are evaluated along the original time 0 Ramsey plan
First-Order Conditions
Attach a Lagrange multiplier Φ1 (x, s) to constraint (9.98) and a Lagrange multiplier Φ0 to constraint (9.101)
Time t ≥ 1: the first-order conditions for the time t ≥ 1 constrained maximization problem on the right side of the continuation Ramsey planner's Bellman equation (9.97) are

β Π(s′|s) V_x(x′, s′) − β Π(s′|s) Φ_1 = 0    (9.103)

for x′(s′) and

(1 + Φ_1)(u_c − u_ℓ) + Φ_1 [n(u_ℓℓ − u_ℓc) + (n − g(s))(u_cc − u_cℓ)] = 0    (9.104)

for n

Given Φ_1, equation (9.104) is one equation to be solved for n as a function of s (or of g(s))

Equation (9.103) implies V_x(x′, s′) = Φ_1, while an envelope condition is V_x(x, s) = Φ_1, so it follows that

V_x(x′, s′) = V_x(x, s) = Φ_1    (9.105)
Time t = 0: For the time 0 problem on the right side of the Ramsey planners Bellman equation (9.100),
first-order conditions are
V_x(x(s_1), s_1) = Φ_0    (9.106)

and

(1 + Φ_0)(u_{c,0} − u_{ℓ,0}) + Φ_0 [n_0(u_{ℓℓ,0} − u_{ℓc,0}) + (n_0 − g(s_0))(u_{cc,0} − u_{cℓ,0})] − Φ_0(u_{cc,0} − u_{cℓ,0}) b_0 = 0    (9.107)
Notice similarities and differences between the first-order conditions for t ≥ 1 and for t = 0
An additional term is present in (9.107) except in three special cases
• b0 = 0, or
• uc is constant (i.e., preferences are quasi-linear in consumption), or
• initial government assets are sufficiently large to finance all government purchases with interest earn-
ings from those assets, so that Φ0 = 0
Except in these special cases, the allocation and the labor tax rate as functions of st differ between dates
t = 0 and subsequent dates t ≥ 1
Naturally, the first-order conditions in this recursive formulation of the Ramsey problem agree with the first-order conditions derived when we first formulated the Ramsey plan in the space of sequences

In particular, combining (9.103) and (9.106) gives

V_x(x_t, s_t) = Φ_0    (9.108)

for all t ≥ 1
When V is concave in x, this implies state-variable degeneracy along a Ramsey plan in the sense that for
t ≥ 1, xt will be a time-invariant function of st
Given Φ0 , this function mapping st into xt can be expressed as a vector ⃗x that solves equation (9.101) for n
and c as functions of g that are associated with Φ = Φ0
While the marginal utility adjusted level of government debt xt is a key state variable for the continuation
Ramsey planners at t ≥ 1, it is not a state variable at time 0
The time 0 Ramsey planner faces b0 , not x0 = uc,0 b0 , as a state variable
The discrepancy in state variables faced by the time 0 Ramsey planner and the time t ≥ 1 continuation
Ramsey planners captures the differing obligations and incentives faced by the time 0 Ramsey planner and
the time t ≥ 1 continuation Ramsey planners
• The time 0 Ramsey planner is obligated to honor government debt b0 measured in time 0 consumption
goods
• The time 0 Ramsey planner can manipulate the value of government debt as measured by uc,0 b0
• In contrast, time t ≥ 1 continuation Ramsey planners are obligated not to alter values of debt, as
measured by uc,t bt , that they inherit from a preceding Ramsey planner or continuation Ramsey planner
When government expenditures gt are a time invariant function of a Markov state st , a Ramsey plan and
associated Ramsey allocation feature marginal utilities of consumption uc (st ) that, given Φ, for t ≥ 1
depend only on st , but that for t = 0 depend on b0 as well
This means that uc (st ) will be a time invariant function of st for t ≥ 1, but except when b0 = 0, a different
function for t = 0
This in turn means that prices of one period Arrow securities pt+1 (st+1 |st ) = p(st+1 |st ) will be the same
time invariant functions of (st+1 , st ) for t ≥ 1, but a different function p0 (s1 |s0 ) for t = 0, except when
b0 = 0
The differences between these time 0 and time t ≥ 1 objects reflect the Ramsey planner's incentive to manipulate Arrow security prices and, through them, the value of initial government debt b0
Recursive Implementation
class RecursiveAllocation:
'''
Compute the planner's allocation by solving Bellman
equation.
'''
def solve_time1_bellman(self):
'''
Solve the time 1 Bellman equation for calibration model and
initial grid µgrid0
'''
model, µgrid0 = self.model, self.µgrid
S = len(model.π)
# Create xgrid
xbar = [x.min(0).max(), x.max(0).min()]
xgrid = np.linspace(xbar[0], xbar[1], len(µgrid0))
self.xgrid = xgrid
def time0_allocation(self, B_, s0):
    '''
    Find the optimal allocation given initial government debt B_ and
    state s0
    '''
PF = self.T(self.Vf)
z0 = PF(B_, s0)
c0, n0, xprime0 = z0[1], z0[2], z0[3:]
return c0, n0, xprime0
if sHist is None:
sHist = self.mc.simulate(T, s_0)
# Time 0
cHist[0], nHist[0], xprime = self.time0_allocation(B_, s_0)
THist[0] = self.T(cHist[0], nHist[0])[s_0]
Bhist[0] = B_
µHist[0] = 0
# Time 1 onward
for t in range(1, T):
s, x = sHist[t], xprime[sHist[t]]
c, n, xprime = np.empty(self.S), nf[s](x), np.empty(self.S)
for shat in range(self.S):
c[shat] = cf[shat](x)
for sprime in range(self.S):
xprime[sprime] = xprimef[s, sprime](x)
T = self.T(c, n)[s]
u_c = Uc(c, n)
Eu_c = π[sHist[t - 1]] @ u_c
µHist[t] = self.Vf[s](x, 1)
class BellmanEquation:
'''
Bellman equation for the continuation of the Lucas-Stokey Problem
'''
self.z0 = {}
cf, nf, xprimef = policies0
for s in range(self.S):
for x in xgrid:
xprime0 = np.empty(self.S)
for sprime in range(self.S):
xprime0[sprime] = xprimef[s, sprime](x)
self.find_first_best()
def find_first_best(self):
'''
Find the first best allocation
'''
model = self.model
S, Θ, Uc, Un, G = self.S, self.Θ, model.Uc, model.Un, self.G
def res(z):
c = z[:S]
n = z[S:]
return np.hstack([Θ * Uc(c, n) + Un(c, n), Θ * n - c - G])
res = root(res, 0.5 * np.ones(2 * S))
if not res.success:
    raise Exception('Could not find first best')
self.cFB = res.x[:S]
self.nFB = res.x[S:]
IFB = Uc(self.cFB, self.nFB) * self.cFB + \
    Un(self.cFB, self.nFB) * self.nFB
self.xFB = np.linalg.solve(np.eye(S) - self.β * self.π, IFB)
self.zFB = {}
for s in range(S):
self.zFB[s] = np.hstack([self.cFB[s], self.nFB[s], self.xFB])
def objf(z):
c, n, xprime = z[0], z[1], z[2:]
Vprime = np.empty(S)
for sprime in range(S):
Vprime[sprime] = Vf[sprime](xprime[sprime])
return -(U(c, n) + β * π[s] @ Vprime)  # maximize by minimizing the negative
def cons(z):
c, n, xprime = z[0], z[1], z[2:]
return np.hstack([x - Uc(c, n) * c - Un(c, n) * n - β * π[s] @ xprime,
(Θ * n - c - G)[s]])
if imode > 0:
raise Exception(smode)
self.z0[x, s] = out
return np.hstack([-fx, out])
def objf(z):
c, n, xprime = z[0], z[1], z[2:]
Vprime = np.empty(S)
for sprime in range(S):
Vprime[sprime] = Vf[sprime](xprime[sprime])
def cons(z):
c, n, xprime = z[0], z[1], z[2:]
return np.hstack([-Uc(c, n) * (c - B_) - Un(c, n) * n - β * π[s0] @ xprime,
(Θ * n - c - G)[s0]])
if imode > 0:
raise Exception(smode)
9.4.4 Examples
This example illustrates in a simple setting how a Ramsey planner manages risk
Government expenditures are known for sure in all periods except one
• For t < 3 or t > 3 we assume that gt = gl = 0.1
• At t = 3 a war occurs with probability 0.5.
– If there is war, g3 = gh = 0.2
– If there is no war g3 = gl = 0.1
We define the components of the state vector as the following six (t, g) pairs:
(0, gl ), (1, gl ), (2, gl ), (3, gl ), (3, gh ), (t ≥ 4, gl ).
We think of these 6 states as corresponding to s = 1, 2, 3, 4, 5, 6
The transition matrix is
$$
\Pi = \begin{pmatrix}
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0.5 & 0.5 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}
$$
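As a check on this construction, the chain can be written down directly in NumPy and iterated forward; the state ordering and expenditure vector below follow the six (t, g) pairs listed above, while the variable names themselves are our own:

```python
import numpy as np

# Six states: (0,gl), (1,gl), (2,gl), (3,gl), (3,gh), (t>=4,gl)
Π = np.array([[0, 1, 0, 0,   0,   0],
              [0, 0, 1, 0,   0,   0],
              [0, 0, 0, 0.5, 0.5, 0],
              [0, 0, 0, 0,   0,   1],
              [0, 0, 0, 0,   0,   1],
              [0, 0, 0, 0,   0,   1]])

g = np.array([0.1, 0.1, 0.1, 0.1, 0.2, 0.1])   # expenditure in each state

# Every row of a transition matrix sums to one
assert np.allclose(Π.sum(axis=1), 1.0)

# Starting from state 0, war (state 4) is reached at t = 3 with probability 0.5
probs = np.linalg.matrix_power(Π, 3)[0]
assert np.allclose(probs[[3, 4]], 0.5)
```

States 3 through 5 all move to the absorbing state 5 (zero-indexed), reflecting that gt = gl for all t ≥ 4 regardless of whether war occurred.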
$$
u(c, n) = \frac{c^{1-\sigma}}{1-\sigma} - \frac{n^{1+\gamma}}{1+\gamma}
$$
import numpy as np

class CRRAutility:

def __init__(self,
β=0.9,
σ=2,
γ=2,
π=0.5*np.ones((2, 2)),
G=np.array([0.1, 0.2]),
Θ=np.ones(2),
transfers=False):
self.β, self.σ, self.γ = β, σ, γ
self.π, self.G, self.Θ, self.transfers = π, G, Θ, transfers

# Utility function
def U(self, c, n):
σ = self.σ
if σ == 1.:
U = np.log(c)
else:
U = (c**(1 - σ) - 1) / (1 - σ)
return U - n**(1 + self.γ) / (1 + self.γ)
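A small sanity check on the `U` method: the normalized CRRA branch (c**(1 - σ) - 1)/(1 - σ) approaches the log branch as σ → 1, which is why the two cases can live in one function. A standalone sketch of the same formula (not the class itself):

```python
import numpy as np

def u(c, n, σ=2, γ=2):
    # Same one-period utility as CRRAutility.U above
    if σ == 1.:
        cu = np.log(c)
    else:
        cu = (c**(1 - σ) - 1) / (1 - σ)
    return cu - n**(1 + γ) / (1 + γ)

c, n = 0.8, 0.6
# The general branch converges to the log branch as σ -> 1
assert abs(u(c, n, σ=1 + 1e-8) - u(c, n, σ=1.)) < 1e-6
```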
# Output paths
sim_seq_l[5] = time_example.Θ[sHist_l] * sim_seq_l[1]
sim_seq_h[5] = time_example.Θ[sHist_h] * sim_seq_h[1]
plt.tight_layout()
plt.show()
Tax smoothing
• the tax rate is constant for all t ≥ 1
– For t ≥ 1, t ≠ 3, this is a consequence of gt being the same at all those dates
– For t = 3, it is a consequence of the special one-period utility function that we have assumed
– Under other one-period utility functions, the time t = 3 tax rate could be either higher or lower than for dates t ≥ 1, t ≠ 3
• the tax rate is the same at t = 3 for both the high gt outcome and the low gt outcome
We have assumed that at t = 0, the government owes positive debt b0
It sets the time t = 0 tax rate partly with an eye to reducing the value uc,0 b0 of b0
It does this by increasing consumption at time t = 0 relative to consumption in later periods
This has the consequence of lowering the time t = 0 value of the gross interest rate for risk-free loans between periods t and t + 1, which equals

$$
R_t = \frac{u_{c,t}}{\beta E_t [u_{c,t+1}]}
$$

A tax policy that makes time t = 0 consumption higher than time t = 1 consumption evidently decreases the risk-free one-period interest rate, Rt, at t = 0
Lowering the time t = 0 risk-free interest rate makes time t = 0 consumption goods cheaper relative to consumption goods at later dates, thereby lowering the value uc,0 b0 of initial government debt b0
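With CRRA preferences, u_c(c) = c**(-σ), so Rt can be evaluated directly from a consumption plan. A minimal sketch, with purely illustrative parameter values and consumption numbers, confirming that higher time-t consumption lowers Rt:

```python
import numpy as np

β, σ = 0.9, 2                     # illustrative discount factor and CRRA coefficient

def gross_rate(c_t, c_next, π_next):
    """R_t = u_c(c_t) / (β E_t[u_c(c_{t+1})]) with u_c(c) = c**(-σ)."""
    uc = lambda c: np.asarray(c)**(-σ)
    return uc(c_t) / (β * π_next @ uc(c_next))

π_next = np.array([0.5, 0.5])     # probabilities over next-period states
c_next = np.array([0.30, 0.32])   # next-period consumption in each state

# Raising time-t consumption lowers u_c(c_t) and hence lowers R_t
assert gross_rate(0.34, c_next, π_next) < gross_rate(0.30, c_next, π_next)
```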
We see this in a figure below that plots the time path for the risk-free interest rate under both realizations of the time t = 3 government expenditure shock
The following plot illustrates how the government lowers the interest rate at time 0 by raising consumption
plt.figure(figsize=(8, 5))
plt.title('Gross Interest Rate')
plt.plot(sim_seq_l[-1], '-ok', sim_seq_h[-1], '-or', alpha=0.7)
plt.grid()
plt.show()
Government Saving
We have seen that when b0 > 0, the Ramsey plan sets the time t = 0 tax rate partly with an eye toward
raising a risk-free interest rate for one-period loans between times t = 0 and t = 1
By raising this interest rate, the plan makes time t = 0 goods cheap relative to consumption goods at later
times
By doing this, it lowers the value of time t = 0 debt that it has inherited and must finance
In the preceding example, the Ramsey tax rate at time 0 differs from its value at time 1
To explore what is going on here, let's simplify things by removing the possibility of war at time t = 3
The Ramsey problem then includes no randomness because gt = gl for all t
The figure below plots the Ramsey tax rates and gross interest rates at time t = 0 and time t ≥ 1 as functions
of the initial government debt (using the sequential allocation solution and a CRRA utility function defined
above)
tax_sequence = SequentialAllocation(CRRAutility(G=0.15,
π=np.ones((1, 1)),
Θ=np.ones(1)))
n = 100
tax_policy = np.empty((n, 2))
interest_rate = np.empty((n, 2))
gov_debt = np.linspace(-1.5, 1, n)
for i in range(n):
tax_policy[i] = tax_sequence.simulate(gov_debt[i], 0, 2)[3]
interest_rate[i] = tax_sequence.simulate(gov_debt[i], 0, 3)[-1]
fig.tight_layout()
plt.show()
The figure indicates that if the government enters with positive debt, it sets a tax rate at t = 0 that is less
than all later tax rates
By setting a lower tax rate at t = 0, the government raises consumption, which reduces the value uc,0 b0 of
its initial debt
It does this by increasing c0 and thereby lowering uc,0
Conversely, if b0 < 0, the Ramsey planner sets the tax rate at t = 0 higher than in subsequent periods.
A side effect of lowering time t = 0 consumption is that it raises the one-period interest rate at time 0 above
that of subsequent periods.
There are only two values of initial government debt at which the tax rate is constant for all t ≥ 0
The first is b0 = 0
• Here the government can't use the t = 0 tax rate to alter the value of the initial debt
The second occurs when the government enters with sufficiently large assets that the Ramsey planner can
achieve first best and sets τt = 0 for all t
It is only for these two values of initial government debt that the Ramsey plan is time-consistent
Another way of saying this is that, except for these two values of initial government debt, a continuation of
a Ramsey plan is not a Ramsey plan
To illustrate this, consider a Ramsey planner who starts with an initial government debt b1 associated with
one of the Ramsey plans computed above
Call τ1R the time t = 0 tax rate chosen by the Ramsey planner confronting this value for initial government debt
The figure below shows both the tax rate at time 1 chosen by our original Ramsey planner and what a new
Ramsey planner would choose for its time t = 0 tax rate
tax_sequence = SequentialAllocation(CRRAutility(G=0.15,
π=np.ones((1, 1)),
Θ=np.ones(1)))
n = 100
tax_policy = np.empty((n, 2))
τ_reset = np.empty((n, 2))
gov_debt = np.linspace(-1.5, 1, n)
for i in range(n):
tax_policy[i] = tax_sequence.simulate(gov_debt[i], 0, 2)[3]
τ_reset[i] = tax_sequence.simulate(gov_debt[i], 0, 1)[3]
fig.tight_layout()
plt.show()
The tax rates in the figure are equal for only two values of initial government debt
The complete tax smoothing for t ≥ 1 in the preceding example is a consequence of our having assumed
CRRA preferences
To see what is driving this outcome, we begin by noting that the Ramsey tax rate for t ≥ 1 is a time invariant
function τ (Φ, g) of the Lagrange multiplier on the implementability constraint and government expenditures
For CRRA preferences, we can exploit the relations Ucc c = −σUc and Unn n = γUn to derive
$$
\frac{\bigl(1 + (1-\sigma)\Phi\bigr) U_c}{\bigl(1 + (1-\gamma)\Phi\bigr) U_n} = 1
$$
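Both homogeneity relations are straightforward to confirm numerically for the CRRA utility u(c, n) = c^(1−σ)/(1−σ) − n^(1+γ)/(1+γ); the parameter values and evaluation points below are illustrative:

```python
import numpy as np

σ, γ = 2, 2
c, n = 0.8, 0.6

Uc, Ucc = c**(-σ), -σ * c**(-σ - 1)    # u_c and its derivative
Un, Unn = -n**γ, -γ * n**(γ - 1)       # u_n and its derivative (disutility of labor)

# The relations used above to derive the time-invariant tax rate
assert np.isclose(Ucc * c, -σ * Uc)
assert np.isclose(Unn * n, γ * Un)
```

Because both ratios Ucc·c/Uc and Unn·n/Un are constants, the first-order conditions pin down a tax rate that depends only on Φ and g, not on the history of shocks.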
class LogUtility:

def __init__(self,
β=0.9,
ψ=0.69,
π=0.5*np.ones((2, 2)),
G=np.array([0.1, 0.2]),
Θ=np.ones(2),
transfers=False):
self.β, self.ψ = β, ψ
self.π, self.G, self.Θ, self.transfers = π, G, Θ, transfers

# Utility function
def U(self, c, n):
return np.log(c) + self.ψ * np.log(1 - n)
Also suppose that gt follows a two-state i.i.d. process with equal probabilities attached to gl and gh
To compute the tax rate, we will use both the sequential and recursive approaches described above
The figure below plots a sample path of the Ramsey tax rate
log_example = LogUtility()
seq_log = SequentialAllocation(log_example) # Solve sequential problem
T = 20
sHist = np.array([0, 0, 0, 0, 0, 0, 0,
0, 1, 1, 0, 0, 0, 1,
1, 1, 1, 1, 1, 0])
# Simulate
sim_seq = seq_log.simulate(0.5, 0, T, sHist)
sim_bel = bel_log.simulate(0.5, 0, T, sHist)
sim_bel[4] = log_example.G[sHist]
# Output paths
sim_seq[5] = log_example.Θ[sHist] * sim_seq[1]
sim_bel[5] = log_example.Θ[sHist] * sim_bel[1]
axes.flatten()[0].legend(('Sequential', 'Recursive'))
fig.tight_layout()
plt.show()
As should be expected, the recursive and sequential solutions produce almost identical allocations
Unlike outcomes with CRRA preferences, the tax rate is not perfectly smoothed
Instead the government raises the tax rate when gt is high
Further Comments
A related lecture describes an extension of the Lucas-Stokey model by Aiyagari, Marcet, Sargent, and
Seppälä (2002) [AMSS02]
In the AMSS economy, only a risk-free bond is traded
That lecture compares the recursive representation of the Lucas-Stokey model presented in this lecture with
one for an AMSS economy
By comparing these recursive formulations, we shall glean a sense in which the dimension of the state is
lower in the Lucas Stokey model
Accompanying that difference in dimension will be different dynamics of government debt
Contents
9.5.1 Overview
In an earlier lecture we described a model of optimal taxation with state-contingent debt due to Robert E.
Lucas, Jr., and Nancy Stokey [LS83]
Aiyagari, Marcet, Sargent, and Seppälä [AMSS02] (hereafter, AMSS) studied optimal taxation in a model
without state-contingent debt
In this lecture, we
• describe assumptions and equilibrium concepts
• solve the model
• implement the model numerically
• conduct some policy experiments
• compare outcomes with those in a corresponding complete-markets model
We begin with an introduction to the model
Many but not all features of the economy are identical to those of the Lucas-Stokey economy
Let's start with things that are identical
For t ≥ 0, a history of the state is represented by s^t = [s_t, s_{t−1}, . . . , s_0]
Government purchases g(s) are an exact time-invariant function of s
Let ct (st ), ℓt (st ), and nt (st ) denote consumption, leisure, and labor supply, respectively, at history st at
time t
Each period a representative household is endowed with one unit of time that can be divided between leisure ℓt and labor nt:

$$
n_t(s^t) + \ell_t(s^t) = 1
$$

Output equals nt (st ) and can be divided between consumption ct (st ) and g(st )
A representative household ranks consumption, leisure plans according to

$$
\sum_{t=0}^{\infty} \sum_{s^t} \beta^t \pi_t(s^t)\, u[c_t(s^t), \ell_t(s^t)] \qquad (9.111)
$$
where
• πt (st ) is a joint probability distribution over the sequence st , and
• the utility function u is increasing, strictly concave, and three times continuously differentiable in
both arguments
The government imposes a flat rate tax τt (st ) on labor income at time t, history st
Lucas and Stokey assumed that there are complete markets in one-period Arrow securities; also see smoothing models
It is at this point that AMSS [AMSS02] modify the Lucas and Stokey economy
AMSS allow the government to issue only one-period risk-free debt each period
Ruling out complete markets in this way is a step in the direction of making total tax collections behave
more like that prescribed in [Bar79] than they do in [LS83]
$$
b_t(s^{t-1}) = \tau^n_t(s^t)\, n_t(s^t) - g_t(s^t) - T_t(s^t) + \frac{b_{t+1}(s^t)}{R_t(s^t)}
\equiv z(s^t) + \frac{b_{t+1}(s^t)}{R_t(s^t)}, \qquad (9.112)
$$
$$
\frac{1}{R_t(s^t)} = \sum_{s^{t+1}|s^t} \beta\, \pi_{t+1}(s^{t+1}|s^t)\, \frac{u_c(s^{t+1})}{u_c(s^t)}
$$
Substituting this expression into the government's budget constraint (9.112) yields:
$$
b_t(s^{t-1}) = z(s^t) + \beta \sum_{s^{t+1}|s^t} \pi_{t+1}(s^{t+1}|s^t)\, \frac{u_c(s^{t+1})}{u_c(s^t)}\, b_{t+1}(s^t) \qquad (9.113)
$$
Components of z(s^t) on the right side depend on s^t, but the left side is required to depend on s^{t−1} only
This is what it means for one-period government debt to be risk-free
Therefore, the sum on the right side of equation (9.113) also has to depend only on s^{t−1}
This requirement will give rise to measurability constraints on the Ramsey allocation to be discussed soon
¹ In an allocation that solves the Ramsey problem and that levies distorting taxes on labor, why would the government ever want to hand revenues back to the private sector? It would not in an economy with state-contingent debt, since any such allocation could be improved by lowering distortionary taxes rather than handing out lump-sum transfers. But without state-contingent debt there can be circumstances when a government would like to make lump-sum transfers to the private sector.
If we replace bt+1 (st ) on the right side of equation (9.113) by the right side of next period's budget constraint (associated with a particular realization st ) we get
$$
b_t(s^{t-1}) = z(s^t) + \sum_{s^{t+1}|s^t} \beta\, \pi_{t+1}(s^{t+1}|s^t)\, \frac{u_c(s^{t+1})}{u_c(s^t)} \left[ z(s^{t+1}) + \frac{b_{t+2}(s^{t+1})}{R_{t+1}(s^{t+1})} \right]
$$
After making similar repeated substitutions for all future occurrences of government indebtedness, and by
invoking the natural debt limit, we arrive at:
$$
b_t(s^{t-1}) = \sum_{j=0}^{\infty} \sum_{s^{t+j}|s^t} \beta^j\, \pi_{t+j}(s^{t+j}|s^t)\, \frac{u_c(s^{t+j})}{u_c(s^t)}\, z(s^{t+j}) \qquad (9.114)
$$
Now lets
• substitute the resource constraint into the net-of-interest government surplus, and
• use the household's first-order condition 1 − τtn (st ) = uℓ (st )/uc (st ) to eliminate the labor tax rate
so that we can express the net-of-interest government surplus z(st ) as
$$
z(s^t) = \left[ 1 - \frac{u_\ell(s^t)}{u_c(s^t)} \right] \bigl[ c_t(s^t) + g_t(s^t) \bigr] - g_t(s^t) - T_t(s^t). \qquad (9.115)
$$
If we substitute the appropriate versions of the right side of (9.115) for z(st+j ) into equation (9.114), we obtain a sequence of implementability constraints on a Ramsey allocation in an AMSS economy
Expression (9.114) at time t = 0 and initial state s0 was also an implementability constraint on a Ramsey
allocation in a Lucas-Stokey economy:
$$
b_0(s^{-1}) = E_0 \sum_{j=0}^{\infty} \beta^j\, \frac{u_c(s^j)}{u_c(s^0)}\, z(s^j) \qquad (9.116)
$$
$$
b_t(s^{t-1}) = E_t \sum_{j=0}^{\infty} \beta^j\, \frac{u_c(s^{t+j})}{u_c(s^t)}\, z(s^{t+j}) \qquad (9.117)
$$
The expression on the right side of (9.117) in the Lucas-Stokey (1983) economy would equal the present
value of a continuation stream of government surpluses evaluated at what would be competitive equilibrium
Arrow-Debreu prices at date t
After we have substituted the resource constraint into the utility function, we can express the Ramsey problem as being to choose an allocation that solves
$$
\max_{\{c_t(s^t),\, b_{t+1}(s^t)\}} \; E_0 \sum_{t=0}^{\infty} \beta^t\, u\bigl(c_t(s^t),\, 1 - c_t(s^t) - g_t(s^t)\bigr)
$$

subject to

$$
E_0 \sum_{j=0}^{\infty} \beta^j\, \frac{u_c(s^j)}{u_c(s^0)}\, z(s^j) \ge b_0(s^{-1}) \qquad (9.118)
$$
and
$$
E_t \sum_{j=0}^{\infty} \beta^j\, \frac{u_c(s^{t+j})}{u_c(s^t)}\, z(s^{t+j}) = b_t(s^{t-1}) \quad \forall\, s^t \qquad (9.119)
$$
given b0 (s−1 )
Lagrangian Formulation
A negative multiplier γt (st ) < 0 means that if we could relax constraint (9.119), we would like to increase
the beginning-of-period indebtedness for that particular realization of history st
That would let us reduce the beginning-of-period indebtedness for some other history²
These features flow from the fact that the government cannot use state-contingent debt and therefore cannot
allocate its indebtedness efficiently across future states
Some Calculations
$$
\begin{aligned}
J = E_0 \sum_{t=0}^{\infty} \beta^t \Bigl\{ & u\bigl(c_t(s^t),\, 1 - c_t(s^t) - g_t(s^t)\bigr) \\
& + \gamma_t(s^t) \Bigl[ E_t \sum_{j=0}^{\infty} \beta^j\, u_c(s^{t+j})\, z(s^{t+j}) - u_c(s^t)\, b_t(s^{t-1}) \Bigr] \Bigr\} \\
= E_0 \sum_{t=0}^{\infty} \beta^t \Bigl\{ & u\bigl(c_t(s^t),\, 1 - c_t(s^t) - g_t(s^t)\bigr) \\
& + \Psi_t(s^t)\, u_c(s^t)\, z(s^t) - \gamma_t(s^t)\, u_c(s^t)\, b_t(s^{t-1}) \Bigr\}
\end{aligned} \qquad (9.120)
$$
where

$$
\Psi_t(s^t) = \Psi_{t-1}(s^{t-1}) + \gamma_t(s^t) \quad \text{and} \quad \Psi_{-1}(s^{-1}) = 0 \qquad (9.121)
$$

In (9.120), the second equality uses the law of iterated expectations and Abel's summation formula (also called summation by parts, see this page)
First-order conditions with respect to ct (st ) can be expressed as
$$
\Bigl[ u_c(s^t) - u_\ell(s^t) \Bigr] + \Psi_t(s^t) \Bigl\{ \bigl[ u_{cc}(s^t) - u_{c\ell}(s^t) \bigr] z(s^t) + u_c(s^t)\, z_c(s^t) \Bigr\}
- \gamma_t(s^t) \bigl[ u_{cc}(s^t) - u_{c\ell}(s^t) \bigr] b_t(s^{t-1}) = 0 \qquad (9.122)
$$
First-order conditions with respect to bt+1 (st ) can be expressed as

$$
E_t \bigl[ \gamma_{t+1}(s^{t+1})\, u_c(s^{t+1}) \bigr] = 0 \qquad (9.123)
$$
² From the first-order conditions for the Ramsey problem, there exists another realization s̃t with the same history up until the previous period, i.e., s̃t−1 = st−1 , but where the multiplier on constraint (9.119) takes a positive value, so γt (s̃t ) > 0.
If we substitute z(st ) from (9.115) and its derivative zc (st ) into first-order condition (9.122), we find two
differences from the corresponding condition for the optimal allocation in a Lucas-Stokey economy with
state-contingent government debt
1. The term involving bt (st−1 ) in first-order condition (9.122) does not appear in the corresponding expression for the Lucas-Stokey economy
• This term reflects the constraint that beginning-of-period government indebtedness must be the same
across all realizations of next periods state, a constraint that would not be present if government debt
could be state contingent
2. The Lagrange multiplier Ψt (st ) in first-order condition (9.122) may change over time in response to
realizations of the state, while the multiplier Φ in the Lucas-Stokey economy is time invariant
We need some code from an earlier lecture on optimal taxation with state-contingent debt, namely, the sequential allocation implementation:
import numpy as np
from scipy.optimize import root
from quantecon import MarkovChain
class SequentialAllocation:
'''
Class that takes a CESutility or BGPutility object as input and returns the
planner's allocation as a function of the multiplier on the
implementability constraint µ.
'''
def find_first_best(self):
'''
Find the first best allocation
'''
model = self.model
S, Θ, G = self.S, self.Θ, self.G
Uc, Un = model.Uc, model.Un
def res(z):
c = z[:S]
n = z[S:]
return np.hstack([Θ * Uc(c, n) + Un(c, n), Θ * n - c - G])
if not res.success:
raise Exception('Could not find first best')
self.cFB = res.x[:S]
self.nFB = res.x[S:]
def FOC(z):
c = z[:S]
n = z[S:2 * S]
Ξ = z[2 * S:]
return np.hstack([Uc(c, n) - µ * (Ucc(c, n) * c + Uc(c, n)) - Ξ,  # FOC of c
Un(c, n) - µ * (Unn(c, n) * n + Un(c, n)) + Θ * Ξ,  # FOC of n
Θ * n - c - G])
# Compute x
I = Uc(c, n) * c + Un(c, n) * n
x = np.linalg.solve(np.eye(S) - self.β * self.π, I)
return c, n, x, Ξ
µ, c, n, Ξ = z
xprime = self.time1_allocation(µ)[2]
return np.hstack([Uc(c, n) * (c - B_) + Un(c, n) * n + β * π[s_0] @ xprime,
Uc(c, n) - µ * (Ucc(c, n) * (c - B_) + Uc(c, n)) - Ξ,
Un(c, n) - µ * (Unn(c, n) * n + Un(c, n)) + Θ[s_0] * Ξ,
(Θ * n - c - G)[s_0]])
# Find root
res = root(FOC, np.array(
[0, self.cFB[s_0], self.nFB[s_0], self.ΞFB[s_0]]))
if not res.success:
raise Exception('Could not find time 0 LS allocation.')
return res.x
if sHist is None:
sHist = self.mc.simulate(T, s_0)
# Time 0
µ, cHist[0], nHist[0], _ = self.time0_allocation(B_, s_0)
THist[0] = self.T(cHist[0], nHist[0])[s_0]
Bhist[0] = B_
µHist[0] = µ
# Time 1 onward
for t in range(1, T):
c, n, x, Ξ = self.time1_allocation(µ)
T = self.T(c, n)
u_c = Uc(c, n)
s = sHist[t]
Eu_c = π[sHist[t - 1]] @ u_c
cHist[t], nHist[t], Bhist[t], THist[t] = c[s], n[s], x[s] / u_c[s], T[s]
RHist[t - 1] = Uc(cHist[t - 1], nHist[t - 1]) / (β * Eu_c)
µHist[t] = µ
To analyze the AMSS model, we find it useful to adopt a recursive formulation using techniques like those
in our lectures on dynamic Stackelberg models and optimal taxation with state-contingent debt
where Rt (st ) is the gross risk-free rate of interest between t and t+1 at history st and Tt (st ) are nonnegative
transfers
Throughout this lecture, we shall set transfers to zero (for some issues about the limiting behavior of debt, this makes a possibly important difference from AMSS [AMSS02], who restricted transfers to be nonnegative)
In this case, the household faces a sequence of budget constraints
bt (st−1 ) + (1 − τt (st ))nt (st ) = ct (st ) + bt+1 (st )/Rt (st ) (9.124)
The household's first-order conditions are uc,t = βRt Et uc,t+1 and (1 − τt )uc,t = ul,t
Using these to eliminate Rt and τt from budget constraint (9.124) gives
uc,t (st )bt (st−1 ) + ul,t (st )nt (st ) = uc,t (st )ct (st ) + β(E t uc,t+1 )bt+1 (st ) (9.126)
Now define

$$
x_t \equiv \beta\, b_{t+1}(s^t)\, E_t\, u_{c,t+1} = u_{c,t}(s^t)\, \frac{b_{t+1}(s^t)}{R_t(s^t)} \qquad (9.127)
$$
$$
\frac{u_{c,t}\, x_{t-1}}{\beta\, E_{t-1} u_{c,t}} = u_{c,t}\, c_t - u_{l,t}\, n_t + x_t \qquad (9.128)
$$

for t ≥ 1
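The two expressions for xt in (9.127) are equivalent because 1/Rt(s^t) = β Et[uc,t+1]/uc,t; a quick numerical check under illustrative CRRA assumptions:

```python
import numpy as np

β, σ = 0.9, 2
uc = lambda c: np.asarray(c)**(-σ)        # CRRA marginal utility (illustrative)

c_t, b_next = 0.3, 0.2                    # current consumption, next-period debt
π_next = np.array([0.5, 0.5])
c_next = np.array([0.28, 0.33])

Euc = π_next @ uc(c_next)                 # E_t[u_{c,t+1}]
R_t = uc(c_t) / (β * Euc)                 # gross risk-free rate

# β b_{t+1} E_t u_{c,t+1}  equals  u_{c,t} b_{t+1} / R_t
assert np.isclose(β * b_next * Euc, uc(c_t) * b_next / R_t)
```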
Measurability Constraints
Let Π(s|s− ) be a Markov transition matrix whose entries tell probabilities of moving from state s− to state
s in one period
Let
• V (x− , s− ) be the continuation value of a continuation Ramsey plan at xt−1 = x− , st−1 = s− for
t≥1
• W (b, s) be the value of the Ramsey plan at time 0 at b0 = b and s0 = s
We distinguish between two types of planners:
For t ≥ 1, the value function for a continuation Ramsey planner satisfies the Bellman equation
$$
V(x_-, s_-) = \max_{\{n(s), x(s)\}} \sum_{s} \Pi(s|s_-) \bigl[ u(n(s) - g(s),\, 1 - n(s)) + \beta V(x(s), s) \bigr] \qquad (9.130)
$$

subject to

$$
\frac{u_c(s)\, x_-}{\beta \sum_{\tilde s} \Pi(\tilde s|s_-)\, u_c(\tilde s)} = u_c(s)\bigl(n(s) - g(s)\bigr) - u_l(s)\, n(s) + x(s) \qquad (9.131)
$$
A continuation Ramsey planner at t ≥ 1 takes (xt−1 , st−1 ) = (x− , s− ) as given and before s is realized
chooses (nt (st ), xt (st )) = (n(s), x(s)) for s ∈ S
The Ramsey planner takes (b0 , s0 ) as given and chooses (n0 , x0 ).
The value function W (b0 , s0 ) for the time t = 0 Ramsey planner satisfies the Bellman equation
$$
V_x(x_-, s_-) = \sum_{s} \Pi(s|s_-)\, \mu(s|s_-)\, \frac{u_c(s)}{\beta \sum_{\tilde s} \Pi(\tilde s|s_-)\, u_c(\tilde s)} \qquad (9.135)
$$
$$
V_x(x_-, s_-) = \sum_{s} \Pi(s|s_-) \left( \frac{u_c(s)}{\sum_{\tilde s} \Pi(\tilde s|s_-)\, u_c(\tilde s)} \right) V_x(x(s), s) \qquad (9.136)
$$
$$
\check \Pi(s|s_-) \equiv \Pi(s|s_-)\, \frac{u_c(s)}{\sum_{\tilde s} \Pi(\tilde s|s_-)\, u_c(\tilde s)}
$$
Exercise: Please verify that Π̌(s|s− ) is a valid Markov transition density, i.e., that its elements are all
nonnegative and that for each s− , the sum over s equals unity
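The exercise is easy to verify numerically: scale each row of Π by uc(s), and the normalization in the denominator makes the rows sum to one again. A sketch with made-up primitives:

```python
import numpy as np

# Illustrative primitives: a 2-state chain and positive marginal utilities
Π = np.array([[0.7, 0.3],
              [0.4, 0.6]])
uc = np.array([1.8, 1.2])                 # u_c(s) in each state

# Twisted probabilities: Π(s|s_) u_c(s) / Σ_s̃ Π(s̃|s_) u_c(s̃)
Π_twisted = Π * uc / (Π @ uc)[:, None]

assert (Π_twisted >= 0).all()
assert np.allclose(Π_twisted.sum(axis=1), 1.0)
```

The same check works for any stochastic matrix and any strictly positive uc, since each row is divided by exactly its own weighted sum.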
Along a Ramsey plan, the state variable xt = xt (st , b0 ) becomes a function of the history st and initial
government debt b0
In the Lucas-Stokey model, we found that
• a counterpart to Vx (x, s) is time invariant and equal to the Lagrange multiplier on the Lucas-Stokey
implementability constraint
• time invariance of Vx (x, s) is the source of a key feature of the Lucas-Stokey model, namely, state
variable degeneracy (i.e., xt is an exact function of st )
That Vx (x, s) varies over time according to a twisted martingale means that there is no state-variable degeneracy in the AMSS model
In the AMSS model, both x and s are needed to describe the state
This property of the AMSS model transmits a twisted martingale component to consumption, employment,
and the tax rate
For quasi-linear preferences, the first-order condition with respect to n(s) becomes
When µ(s|s− ) = βVx (x(s), s) converges to zero, in the limit ul (s) = 1 = uc (s), so that τ (x(s), s) = 0
Thus, in the limit, if gt is perpetually random, the government accumulates sufficient assets to finance all
expenditures from earnings on those assets, returning any excess revenues to the household as nonnegative
lump sum transfers
Code
class RecursiveAllocationAMSS:
def solve_time1_bellman(self):
'''
Solve the time 1 Bellman equation for calibration model and
initial grid µgrid0
'''
model, µgrid0 = self.model, self.µgrid
π = model.π
S = len(model.π)
c, n = np.vstack(c).T, np.vstack(n).T
x, V = np.hstack(x), np.hstack(V)
xprimes = np.vstack([x] * S)
cf.append(interp(x, c))
nf.append(interp(x, n))
Vf.append(interp(x, V))
xgrid.append(x)
xprimef.append(interp(x, xprimes))
cf, nf, xprimef = fun_vstack(cf), fun_vstack(nf), fun_vstack(xprimef)
Vf = fun_hstack(Vf)
policies = [cf, nf, xprimef]
# Create xgrid
x = np.vstack(xgrid).T
xbar = [x.min(0).max(), x.max(0).min()]
xgrid = np.linspace(xbar[0], xbar[1], len(µgrid0))
self.xgrid = xgrid
print(diff)
Vf = Vfnew
if sHist is None:
sHist = simulate_markov(π, s_0, T)
# time 1 onward
for t in range(1, T):
s_, x, s = sHist[t - 1], xHist[t - 1], sHist[t]
c, n, xprime, T = cf[s_, :](x), nf[s_, :](x), xprimef[s_, :](x), Tf[s_, :](x)
T = self.T(c, n)[s]
u_c = Uc(c, n)
Eu_c = π[s_, :] @ u_c
µHist[t] = self.Vf[s](xprime[s])
class BellmanEquation:
'''
Bellman equation for the continuation of the Lucas-Stokey Problem
'''
self.z0 = {}
cf, nf, xprimef = policies0
for s_ in range(self.S):
for x in xgrid:
self.z0[x, s_] = np.hstack([cf[s_, :](x),
nf[s_, :](x),
xprimef[s_, :](x),
np.zeros(self.S)])
self.find_first_best()
def find_first_best(self):
'''
Find the first best allocation
'''
model = self.model
S, Θ, Uc, Un, G = self.S, self.Θ, model.Uc, model.Un, self.G
def res(z):
c = z[:S]
n = z[S:]
return np.hstack([Θ * Uc(c, n) + Un(c, n), Θ * n - c - G])
self.cFB = res.x[:S]
self.nFB = res.x[S:]
IFB = Uc(self.cFB, self.nFB) * self.cFB + \
Un(self.cFB, self.nFB) * self.nFB
self.zFB = {}
for s in range(S):
self.zFB[s] = np.hstack(
[self.cFB[s], self.nFB[s], self.π[s] @ self.xFB, 0.])
def objf(z):
c, n, xprime = z[:S], z[S:2 * S], z[2 * S:3 * S]
Vprime = np.empty(S)
for s in range(S):
Vprime[s] = Vf[s](xprime[s])
def cons(z):
c, n, xprime, T = z[:S], z[S:2 * S], z[2 * S:3 * S], z[3 * S:]
u_c = Uc(c, n)
Eu_c = π[s_] @ u_c
return np.hstack([
x * u_c / Eu_c - u_c * (c - T) - Un(c, n) * n - β * xprime,
Θ * n - c - G])
if model.transfers:
bounds = [(0., 100)] * S + [(0., 100)] * S + \
[self.xbar] * S + [(0., 100.)] * S
else:
bounds = [(0., 100)] * S + [(0., 100)] * S + \
[self.xbar] * S + [(0., 0.)] * S
out, fx, _, imode, smode = fmin_slsqp(objf, self.z0[x, s_],
f_eqcons=cons, bounds=bounds,
full_output=True, iprint=0,
acc=self.tol, iter=self.maxiter)
if imode > 0:
raise Exception(smode)
def objf(z):
c, n, xprime = z[:-1]
def cons(z):
c, n, xprime, T = z
return np.hstack([
-Uc(c, n) * (c - B_ - T) - Un(c, n) * n - β * xprime,
(Θ * n - c - G)[s0]])
if model.transfers:
bounds = [(0., 100), (0., 100), self.xbar, (0., 100.)]
else:
bounds = [(0., 100), (0., 100), self.xbar, (0., 0.)]
out, fx, _, imode, smode = fmin_slsqp(objf, self.zFB[s0], f_eqcons=cons,
bounds=bounds, full_output=True, iprint=0)
if imode > 0:
raise Exception(smode)
9.5.4 Examples
class interpolate_wrapper:
def __getitem__(self, index):
return interpolate_wrapper(np.asarray(self.F[index]))
def transpose(self):
self.F = self.F.transpose()
def __len__(self):
return len(self.F)
class interpolator_factory:
def fun_vstack(fun_list):
def fun_hstack(fun_list):
sHist[0] = s_0
S = len(π)
for t in range(1, T):
sHist[t] = np.random.choice(np.arange(S), p=π[sHist[t - 1]])
return sHist
In our lecture on optimal taxation with state contingent debt we studied how the government manages
uncertainty in a simple setting
As in that lecture, we assume the one-period utility function
$$
u(c, n) = \frac{c^{1-\sigma}}{1-\sigma} - \frac{n^{1+\gamma}}{1+\gamma}
$$
Note: For convenience in matching our computer code, we have expressed utility as a function of n rather
than leisure l
We consider the same government expenditure process studied in the lecture on optimal taxation with state
contingent debt
Government expenditures are known for sure in all periods except one
• For t < 3 or t > 3 we assume that gt = gl = 0.1
• At t = 3 a war occurs with probability 0.5
– If there is war, g3 = gh = 0.2
– If there is no war g3 = gl = 0.1
A useful trick is to define components of the state vector as the following six (t, g) pairs:
import numpy as np
class CRRAutility:

def __init__(self,
β=0.9,
σ=2,
γ=2,
π=0.5*np.ones((2, 2)),
G=np.array([0.1, 0.2]),
Θ=np.ones(2),
transfers=False):
self.β, self.σ, self.γ = β, σ, γ
self.π, self.G, self.Θ, self.transfers = π, G, Θ, transfers

# Utility function
def U(self, c, n):
σ = self.σ
if σ == 1.:
U = np.log(c)
else:
U = (c**(1 - σ) - 1) / (1 - σ)
return U - n**(1 + self.γ) / (1 + self.γ)
The following figure plots the Ramsey plan under both complete and incomplete markets for both possible
realizations of the state at time t = 3
Optimal policies when the government has access to state contingent debt are represented by black lines,
while the optimal policies when there is only a risk free bond are in red
Paths with circles are histories in which there is peace, while those with triangles denote war
time_example = CRRAutility()
# Output paths
sim_seq_l[5] = time_example.Θ[sHist_l] * sim_seq_l[1]
sim_seq_h[5] = time_example.Θ[sHist_h] * sim_seq_h[1]
sim_bel_l[5] = time_example.Θ[sHist_l] * sim_bel_l[1]
sim_bel_h[5] = time_example.Θ[sHist_h] * sim_bel_h[1]
plt.tight_layout()
plt.show()
How a Ramsey planner responds to war depends on the structure of the asset market.
If it is able to trade state-contingent debt, then at time t = 2
• the government purchases an Arrow security that pays off when g3 = gh
• the government sells an Arrow security that pays off when g3 = gl
• These purchases are designed in such a way that regardless of whether or not there is a war at t = 3,
the government will begin period t = 4 with the same government debt
This pattern facilitates smoothing tax rates across states
The government without state contingent debt cannot do this
Instead, it must enter time t = 3 with the same level of debt falling due whether there is peace or war at t = 3
It responds to this constraint by smoothing tax rates across time
To finance a war it raises taxes and issues more debt
To service the additional debt burden, it raises taxes in all future periods
The absence of state contingent debt leads to an important difference in the optimal tax policy
When the Ramsey planner has access to state contingent debt, the optimal tax policy is history independent
• the tax rate is a function of the current level of government spending only, given the Lagrange multi-
plier on the implementability constraint
Without state contingent debt, the optimal tax rate is history dependent
• A war at time t = 3 causes a permanent increase in the tax rate
History dependence occurs more dramatically in a case in which the government perpetually faces the
prospect of war
This case was studied in the final example of the lecture on optimal taxation with state-contingent debt
There, each period the government faces a constant probability, 0.5, of war
In addition, this example features the following preferences
class LogUtility:

def __init__(self,
β=0.9,
ψ=0.69,
π=0.5*np.ones((2, 2)),
G=np.array([0.1, 0.2]),
Θ=np.ones(2),
transfers=False):
self.β, self.ψ = β, ψ
self.π, self.G, self.Θ, self.transfers = π, G, Θ, transfers

# Utility function
def U(self, c, n):
return np.log(c) + self.ψ * np.log(1 - n)
With these preferences, Ramsey tax rates will vary even in the Lucas-Stokey model with state-contingent
debt
The figure below plots optimal tax policies for both the economy with state contingent debt (circles) and the
economy with only a risk-free bond (triangles)
log_example = LogUtility()
log_example.transfers = True # Government can use transfers
log_sequential = SequentialAllocation(log_example) # Solve sequential problem
log_bellman = RecursiveAllocationAMSS(log_example, µ_grid)
T = 20
sHist = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1,
0, 0, 0, 1, 1, 1, 1, 1, 1, 0])
# Simulate
sim_seq = log_sequential.simulate(0.5, 0, T, sHist)
sim_bel = log_bellman.simulate(0.5, 0, T, sHist)
# Output paths
sim_seq[5] = log_example.Θ[sHist] * sim_seq[1]
sim_bel[5] = log_example.Θ[sHist] * sim_bel[1]
When the government experiences a prolonged period of peace, it is able to reduce government debt and set
permanently lower tax rates
However, the government finances a long war by borrowing and raising taxes
This results in a drift away from policies with state contingent debt that depends on the history of shocks
This is even more evident in the following figure that plots the evolution of the two policies over 200 periods
# Output paths
sim_seq_long[5] = log_example.Θ[sHist_long] * sim_seq_long[1]
sim_bel_long[5] = log_example.Θ[sHist_long] * sim_bel_long[1]
Contents
– Forces at work
– Logical flow of lecture
– Example economy
– Reverse engineering strategy
– Code for reverse engineering
– Short simulation for reverse-engineered initial debt
– Long simulation
– BEGS approximations of limiting debt and convergence rate
9.6.1 Overview
This lecture extends our investigations of how optimal policies for levying a flat-rate tax on labor income
and issuing government debt depend on whether there are complete markets for debt
A Ramsey allocation and Ramsey policy in the AMSS [AMSS02] model described in optimal taxation
without state-contingent debt generally differs from a Ramsey allocation and Ramsey policy in the Lucas-
Stokey [LS83] model described in optimal taxation with state-contingent debt
This is because the implementability restriction that a competitive equilibrium with a distorting tax imposes
on allocations in the Lucas-Stokey model is just one among a set of implementability conditions imposed in
the AMSS model
These additional constraints require that time t components of a Ramsey allocation for the AMSS model be
measurable with respect to time t − 1 information
The measurability constraints imposed by the AMSS model are inherited from the restriction that only one-
period risk-free bonds can be traded
Differences between the Ramsey allocations in the two models indicate that at least some of the measurability constraints of the AMSS model of optimal taxation without state-contingent debt are violated at the Ramsey allocation of a corresponding [LS83] model with state-contingent debt
Another way to say this is that differences between the Ramsey allocations of the two models indicate that
some of the measurability constraints of the AMSS model are violated at the Ramsey allocation of the
Lucas-Stokey model
Nonzero Lagrange multipliers on those constraints make the Ramsey allocation for the AMSS model differ
from the Ramsey allocation for the Lucas-Stokey model
This lecture studies a special AMSS model in which
• The exogenous state variable st is governed by a finite-state Markov chain
• With an arbitrary budget-feasible initial level of government debt, the measurability constraints
– bind for many periods, but eventually cease to bind
The forces driving asymptotic outcomes here are examples of dynamics present in a more general class of incomplete markets models analyzed in [BEGS17] (BEGS)
BEGS provide conditions under which government debt under a Ramsey plan converges to an invariant
distribution
BEGS construct approximations to that asymptotically invariant distribution of government debt under a
Ramsey plan
BEGS also compute an approximation to a Ramsey plan's rate of convergence to that limiting invariant distribution
We shall use the BEGS approximating limiting distribution and the approximating rate of convergence to
help interpret outcomes here
For a long time, the Ramsey plan puts a nontrivial martingale-like component into the par value of government debt as part of the way that the Ramsey plan imperfectly smooths distortions from the labor tax rate across time and Markov states
But BEGS show that binding implementability constraints slowly push government debt in a direction designed to let the government use fluctuations in the equilibrium interest rate, rather than fluctuations in par values of debt, to insure against shocks to government expenditures
• This is a weak (but unrelenting) force that, starting from an initial debt level, for a long time is
dominated by the stochastic martingale-like component of debt dynamics that the Ramsey planner
uses to facilitate imperfect tax-smoothing across time and states
• This weak force slowly drives the par value of government assets to a constant level at which the
government can completely insure against government expenditure shocks while shutting down the
stochastic component of debt dynamics
• At that point, the tail of the par value of government debt becomes a trivial martingale: it is constant
over time
• We compute the BEGS approximations to check how accurately they describe the dynamics of the
long-simulation
Although we are studying an AMSS [AMSS02] economy, a Lucas-Stokey [LS83] economy plays an important role in the reverse-engineering calculation to be described below
For that reason, it is helpful to have readily available some key equations underlying a Ramsey plan for the
Lucas-Stokey economy
Recall first-order conditions for a Ramsey allocation for the Lucas-Stokey economy
For t ≥ 1, these take the form
$$
(1 + \Phi) u_c(c, 1 - c - g) + \Phi \left[ c\, u_{cc}(c, 1 - c - g) - (c + g) u_{\ell c}(c, 1 - c - g) \right]
= (1 + \Phi) u_\ell(c, 1 - c - g) + \Phi \left[ c\, u_{c\ell}(c, 1 - c - g) - (c + g) u_{\ell\ell}(c, 1 - c - g) \right]
\tag{9.137}
$$
There is one such equation for each value of the Markov state st
In addition, given an initial Markov state, the time t = 0 quantities c0 and b0 satisfy
$$
(1 + \Phi) u_c(c, 1 - c - g) + \Phi \left[ c\, u_{cc}(c, 1 - c - g) - (c + g) u_{\ell c}(c, 1 - c - g) \right]
= (1 + \Phi) u_\ell(c, 1 - c - g) + \Phi \left[ c\, u_{c\ell}(c, 1 - c - g) - (c + g) u_{\ell\ell}(c, 1 - c - g) \right] + \Phi (u_{cc} - u_{c\ell}) b_0
\tag{9.138}
$$
In addition, the time t = 0 budget constraint is satisfied at c0 and initial government debt b0 :
$$
b_0 + g_0 = \tau_0 (c_0 + g_0) + \frac{\bar b}{R_0}
\tag{9.139}
$$
where R0 is the gross interest rate for the Markov state s0 that is assumed to prevail at time t = 0 and τ0 is
the time t = 0 tax rate.
In equation (9.139), it is understood that
$$
\tau_0 = 1 - \frac{u_{\ell,0}}{u_{c,0}}
\qquad \text{and} \qquad
R_0^{-1} = \beta \sum_{s=1}^{S} \Pi(s \mid s_0) \frac{u_c(s)}{u_{c,0}}
$$
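As a quick numerical illustration, the two definitions above can be evaluated directly; the marginal-utility numbers below are hypothetical placeholders, not values from the lecture's calibration:

```python
import numpy as np

β = 0.9
uc0, ul0 = 1.13, 1.08          # hypothetical time-0 marginal utilities of c and ℓ
uc = np.array([1.13, 1.25])    # hypothetical u_c(s) for each Markov state s
Π_s0 = np.array([0.5, 0.5])    # transition probabilities Π(s | s_0)

τ0 = 1 - ul0 / uc0                    # time-0 labor tax rate
R0 = 1 / (β * (Π_s0 @ (uc / uc0)))   # time-0 gross interest rate
```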
It is useful to transform some of the above equations to forms that are more natural for analyzing the case of
a CRRA utility specification that we shall use in our example economies
As in lectures optimal taxation without state-contingent debt and optimal taxation with state-contingent
debt, we assume that the representative agent has utility function
$$
u(c, n) = \frac{c^{1-\sigma}}{1-\sigma} - \frac{n^{1+\gamma}}{1+\gamma}
$$
and set σ = 2, γ = 2, and the discount factor β = 0.9
We eliminate leisure from the model and continue to assume that
ct + gt = nt
The analysis of Lucas and Stokey prevails once we make the following replacements
With these understandings, equations (9.137) and (9.138) simplify in the case of the CRRA utility function.
They become

$$
(1 + \Phi)[u_c(c) + u_n(c + g)] + \Phi[c\, u_{cc}(c) + (c + g) u_{nn}(c + g)] = 0
\tag{9.140}
$$

and

$$
(1 + \Phi)[u_c(c_0) + u_n(c_0 + g_0)] + \Phi[c_0 u_{cc}(c_0) + (c_0 + g_0) u_{nn}(c_0 + g_0)] - \Phi u_{cc}(c_0) b_0 = 0
\tag{9.141}
$$

In equation (9.140), it is understood that c and g are each functions of the Markov state s
The CRRA utility function is represented in the following class
import numpy as np

class CRRAutility:

    def __init__(self,
                 β=0.9,
                 σ=2,
                 γ=2,
                 π=0.5*np.ones((2, 2)),
                 G=np.array([0.1, 0.2]),
                 Θ=np.ones(2),
                 transfers=False):
        self.β, self.σ, self.γ = β, σ, γ
        self.π, self.G, self.Θ, self.transfers = π, G, Θ, transfers

    # Utility function
    def U(self, c, n):
        σ = self.σ
        if σ == 1.:
            U = np.log(c)
        else:
            U = (c**(1 - σ) - 1) / (1 - σ)
        return U - n**(1 + self.γ) / (1 + self.γ)

    # Derivatives of the utility function
    def Uc(self, c, n):
        return c**(-self.σ)

    def Ucc(self, c, n):
        return -self.σ * c**(-self.σ - 1)

    def Un(self, c, n):
        return -n**self.γ

    def Unn(self, c, n):
        return -self.γ * n**(self.γ - 1)
from scipy.optimize import root, fmin_slsqp

class SequentialAllocation:
'''
Class that takes CESutility or BGPutility object as input returns
planner's allocation as a function of the multiplier on the
implementability constraint µ.
'''
def find_first_best(self):
'''
Find the first best allocation
'''
model = self.model
S, Θ, G = self.S, self.Θ, self.G
Uc, Un = model.Uc, model.Un
def res(z):
c = z[:S]
n = z[S:]
return np.hstack([Θ * Uc(c, n) + Un(c, n), Θ * n - c - G])
res = root(res, 0.5 * np.ones(2 * S))  # solve the first-best FOCs
if not res.success:
raise Exception('Could not find first best')
self.cFB = res.x[:S]
self.nFB = res.x[S:]
def FOC(z):
c = z[:S]
n = z[S:2 * S]
Ξ = z[2 * S:]
return np.hstack([Uc(c, n) - µ * (Ucc(c, n) * c + Uc(c, n)) - Ξ,  # FOC of c
Un(c, n) - µ * (Unn(c, n) * n + Un(c, n)) + Θ * Ξ,  # FOC of n
Θ * n - c - G])  # feasibility
# Compute x
I = Uc(c, n) * c + Un(c, n) * n
x = np.linalg.solve(np.eye(S) - self.β * self.π, I)
return c, n, x, Ξ
# Find root
res = root(FOC, np.array(
[0, self.cFB[s_0], self.nFB[s_0], self.ΞFB[s_0]]))
if not res.success:
raise Exception('Could not find time 0 LS allocation.')
return res.x
if sHist is None:
sHist = self.mc.simulate(T, s_0)
# Time 0
µ, cHist[0], nHist[0], _ = self.time0_allocation(B_, s_0)
THist[0] = self.T(cHist[0], nHist[0])[s_0]
Bhist[0] = B_
µHist[0] = µ
# Time 1 onward
for t in range(1, T):
c, n, x, Ξ = self.time1_allocation(µ)
T = self.T(c, n)
u_c = Uc(c, n)
s = sHist[t]
Eu_c = π[sHist[t - 1]] @ u_c
cHist[t], nHist[t], Bhist[t], THist[t] = c[s], n[s], x[s] / \
u_c[s], T[s]
RHist[t - 1] = Uc(cHist[t - 1], nHist[t - 1]) / (β * Eu_c)
µHist[t] = µ
class RecursiveAllocationAMSS:
def solve_time1_bellman(self):
'''
Solve the time 1 Bellman equation for calibration model and
initial grid µgrid0
'''
model, µgrid0 = self.model, self.µgrid
π = model.π
S = len(model.π)
c, n = np.vstack(c).T, np.vstack(n).T
x, V = np.hstack(x), np.hstack(V)
xprimes = np.vstack([x] * S)
cf.append(interp(x, c))
nf.append(interp(x, n))
Vf.append(interp(x, V))
xgrid.append(x)
xprimef.append(interp(x, xprimes))
cf, nf, xprimef = fun_vstack(cf), fun_vstack(nf), fun_vstack(xprimef)
Vf = fun_hstack(Vf)
policies = [cf, nf, xprimef]
# Create xgrid
x = np.vstack(xgrid).T
xbar = [x.min(0).max(), x.max(0).min()]
xgrid = np.linspace(xbar[0], xbar[1], len(µgrid0))
self.xgrid = xgrid
print(diff)
Vf = Vfnew
if sHist is None:
sHist = simulate_markov(π, s_0, T)
# time 1 onward
for t in range(1, T):
s_, x, s = sHist[t - 1], xHist[t - 1], sHist[t]
c, n, xprime, T = cf[s_, :](x), nf[s_, :](
x), xprimef[s_, :](x), Tf[s_, :](x)
T = self.T(c, n)[s]
u_c = Uc(c, n)
Eu_c = π[s_, :] @ u_c
µHist[t] = self.Vf[s](xprime[s])
class BellmanEquation:
'''
Bellman equation for the continuation of the Lucas-Stokey Problem
'''
self.z0 = {}
cf, nf, xprimef = policies0
for s_ in range(self.S):
for x in xgrid:
self.z0[x, s_] = np.hstack([cf[s_, :](x),
nf[s_, :](x),
xprimef[s_, :](x),
np.zeros(self.S)])
self.find_first_best()
def find_first_best(self):
'''
Find the first best allocation
'''
model = self.model
S, Θ, Uc, Un, G = self.S, self.Θ, model.Uc, model.Un, self.G
def res(z):
c = z[:S]
n = z[S:]
return np.hstack([Θ * Uc(c, n) + Un(c, n), Θ * n - c - G])
res = root(res, 0.5 * np.ones(2 * S))  # solve the first-best FOCs
if not res.success:
raise Exception('Could not find first best')
self.cFB = res.x[:S]
self.nFB = res.x[S:]
IFB = Uc(self.cFB, self.nFB) * self.cFB + \
Un(self.cFB, self.nFB) * self.nFB
self.zFB = {}
for s in range(S):
self.zFB[s] = np.hstack(
[self.cFB[s], self.nFB[s], self.π[s] @ self.xFB, 0.])
'''
Finds the optimal policies
'''
model, β, Θ, G, S, π = self.model, self.β, self.Θ, self.G, self.S, self.π
U, Uc, Un = model.U, model.Uc, model.Un
def objf(z):
c, n, xprime = z[:S], z[S:2 * S], z[2 * S:3 * S]
Vprime = np.empty(S)
for s in range(S):
Vprime[s] = Vf[s](xprime[s])
def cons(z):
c, n, xprime, T = z[:S], z[S:2 * S], z[2 * S:3 * S], z[3 * S:]
u_c = Uc(c, n)
Eu_c = π[s_] @ u_c
return np.hstack([
x * u_c / Eu_c - u_c * (c - T) - Un(c, n) * n - β * xprime,
Θ * n - c - G])
if model.transfers:
bounds = [(0., 100)] * S + [(0., 100)] * S + \
[self.xbar] * S + [(0., 100.)] * S
else:
bounds = [(0., 100)] * S + [(0., 100)] * S + \
[self.xbar] * S + [(0., 0.)] * S
out, fx, _, imode, smode = fmin_slsqp(objf, self.z0[x, s_],
f_eqcons=cons, bounds=bounds,
full_output=True, iprint=0,
acc=self.tol, iter=self.maxiter)
if imode > 0:
raise Exception(smode)
def objf(z):
c, n, xprime = z[:-1]
def cons(z):
c, n, xprime, T = z
return np.hstack([
-Uc(c, n) * (c - B_ - T) - Un(c, n) * n - β * xprime,
(Θ * n - c - G)[s0]])
if model.transfers:
bounds = [(0., 100), (0., 100), self.xbar, (0., 100.)]
else:
bounds = [(0., 100), (0., 100), self.xbar, (0., 0.)]
out, fx, _, imode, smode = fmin_slsqp(objf, self.zFB[s0], f_eqcons=cons,
bounds=bounds, full_output=True, iprint=0)
if imode > 0:
raise Exception(smode)
class interpolate_wrapper:
def transpose(self):
self.F = self.F.transpose()
def __len__(self):
return len(self.F)
class interpolator_factory:
def fun_vstack(fun_list):
def fun_hstack(fun_list):
return sHist
We can reverse engineer a value b0 of initial debt due that renders the AMSS measurability constraints not
binding from time t = 0 onward
We accomplish this by recognizing that if the AMSS measurability constraints never bind, then the AMSS allocation and Ramsey plan are equivalent to those for a Lucas-Stokey economy in which, for each period t ≥ 0, the government promises to pay the same state-contingent amount b̄ in each state tomorrow.
This insight tells us to find a b0 and other fundamentals for the Lucas-Stokey [LS83] model that make the
Ramsey planner want to borrow the same value b̄ next period for all states and all dates
We accomplish this by using various equations for the Lucas-Stokey [LS83] model presented in optimal
taxation with state-contingent debt
We use the following steps
$$
\vec b = \frac{\vec x}{\vec u_c}
\tag{9.143}
$$
u = CRRAutility()
def min_Φ(Φ):
# Solve Φ(c)
def equations(unknowns, Φ):
c1, c2 = unknowns
# First argument of .Uc and second argument of .Un are redundant
return loss
We obtain
b_bar = b[0]
b_bar
-1.0757576567504166
To complete the reverse engineering exercise by jointly determining c0 , b0 , we set up a function that returns
two simultaneous equations
c0, b0 = unknowns
g0 = u.G[s-1]
(0.9344994030900681, -1.038698407551764)
Thus, we have reverse engineered an initial b0 = −1.038698407551764 that ought to render the AMSS
measurability constraints slack
The following graph shows simulations of outcomes for both a Lucas-Stokey economy and for an AMSS
economy starting from initial government debt equal to b0 = −1.038698407551764
These graphs report outcomes for both the Lucas-Stokey economy with complete markets and the AMSS
economy with one-period risk-free debt only
log_example = CRRAutility()
T = 20
sHist = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1,
0, 0, 0, 1, 1, 1, 1, 1, 1, 0])
# Output paths
sim_seq[5] = log_example.Θ[sHist] * sim_seq[1]
sim_bel[5] = log_example.Θ[sHist] * sim_bel[1]
The Ramsey allocations and Ramsey outcomes are identical for the Lucas-Stokey and AMSS economies
This outcome confirms the success of our reverse-engineering exercises
Notice how for t ≥ 1, the tax rate is a constant - so is the par value of government debt
However, output and labor supply are both nontrivial time-invariant functions of the Markov state
The following graph shows the par value of government debt and the flat rate tax on labor income for a long
simulation for our sample economy
For the same realization of a government expenditure path, the graph reports outcomes for two economies
• the gray lines are for the Lucas-Stokey economy with complete markets
• the blue lines are for the AMSS economy with risk-free one-period debt only
For both economies, initial government debt due at time 0 is b0 = .5
For the Lucas-Stokey complete markets economy, the government debt plotted is bt+1 (st+1 )
• Notice that this is a time-invariant function of the Markov state from the beginning
For the AMSS incomplete markets economy, the government debt plotted is bt+1 (st )
• Notice that this is a martingale-like random process that eventually seems to converge to a constant
b̄ ≈ −1.07
• Notice that the limiting value b̄ < 0 so that asymptotically the government makes a constant level of
risk-free loans to the public
• In the simulation displayed as well as other simulations we have run, the par value of government debt converges to about −1.07 after between 1400 and 2000 periods
For the AMSS incomplete markets economy, the marginal tax rate on labor income τt converges to a constant
• labor supply and output each converge to time-invariant functions of the Markov state
sim_seq_long = log_sequential.simulate(0.5, 0, T)
sHist_long = sim_seq_long[-3]
sim_bel_long = log_bellman.simulate(0.5, 0, T, sHist_long)
As remarked above, after bt+1 (st ) has converged to a constant, the measurability constraints in the AMSS model cease to bind
• the associated Lagrange multipliers on those implementability constraints converge to zero
This leads us to seek an initial value of government debt b0 that renders the measurability constraints slack
from time t = 0 onward
• a tell-tale sign of this situation is that the Ramsey planner in a corresponding Lucas-Stokey economy
would instruct the government to issue a constant level of government debt bt+1 (st+1 ) across the two
Markov states
We now describe how to find such an initial level of government debt
It is useful to link the outcome of our reverse engineering exercise to limiting approximations constructed
by [BEGS17]
[BEGS17] used a slightly different notation to represent a generalization of the AMSS model.
We'll introduce a version of their notation so that readers can quickly relate the notation that appears in their key formulas to the notation that we have used
BEGS work with objects $B_t, \mathcal{B}_t, \mathcal{R}_t, \mathcal{X}_t$ that are related to our notation by

$$
\mathcal{R}_t = \frac{u_{c,t}}{u_{c,t-1}} R_{t-1} = \frac{u_{c,t}}{\beta E_{t-1} u_{c,t}}
$$

$$
B_t = \frac{b_{t+1}(s^t)}{R_t(s^t)}
$$

$$
b_t(s^{t-1}) = \mathcal{R}_{t-1} B_{t-1}
$$

$$
\mathcal{B}_t = u_{c,t} B_t = (\beta E_t u_{c,t+1})\, b_{t+1}(s^t)
$$

$$
\mathcal{X}_t = u_{c,t} [g_t - \tau_t n_t]
$$
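To fix ideas, here is a small sketch that translates the notation for a two-state example; the numbers for $u_c(s)$ and $b_{t+1}$ are hypothetical, chosen only to show the bookkeeping:

```python
import numpy as np

β = 0.9
π = 0.5 * np.ones((2, 2))            # Markov transition matrix
uc = np.array([1.13, 1.25])          # hypothetical u_c(s) in each state
b_next = np.array([-1.07, -1.07])    # b_{t+1}(s^t), constant in the limit

Euc = π[0] @ uc                      # E_t u_{c,t+1} (state-independent here)
R_cal = uc / (β * Euc)               # calligraphic R_t realized in each state s
B_cal = (β * Euc) * b_next           # calligraphic B_t = (β E_t u_{c,t+1}) b_{t+1}(s^t)

# A useful check: E[calligraphic R] = 1/β by construction
```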
In terms of their notation, equation (44) of [BEGS17] expresses the time t state s government budget constraint as

$$
\mathcal{B}(s \mid s_-) = \mathcal{R}_\tau(s \mid s_-)\, \mathcal{B}_- + \mathcal{X}_\tau(s)
$$

where $s_-$ is last period's Markov state
Asymptotic mean

$$
B^* = -\frac{\mathrm{cov}^{\infty}(\mathcal{R}, \mathcal{X})}{\mathrm{var}^{\infty}(\mathcal{R})}
\tag{9.145}
$$
where the superscript ∞ denotes a moment taken with respect to an ergodic distribution
Formula (9.145) presents B ∗ as a regression coefficient of Xt on Rt in the ergodic distribution
This regression coefficient emerges as the minimizer for a variance-minimization problem:

$$
B^* = \mathop{\mathrm{argmin}}_{\mathcal{B}} \; \mathrm{var}^{\infty}(\mathcal{R} \mathcal{B} + \mathcal{X})
\tag{9.146}
$$
$$
\hat b = \frac{B^*}{\beta E_t u_{c,t+1}}
\tag{9.147}
$$
Rate of convergence
BEGS also derive the following approximation to the rate of convergence to B ∗ from an arbitrary initial
condition
$$
\frac{E_t(\mathcal{B}_{t+1} - B^*)}{\mathcal{B}_t - B^*} \approx \frac{1}{1 + \beta^2 \mathrm{var}(\mathcal{R})}
\tag{9.148}
$$
For our example, we describe some code that we use to compute the steady state mean and the rate of
convergence to it
The values of π(s) are .5, .5
We can then construct X (s), R(s), uc (s) for our two states using the definitions above
We can then construct $\beta E_{t-1} u_c = \beta \sum_s u_c(s) \pi(s)$, $\mathrm{cov}(\mathcal{R}(s), \mathcal{X}(s))$, and $\mathrm{var}(\mathcal{R}(s))$ to be plugged into formulas (9.145) and (9.147)
We also want to compute var(X )
To compute the variances and covariance, we use the following standard formulas
Temporarily let x(s), s = 1, 2 be an arbitrary random variable
Then we define
$$
\mu_x = \sum_s x(s) \pi(s)
$$

$$
\mathrm{var}(x) = \left( \sum_s x(s)^2 \pi(s) \right) - \mu_x^2
$$

$$
\mathrm{cov}(x, y) = \left( \sum_s x(s) y(s) \pi(s) \right) - \mu_x \mu_y
$$
After we compute these moments, we compute the BEGS approximation to the asymptotic mean b̂ in formula
(9.147)
After that, we move on to compute B ∗ in formula (9.145)
We'll also evaluate the BEGS criterion (9.149) at the limiting value B ∗
Here are some functions that we'll use to compute key objects that we want
def mean(x):
'''Returns mean for x given initial state'''
x = np.array(x)
return x @ u.π[s]
def variance(x):
x = np.array(x)
return x**2 @ u.π[s] - mean(x)**2
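The formulas above also call for cov(R, X), which the two functions just shown do not cover; a companion function in the same style might look like this (written self-contained here, with an explicit probability vector standing in for the lecture's global `u.π[s]`):

```python
import numpy as np

π_s = np.array([0.5, 0.5])   # probabilities attached to the two states

def mean(x):
    return np.array(x) @ π_s

def covariance(x, y):
    x, y = np.array(x), np.array(y)
    return (x * y) @ π_s - mean(x) * mean(y)

# covariance(x, x) reduces to the variance formula used above
```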
Now let's form the two random variables R, X appearing in the BEGS approximating formulas
u = CRRAutility()
s = 0
c = [0.940580824225584, 0.8943592757759343]  # Consumption in each Markov state
g = u.G  # Government expenditures in each state
n = c + g  # Labor supply, from the resource constraint c + g = n
τ = lambda s: 1 + u.Un(1, n[s]) / u.Uc(c[s], 1)  # Flat tax rate in state s
R = [R_s(0), R_s(1)]
X = [X_s(0), X_s(1)]
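The helpers `R_s` and `X_s` called above are not shown in this extract; under the CRRA definitions and the relation $\mathcal X_t = u_{c,t}[g_t - \tau_t n_t]$, they might be sketched as follows (the consumption values are the reverse-engineered ones above):

```python
import numpy as np

β, σ, γ = 0.9, 2, 2
π_s = np.array([0.5, 0.5])
c = np.array([0.940580824225584, 0.8943592757759343])
g = np.array([0.1, 0.2])
n = c + g                 # labor supply from the resource constraint
uc = c**(-σ)              # marginal utility of consumption
τ = 1 - n**γ / uc         # flat tax rate: 1 + Un/Uc with Un = -n**γ

def R_s(s):
    # Effective return R(s) = u_c(s) / (β E u_c)
    return uc[s] / (β * (uc @ π_s))

def X_s(s):
    # Effective deficit X(s) = u_c(s) [g(s) - τ(s) n(s)]
    return uc[s] * (g[s] - τ[s] * n[s])

# E[R] = 1/β by construction
```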
Now let's compute the ingredients of the approximating limit and the approximating rate of convergence
bhat, b_bar
(-1.0757585378303758, -1.0757576567504166)
So we have
bhat - b_bar
-8.810799592140484e-07
-9.020562075079397e-17
This is machine zero, a verification that b̂ succeeds in minimizing the nonnegative fiscal cost criterion J(B ∗ )
defined in BEGS and in equation (9.149) above
Let's push our luck and compute the mean-reversion speed in the formula above equation (47) in [BEGS17]
Now let's compute the implied mean time to get to within .01 of the limit
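A sketch of that calculation, using the convergence formula (9.148) with an illustrative value for var(R) (not the value the lecture computes from the two-state distribution):

```python
import numpy as np

β = 0.9
var_R = 0.01                          # illustrative variance of R
rate = 1 / (1 + β**2 * var_R)         # per-period convergence factor, as in (9.148)

# Number of periods until the gap B_t - B* shrinks to 1% of its initial size
n_periods = np.log(0.01) / np.log(rate)
```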
The slow rate of convergence and the implied time of getting within one percent of the limiting value do a
good job of approximating our long simulation above
This lecture studies government debt in an AMSS economy [AMSS02] of the type described in Optimal
Taxation without State-Contingent Debt
We study the behavior of government debt as time t → +∞
We use these techniques
• simulations
• a regression coefficient from the tail of a long simulation that allows us to verify that the asymptotic
mean of government debt solves a fiscal-risk minimization problem
• an approximation to the mean of an ergodic distribution of government debt
• an approximation to the rate of convergence to an ergodic distribution of government debt
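The second bullet, a regression coefficient from the tail of a long simulation, can be sketched as follows; the simulated series here are synthetic stand-ins for the lecture's realized effective return and effective deficit paths:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic tail of a long simulation: effective return R and effective deficit X
R = 1.11 + 0.05 * rng.standard_normal(100_000)
X = 0.19 - 1.0 * (R - 1.11) + 0.01 * rng.standard_normal(100_000)

# Regression coefficient of X on R; per the asymptotic-mean formula, the
# ergodic mean of effective debt should be approximately minus this slope
slope = np.cov(R, X)[0, 1] / np.var(R, ddof=1)
B_star = -slope
```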
We apply tools applicable to more general incomplete markets economies that are presented on pages 648 -
650 in section III.D of [BEGS17] (BEGS)
We study an [AMSS02] economy with three Markov states driving government expenditures
• In a previous lecture, we showed that with only two Markov states, it is possible that eventually
endogenous interest rate fluctuations support complete markets allocations and Ramsey outcomes
• The presence of three states prevents the full spanning that eventually prevails in the two-state example
featured in Fiscal Insurance via Fluctuating Interest Rates
The lack of full spanning means that the ergodic distribution of the par value of government debt is nontrivial,
in contrast to the situation in Fiscal Insurance via Fluctuating Interest Rates where the ergodic distribution
of the par value is concentrated on one point
Nevertheless, [BEGS17] (BEGS) establish that, for general settings that include ours, the Ramsey planner steers government assets to a level that comes as close as possible to providing full spanning in a precise sense defined by BEGS that we describe below
We use code constructed in a previous lecture
Warning: Key equations in [BEGS17] section III.D carry typos that we correct below
As in Optimal Taxation without State-Contingent Debt and Optimal Taxation with State-Contingent Debt,
we assume that the representative agent has utility function
$$
u(c, n) = \frac{c^{1-\sigma}}{1-\sigma} - \frac{n^{1+\gamma}}{1+\gamma}
$$
We work directly with labor supply instead of leisure
We assume that
ct + gt = nt
β = 0.9
σ = 2
γ = 2
import numpy as np

class CRRAutility:

    def __init__(self,
                 β=0.9,
                 σ=2,
                 γ=2,
                 π=0.5*np.ones((2, 2)),
                 G=np.array([0.1, 0.2]),
                 Θ=np.ones(2),
                 transfers=False):
        self.β, self.σ, self.γ = β, σ, γ
        self.π, self.G, self.Θ, self.transfers = π, G, Θ, transfers

    # Utility function
    def U(self, c, n):
        σ = self.σ
        if σ == 1.:
            U = np.log(c)
        else:
            U = (c**(1 - σ) - 1) / (1 - σ)
        return U - n**(1 + self.γ) / (1 + self.γ)

    # Derivatives of the utility function
    def Uc(self, c, n):
        return c**(-self.σ)

    def Ucc(self, c, n):
        return -self.σ * c**(-self.σ - 1)

    def Un(self, c, n):
        return -n**self.γ

    def Unn(self, c, n):
        return -self.γ * n**(self.γ - 1)
We'll want first and second moments of some key random variables below
The following code computes these moments; the code is recycled from Fiscal Insurance via Fluctuating Interest Rates
def mean(x, s):
'''Returns mean for x given initial state'''
x = np.array(x)
return x @ u.π[s]

def variance(x, s):
x = np.array(x)
return x**2 @ u.π[s] - mean(x, s)**2
import numpy as np
from scipy.optimize import root
from quantecon import MarkovChain
class SequentialAllocation:
'''
Class that takes CESutility or BGPutility object as input returns
planner's allocation as a function of the multiplier on the
implementability constraint µ.
'''
def find_first_best(self):
'''
Find the first best allocation
'''
model = self.model
S, Θ, G = self.S, self.Θ, self.G
Uc, Un = model.Uc, model.Un
def res(z):
c = z[:S]
n = z[S:]
return np.hstack([Θ * Uc(c, n) + Un(c, n), Θ * n - c - G])
res = root(res, 0.5 * np.ones(2 * S))  # solve the first-best FOCs
if not res.success:
raise Exception('Could not find first best')
self.cFB = res.x[:S]
self.nFB = res.x[S:]
def FOC(z):
c = z[:S]
n = z[S:2 * S]
Ξ = z[2 * S:]
return np.hstack([Uc(c, n) - µ * (Ucc(c, n) * c + Uc(c, n)) - Ξ,  # FOC of c
Un(c, n) - µ * (Unn(c, n) * n + Un(c, n)) + Θ * Ξ,  # FOC of n
Θ * n - c - G])  # feasibility
# Compute x
I = Uc(c, n) * c + Un(c, n) * n
x = np.linalg.solve(np.eye(S) - self.β * self.π, I)
return c, n, x, Ξ
µ, c, n, Ξ = z
xprime = self.time1_allocation(µ)[2]
return np.hstack([Uc(c, n) * (c - B_) + Un(c, n) * n + β * π[s_0] @ xprime,
Uc(c, n) - µ * (Ucc(c, n) * (c - B_) + Uc(c, n)) - Ξ,
Un(c, n) - µ * (Unn(c, n) * n + Un(c, n)) + Θ[s_0] * Ξ,
(Θ * n - c - G)[s_0]])
# Find root
res = root(FOC, np.array(
[0, self.cFB[s_0], self.nFB[s_0], self.ΞFB[s_0]]))
if not res.success:
raise Exception('Could not find time 0 LS allocation.')
return res.x
if sHist is None:
sHist = self.mc.simulate(T, s_0)
# Time 0
µ, cHist[0], nHist[0], _ = self.time0_allocation(B_, s_0)
THist[0] = self.T(cHist[0], nHist[0])[s_0]
Bhist[0] = B_
µHist[0] = µ
# Time 1 onward
for t in range(1, T):
c, n, x, Ξ = self.time1_allocation(µ)
T = self.T(c, n)
u_c = Uc(c, n)
s = sHist[t]
Eu_c = π[sHist[t - 1]] @ u_c
cHist[t], nHist[t], Bhist[t], THist[t] = c[s], n[s], x[s] / \
u_c[s], T[s]
RHist[t - 1] = Uc(cHist[t - 1], nHist[t - 1]) / (β * Eu_c)
µHist[t] = µ
class RecursiveAllocationAMSS:
def solve_time1_bellman(self):
'''
Solve the time 1 Bellman equation for calibration model and
initial grid µgrid0
'''
model, µgrid0 = self.model, self.µgrid
π = model.π
S = len(model.π)
c, n = np.vstack(c).T, np.vstack(n).T
x, V = np.hstack(x), np.hstack(V)
xprimes = np.vstack([x] * S)
cf.append(interp(x, c))
nf.append(interp(x, n))
Vf.append(interp(x, V))
xgrid.append(x)
xprimef.append(interp(x, xprimes))
cf, nf, xprimef = fun_vstack(cf), fun_vstack(nf), fun_vstack(xprimef)
Vf = fun_hstack(Vf)
policies = [cf, nf, xprimef]
# Create xgrid
x = np.vstack(xgrid).T
xbar = [x.min(0).max(), x.max(0).min()]
xgrid = np.linspace(xbar[0], xbar[1], len(µgrid0))
self.xgrid = xgrid
print(diff)
Vf = Vfnew
if sHist is None:
sHist = simulate_markov(π, s_0, T)
# time 1 onward
for t in range(1, T):
s_, x, s = sHist[t - 1], xHist[t - 1], sHist[t]
c, n, xprime, T = cf[s_, :](x), nf[s_, :](
x), xprimef[s_, :](x), Tf[s_, :](x)
T = self.T(c, n)[s]
u_c = Uc(c, n)
Eu_c = π[s_, :] @ u_c
µHist[t] = self.Vf[s](xprime[s])
class BellmanEquation:
'''
Bellman equation for the continuation of the Lucas-Stokey Problem
'''
self.z0 = {}
cf, nf, xprimef = policies0
for s_ in range(self.S):
for x in xgrid:
self.z0[x, s_] = np.hstack([cf[s_, :](x),
nf[s_, :](x),
xprimef[s_, :](x),
np.zeros(self.S)])
self.find_first_best()
def find_first_best(self):
'''
Find the first best allocation
'''
model = self.model
S, Θ, Uc, Un, G = self.S, self.Θ, model.Uc, model.Un, self.G
def res(z):
c = z[:S]
n = z[S:]
return np.hstack([Θ * Uc(c, n) + Un(c, n), Θ * n - c - G])
res = root(res, 0.5 * np.ones(2 * S))  # solve the first-best FOCs
if not res.success:
raise Exception('Could not find first best')
self.cFB = res.x[:S]
self.nFB = res.x[S:]
IFB = Uc(self.cFB, self.nFB) * self.cFB + \
Un(self.cFB, self.nFB) * self.nFB
self.zFB = {}
for s in range(S):
self.zFB[s] = np.hstack(
[self.cFB[s], self.nFB[s], self.π[s] @ self.xFB, 0.])
def objf(z):
c, n, xprime = z[:S], z[S:2 * S], z[2 * S:3 * S]
Vprime = np.empty(S)
for s in range(S):
Vprime[s] = Vf[s](xprime[s])
def cons(z):
c, n, xprime, T = z[:S], z[S:2 * S], z[2 * S:3 * S], z[3 * S:]
u_c = Uc(c, n)
Eu_c = π[s_] @ u_c
return np.hstack([
x * u_c / Eu_c - u_c * (c - T) - Un(c, n) * n - β * xprime,
Θ * n - c - G])
if model.transfers:
bounds = [(0., 100)] * S + [(0., 100)] * S + \
[self.xbar] * S + [(0., 100.)] * S
else:
bounds = [(0., 100)] * S + [(0., 100)] * S + \
[self.xbar] * S + [(0., 0.)] * S
out, fx, _, imode, smode = fmin_slsqp(objf, self.z0[x, s_],
f_eqcons=cons, bounds=bounds,
full_output=True, iprint=0,
acc=self.tol, iter=self.maxiter)
if imode > 0:
raise Exception(smode)
def objf(z):
c, n, xprime = z[:-1]
def cons(z):
c, n, xprime, T = z
return np.hstack([
-Uc(c, n) * (c - B_ - T) - Un(c, n) * n - β * xprime,
(Θ * n - c - G)[s0]])
if model.transfers:
bounds = [(0., 100), (0., 100), self.xbar, (0., 100.)]
else:
bounds = [(0., 100), (0., 100), self.xbar, (0., 0.)]
out, fx, _, imode, smode = fmin_slsqp(objf, self.zFB[s0], f_eqcons=cons,
bounds=bounds, full_output=True, iprint=0)
if imode > 0:
raise Exception(smode)
class interpolate_wrapper:
def transpose(self):
self.F = self.F.transpose()
def __len__(self):
return len(self.F)
class interpolator_factory:
def fun_vstack(fun_list):
def fun_hstack(fun_list):
return sHist
Next, we show code that we use to generate a very long simulation starting from initial government debt
equal to −.5
Here is a graph of a long simulation of 102000 periods
sim_seq_long = log_sequential.simulate(0.5, 0, T)
sHist_long = sim_seq_long[-3]
sim_bel_long = log_bellman.simulate(0.5, 0, T, sHist_long)
The black vertical line denotes the sample mean for the last 100,000 observations included in the histogram; the green vertical line denotes the value of $\frac{B^*}{E u_c}$, associated with the sample (presumably) from the ergodic distribution, where B∗ is the regression coefficient described below; the red vertical line denotes an approximation by [BEGS17] to the mean of the ergodic distribution that can be precomputed before sampling from the ergodic distribution, as described below
Before moving on to discuss the histogram and the vertical lines approximating the ergodic mean of government debt in more detail, the following graphs show government debt and taxes early in the simulation, for periods 1-100 and 101 to 200 respectively
titles = ['Government Debt', 'Tax Rate']
axes[i].set(title=titles[i])
axes[i+2].set(title=titles[i])
axes[i].grid()
axes[i+2].grid()
For the short samples early in our simulated sample of 102,000 observations, fluctuations in government
debt and the tax rate conceal the weak but inexorable force that the Ramsey planner puts into both series
driving them toward ergodic distributions far from these early observations
• early observations are more influenced by the initial value of the par value of government debt than
by the ergodic mean of the par value of government debt
• much later observations are more influenced by the ergodic mean and are independent of the initial
value of the par value of government debt
[BEGS17] call $\mathcal{X}_t$ the effective government deficit, and $\mathcal{B}_t$ the effective government debt
Equation (44) of [BEGS17] expresses the time t state s government budget constraint as

$$
\mathcal{B}(s \mid s_-) = \mathcal{R}_\tau(s \mid s_-)\, \mathcal{B}_- + \mathcal{X}_\tau(s)
\tag{9.150}
$$

where the dependence on τ is to remind us that these objects depend on the tax rate; $s_-$ is last period's Markov state
BEGS interpret random variations in the right side of (9.150) as fiscal risks generated by
• interest-rate-driven fluctuations in time t effective payments due on the government portfolio, namely,
Rτ (s, s− )B− , and
• fluctuations in the effective government deficit Xt
Asymptotic mean
BEGS give conditions under which the ergodic mean of Bt approximately satisfies the equation
$$
B^* = -\frac{\mathrm{cov}^{\infty}(\mathcal{R}_t, \mathcal{X}_t)}{\mathrm{var}^{\infty}(\mathcal{R}_t)}
\tag{9.151}
$$
where the superscript ∞ denotes a moment taken with respect to an ergodic distribution
Formula (9.151) represents B ∗ as a regression coefficient of Xt on Rt in the ergodic distribution
Regression coefficient B ∗ solves a variance-minimization problem:

$$
B^* = \mathop{\mathrm{argmin}}_{\mathcal{B}} \; \mathrm{var}^{\infty}(\mathcal{R} \mathcal{B} + \mathcal{X})
\tag{9.152}
$$
The minimand in criterion (9.152) measures fiscal risk associated with a given tax-debt policy that appears
on the right side of equation (9.150)
Expressing formula (9.151) in terms of our notation tells us that the ergodic mean of the par value b of
government debt in the AMSS model should approximately equal
$$
\hat b = \frac{B^*}{\beta E(E_t u_{c,t+1})} = \frac{B^*}{\beta E(u_{c,t+1})}
\tag{9.153}
$$
where mathematical expectations are taken with respect to the ergodic distribution
Rate of convergence
BEGS also derive the following approximation to the rate of convergence to B ∗ from an arbitrary initial
condition
$$
\frac{E_t(\mathcal{B}_{t+1} - B^*)}{\mathcal{B}_t - B^*} \approx \frac{1}{1 + \beta^2 \mathrm{var}^{\infty}(\mathcal{R})}
\tag{9.154}
$$
The remainder of this lecture is about technical material based on formulas from [BEGS17]
The topic is interpreting and extending formula (9.152) for the ergodic mean B ∗
Attributes of the ergodic distribution for $\mathcal{B}_t$ appear on the right side of formula (9.152) for the ergodic mean B ∗
Thus, formula (9.152) is not useful for estimating the mean of the ergodic distribution in advance of actually computing the ergodic distribution
• we need to know the ergodic distribution to compute the right side of formula (9.152)
So the primary use of equation (9.152) is that it confirms that the ergodic distribution solves a fiscal-risk minimization problem
As an example, notice how we used the formula for the mean of $\mathcal{B}$ in the ergodic distribution of the special AMSS economy in Fiscal Insurance via Fluctuating Interest Rates
• first we computed the ergodic distribution using a reverse-engineering construction
• then we verified that B ∗ agrees with the mean of that distribution
Approximating B ∗
[BEGS17] propose an approximation to B ∗ that can be computed without first knowing the ergodic distribution
To construct the BEGS approximation to B ∗ , we just follow steps set forth on pages 648 - 650 of section
III.D of [BEGS17]
• notation in BEGS might be confusing at first sight, so it is important to stare at it and digest it before computing
• there are also some sign errors in the [BEGS17] text that we'll want to correct
Here is a step-by-step description of the [BEGS17] approximation procedure
Step by step
$$
\mathcal{R}_\tau(s) = \frac{c_\tau(s)^{-\sigma}}{\beta \sum_{s'=1}^{S} c_\tau(s')^{-\sigma} \pi(s')}
$$

and

$$
\mathcal{X}_\tau(s) = c_\tau(s)^{-\sigma} \left[ g(s) - \tau\, n_\tau(s) \right]
$$

each for s = 1, . . . , S
BEGS call Rτ (s) the effective return on risk-free debt and they call Xτ (s) the effective government deficit
Step 3: With the preceding objects in hand, for a given B, we seek a τ that satisfies
$$
B = -\frac{\beta}{1-\beta} E \mathcal{X}_\tau \equiv -\frac{\beta}{1-\beta} \sum_s \mathcal{X}_\tau(s) \pi(s)
$$

This equation says that at a constant discount factor β, effective government debt B equals the present value of the mean effective government surplus

Typo alert: there is a sign error in equation (46) of [BEGS17] – the left side should be multiplied by −1
• We have made this correction in the above equation
For a given B, let a τ that solves the above equation be called τ (B)
We'll use a Python root solver to find a τ that solves this equation for a given B
We'll use this function to induce a function τ (B)
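A minimal sketch of that root-solving step; the deficit function here fixes labor at n = 1 purely for illustration, rather than solving for $c_\tau(s)$ state by state as the lecture does:

```python
import numpy as np
from scipy.optimize import brentq

β, σ = 0.9, 2
π_s = np.array([0.5, 0.5])
g = np.array([0.1, 0.2])

def EX(τ):
    # Hypothetical mean effective deficit E X_τ with labor fixed at n = 1
    n = np.ones(2)
    c = n - g
    uc = c**(-σ)
    return (uc * (g - τ * n)) @ π_s

def τ_of_B(B):
    # Solve B = -β/(1-β) E X_τ for τ, given B
    return brentq(lambda τ: B + β / (1 - β) * EX(τ), -2.0, 2.0)
```

A call like `τ_of_B(1.0)` then returns the tax rate at which the present value of the mean effective surplus equals the given B.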
Step 4: With a Python program that computes τ (B) in hand, next we write a Python function to compute
the random variable
Step 5: Now that we have a machine to compute the random variable J(B)(s), s = 1, . . . , S, via a composition of Python functions, we can use the population variance function that we defined in the code above to construct a function var(J(B))
We put var(J(B)) into a function minimizer and compute

$$
B^* = \mathop{\mathrm{argmin}}_{B} \; \mathrm{var}(J(B))
$$
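That minimization can be sketched with `scipy.optimize.minimize_scalar`; the R and X values below are hypothetical and, unlike the lecture's J(B), they are held fixed rather than recomputed through τ(B) at each candidate B:

```python
import numpy as np
from scipy.optimize import minimize_scalar

π_s = np.array([0.5, 0.5])
R = np.array([1.05, 1.18])   # hypothetical effective returns by state
X = np.array([0.06, 0.19])   # hypothetical effective deficits by state

def var(z):
    z = np.asarray(z)
    return z**2 @ π_s - (z @ π_s)**2

def cov(x, y):
    return (x * y) @ π_s - (x @ π_s) * (y @ π_s)

res = minimize_scalar(lambda B: var(R * B + X), bounds=(-5, 5), method='bounded')
B_star = res.x

# Sanity check: the minimizer equals the regression coefficient -cov(R, X)/var(R)
```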
Step 6: Next we take the minimizer B ∗ and the Python functions for computing means and variances and
compute
$$
\mathrm{rate} = \frac{1}{1 + \beta^2 \mathrm{var}(\mathcal{R}_{\tau(B^*)})}
$$
Ultimate outputs of this string of calculations are two scalars
(B ∗ , rate)
$$
\mathrm{div} = \beta E u_{c,t+1}
$$

and then compute the mean of the par value of government debt in the AMSS model

$$
\hat b = \frac{B^*}{\mathrm{div}}
$$
In the two-Markov-state AMSS economy in Fiscal Insurance via Fluctuating Interest Rates, Et uc,t+1 =
Euc,t+1 in the ergodic distribution and we have confirmed that this formula very accurately describes a
constant par value of government debt that
• supports full fiscal insurance via fluctuating interest rates, and
• is the limit of government debt as t → +∞
In the three-Markov-state economy of this lecture, the par value of government debt fluctuates in a history-
dependent way even asymptotically
In this economy, b̂ given by the above formula approximates the mean of the ergodic distribution of the par
value of government debt
• this is the red vertical line plotted in the histogram of the last 100,000 observations of our simulation
of the par value of government debt plotted above
• the approximation is fairly accurate but not perfect
• so while the approximation circumvents the chicken and egg problem surrounding the much better
approximation associated with the green vertical line, it does so by enlarging the approximation error
Execution
Step 1
nfev: 11
qtf: array([1.55568331e-08, 1.28322481e-08, 7.89913426e-11])
r: array([ 4.26943131, 0.08684775, -0.06300593, -4.71278821, -0.0743338, -5.50778548])
status: 1
success: True
x: array([0.93852387, 0.89231015, 0.84858872])
Step 2
Code
c**(-u.σ) @ u.π
u.π
s = 0
R, X = compute_R_X(τ , u, s)
mean(R,s)
1.1111111111111112
mean(X,s)
0.19134248445303795
X @ u.π
Step 3
s = 0
B = 1.0
0.2740159773695818
Step 4
min_J(B, u, s)
0.035564405653720765
Step 6
-1.199483167941158
B_hat = B_star/div
B_hat
-1.0577661126390971
0.09572916798461703
0.9931353432732218
9.8.1 Overview
This lecture describes how Chang [Cha98] analyzed competitive equilibria and a best competitive equilib-
rium called a Ramsey plan
He did this by
• characterizing a competitive equilibrium recursively in a way also employed in dynamic Stackelberg
problems, the Calvo model, and history dependent public policies lecture to pose Stackelberg prob-
lems in linear economies, and then
• appropriately adapting an argument of Abreu, Pearce, and Stachetti [APS90] to describe key features
of the set of competitive equilibria
Roberto Chang [Cha98] chose a model of Calvo [Cal78] as a simple structure that conveys ideas that apply
more broadly
A textbook version of Chang's model appears in chapter 25 of [LS18]
This lecture and Credible Government Policies in Chang Model can be viewed as more sophisticated and
complete treatments of the topics discussed in Ramsey plans, time inconsistency, sustainable plans
Both this lecture and Credible Government Policies in Chang Model make extensive use of an idea to which
we apply the nickname dynamic programming squared
In dynamic programming squared problems there are typically two interrelated Bellman equations
• A Bellman equation for a set of agents or followers with value function v_a

• A Bellman equation for a principal or Ramsey planner or Stackelberg leader with value function v_p in which v_a appears as an argument
We encountered problems with this structure in dynamic Stackelberg problems, optimal taxation with state-
contingent debt, and other lectures
The setting
An infinitely lived representative agent and an infinitely lived government exist at dates t = 0, 1, . . .
The objects in play are
• an initial quantity M−1 of nominal money holdings
• a sequence of inverse money growth rates \vec h and an associated sequence of nominal money holdings \vec M
The ultimate source of time inconsistency is that a time 0 Ramsey planner takes these effects into account
in designing a plan of government actions for t ≥ 0
9.8.2 Setting
A representative household faces a nonnegative value of money sequence ⃗q and sequences ⃗y , ⃗x of income
and total tax collections, respectively
The household chooses nonnegative sequences \vec c, \vec M of consumption and nominal balances, respectively, to maximize

\sum_{t=0}^{\infty} \beta^t \left[ u(c_t) + v(q_t M_t) \right] \qquad (9.155)
subject to
qt Mt ≤ yt + qt Mt−1 − ct − xt (9.156)
and
qt Mt ≤ m̄ (9.157)
Here qt is the reciprocal of the price level at t, which we can also call the value of money
Chang [Cha98] assumes that
• u : R+ → R is twice continuously differentiable, strictly concave, and strictly increasing;
• v : R+ → R is twice continuously differentiable and strictly concave;
• lim_{c→0} u′(c) = lim_{m→0} v′(m) = +∞;

• there is a finite level m = m^f such that v′(m^f) = 0
The household carries real balances out of a period equal to mt = qt Mt
Inequality (9.156) is the household's time t budget constraint
It tells how real balances qt Mt carried out of period t depend on income, consumption, taxes, and real
balances qt Mt−1 carried into the period
Equation (9.157) imposes an exogenous upper bound m̄ on the household's choice of real balances, where m̄ ≥ m^f
Government
The government chooses a sequence of inverse money growth rates with time t component h_t ≡ M_{t-1}/M_t ∈ Π ≡ [\underline{\pi}, \overline{\pi}], where 0 < \underline{\pi} < 1 < 1/\beta ≤ \overline{\pi}
The government faces a sequence of budget constraints with time t component
−xt = mt (1 − ht ) (9.158)
The restrictions m_t ∈ [0, m̄] and h_t ∈ Π evidently imply that x_t ∈ X ≡ [(\underline{\pi} − 1)m̄, (\overline{\pi} − 1)m̄]
We define the set E ≡ [0, m̄] × Π × X, so that we require that (m, h, x) ∈ E
To represent the idea that taxes are distorting, Chang makes the following assumption about outcomes for
per capita output:
yt = f (xt ), (9.159)
where f : R → R satisfies f (x) > 0, is twice continuously differentiable, f ′′ (x) < 0, and f (x) = f (−x)
for all x ∈ R, so that subsidies and taxes are equally distorting
Calvo's and Chang's purpose is not to model the causes of tax distortions in any detail but simply to summarize the outcome of those distortions via the function f (x)
A key part of the specification is that tax distortions are increasing in the absolute value of tax revenues
Ramsey plan: A Ramsey plan is a competitive equilibrium that maximizes (9.155)
Within-period timing of decisions is as follows:
• first, the government chooses ht and xt ;
• then, given ⃗q and its expectations about future values of x and y, the household chooses M_t and therefore m_t because m_t = q_t M_t ;
• then output yt = f (xt ) is realized;
• finally ct = yt
This within-period timing confronts the government with choices framed by how the private sector wants to
respond when the government takes time t actions that differ from what the private sector had expected
This consideration will be important in lecture credible government policies when we study credible gov-
ernment policies
The model is designed to focus on the intertemporal trade-offs between the welfare benefits of deflation
and the welfare costs associated with the high tax collections required to retire money at a rate that delivers
deflation
A benevolent time 0 government can promote utility generating increases in real balances only by imposing
sufficiently large distorting tax collections
To promote the welfare increasing effects of high real balances, the government wants to induce gradual
deflation
Household's problem

L = \max_{\vec c, \vec M}\ \min_{\vec \lambda, \vec \mu}\ \sum_{t=0}^{\infty} \beta^t \Big\{ u(c_t) + v(M_t q_t) + \lambda_t \left[ y_t - c_t - x_t + q_t M_{t-1} - q_t M_t \right] + \mu_t \left[ \bar m - q_t M_t \right] \Big\}
u′ (ct ) = λt
qt [u′ (ct ) − v ′ (Mt qt )] ≤ βu′ (ct+1 )qt+1 , = if Mt qt < m̄
The last equation expresses Karush-Kuhn-Tucker complementary slackness conditions (see here)
These insist that the inequality is an equality at an interior solution for Mt
Using h_t = M_{t-1}/M_t and q_t = m_t/M_t in these first-order conditions and rearranging implies
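Filling in the elided algebra: multiplying the second first-order condition by M_t, and using m_t = q_t M_t, h_{t+1} = M_t/M_{t+1}, x_{t+1} = m_{t+1}(h_{t+1} − 1), and c_t = f(x_t), gives (a reconstruction consistent with the Euler restriction that reappears in the operator definitions later in the lecture)

```latex
m_t \left[ u'(f(x_t)) - v'(m_t) \right]
  \le \beta \, u'(f(x_{t+1})) \, (m_{t+1} + x_{t+1})
  \equiv \beta \, \theta_{t+1},
  \qquad \text{with equality if } m_t < \bar m ,
```

since q_{t+1} M_t = h_{t+1} m_{t+1} = m_{t+1} + x_{t+1}.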
This is real money balances at time t + 1 measured in units of marginal utility, which Chang refers to as the
marginal utility of real balances
From the standpoint of the household at time t, equation (9.161) shows that θ_{t+1} intermediates the influences of (\vec x_{t+1}, \vec m_{t+1}) on the household's choice of real balances m_t

By intermediates we mean that the future paths (\vec x_{t+1}, \vec m_{t+1}) influence m_t entirely through their effects on the scalar θ_{t+1}
The observation that the one dimensional promised marginal utility of real balances θt+1 functions in this
way is an important step in constructing a class of competitive equilibria that have a recursive representation
A closely related observation pervaded the analysis of Stackelberg plans in lecture dynamic Stackelberg
problems
Definition:
• A government policy is a pair of sequences (⃗h, ⃗x) where ht ∈ Π ∀t ≥ 0
• A price system is a nonnegative value of money sequence ⃗q
• An allocation is a triple of nonnegative sequences (\vec c, \vec m, \vec y)
It is required that time t components (mt , xt , ht ) ∈ E
Definition:
Given M_{-1}, a government policy (\vec h, \vec x), price system \vec q, and allocation (\vec c, \vec m, \vec y) are said to be a competitive equilibrium if

• m_t = q_t M_t and y_t = f(x_t)

• The government budget constraint is satisfied

• Given \vec q, \vec x, \vec y, (\vec c, \vec m) solves the household's problem
h_t = h(θ_t)
m_t = m(θ_t)
x_t = x(θ_t) \qquad (9.162)
θ_{t+1} = Ψ(θ_t)
starting from θ0
The range and domain of Ψ(·) are both Ω
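Once candidate (h, m, x, Ψ) functions are in hand, simulating the recursive representation is a short loop. The policy functions below are hypothetical placeholders, chosen only so that Ψ maps the illustrative interval Ω = [0.01, 0.05] into itself; the lecture's actual policies come out of the Bellman-equation code later.

```python
# Hypothetical policy functions on an illustrative Ω = [0.01, 0.05]
Ψ = lambda θ: 0.02 + 0.5 * θ        # θ' = Ψ(θ); maps Ω into itself
h = lambda θ: 0.9 + θ               # h_t = h(θ_t)
m = lambda θ: 10 * θ                # m_t = m(θ_t)
x = lambda θ: m(θ) * (h(θ) - 1)     # x_t = x(θ_t), using the identity x = m(h - 1)

θ = 0.03                            # θ_0
path = []
for t in range(5):
    path.append((θ, h(θ), m(θ), x(θ)))
    θ = Ψ(θ)                        # iterate the promised marginal utility forward
```

The loop records (θ_t, h_t, m_t, x_t) each period; all the history dependence of the plan is carried by the single scalar θ_t.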
3. A recursive representation of a Ramsey plan
• The Ramsey planner chooses θ0 , (h, m, x, Ψ) from among the set of recursive competitive equi-
libria at time 0
• Iterations on the function Ψ determine subsequent θt s that summarize the aspects of the contin-
uation competitive equilibria that influence the households decisions
• At time 0, the Ramsey planner commits to this implied sequence {θ_t}_{t=0}^{\infty} and therefore to an associated sequence of continuation competitive equilibria
4. A characterization of time-inconsistency of a Ramsey plan
• Imagine that after a revolution at time t ≥ 1, a new Ramsey planner is given the opportunity to
ignore history and solve a brand new Ramsey plan
• This new planner would want to reset the θt associated with the original Ramsey plan to θ0
• The incentive to reinitialize θt associated with this revolution experiment indicates the time-
inconsistency of the Ramsey plan
• By resetting θ to θ0 , the new planner avoids the costs at time t that the original Ramsey planner
must pay to reap the beneficial effects that the original Ramsey plan for s ≥ t had achieved via
its influence on the households decisions for s = 0, . . . , t − 1
9.8.5 Analysis
Equation (9.160) inherits from the household's Euler equation for money holdings the property that the value of m_0 consistent with the representative household's choices depends on (\vec h_1, \vec m_1)

This dependence is captured in the definition above by making Ω be the set of first period values of θ_0 satisfying θ_0 = u′(f(x_0))(m_0 + x_0) for first period component (m_0, h_0) of competitive equilibrium sequences (\vec m, \vec x, \vec h)
Chang establishes that Ω is a nonempty and compact subset of R+
Next Chang advances:
Definition: Γ(θ) = {(\vec m, \vec x, \vec h) ∈ CE | θ = u′(f(x_0))(m_0 + x_0)}

Thus, Γ(θ) is the set of competitive equilibrium sequences (\vec m, \vec x, \vec h) whose first period components (m_0, h_0) deliver the prescribed value θ for first period marginal utility
If we knew the sets Ω, Γ(θ), we could use the following two-step procedure to find at least the value of the
Ramsey outcome to the representative household
1. Find the indirect value function w(θ) defined as
w(\theta) = \max_{(\vec m, \vec x, \vec h) \in \Gamma(\theta)} \sum_{t=0}^{\infty} \beta^t \left[ u(f(x_t)) + v(m_t) \right]

w(\theta) = \max_{x, m, h, \theta'} \big\{ u(f(x)) + v(m) + \beta w(\theta') \big\} \qquad (9.163)
and
θ = u′ (f (x))(m + x) (9.165)
and
−x = m(1 − h) (9.166)
and
Before we use this proposition to recover a recursive representation of the Ramsey plan, note that the propo-
sition relies on knowing the set Ω
To find Ω, Chang uses the insights of Kydland and Prescott [KP80a] together with a method based on the
Abreu, Pearce, and Stacchetti [APS90] iteration to convergence on an operator B that maps continuation
values into values
We want an operator that maps a continuation θ into a current θ
Chang lets Q be a nonempty, bounded subset of R
Elements of the set Q are taken to be candidate values for continuation marginal utilities
Chang defines an operator
Let h^t = (h_0, h_1, . . . , h_t) denote a history of inverse money creation rates with time t component h_t ∈ Π

A government strategy σ = {σ_t}_{t=0}^{\infty} is a σ_0 ∈ Π and for t ≥ 1 a sequence of functions σ_t : Π^{t-1} → Π
In words, CEπ is the set of money growth sequences consistent with the existence of competitive equilibria
In words, CEπ0 is the set of all first period money growth rates h = h0 , each of which is consistent with
the existence of a sequence of money growth rates ⃗h starting from h0 in the initial period and for which a
competitive equilibrium exists
Remark: CEπ0 = {h ∈ Π : there is (m, θ′) ∈ [0, m̄] × Ω such that m(u′[f((h − 1)m)] − v′(m)) ≤ βθ′ with equality if m < m̄}
Definition: An allocation rule is a sequence of functions \vec α = {α_t}_{t=0}^{\infty} such that α_t : Π^t → [0, m̄] × X

Thus, the time t component of α_t(h^t) is a pair of functions (m_t(h^t), x_t(h^t))
Definition: Given an admissible government strategy σ, an allocation rule α is called competitive if given
any history ⃗ht−1 and ht ∈ CEπ0 , the continuations of σ and α after (⃗ht−1 , ht ) induce a competitive equilib-
rium sequence
Another operator
At this point it is convenient to introduce another operator that can be used to compute a Ramsey plan
For computing a Ramsey plan, this operator is wasteful because it works with a state vector that is bigger
than necessary
We introduce this operator because it helps to prepare the way for Chang's operator called D̃(Z) that we shall describe in lecture credible government policies

It is also useful because a fixed point of the operator to be defined here provides a good guess for an initial set from which to initiate iterations on Chang's set-to-set operator D̃(Z) to be described in lecture credible government policies
Let S be the set of all pairs (w, θ) of competitive equilibrium values and associated initial marginal utilities
Let W be a bounded set of values in R
Let Z be a nonempty subset of W × Ω
Think of using pairs (w′, θ′) drawn from Z as candidate continuation (w, θ) pairs
Define the operator
{
D(Z) = (w, θ) : there is h ∈ CEπ0
such that
It is possible to establish
Proposition:
1. If Z ⊂ D(Z), then D(Z) ⊂ S (self-generation)
2. S = D(S) (factorization)
Proposition:
1. Monotonicity of D: Z ⊂ Z ′ implies D(Z) ⊂ D(Z ′ )
2. Z compact implies that D(Z) is compact
It can be shown that S is compact and that therefore there exists a (w, θ) pair within this set that attains the
highest possible value w
This (w, θ) pair is associated with a Ramsey plan
Further, we can compute S by iterating to convergence on D provided that one begins with a sufficiently
large initial set S0
As a very useful by-product, the algorithm that finds the largest fixed point S = D(S) also produces the
Ramsey plan, its value w, and the associated competitive equilibrium
D(Z) = {(w, θ) : ∃h ∈ CEπ0 and (m(h), x(h), w′ (h), θ′ (h)) ∈ [0, m̄] × X × Z
such that
θ = u′ (f (x(h)))(m(h) + x(h))
x(h) = m(h)(h − 1)
m(h)(u′ (f (x(h))) − v ′ (m(h))) ≤ βθ′ (h) (with equality if m(h) < m̄)}
We noted that the set S can be found by iterating to convergence on D, provided that we start with a
sufficiently large initial set S0
Our implementation builds on ideas in this notebook
To find S we use a numerical algorithm called the outer hyperplane approximation algorithm
It was invented by Judd, Yeltekin, Conklin [JYC03]
This algorithm constructs the smallest convex set that contains the fixed point of the D(S) operator
Given that we are finding the smallest convex set that contains S, we can represent it on a computer as the
intersection of a finite number of half-spaces
Let H be a set of subgradients, and C be a set of hyperplane levels
We approximate S by:
A key feature of this algorithm is that we discretize the action space, i.e., we create a grid of possible values
for m and h (note that x is implied by m and h). This discretization simplifies computation of S̃ by allowing
us to find it by solving a sequence of linear programs
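Concretely, the half-space representation means the approximating set is S̃ = {z : Hz ≤ C}, so testing whether a point belongs to it reduces to a single matrix inequality. A minimal numpy sketch with subgradients on the unit circle and made-up hyperplane levels C:

```python
import numpy as np

# Subgradients evenly spaced on the unit circle, one outward normal per row
N = 8
angles = np.linspace(0, 2 * np.pi, N, endpoint=False)
H = np.column_stack([np.cos(angles), np.sin(angles)])
C = np.ones(N)                     # made-up levels: here the set is a unit "octagon"

def in_S_tilde(z):
    """z is in the approximating set iff H z <= C holds for every hyperplane."""
    return bool(np.all(H @ z <= C + 1e-12))
```

In the algorithm itself only the vector C changes from iteration to iteration; H stays fixed, so shrinking the set means lowering hyperplane levels.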
The outer hyperplane approximation algorithm proceeds as follows:
1. Initialize subgradients, H, and hyperplane levels, C0
2. Given a set of subgradients, H, and hyperplane levels, Ct , for each subgradient hi ∈ H:
• Solve a linear program (described below) for each action in the action space
• Find the maximum and update the corresponding hyperplane level, Ci,t+1
3. If |Ct+1 − Ct | > ϵ, return to 2
Step 1 simply creates a large initial set S0 .
Given some set St , Step 2 then constructs the set St+1 = D(St ). The linear program in Step 2 is designed
to construct a set St+1 that is as large as possible while satisfying the constraints of the D(S) operator
To do this, for each subgradient hi , and for each point in the action space (mj , hj ), we solve the following
problem:
\max_{[w', \theta']} \; h_i \cdot (w, \theta)

subject to

H \cdot (w', \theta') \le C_t

\theta = u'(f(x_j))(m_j + x_j)

x_j = m_j (h_j - 1)

m_j \left( u'(f(x_j)) - v'(m_j) \right) \le \beta \theta' \quad (= \text{ if } m_j < \bar m)
This problem maximizes the hyperplane level for a given set of actions
The second part of Step 2 then finds the maximum possible hyperplane level across the action space
The algorithm constructs a sequence of progressively smaller sets St+1 ⊂ St ⊂ St−1 · · · ⊂ S0
Step 3 ends the algorithm when the difference between these sets is small enough
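A hedged scipy sketch of one such inner linear program, for a single subgradient h_i and a single interior action (m_j < m̄, so the Euler restriction binds with equality and pins down θ′). All numbers are made up; since θ is fixed by the action (m_j, h_j), the objective reduces, up to constants, to maximizing w′:

```python
import numpy as np
from scipy.optimize import linprog

β = 0.8
# Four box-shaped half-spaces H · (w', θ') <= C_t (made-up current levels)
H = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
C_t = np.array([10.0, 10.0, 0.25, 0.0])

h_i = np.array([1.0, 0.0])   # one subgradient
euler_j = 0.1                # stands in for m_j (u'(f(x_j)) - v'(m_j)), with m_j < mbar

# Objective: max h_i · (w, θ) with w = u(f(x_j)) + v(m_j) + β w' and θ fixed,
# i.e. minimize -β h_i[0] w' over (w', θ'), subject to H (w', θ') <= C_t
# and the interior Euler equality β θ' = euler_j
res = linprog(c=[-β * h_i[0], 0.0],
              A_ub=H, b_ub=C_t,
              A_eq=[[0.0, β]], b_eq=[euler_j],
              bounds=[(None, None), (None, None)])
w_prime, θ_prime = res.x
```

Here the equality constraint forces θ′ = euler_j/β and the LP pushes w′ to its current hyperplane level; the full algorithm repeats this for every action and every subgradient, then keeps the maximum for each h_i.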
We have created a Python class that solves the model assuming the following functional forms:
u(c) = log(c)
v(m) = \frac{1}{500} \left( m \bar m - 0.5 m^2 \right)^{0.5}
f (x) = 180 − (0.4x)2
The remaining parameters {β, m̄, \underline{h}, \overline{h}} are then variables to be specified for an instance of the Chang class
Below we use the class to solve the model and plot the resulting equilibrium set, once with β = 0.3 and
once with β = 0.8
(Here we have set the number of subgradients to 10 in order to speed up the code for now - we can increase
accuracy by increasing the number of subgradients)
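These functional forms translate directly into Python. A small sketch, with an illustrative value for m̄ (the true value is a parameter supplied to each instance of the Chang class):

```python
import numpy as np

mbar = 30.0   # illustrative value of m̄; an assumption, not the class's setting

u = lambda c: np.log(c)
v = lambda m: (1 / 500) * (mbar * m - 0.5 * m**2)**0.5
f = lambda x: 180 - (0.4 * x)**2

def payoff(m, h):
    """Period payoff of an action (m, h): x is implied by x = m(h - 1) and c = y = f(x)."""
    x = m * (h - 1)
    return u(f(x)) + v(m)
```

Note that v′(m) is proportional to (m̄ − m), so this v satisfies the assumption that there is a finite satiation level for real balances.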
Note: this code requires the polytope package
The package can be installed in a terminal/command prompt with pip
!pip install polytope
"""
Author: Sebastian Graves
import numpy as np
import quantecon as qe
import time
class ChangModel:
"""
Class to solve for the competitive and sustainable sets in the Chang
,→(1998)
p_vec[i] = self.Θ_vec[i]
w_vec[i] = self.u_vec[i]/(1 - β)
w_space = np.array([min(w_vec[~np.isinf(w_vec)]),
max(w_vec[~np.isinf(w_vec)])])
p_space = np.array([0, max(p_vec[~np.isinf(w_vec)])])
self.p_space = p_space
"""
# Points on circle
H = np.zeros((N, 2))
for i in range(N):
x = degrees[i]
H[i, 0] = np.cos(x)
H[i, 1] = np.sin(x)
return C, H, Z
def solve_worst_spe(self):
"""
Method to solve for BR(Z). See p.449 of Chang (1998)
"""
# Pre-compute constraints
aineq_mbar = np.vstack((self.H, np.array([0, -self.β])))
bineq_mbar = np.vstack((self.c0_s, 0))
aineq = self.H
bineq = self.c0_s
aeq = [[0, -self.β]]
for j in range(self.N_a):
# Only try if consumption is possible
if self.f_vec[j] > 0:
# If m = mbar, use inequality constraint
if self.A[j, 1] == self.mbar:
bineq_mbar[-1] = self.euler_vec[j]
res = linprog(c, A_ub=aineq_mbar, b_ub=bineq_mbar,
bounds=(self.w_bnds_s, self.p_bnds_s))
else:
beq = self.euler_vec[j]
res = linprog(c, A_ub=aineq, b_ub=bineq, A_eq=aeq, b_eq=beq,
bounds=(self.w_bnds_s, self.p_bnds_s))
if res.status == 0:
p_vec[j] = self.u_vec[j] + self.β * res.x[0]
# Max over h and min over other variables (see Chang (1998) p.449)
self.br_z = np.nanmax(np.nanmin(p_vec.reshape(self.n_m, self.n_h), 0))
def solve_subgradient(self):
"""
Method to solve for E(Z). See p.449 of Chang (1998)
"""
# Pre-compute constraints
aineq_C_mbar = np.vstack((self.H, np.array([0, -self.β])))
bineq_C_mbar = np.vstack((self.c0_c, 0))
aineq_C = self.H
bineq_C = self.c0_c
aeq_C = [[0, -self.β]]
for j in range(self.N_a):
# Only try if consumption is possible
if self.f_vec[j] > 0:
# COMPETITIVE EQUILIBRIA
# If m = mbar, use inequality constraint
if self.A[j, 1] == self.mbar:
bineq_C_mbar[-1] = self.euler_vec[j]
res = linprog(c, A_ub=aineq_C_mbar, b_ub=bineq_C_mbar,
bounds=(self.w_bnds_c, self.p_bnds_c))
# If m < mbar, use equality constraint
else:
beq_C = self.euler_vec[j]
res = linprog(c, A_ub=aineq_C, b_ub=bineq_C, A_eq=aeq_C,
b_eq=beq_C, bounds=(self.w_bnds_c, self.p_bnds_c))
if res.status == 0:
c_a1a2_c[j] = self.H[i, 0] * (self.u_vec[j] + self.β * res.x[0]) + self.H[i, 1] * self.Θ_vec[j]
t_a1a2_c[j] = res.x
# SUSTAINABLE EQUILIBRIA
# If m = mbar, use inequality constraint
if self.A[j, 1] == self.mbar:
bineq_S_mbar[-2] = self.euler_vec[j]
bineq_S_mbar[-1] = self.u_vec[j] - self.br_z
res = linprog(c, A_ub=aineq_S_mbar, b_ub=bineq_S_mbar,
bounds=(self.w_bnds_s, self.p_bnds_s))
# If m < mbar, use equality constraint
else:
t_a1a2_s[j] = res.x
self.Θ_vec[idx_s]])
for i in range(self.N_g):
self.c1_c[i] = np.dot(self.z1_c[:, i], self.H[i, :])
self.c1_s[i] = np.dot(self.z1_s[:, i], self.H[i, :])
t = time.time()
diff = tol + 1
iters = 0
# Save iteration
self.c_dic_c[iters], self.c_dic_s[iters] = np.copy(self.c1_c), np.copy(self.c1_s)
self.iters = iters
elapsed = time.time() - t
print('Convergence achieved after {} iterations and {} seconds'.format(iters, round(elapsed, 2)))
def p_fun2(x):
scale = -1 + 2*(x[1] - θ_min)/(θ_max - θ_min)
p_fun = - (u(x[0], mbar) + self.β * np.dot(cheb.chebvander(scale, order - 1), c))
return p_fun
# Bellman Iterations
diff = 1
iters = 1
constraints=cons2,
tol=1e-10)
if -p_fun2(res.x) > p_iter1[i] and res.success == True:
p_iter1[i] = -p_fun2(res.x)
self.θ_grid = s
self.p_iter = p_iter1
self.Φ = Φ
self.c = c
print('Convergence achieved after {} iterations'.format(iters))
# Check residuals
θ_grid_fine = np.linspace(θ_min, θ_max, 100)
resid_grid = np.zeros(100)
p_grid = np.zeros(100)
θ_prime_grid = np.zeros(100)
m_grid = np.zeros(100)
h_grid = np.zeros(100)
for i in range(100):
θ = θ_grid_fine[i]
res = minimize(p_fun,
lb1 + (ub1-lb1) / 2,
method='SLSQP',
bounds=bnds1,
constraints=cons1,
tol=1e-10)
if res.success == True:
p = -p_fun(res.x)
p_grid[i] = p
θ_prime_grid[i] = res.x[2]
h_grid[i] = res.x[0]
m_grid[i] = res.x[1]
res = minimize(p_fun2,
lb2 + (ub2-lb2)/2,
method='SLSQP',
bounds=bnds2,
constraints=cons2,
tol=1e-10)
if -p_fun2(res.x) > p and res.success == True:
p = -p_fun2(res.x)
p_grid[i] = p
θ_prime_grid[i] = res.x[1]
h_grid[i] = res.x[0]
m_grid[i] = self.mbar
scale = -1 + 2 * (θ - θ_min)/(θ_max - θ_min)
resid_grid[i] = np.dot(cheb.chebvander(scale, order-1), c) - p
self.resid_grid = resid_grid
self.θ_grid_fine = θ_grid_fine
self.θ_prime_grid = θ_prime_grid
self.m_grid = m_grid
self.h_grid = h_grid
self.p_grid = p_grid
self.x_grid = m_grid * (h_grid - 1)
# Simulate
θ_series = np.zeros(31)
m_series = np.zeros(30)
h_series = np.zeros(30)
# Find initial θ
def ValFun(x):
scale = -1 + 2*(x - θ_min)/(θ_max - θ_min)
p_fun = np.dot(cheb.chebvander(scale, order - 1), c)
return -p_fun
res = minimize(ValFun,
(θ_min + θ_max)/2,
bounds=[(θ_min, θ_max)])
θ_series[0] = res.x
# Simulate
for i in range(30):
θ = θ_series[i]
res = minimize(p_fun,
lb1 + (ub1-lb1)/2,
method='SLSQP',
bounds=bnds1,
constraints=cons1,
tol=1e-10)
if res.success == True:
p = -p_fun(res.x)
h_series[i] = res.x[0]
m_series[i] = res.x[1]
θ_series[i+1] = res.x[2]
res2 = minimize(p_fun2,
lb2 + (ub2-lb2)/2,
method='SLSQP',
bounds=bnds2,
constraints=cons2,
tol=1e-10)
if -p_fun2(res2.x) > p and res2.success == True:
h_series[i] = res2.x[0]
m_series[i] = self.mbar
θ_series[i+1] = res2.x[1]
self.θ_series = θ_series
self.m_series = m_series
self.h_series = h_series
self.x_series = m_series * (h_series - 1)
import polytope
import matplotlib.pyplot as plt
def plot_competitive(ChangModel):
"""
Method that only plots competitive equilibrium set
"""
poly_C = polytope.Polytope(ChangModel.H, ChangModel.c1_c)
ext_C = polytope.extreme(poly_C)
ax.set_xlabel('w', fontsize=16)
ax.set_ylabel(r"$\theta$", fontsize=18)
plt.tight_layout()
plt.show()
plot_competitive(ch1)
plot_competitive(ch2)
In this section we solve the Bellman equation confronting a continuation Ramsey planner
The construction of a Ramsey plan is decomposed into two subproblems in Ramsey plans, time inconsistency, sustainable plans and dynamic Stackelberg problems
• Subproblem 1 is faced by a sequence of continuation Ramsey planners at t ≥ 1
• Subproblem 2 is faced by a Ramsey planner at t = 0
The problem is:
subject to:
θ = u′ (f (x))(m + x)
x = m(h − 1)
(m, x, h) ∈ E
θ′ ∈ Ω
To solve this Bellman equation, we must know the set Ω
We have solved the Bellman equation for the two sets of parameter values for which we computed the
equilibrium value sets above
Hence for these parameter configurations, we know the bounds of Ω
The two sets of parameters differ only in the level of β
From the figures earlier in this lecture, we know that when β = 0.3, Ω = [0.0088, 0.0499], and when
β = 0.8, Ω = [0.0395, 0.2193]
First, a quick check that our approximations of the value functions are good
We do this by calculating the residuals between iterates on the value function on a fine grid:
max(abs(ch1.resid_grid)), max(abs(ch2.resid_grid))
(6.463130983291876e-06, 7.0466161972149166e-07)
The value functions plotted below trace out the right edges of the sets of equilibrium values plotted above
plt.show()
The next figure plots the optimal policy functions; values of θ′ , m, x, h for each value of the state θ:
plt.show()
With the first set of parameter values, the value of θ′ chosen by the Ramsey planner quickly hits the upper
limit of Ω
But with the second set of parameters it converges to a value in the interior of the set
Consequently, the choice of θ̄ is clearly important with the first set of parameter values
One way of seeing this is to plot θ′ (θ) for each set of parameters
With the first set of parameter values, this function does not intersect the 45-degree line until θ̄, whereas in
the second set of parameter values, it intersects in the interior
axes[0].legend()
plt.show()
Subproblem 2 is equivalent to the planner choosing the initial value of θ (i.e. the value which maximizes the
value function)
From this starting point, we can then trace out the paths for {θ_t, m_t, h_t, x_t}_{t=0}^{\infty} that support this equilibrium
plt.show()
Next Steps
In Credible Government Policies in Chang Model we shall find a subset of competitive equilibria that are
sustainable in the sense that a sequence of government administrations that chooses sequentially, rather
than once and for all at time 0 will choose to implement them
In the process of constructing them, we shall construct another, smaller set of competitive equilibria
9.9.1 Overview
Some of the material in this lecture and competitive equilibria in the Chang model can be viewed as more so-
phisticated and complete treatments of the topics discussed in Ramsey plans, time inconsistency, sustainable
plans
This lecture assumes almost the same economic environment analyzed in competitive equilibria in the Chang
model
The only change – and it is a substantial one – is the timing protocol for making government decisions
In competitive equilibria in the Chang model, a Ramsey planner chose a comprehensive government policy
once-and-for-all at time 0
Now in this lecture, there is no time 0 Ramsey planner
Instead there is a sequence of government decision makers, one for each t
The time t government decision maker chooses time t government actions after forecasting what future governments will do
We use the notion of a sustainable plan proposed in [CK90], also referred to as a credible public policy in
[Sto89]
Technically, this lecture starts where lecture competitive equilibria in the Chang model on Ramsey plans
within the Chang [Cha98] model stopped
That lecture presents recursive representations of competitive equilibria and a Ramsey plan for a version of
a model of Calvo [Cal78] that Chang used to analyze and illustrate these concepts
We used two operators to characterize competitive equilibria and a Ramsey plan, respectively
In this lecture, we define a credible public policy or sustainable plan
Starting from a large enough initial set Z_0, we use iterations on Chang's set-to-set operator D̃(Z) to compute a set of values associated with sustainable plans

Chang's operator D̃(Z) is closely connected with the operator D(Z) introduced in lecture competitive equilibria in the Chang model
• D̃(Z) incorporates all of the restrictions imposed in constructing the operator D(Z), but . . .
• It adds some additional restrictions
– these additional restrictions incorporate the idea that a plan must be sustainable
– sustainable means that the government wants to implement it at all times after all histories
We begin by reviewing the set up deployed in competitive equilibria in the Chang model
Chang's model, adopted from Calvo, is designed to focus on the intertemporal trade-offs between the welfare benefits of deflation and the welfare costs associated with the high tax collections required to retire money at a rate that delivers deflation
A benevolent time 0 government can promote utility generating increases in real balances only by imposing
an infinite sequence of sufficiently large distorting tax collections
To promote the welfare increasing effects of high real balances, the government wants to induce gradual
deflation
We start by reviewing notation
For a sequence of scalars \vec z ≡ {z_t}_{t=0}^{\infty}, let z^t = (z_0, . . . , z_t) and \vec z_t = (z_t, z_{t+1}, . . .)
An infinitely lived representative agent and an infinitely lived government exist at dates t = 0, 1, . . .
The objects in play are
• an initial quantity M−1 of nominal money holdings
• a sequence of inverse money growth rates \vec h and an associated sequence of nominal money holdings \vec M
A representative household faces a nonnegative value of money sequence ⃗q and sequences ⃗y , ⃗x of income
and total tax collections, respectively
The household chooses nonnegative sequences \vec c, \vec M of consumption and nominal balances, respectively, to maximize

\sum_{t=0}^{\infty} \beta^t \left[ u(c_t) + v(q_t M_t) \right] \qquad (9.172)
subject to
qt Mt ≤ yt + qt Mt−1 − ct − xt (9.173)
and
qt Mt ≤ m̄ (9.174)
Here qt is the reciprocal of the price level at t, also known as the value of money
Chang [Cha98] assumes that
• u : R+ → R is twice continuously differentiable, strictly concave, and strictly increasing;
• v : R+ → R is twice continuously differentiable and strictly concave;
• lim_{c→0} u′(c) = lim_{m→0} v′(m) = +∞;

• there is a finite level m = m^f such that v′(m^f) = 0
Real balances carried out of a period equal mt = qt Mt
Inequality (9.173) is the household's time t budget constraint
It tells how real balances qt Mt carried out of period t depend on income, consumption, taxes, and real
balances qt Mt−1 carried into the period
Equation (9.174) imposes an exogenous upper bound m̄ on the choice of real balances, where m̄ ≥ m^f
Government
The government chooses a sequence of inverse money growth rates with time t component h_t ≡ M_{t-1}/M_t ∈ Π ≡ [\underline{\pi}, \overline{\pi}], where 0 < \underline{\pi} < 1 < 1/\beta ≤ \overline{\pi}
The government faces a sequence of budget constraints with time t component
−xt = mt (1 − ht ) (9.175)
The restrictions m_t ∈ [0, m̄] and h_t ∈ Π evidently imply that x_t ∈ X ≡ [(\underline{\pi} − 1)m̄, (\overline{\pi} − 1)m̄]
We define the set E ≡ [0, m̄] × Π × X, so that we require that (m, h, x) ∈ E
To represent the idea that taxes are distorting, Chang makes the following assumption about outcomes for
per capita output:
yt = f (xt ) (9.176)
where f : R → R satisfies f (x) > 0, is twice continuously differentiable, f ′′ (x) < 0, and f (x) = f (−x)
for all x ∈ R, so that subsidies and taxes are equally distorting
The purpose is not to model the causes of tax distortions in any detail but simply to summarize the outcome
of those distortions via the function f (x)
A key part of the specification is that tax distortions are increasing in the absolute value of tax revenues
The government chooses a competitive equilibrium that maximizes (9.172)
For the results in this lecture, the timing of actions within a period is important because of the incentives
that it activates
Chang assumed the following within-period timing of decisions:
• first, the government chooses ht and xt ;
• then, given ⃗q and its expectations about future values of x and y, the household chooses M_t and therefore m_t because m_t = q_t M_t ;
• then output yt = f (xt ) is realized;
• finally ct = yt
This within-period timing confronts the government with choices framed by how the private sector wants to
respond when the government takes time t actions that differ from what the private sector had expected
This timing will shape the incentives confronting the government at each history that are to be incorporated
in the construction of the D̃ operator below
Household's problem

First-order conditions for the household's problem are

$$u'(c_t) = \lambda_t$$

$$q_t[u'(c_t) - v'(M_t q_t)] \leq \beta u'(c_{t+1}) q_{t+1}, \quad = \text{ if } M_t q_t < \bar m$$

Using $h_t = \frac{M_{t-1}}{M_t}$ and $q_t = \frac{m_t}{M_t}$ in these first-order conditions and rearranging implies

$$m_t[u'(c_t) - v'(m_t)] \leq \beta u'(c_{t+1})(m_{t+1} + x_{t+1}), \quad = \text{ if } m_t < \bar m \quad (9.177)$$

Define $\theta_{t+1} \equiv u'(c_{t+1})(m_{t+1} + x_{t+1})$, so that the Euler inequality can be written

$$m_t[u'(c_t) - v'(m_t)] \leq \beta \theta_{t+1}, \quad = \text{ if } m_t < \bar m \quad (9.178)$$

The quantity $\theta_{t+1}$ is real money balances at time $t + 1$ measured in units of marginal utility, which Chang refers to as the
marginal utility of real balances
From the standpoint of the household at time $t$, equation (9.178) shows that $\theta_{t+1}$ intermediates the influences
of $(\vec x_{t+1}, \vec m_{t+1})$ on the household's choice of real balances $m_t$

By intermediates we mean that the future paths $(\vec x_{t+1}, \vec m_{t+1})$ influence $m_t$ entirely through their effects on
the scalar $\theta_{t+1}$
The observation that the one-dimensional promised marginal utility of real balances $\theta_{t+1}$ functions in this
way is an important step in constructing a class of competitive equilibria that have a recursive representation
A closely related observation pervaded the analysis of Stackelberg plans in dynamic Stackelberg problems
and the Calvo model
Competitive equilibrium
Definition:
• A government policy is a pair of sequences $(\vec h, \vec x)$ where $h_t \in \Pi$ for all $t \geq 0$
• A price system is a non-negative value of money sequence $\vec q$
• An allocation is a triple of non-negative sequences $(\vec c, \vec m, \vec y)$
It is required that time $t$ components satisfy $(m_t, h_t, x_t) \in E$
Definition:
Given $M_{-1}$, a government policy $(\vec h, \vec x)$, price system $\vec q$, and allocation $(\vec c, \vec m, \vec y)$ are said to be a competitive
equilibrium if
• $m_t = q_t M_t$ and $y_t = f(x_t)$
• The government budget constraint is satisfied
• Given $\vec q, \vec x, \vec y$, the pair $(\vec c, \vec m)$ solves the household's problem
• A recursive representation of a credible government policy is a pair of initial conditions (w0 , θ0 ) and
a five-tuple of functions
mapping wt , θt and in some cases ht into ĥt , mt , xt , wt+1 , and θt+1 , respectively
• Starting from initial condition (w0 , θ0 ), a credible government policy can be constructed by iterating
on these functions in the following order that respects the within-period timing:
ĥt = h(wt , θt )
mt = m(ht , wt , θt )
xt = x(ht , wt , θt ) (9.179)
wt+1 = χ(ht , wt , θt )
θt+1 = Ψ(ht , wt , θt )
• Here it is to be understood that $\hat h_t$ is the action that the government policy instructs the government to
take, while $h_t$, possibly not equal to $\hat h_t$, is some other action that the government is free to take at time
$t$
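The order of iteration on the five functions in (9.179) can be sketched as a simple driver loop. The function names mirror (9.179), but the toy stand-ins passed in below are ours and are not solutions of the model:

```python
def simulate_credible_plan(h_fun, m_fun, x_fun, χ, Ψ, w0, θ0, T):
    """Iterate the five functions in (9.179), respecting the within-period
    timing.  Here the government always conforms, so h_t = ĥ_t."""
    w, θ = w0, θ0
    path = []
    for t in range(T):
        h = h_fun(w, θ)                   # ĥ_t = h(w_t, θ_t)
        m = m_fun(h, w, θ)                # m_t = m(h_t, w_t, θ_t)
        x = x_fun(h, w, θ)                # x_t = x(h_t, w_t, θ_t)
        w, θ = χ(h, w, θ), Ψ(h, w, θ)     # continuation pair (w_{t+1}, θ_{t+1})
        path.append((h, m, x, w, θ))
    return path

# Toy stand-ins just to show the mechanics (not outputs of the model)
path = simulate_credible_plan(
    h_fun=lambda w, θ: 1.0,
    m_fun=lambda h, w, θ: 10.0,
    x_fun=lambda h, w, θ: 10.0 * (h - 1),   # consistent with x = m(h - 1)
    χ=lambda h, w, θ: w,
    Ψ=lambda h, w, θ: θ,
    w0=0.0, θ0=1.0, T=5)
```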
The plan is credible if it is in the time $t$ government's interest to execute it

Credibility requires that the plan be such that for all possible choices of $h_t$ that are consistent with competitive equilibria,

$$u(f(x(\hat h_t, w_t, \theta_t))) + v(m(\hat h_t, w_t, \theta_t)) + \beta \chi(\hat h_t, w_t, \theta_t) \geq u(f(x(h_t, w_t, \theta_t))) + v(m(h_t, w_t, \theta_t)) + \beta \chi(h_t, w_t, \theta_t)$$

so that at each instance and circumstance of choice, a government attains a weakly higher lifetime utility
with continuation value $w_{t+1} = \chi(h_t, w_t, \theta_t)$ by adhering to the plan and confirming the associated time $t$
action $\hat h_t$ that the public had expected earlier
Please note the subtle change in arguments of the functions used to represent a competitive equilibrium and
a Ramsey plan, on the one hand, and a credible government plan, on the other hand

The extra arguments appearing in the functions used to represent a credible plan come from allowing the
government to contemplate disappointing the private sector's expectation about its time $t$ choice $\hat h_t$

A credible plan induces the government to confirm the private sector's expectation

The recursive representation of the plan uses the evolution of continuation values to deter the government
from wanting to disappoint the private sector's expectations
Technically, a Ramsey plan and a credible plan both incorporate history dependence
For a Ramsey plan, this is encoded in the dynamics of the state variable θt , a promised marginal utility that
the Ramsey plan delivers to the private sector
For a credible government plan, the two-dimensional state vector $(w_t, \theta_t)$ encodes history dependence
Sustainable plans
A government strategy σ and an allocation rule α are said to constitute a sustainable plan (SP) if
1. σ is admissible
2. Given σ, α is competitive
3. After any history ⃗ht−1 , the continuation of σ is optimal for the government; i.e., the sequence ⃗ht
induced by σ after ⃗ht−1 maximizes over CEπ given α
Given any history ⃗ht−1 , the continuation of a sustainable plan is a sustainable plan
Let $\Theta = \{(\vec m, \vec x, \vec h) \in CE : \text{there is an SP whose outcome is } (\vec m, \vec x, \vec h)\}$
Sustainable outcomes are elements of Θ
Now consider the space

$$S = \left\{ (w, \theta) : \text{there is a sustainable outcome } (\vec m, \vec x, \vec h) \in \Theta \text{ with value } w = \sum_{t=0}^{\infty} \beta^t [u(f(x_t)) + v(m_t)] \text{ and such that } u'(f(x_0))(m_0 + x_0) = \theta \right\}$$

The space $S$ is a compact subset of $W \times \Omega$, where $W = [\underline w, \overline w]$ is the space of values associated with
sustainable plans. Here $\underline w$ and $\overline w$ are finite bounds on the set of values
Because there is at least one sustainable plan, S is nonempty
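Given candidate sequences for $\vec m$ and $\vec x$, the pair $(w, \theta)$ in the definition of $S$ is straightforward to compute. A sketch, truncating the infinite sum at the length of the supplied sequences, using the lecture's functional forms with $\bar m = 30$ (the sequences below are illustrative, not sustainable outcomes of the model):

```python
import numpy as np

def w_and_θ(m_seq, x_seq, β, u, v, f, u_prime):
    """Value w = Σ β^t [u(f(x_t)) + v(m_t)] (truncated at the sequence
    length) and θ = u'(f(x_0))(m_0 + x_0), as in the definition of S."""
    m_seq, x_seq = np.asarray(m_seq), np.asarray(x_seq)
    discounts = β ** np.arange(len(m_seq))
    w = np.sum(discounts * (u(f(x_seq)) + v(m_seq)))
    θ = u_prime(f(x_seq[0])) * (m_seq[0] + x_seq[0])
    return w, θ

# The lecture's functional forms, with m̄ = 30 for illustration
mbar = 30
u = np.log
u_prime = lambda c: 1 / c
v = lambda m: (1 / 500) * (mbar * m - 0.5 * m**2)**0.5
f = lambda x: 180 - (0.4 * x)**2

w, θ = w_and_θ(m_seq=[10.0] * 50, x_seq=[0.0] * 50,
               β=0.8, u=u, v=v, f=f, u_prime=u_prime)
```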
Now recall the within-period timing protocol, which we can depict (h, x) → m = qM → y = c
With this timing protocol in mind, the time 0 component of an SP has the following components:
1. A period 0 action ĥ ∈ Π that the public expects the government to take, together with subsequent
within-period consequences m(ĥ), x(ĥ) when the government acts as expected
2. For any first period action h ̸= ĥ with h ∈ CEπ0 , a pair of within-period consequences m(h), x(h)
when the government does not act as the public had expected
3. For every h ∈ Π, a pair (w′ (h), θ′ (h)) ∈ S to carry into next period
These components must be such that it is optimal for the government to choose $\hat h$ as expected; and for
every possible $h \in \Pi$, the government budget constraint and the household's Euler equation must hold with
continuation $\theta$ being $\theta'(h)$

Given the timing protocol within the model, the representative household's response to a government deviation
to $h \neq \hat h$ from a prescribed $\hat h$ consists of a first-period action $m(h)$ and associated subsequent actions,
together with future equilibrium prices, captured by $(w'(h), \theta'(h))$
At this point, Chang introduces an idea in the spirit of Abreu, Pearce, and Stacchetti [APS90]
Let Z be a nonempty subset of W × Ω
Think of using pairs (w′ , θ′ ) drawn from Z as candidate continuation value, promised marginal utility pairs
Define the following operator:

$$\tilde D(Z) = \Big\{(w, \theta) : \text{there is } \hat h \in CE_0^\pi \text{ and for each } h \in CE_0^\pi \text{ a four-tuple } (m(h), x(h), w'(h), \theta'(h)) \in [0, \bar m] \times X \times Z \quad (9.180)$$

such that

$$w = u(f(x(\hat h))) + v(m(\hat h)) + \beta w'(\hat h) \quad (9.181)$$

$$\theta = u'(f(x(\hat h)))(m(\hat h) + x(\hat h)) \quad (9.182)$$

and for all $h \in CE_0^\pi$

$$w \geq u(f(x(h))) + v(m(h)) + \beta w'(h) \quad (9.183)$$

$$x(h) = m(h)(h - 1)$$

$$m(h)\left(u'(f(x(h))) - v'(m(h))\right) \leq \beta \theta'(h) \quad \text{(with equality if } m(h) < \bar m)\Big\}$$
This operator adds the key incentive constraint (9.183) to the conditions that defined the earlier $D(Z)$ operator
in competitive equilibria in the Chang model
Condition (9.183) requires that the plan deter the government from wanting to take one-shot deviations when
candidate continuation values are drawn from Z
Proposition:

1. If $Z \subset \tilde D(Z)$, then $\tilde D(Z) \subset S$ (self-generation)
2. $S = \tilde D(S)$ (factorization)

To compute the incentive constraint, first define the value of the government's best one-shot deviation when continuation $(w', \theta')$ pairs are drawn from $Z$:

$$BR(Z) = \max \; u(f(x(h))) + v(m(h)) + \beta w'(h)$$

where the maximization is over $h \in CE_0^\pi$ and tuples $(m(h), x(h), w'(h), \theta'(h)) \in [0, \bar m] \times X \times Z$ such that

$$x(h) = m(h)(h - 1)$$

$$m(h)\left(u'(f(x(h))) - v'(m(h))\right) \leq \beta \theta'(h) \quad \text{(with equality if } m(h) < \bar m)$$

We then define:

$$\tilde D(Z) = \Big\{(w, \theta) : \text{there is } h \in CE_0^\pi \text{ and } (m(h), x(h), w'(h), \theta'(h)) \in [0, \bar m] \times X \times Z$$

such that

$$w = u(f(x(h))) + v(m(h)) + \beta w'(h)$$

$$\theta = u'(f(x(h)))(m(h) + x(h))$$

$$x(h) = m(h)(h - 1)$$

$$m(h)\left(u'(f(x(h))) - v'(m(h))\right) \leq \beta \theta'(h) \quad \text{(with equality if } m(h) < \bar m)$$

and

$$w \geq BR(Z)\Big\}$$
Aside from the final incentive constraint, this is the same as the operator in competitive equilibria in the
Chang model
Consequently, to implement this operator we just need to add one step to our outer hyperplane approximation
algorithm:
1. Initialize subgradients, H, and hyperplane levels, C0
2. Given a set of subgradients, H, and hyperplane levels, Ct , calculate BR(St )
3. Given H, Ct , and BR(St ), for each subgradient hi ∈ H:
• Solve a linear program (described below) for each action in the action space
• Find the maximum and update the corresponding hyperplane level, Ci,t+1
4. If |Ct+1 − Ct | > ϵ, return to 2
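The steps above amount to a fixed-point iteration on the hyperplane levels. A minimal skeleton of that loop, where `update_levels` is a hypothetical stand-in for Steps 2-3 (computing $BR(S_t)$ and solving the linear programs along each subgradient):

```python
import numpy as np

def outer_approximation(update_levels, C0, tol=1e-5, max_iter=500):
    """Skeleton of the outer hyperplane iteration: given current levels
    C_t, `update_levels` returns C_{t+1}; stop when the levels change
    by less than tol (Step 4)."""
    C = np.asarray(C0, dtype=float)
    for it in range(max_iter):
        C_new = update_levels(C)
        if np.max(np.abs(C_new - C)) < tol:
            return C_new, it + 1
        C = C_new
    return C, max_iter

# A toy contraction standing in for the true update, to show the mechanics
C_star, iters = outer_approximation(lambda C: 0.5 * C + 1.0, C0=np.ones(4))
```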
Step 1 simply creates a large initial set S0 .
Given some set St , Step 2 then constructs the value BR(St )
To do this, we solve the following problem for each point in the action space $(m_j, h_j)$:

$$\max_{w', \theta'} \; u(f(x_j)) + v(m_j) + \beta w'$$

subject to

$$H \cdot (w', \theta') \leq C_t$$

$$x_j = m_j(h_j - 1)$$

$$m_j\left(u'(f(x_j)) - v'(m_j)\right) \leq \beta \theta' \quad \text{(with equality if } m_j < \bar m)$$

$BR(S_t)$ is then the maximum over $h_j$, and the minimum over the remaining action variables, of the resulting values

The linear program in Step 3 maximizes $h_i \cdot (w, \theta)$, with $w = u(f(x_j)) + v(m_j) + \beta w'$ and $\theta = u'(f(x_j))(m_j + x_j)$, for each subgradient $h_i \in H$ and each action $(m_j, h_j)$, subject to

$$H \cdot (w', \theta') \leq C_t$$

the Euler inequality above, and the incentive constraint $w \geq BR(S_t)$
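Each of these subproblems is a small linear program in $(w', \theta')$. A sketch of one such LP with `scipy.optimize.linprog`, using made-up subgradients `H`, levels `Ct`, and Euler right-hand side (none of these numbers come from the model):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative subgradients and hyperplane levels (not from the model):
# these four rows keep (w', θ') inside the box [0, 10] x [0, 5]
H = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
Ct = np.array([10.0, 0.0, 5.0, 0.0])

β, euler_rhs = 0.8, -1.0          # Euler row: -β θ' <= euler_rhs

# Maximize w' (linprog minimizes, so negate the objective) subject to
# H (w', θ')' <= C_t plus the Euler inequality appended as an extra row
A_ub = np.vstack((H, np.array([0.0, -β])))
b_ub = np.append(Ct, euler_rhs)
res = linprog(c=[-1.0, 0.0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * 2)
```

With these numbers the Euler row forces $\theta' \geq 1.25$, and the maximized $w'$ hits its hyperplane bound of 10.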
$$u(c) = \log(c)$$

$$v(m) = \frac{1}{500}(\bar m m - 0.5 m^2)^{0.5}$$

$$f(x) = 180 - (0.4 x)^2$$
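These functional forms translate directly into Python, and a quick check confirms the symmetry property $f(x) = f(-x)$ assumed in (9.176) (the value $\bar m = 30$ below is chosen for illustration):

```python
import numpy as np

# The functional forms above, with m̄ = 30 for illustration
mbar = 30
u = lambda c: np.log(c)
v = lambda m: (1 / 500) * (mbar * m - 0.5 * m**2)**0.5
f = lambda x: 180 - (0.4 * x)**2

# f is symmetric, so taxes and subsidies are equally distorting,
# and output is maximal at x = 0
xs = np.linspace(-10, 10, 101)
assert np.allclose(f(xs), f(-xs))
assert f(0) == max(f(x) for x in xs)
```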
The remaining parameters $\{\beta, \bar m, \underline h, \bar h\}$ are then variables to be specified for an instance of the Chang class
Below we use the class to solve the model and plot the resulting equilibrium set, once with β = 0.3 and once
with β = 0.8. We also plot the (larger) competitive equilibrium sets, which we described in competitive
equilibria in the Chang model
(We have set the number of subgradients to 10 in order to speed up the code for now. We can increase
accuracy by increasing the number of subgradients)
The following code computes sustainable plans

Note: this code requires the polytope package, which can be installed in a terminal/command prompt with pip install polytope
"""
Author: Sebastian Graves
import numpy as np
import quantecon as qe
import time
class ChangModel:
"""
Class to solve for the competitive and sustainable sets in the Chang
,→(1998)
model, for different parameterizations.
"""
v_p(self.A[:, 1]))
w_space = np.array([min(w_vec[~np.isinf(w_vec)]),
max(w_vec[~np.isinf(w_vec)])])
p_space = np.array([0, max(p_vec[~np.isinf(w_vec)])])
self.p_space = p_space
"""
# Points on circle
H = np.zeros((N, 2))
for i in range(N):
x = degrees[i]
H[i, 0] = np.cos(x)
H[i, 1] = np.sin(x)
return C, H, Z
def solve_worst_spe(self):
"""
Method to solve for BR(Z). See p.449 of Chang (1998)
"""
# Pre-compute constraints
aineq_mbar = np.vstack((self.H, np.array([0, -self.β])))
bineq_mbar = np.vstack((self.c0_s, 0))
aineq = self.H
bineq = self.c0_s
aeq = [[0, -self.β]]
for j in range(self.N_a):
# Only try if consumption is possible
if self.f_vec[j] > 0:
# If m = mbar, use inequality constraint
if self.A[j, 1] == self.mbar:
bineq_mbar[-1] = self.euler_vec[j]
res = linprog(c, A_ub=aineq_mbar, b_ub=bineq_mbar,
bounds=(self.w_bnds_s, self.p_bnds_s))
else:
beq = self.euler_vec[j]
res = linprog(c, A_ub=aineq, b_ub=bineq, A_eq=aeq, b_eq=beq,
              bounds=(self.w_bnds_s, self.p_bnds_s))
if res.status == 0:
p_vec[j] = self.u_vec[j] + self.β * res.x[0]
# Max over h and min over other variables (see Chang (1998) p.449)
self.br_z = np.nanmax(np.nanmin(p_vec.reshape(self.n_m, self.n_h), 0))
def solve_subgradient(self):
"""
Method to solve for E(Z). See p.449 of Chang (1998)
"""
# Pre-compute constraints
aineq_C_mbar = np.vstack((self.H, np.array([0, -self.β])))
bineq_C_mbar = np.vstack((self.c0_c, 0))
aineq_C = self.H
bineq_C = self.c0_c
aeq_C = [[0, -self.β]]
for j in range(self.N_a):
# Only try if consumption is possible
if self.f_vec[j] > 0:
# COMPETITIVE EQUILIBRIA
# If m = mbar, use inequality constraint
if self.A[j, 1] == self.mbar:
bineq_C_mbar[-1] = self.euler_vec[j]
# SUSTAINABLE EQUILIBRIA
# If m = mbar, use inequality constraint
if self.A[j, 1] == self.mbar:
bineq_S_mbar[-2] = self.euler_vec[j]
bineq_S_mbar[-1] = self.u_vec[j] - self.br_z
res = linprog(c, A_ub=aineq_S_mbar, b_ub=bineq_S_mbar,
bounds=(self.w_bnds_s, self.p_bnds_s))
# If m < mbar, use equality constraint
else:
bineq_S[-1] = self.u_vec[j] - self.br_z
beq_S = self.euler_vec[j]
res = linprog(c, A_ub=aineq_S, b_ub=bineq_S, A_eq=aeq_S,
              b_eq=beq_S, bounds=(self.w_bnds_s, self.p_bnds_s))
if res.status == 0:
c_a1a2_s[j] = self.H[i, 0] * (self.u_vec[j] + self.β * res.x[0]) + self.H[i, 1] * self.Θ_vec[j]
t_a1a2_s[j] = res.x
self.Θ_vec[idx_s]])
for i in range(self.N_g):
self.c1_c[i] = np.dot(self.z1_c[:, i], self.H[i, :])
self.c1_s[i] = np.dot(self.z1_s[:, i], self.H[i, :])
t = time.time()
diff = tol + 1
iters = 0
# Save iteration
self.c_dic_c[iters], self.c_dic_s[iters] = np.copy(self.c1_c), np.copy(self.c1_s)
self.iters = iters
elapsed = time.time() - t
print('Convergence achieved after {} iterations and {} seconds'.format(iters, round(elapsed, 2)))
uc = lambda c: np.log(c)
uc_p = lambda c: 1 / c
v = lambda m: 1 / 500 * (mbar * m - 0.5 * m**2)**0.5
v_p = lambda m: 0.5 / 500 * (mbar * m - 0.5 * m**2)**(-0.5) * (mbar - m)
u = lambda h, m: uc(f(h, m)) + v(m)
return p_fun
def p_fun2(x):
scale = -1 + 2*(x[1] - θ_min)/(θ_max - θ_min)
p_fun = - (u(x[0], mbar) + self.β * np.dot(cheb.chebvander(scale, order - 1), c))
return p_fun
# Bellman Iterations
diff = 1
iters = 1
break
self.θ_grid = s
self.p_iter = p_iter1
self.Φ = Φ
self.c = c
print('Convergence achieved after {} iterations'.format(iters))
# Check residuals
θ_grid_fine = np.linspace(θ_min, θ_max, 100)
resid_grid = np.zeros(100)
p_grid = np.zeros(100)
θ_prime_grid = np.zeros(100)
m_grid = np.zeros(100)
h_grid = np.zeros(100)
for i in range(100):
θ = θ_grid_fine[i]
res = minimize(p_fun,
lb1 + (ub1-lb1) / 2,
method='SLSQP',
bounds=bnds1,
constraints=cons1,
tol=1e-10)
if res.success:
p = -p_fun(res.x)
p_grid[i] = p
θ_prime_grid[i] = res.x[2]
h_grid[i] = res.x[0]
m_grid[i] = res.x[1]
res = minimize(p_fun2,
lb2 + (ub2-lb2)/2,
method='SLSQP',
bounds=bnds2,
constraints=cons2,
tol=1e-10)
if -p_fun2(res.x) > p and res.success:
p = -p_fun2(res.x)
p_grid[i] = p
θ_prime_grid[i] = res.x[1]
h_grid[i] = res.x[0]
m_grid[i] = self.mbar
scale = -1 + 2 * (θ - θ_min)/(θ_max - θ_min)
resid_grid[i] = np.dot(cheb.chebvander(scale, order-1), c) - p
self.resid_grid = resid_grid
self.θ_grid_fine = θ_grid_fine
self.θ_prime_grid = θ_prime_grid
self.m_grid = m_grid
self.h_grid = h_grid
self.p_grid = p_grid
self.x_grid = m_grid * (h_grid - 1)
# Simulate
θ_series = np.zeros(31)
m_series = np.zeros(30)
h_series = np.zeros(30)
# Find initial θ
def ValFun(x):
scale = -1 + 2*(x - θ_min)/(θ_max - θ_min)
p_fun = np.dot(cheb.chebvander(scale, order - 1), c)
return -p_fun
res = minimize(ValFun,
(θ_min + θ_max)/2,
bounds=[(θ_min, θ_max)])
θ_series[0] = res.x
# Simulate
for i in range(30):
θ = θ_series[i]
res = minimize(p_fun,
lb1 + (ub1-lb1)/2,
method='SLSQP',
bounds=bnds1,
constraints=cons1,
tol=1e-10)
if res.success:
p = -p_fun(res.x)
h_series[i] = res.x[0]
m_series[i] = res.x[1]
θ_series[i+1] = res.x[2]
res2 = minimize(p_fun2,
lb2 + (ub2-lb2)/2,
method='SLSQP',
bounds=bnds2,
constraints=cons2,
tol=1e-10)
if -p_fun2(res2.x) > p and res2.success:
h_series[i] = res2.x[0]
m_series[i] = self.mbar
θ_series[i+1] = res2.x[1]
self.θ_series = θ_series
self.m_series = m_series
self.h_series = h_series
self.x_series = m_series * (h_series - 1)
Comparison of sets
The set of (w, θ) associated with sustainable plans is smaller than the set of (w, θ) pairs associated with
competitive equilibria, since the additional constraints associated with sustainability must also be satisfied
Let's compute two examples, one with a low β, another with a higher β
ch1 = ChangModel(β=0.3, mbar=30, h_min=0.9, h_max=2, n_h=8, n_m=35, N_g=10)
ch1.solve_sustainable()
[ 0.32412]
[ 0.19022]
[ 0.10863]
[ 0.05817]
[ 0.0262]
[ 0.01836]
[ 0.01415]
[ 0.00297]
[ 0.00089]
[ 0.00027]
[ 0.00008]
[ 0.00002]
[ 0.00001]
Convergence achieved after 16 iterations and 522.52 seconds
The following plot shows both the set of $(w, \theta)$ pairs associated with competitive equilibria (in red) and the
smaller set of $(w, \theta)$ pairs associated with sustainable plans (in blue)
import polytope
import matplotlib.pyplot as plt
def plot_equilibria(ChangModel):
"""
Method to plot both equilibrium sets
"""
fig, ax = plt.subplots(figsize=(7, 5))
ax.set_xlabel('w', fontsize=16)
ax.set_ylabel(r"$\theta$", fontsize=18)
plt.tight_layout()
plt.show()
plot_equilibria(ch1)
ch2.solve_sustainable()
[ 0.0085]
[ 0.00781]
[ 0.00433]
[ 0.00492]
[ 0.00303]
[ 0.00182]
[ 0.00638]
[ 0.00116]
[ 0.00093]
[ 0.00075]
[ 0.0006]
[ 0.00494]
[ 0.00038]
[ 0.00121]
[ 0.00024]
[ 0.0002]
[ 0.00016]
[ 0.00013]
[ 0.0001]
[ 0.00008]
[ 0.00006]
[ 0.00005]
[ 0.00004]
[ 0.00003]
[ 0.00003]
[ 0.00002]
[ 0.00002]
[ 0.00001]
[ 0.00001]
[ 0.00001]
Convergence achieved after 40 iterations and 1258.26 seconds
plot_equilibria(ch2)
QuantEcon.lectures-python3 PDF, Release 2018-Sep-29

QUANTITATIVE ECONOMICS

This section of the course contains foundational mathematical and statistical tools and techniques

DYNAMIC PROGRAMMING

This section of the course contains foundational models for dynamic economic modeling. Most are single
agent problems that take the activities of other agents as given. Later we will look at full equilibrium
problems.

These lectures look at important economic models that also illustrate common equilibrium concepts.

Acknowledgements: These lectures have benefited greatly from comments and suggestions from our colleagues,
students and friends. Special thanks go to Anmol Bhandari, Long Bui, Jeong-Hun Choi, Chase
Coleman, David Evans, Shunsuke Hori, Chenghan Hou, Doc-Jin Jang, Spencer Lyon, Qingyin Ma, Akira
Matsushita, Matthew McKay, Tomohito Okabe, Alex Olssen, Nathan Palmer and Yixiao Zhou.
These lectures look at important concepts in time series that are used in economics
[Abr88] Dilip Abreu. On the theory of infinitely repeated games with discounting. Econometrica, 56:383–396, 1988.
[APS90] Dilip Abreu, David Pearce, and Ennio Stacchetti. Toward a theory of discounted repeated games with imperfect monitoring. Econometrica, 58(5):1041–1063, September 1990.
[AJR01] Daron Acemoglu, Simon Johnson, and James A Robinson. The colonial origins of comparative development: an empirical investigation. The American Economic Review, 91(5):1369–1401, 2001.
[Aiy94] S Rao Aiyagari. Uninsured Idiosyncratic Risk and Aggregate Saving. The Quarterly Journal of Economics, 109(3):659–684, 1994.
[AMSS02] S. Rao Aiyagari, Albert Marcet, Thomas J. Sargent, and Juha Seppala. Optimal Taxation without State-Contingent Debt. Journal of Political Economy, 110(6):1220–1254, December 2002.
[AM05] D. B. O. Anderson and J. B. Moore. Optimal Filtering. Dover Publications, 2005.
[AHMS96] E. W. Anderson, L. P. Hansen, E. R. McGrattan, and T. J. Sargent. Mechanics of Forming and Estimating Dynamic Linear Economies. In Handbook of Computational Economics, volume 1. Elsevier, 1996.
[Are08] Cristina Arellano. Default risk and income fluctuations in emerging economies. The American Economic Review, pages 690–712, 2008.
[AP91] Athanasios Papoulis and S Unnikrishna Pillai. Probability, random variables, and stochastic processes. McGraw-Hill, 1991.
[ACK10] Andrew Atkeson, Varadarajan V Chari, and Patrick J Kehoe. Sophisticated monetary policies. The Quarterly Journal of Economics, 125(1):47–89, 2010.
[BY04] Ravi Bansal and Amir Yaron. Risks for the Long Run: A Potential Resolution of Asset Pricing Puzzles. Journal of Finance, 59(4):1481–1509, August 2004. URL: https://ideas.repec.org/a/bla/jfinan/v59y2004i4p1481-1509.html.
[Bar79] Robert J Barro. On the Determination of the Public Debt. Journal of Political Economy, 87(5):940–971, 1979.
[Bas05] Marco Bassetto. Equilibrium and government commitment. Journal of Economic Theory, 124(1):79–105, 2005.
[BBZ15] Jess Benhabib, Alberto Bisin, and Shenghao Zhu. The wealth distribution in Bewley economies with capital income risk. Journal of Economic Theory, 159:489–515, 2015.
[BS79] L M Benveniste and J A Scheinkman. On the Differentiability of the Value Function in Dynamic Models of Economics. Econometrica, 47(3):727–732, 1979.
[Ber75] Dmitri Bertsekas. Dynamic Programming and Stochastic Control. Academic Press, New York, 1975.
[Bew77] Truman Bewley. The permanent income hypothesis: a theoretical formulation. Journal of Economic Theory, 16(2):252–292, 1977.
[Bew86] Truman F Bewley. Stationary monetary equilibrium with a continuum of independently fluctuating consumers. In Werner Hildenbrand and Andreu Mas-Colell, editors, Contributions to Mathematical Economics in Honor of Gerard Debreu, pages 27–102. North-Holland, Amsterdam, 1986.
[BEGS17] Anmol Bhandari, David Evans, Mikhail Golosov, and Thomas J. Sargent. Fiscal Policy and Debt Management with Incomplete Markets. The Quarterly Journal of Economics, 132(2):617–663, 2017.
[Bis06] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
[Cag56] Philip Cagan. The monetary dynamics of hyperinflation. In Milton Friedman, editor, Studies in the Quantity Theory of Money, pages 25–117. University of Chicago Press, Chicago, 1956.
[Cal78] Guillermo A. Calvo. On the time consistency of optimal policy in a monetary economy. Econometrica, 46(6):1411–1428, 1978.
[Car01] Christopher D Carroll. A Theory of the Consumption Function, with and without Liquidity Constraints. Journal of Economic Perspectives, 15(3):23–45, 2001.
[Car06] Christopher D Carroll. The method of endogenous gridpoints for solving dynamic stochastic optimization problems. Economics Letters, 91(3):312–320, 2006.
[Cha98] Roberto Chang. Credible monetary policy in an infinite horizon model: recursive approaches. Journal of Economic Theory, 81(2):431–461, 1998.
[CK90] Varadarajan V Chari and Patrick J Kehoe. Sustainable plans. Journal of Political Economy, pages 783–802, 1990.
[Col90] Wilbur John Coleman. Solving the Stochastic Growth Model by Policy-Function Iteration. Journal of Business & Economic Statistics, 8(1):27–29, 1990.
[CC08] J. D. Cryer and K-S. Chan. Time Series Analysis. Springer, 2nd edition, 2008.
[DFH06] Steven J Davis, R Jason Faberman, and John Haltiwanger. The flow approach to labor markets: new data sources, micro-macro links and the recent downturn. Journal of Economic Perspectives, 2006.
[Dea91] Angus Deaton. Saving and Liquidity Constraints. Econometrica, 59(5):1221–1248, 1991.
[DP94] Angus Deaton and Christina Paxson. Intertemporal Choice and Inequality. Journal of Political Economy, 102(3):437–467, 1994.
[DH10] Wouter J Den Haan. Comparison of solutions to the incomplete markets model with aggregate uncertainty. Journal of Economic Dynamics and Control, 34(1):4–27, 2010.
[DJ92] Raymond J Deneckere and Kenneth L Judd. Cyclical and chaotic behavior in a dynamic equilibrium model, with implications for fiscal policy. Cycles and chaos in economic equilibrium, pages 308–329, 1992.
[DS10] Ulrich Doraszelski and Mark Satterthwaite. Computable Markov-perfect industry dynamics. The RAND Journal of Economics, 41(2):215–243, 2010.
[DLP13] Y E Du, Ehud Lehrer, and A D Y Pauzner. Competitive economy as a ranking device over networks. submitted, 2013.
[Dud02] R M Dudley. Real Analysis and Probability. Cambridge Studies in Advanced Mathematics. Cambridge University Press, 2002.
[EG87] Robert F Engle and Clive W J Granger. Co-integration and Error Correction: Representation, Estimation, and Testing. Econometrica, 55(2):251–276, 1987.
[EP95] Richard Ericson and Ariel Pakes. Markov-perfect industry dynamics: a framework for empirical work. The Review of Economic Studies, 62(1):53–82, 1995.
[ES13] David Evans and Thomas J Sargent. History dependent public policies. Oxford University Press, 2013.
[EH01] G W Evans and S Honkapohja. Learning and Expectations in Macroeconomics. Frontiers of Economic Research. Princeton University Press, 2001.
[FSTD15] Pablo Fajgelbaum, Edouard Schaal, and Mathieu Taschereau-Dumouchel. Uncertainty traps. Technical Report, National Bureau of Economic Research, 2015.
[Fri56] M. Friedman. A Theory of the Consumption Function. Princeton University Press, 1956.
[FF98] Milton Friedman and Rose D Friedman. Two Lucky People. University of Chicago Press, 1998.
[Gal37] Albert Gallatin. Report on the finances, November 1807. In Reports of the Secretary of the Treasury of the United States, Vol 1. Government Printing Office, Washington, DC, 1837.
[GW10] Marc P Giannoni and Michael Woodford. Optimal target criteria for stabilization policy. Technical Report, National Bureau of Economic Research, 2010.
[Hal78] Robert E Hall. Stochastic Implications of the Life Cycle-Permanent Income Hypothesis: Theory and Evidence. Journal of Political Economy, 86(6):971–987, 1978.
[HM82] Robert E Hall and Frederic S Mishkin. The Sensitivity of Consumption to Transitory Income: Estimates from Panel Data on Households. National Bureau of Economic Research Working Paper Series, 1982.
[Ham05] James D Hamilton. What's real about the business cycle? Federal Reserve Bank of St. Louis Review, pages 435–452, 2005.
[HR85] Dennis Epple, Lars P. Hansen, and Will Roberds. Linear-quadratic duopoly models of resource depletion. In Energy, Foresight, and Strategy, volume 1. Resources for the Future, 1985.
[HS08] L P Hansen and T J Sargent. Robustness. Princeton University Press, 2008.
[HS13] L P Hansen and T J Sargent. Recursive Models of Dynamic Linear Economies. The Gorman Lectures in Economics. Princeton University Press, 2013.
[Han07] Lars Peter Hansen. Beliefs, Doubts and Learning: Valuing Macroeconomic Risk. American Economic Review, 97(2):1–30, May 2007. URL: https://ideas.repec.org/a/aea/aecrev/v97y2007i2p1-30.html.
[HHL08] Lars Peter Hansen, John C. Heaton, and Nan Li. Consumption Strikes Back? Measuring Long-Run Risk. Journal of Political Economy, 116(2):260–302, April 2008. URL: https://ideas.repec.org/a/ucp/jpolec/v116y2008i2p260-302.html.
[HR87] Lars Peter Hansen and Scott F Richard. The Role of Conditioning Information in Deducing Testable Restrictions Implied by Dynamic Asset Pricing Models. Econometrica, 55(3):587–613, May 1987.
[HS80] Lars Peter Hansen and Thomas J Sargent. Formulating and estimating dynamic linear rational expectations models. Journal of Economic Dynamics and Control, 2:7–46, 1980.
[HS00] Lars Peter Hansen and Thomas J Sargent. Wanting robustness in macroeconomics. Manuscript, Department of Economics, Stanford University, 2000.
[HS17] Lars Peter Hansen and Thomas J. Sargent. Risk, Uncertainty, and Value. Princeton University Press, Princeton, New Jersey, 2017.
[HS09] Lars Peter Hansen and Jose A. Scheinkman. Long-term risk: an operator approach. Econometrica, 77(1):177–234, January 2009.
[HK78] J. Michael Harrison and David M. Kreps. Speculative investor behavior in a stock market with heterogeneous expectations. The Quarterly Journal of Economics, 92(2):323–336, 1978.
[HK79] J. Michael Harrison and David M. Kreps. Martingales and arbitrage in multiperiod securities markets. Journal of Economic Theory, 20(3):381–408, June 1979.
[HL96] John Heaton and Deborah J Lucas. Evaluating the effects of incomplete markets on risk sharing and asset pricing. Journal of Political Economy, pages 443–487, 1996.
[HK85] Elhanan Helpman and Paul Krugman. Market structure and international trade. MIT Press, Cambridge, 1985.
[HLL96] O Hernandez-Lerma and J B Lasserre. Discrete-Time Markov Control Processes: Basic Optimality Criteria. Number Vol 1 in Applications of Mathematics Stochastic Modelling and Applied Probability. Springer, 1996.
[HP92] Hugo A Hopenhayn and Edward C Prescott. Stochastic Monotonicity and Stationary Distributions for Dynamic Economies. Econometrica, 60(6):1387–1406, 1992.
[HR93] Hugo A Hopenhayn and Richard Rogerson. Job Turnover and Policy Evaluation: A General Equilibrium Analysis. Journal of Political Economy, 101(5):915–938, 1993.
[Hug93] Mark Huggett. The risk-free rate in heterogeneous-agent incomplete-insurance economies. Journal of Economic Dynamics and Control, 17(5-6):953–969, 1993.
[Haggstrom02] Olle Häggström. Finite Markov chains and algorithmic applications. Volume 52. Cambridge University Press, 2002.
[JYC88] John Y. Campbell and Robert J. Shiller. The Dividend-Price Ratio and Expectations of Future Dividends and Discount Factors. Review of Financial Studies, 1(3):195–228, 1988.
[Jov79] Boyan Jovanovic. Firm-specific capital and turnover. Journal of Political Economy, 87(6):1246–1260, 1979.
[Jud90] K L Judd. Cournot versus Bertrand: a dynamic resolution. Technical Report, Hoover Institution, Stanford University, 1990.
[Jud85] Kenneth L Judd. On the performance of patents. Econometrica, pages 567–585, 1985.
[JYC03] Kenneth L. Judd, Sevin Yeltekin, and James Conklin. Computing Supergame Equilibria. Econometrica, 71(4):1239–1254, July 2003. URL: https://ideas.repec.org/a/ecm/emetrp/v71y2003i4p1239-1254.html.
[Janich94] K Jänich. Linear Algebra. Springer Undergraduate Texts in Mathematics and Technology. Springer, 1994.
[Kam12] Takashi Kamihigashi. Elementary results on solutions to the Bellman equation of dynamic programming: existence, uniqueness, and convergence. Technical Report, Kobe University, 2012.
[Kre88] David M. Kreps. Notes on the Theory of Choice. Westview Press, Boulder, Colorado, 1988.
[Kuh13] Moritz Kuhn. Recursive Equilibria In An Aiyagari-Style Economy With Permanent Income Shocks. International Economic Review, 54:807–835, 2013.
[KP80a] Finn E Kydland and Edward C Prescott. Dynamic optimal taxation, rational expectations and optimal control. Journal of Economic Dynamics and Control, 2:79–91, 1980.
[KP77] Finn E. Kydland and Edward C. Prescott. Rules rather than discretion: the inconsistency of optimal plans. Journal of Political Economy, 85(3):473–491, 1977.
[KP80b] Finn E. Kydland and Edward C. Prescott. Time to build and aggregate fluctuations. Econometrica, 50(6):1345–1370, 1982.
[LM94] A Lasota and M C MacKey. Chaos, Fractals, and Noise: Stochastic Aspects of Dynamics. Applied Mathematical Sciences. Springer-Verlag, 1994.
[LL01] Martin Lettau and Sydney Ludvigson. Consumption, Aggregate Wealth, and Expected Stock Returns. Journal of Finance, 56(3):815–849, June 2001.
[LL04] Martin Lettau and Sydney C. Ludvigson. Understanding Trend and Cycle in Asset Values: Reevaluating the Wealth Effect on Consumption. American Economic Review, 94(1):276–299, March 2004.
[LM80] David Levhari and Leonard J Mirman. The great fish war: an example using a dynamic Cournot-Nash solution. The Bell Journal of Economics, pages 322–334, 1980.
[LS18] L Ljungqvist and T J Sargent. Recursive Macroeconomic Theory. MIT Press, 4th edition, 2018.
[Luc78] Robert E Lucas, Jr. Asset prices in an exchange economy. Econometrica: Journal of the Econometric Society, 46(6):1429–1445, 1978.
[Luc03] Robert E Lucas, Jr. Macroeconomic Priorities. American Economic Review, 93(1):1–14, March 2003. URL: https://www.aeaweb.org/articles?id=10.1257/000282803321455133.
[LP71] Robert E Lucas, Jr. and Edward C Prescott. Investment under uncertainty. Econometrica: Journal of the Econometric Society, pages 659–681, 1971.
[LS83] Robert E Lucas, Jr. and Nancy L Stokey. Optimal Fiscal and Monetary Policy in an Economy without Capital. Journal of Monetary Economics, 12(3):55–93, 1983.
[MS89] Albert Marcet and Thomas J. Sargent. Convergence of Least-Squares Learning in Environments with Hidden State Variables and Private Information. Journal of Political Economy, 97(6):1306–1322, 1989.
[MdRV10] V. Filipe Martins-da-Rocha and Yiannis Vailakis. Existence and Uniqueness of a Fixed Point for Local Contractions. Econometrica, 78(3):1127–1141, 2010.
[MCWG95] A. Mas-Colell, M. D. Whinston, and J. R. Green. Microeconomic Theory. Volume 1. Oxford University Press, 1995.
[McC70] J. J. McCall. Economics of Information and Job Search. The Quarterly Journal of Economics, 84(1):113–126, 1970.
[MT09] S. P. Meyn and R. L. Tweedie. Markov Chains and Stochastic Stability. Cambridge University Press, 2009.
[MS85] Marcus Miller and Mark Salmon. Dynamic Games and the Time Inconsistency of Optimal Policy
in Open Economies. Economic Journal, 95:124–137, 1985.
[MF02] Mario J. Miranda and P. L. Fackler. Applied Computational Economics and Finance. MIT Press, Cambridge, MA, 2002.
[MB54] F. Modigliani and R. Brumberg. Utility analysis and the consumption function: An interpretation of cross-section data. In K. K. Kurihara, editor, Post-Keynesian Economics. 1954.
[Mut60] John F. Muth. Optimal properties of exponentially weighted forecasts. Journal of the American Statistical Association, 55(290):299–306, 1960.
[Nea99] Derek Neal. The Complexity of Job Mobility among Young Men. Journal of Labor Economics, 17(2):237–261, 1999.
[Orf88] Sophocles J. Orfanidis. Optimum Signal Processing: An Introduction. McGraw Hill Publishing, New York, New York, 1988.
[Par99] Jonathan A. Parker. The Reaction of Household Consumption to Predictable Changes in Social Security Taxes. American Economic Review, 89(4):959–973, 1999.
[PL92] J. G. Pearlman, D. A. Currie, and P. L. Levine. Rational expectations with partial information. Economic Modelling, 3:90–105, 1992.
[Pea92] J.G. Pearlman. Reputational and nonreputational policies under partial information. Journal of Eco-
nomic Dynamics and Control, 16(2):339–358, 1992.
[Pre77] Edward C. Prescott. Should control theory be used for economic stabilization? Journal of Monetary Economics, 7:13–38, 1977.
[Put05] Martin L. Puterman. Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons, 2005.
[PalS13] Jenő Pál and John Stachurski. Fitted value function iteration with probability one contractions. Journal of Economic Dynamics and Control, 37(1):251–264, 2013.
[Rab02] Guillaume Rabault. When do borrowing constraints bind? Some new results on the income fluctu-
ation problem. Journal of Economic Dynamics and Control, 26(2):217–245, 2002.
[Ram27] F. P. Ramsey. A Contribution to the theory of taxation. Economic Journal, 37(145):47–61, 1927.
[Rei09] Michael Reiter. Solving heterogeneous-agent models by projection and perturbation. Journal of
Economic Dynamics and Control, 33(3):649–665, 2009.
[Rom05] Steven Roman. Advanced Linear Algebra. Volume 3. Springer, 2005.
[Roz67] Y. A. Rozanov. Stationary Random Processes. Holden-Day, San Francisco, 1967.
[Rus96] John Rust. Numerical dynamic programming in economics. Handbook of computational eco-
nomics, 1:619–729, 1996.
[Rya12] Stephen P. Ryan. The costs of environmental regulation in a concentrated industry. Econometrica, 80(3):1019–1061, 2012.
[Sam39] Paul A. Samuelson. Interactions between the multiplier analysis and the principle of acceleration. Review of Economics and Statistics, 21(2):75–78, 1939.
[Sar79] T. J. Sargent. A note on maximum likelihood estimation of the rational expectations model of the term structure. Journal of Monetary Economics, 35:245–274, 1979.
[Sar77] Thomas J. Sargent. The Demand for Money During Hyperinflations under Rational Expectations: I. International Economic Review, 18(1):59–82, February 1977.
[Sar87] Thomas J. Sargent. Macroeconomic Theory. Academic Press, New York, 2nd edition, 1987.
[SE77] Jack Schechtman and Vera L. S. Escudero. Some results on an income fluctuation problem. Journal of Economic Theory, 16(2):151–166, 1977.
[Sch14] José A. Scheinkman. Speculation, Trading, and Bubbles. Columbia University Press, New York, 2014.
[Sch69] Thomas C. Schelling. Models of Segregation. American Economic Review, 59(2):488–493, 1969.
[Shi95] A. N. Shiriaev. Probability. Graduate Texts in Mathematics. Springer, 2nd edition, 1995.
[SLP89] N. L. Stokey, R. E. Lucas, and E. C. Prescott. Recursive Methods in Economic Dynamics. Harvard University Press, 1989.
[Sto89] Nancy L. Stokey. Reputation and time consistency. The American Economic Review, pages 134–139, 1989.
[Sto91] Nancy L. Stokey. Credible public policy. Journal of Economic Dynamics and Control, 15(4):627–656, October 1991.
[STY04] Kjetil Storesletten, Christopher I. Telmer, and Amir Yaron. Consumption and risk sharing over the life cycle. Journal of Monetary Economics, 51(3):609–633, 2004.
[Sun96] R. K. Sundaram. A First Course in Optimization Theory. Cambridge University Press, 1996.
[Tal00] Thomas D. Tallarini. Risk-sensitive real business cycles. Journal of Monetary Economics, 45(3):507–532, June 2000.
[Tau86] George Tauchen. Finite state Markov-chain approximations to univariate and vector autoregressions. Economics Letters, 20(2):177–181, 1986.
[Tow83] Robert M. Townsend. Forecasting the forecasts of others. Journal of Political Economy, 91:546–588, 1983.
[Tre16] Daniel Treisman. Russia's billionaires. The American Economic Review, 106(5):236–241, 2016.
[VL11] Ngo Van Long. Dynamic games in the economics of natural resources: a survey. Dynamic Games and Applications, 1(1):115–148, 2011.
[Wal47] Abraham Wald. Sequential Analysis. John Wiley and Sons, New York, 1947.
[Whi63] Peter Whittle. Prediction and regulation by linear least-square methods. English Univ. Press,
1963.
[Whi83] Peter Whittle. Prediction and Regulation by Linear Least Squares Methods. University of Min-
nesota Press, Minneapolis, Minnesota, 2nd edition, 1983.
[Woo03] Michael Woodford. Interest and Prices: Foundations of a Theory of Monetary Policy. Princeton
University Press, 2003.
[Woo15] Jeffrey M. Wooldridge. Introductory econometrics: A modern approach. Nelson Education, 2015.
[YS05] G. Alastair Young and Richard L. Smith. Essentials of statistical inference. Cambridge University Press, 2005.