Perturbation and Projection Methods For Solving DSGE Models: Lawrence J. Christiano

This document discusses methods for solving dynamic stochastic general equilibrium (DSGE) models. It introduces projection and perturbation methods using a simple toy example of an exogenous variable x and an equation defining y. Projection methods use a functional form characterization of the model solution. Perturbation methods show that using certainty equivalence is a valid approximation when the variance of shocks is small. The document outlines applying these methods to neoclassical models with endogenous labor supply and real business cycle models with exogenous labor hours.


"

° ?
"
""
"
th &
"
A.
.

10:43 y
.

.
, o

,
gyo

Perturbation and Projection 
Methods for Solving DSGE Models
Lawrence J. Christiano

Discussion of projections taken from Christiano‐Fisher, ‘Algorithms for Solving Dynamic Models with Occasionally 
Binding Constraints’, 2000, Journal of Economic Dynamics and Control.
Discussion of perturbations taken from Judd’s textbook.
Outline
• A Toy Example to Illustrate the basic ideas.
– Functional form characterization of model solution.
– Use of Projections and Perturbations.
• Neoclassical model (RBC model with constant hours worked).
– Projection methods
– Perturbation methods
• Make sense of the proposition, 'to a first order approximation, can replace equilibrium conditions with linear expansion about nonstochastic steady state and solve the resulting system using certainty equivalence'.
[Margin note] At one level this sort of proposition seems trivial; at another level it is very powerful and important.
Simple Example
• Suppose that x is some exogenous variable and that the following equation implicitly defines y:
h(x, y) = 0, for all x ∈ X
• Let the solution be defined by the 'policy rule', g:
y = g(x)
• satisfying the 'error function' condition
R(x; g) ≡ h(x, g(x)) = 0
• for all x ∈ X.
[Handwritten notes] Digression: decision making under uncertainty.
• Let f(d, x) be the payoff if decision d is taken when x occurs. Suppose d must be chosen before x is observed.
• Example: x = 1 if people who recover are immune, x = 0 if not. Beliefs now are that P[x = 1] = p, so Ex = p.
• Timing of decisions under uncertainty: ① choose d; ② the value of x occurs; ③ the payoff f(d, x) is realized. If you chose the d that is optimal given x = 1, you may feel extreme regret ex post if x = 0.
• The question for economics is how to choose d under these circumstances. Answer:
max_d E f(d, x).
• Problem for economists: computing E f(d, x) is sometimes a pain. Economists and engineers often use certainty equivalence as an approximation: replace the original problem with
max_d f(d, Ex).
• The perturbation method shows (proves) that certainty equivalence is a correct approximation if the variance of x is small enough.
• Aside (covid): most likely the current environment is going away, but there is a small, small chance of outcomes unlike anything we have seen — a case where a small-variance approximation is questionable.

[Handwritten notes] A simple economic model (used again later):
• Households: max U(c, l) over consumption c and work l, subject to the budget constraint c = w·l + other income, taking the wage w as given (competition among many identical people).
• Firms: w = marginal product of labor; with production linear in labor, output per unit of labor equals w.
• Benefit of an extra unit of work: w·U_c. Cost of an extra unit of work: −U_l.
• So the labor-market condition is
w = −U_l / U_c
— the wage equals the 'cost of working in consumption units'.
[Handwritten notes] Slightly more formal: what is the cost, to a person, of dl > 0, in consumption units? That is, what dc > 0 is required so that dU = 0? Total differentiation:
0 = dU = U_c·dc + U_l·dl  ⇒  dc/dl = −U_l / U_c
— the cost of working, in consumption units.
So far, this is just the economics relevant to the later discussion.

[Handwritten notes] For present purposes I want an economic illustration of h(x, y) = 0. In the labor market model above, let x be the exogenous (technology) variable and y the endogenous variable. Combining the household condition with the firm condition — consumption is simply substituted out using the budget constraint — gives a single equilibrium condition of the form
0 = h(x, y),
and this model's solution is y = g(x).
The Need to Approximate
• Finding the policy rule, g, is a big problem outside special cases
– 'Infinite number of unknowns (i.e., one value of g for each possible x) in an infinite number of equations (i.e., one equation for each possible x).'
• Two approaches:
– projection and perturbation
[Margin note] Unknowns: the (functional) values g(x), for all x ∈ X. Equations: h(x, g(x)) = 0, for all x ∈ X.
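[Code note] The 'one equation for each possible x' idea can be made concrete: for any particular x we can recover g(x) by one nonlinear solve. A minimal Python sketch, using a hypothetical h(x, y) = y + y³ − x (not from the slides) and bisection:

```python
def h(x, y):
    # A toy equilibrium condition, h(x, y) = y + y^3 - x (hypothetical example;
    # the exact policy rule g has no simple closed form).
    return y + y**3 - x

def g(x, lo=-10.0, hi=10.0, tol=1e-12):
    # Solve h(x, y) = 0 for y by bisection: one nonlinear solve per value of x.
    # This is the sense in which g has an 'infinite number of unknowns' --
    # a separate solve is needed for every x we care about.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h(x, mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# The residual R(x; g) = h(x, g(x)) is (numerically) zero at every x we check.
for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    assert abs(h(x, g(x))) < 1e-9
```

Both projection and perturbation replace this point-by-point solving with an approximating function ĝ.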
Projection
• Find a parametric function, ĝ(x; θ), where θ is a vector of parameters chosen so that it imitates the property of the exact solution, i.e., R(x; g) = 0 for all x ∈ X, as well as possible.
• Choose values for θ so that
R̂(x; θ) ≡ h(x, ĝ(x; θ))
is close to zero for x ∈ X.
• The method is defined by how 'close to zero' is defined and by the parametric function, ĝ(x; θ), that is used.
[Margin note] We don't know g; we want ĝ(·; θ) to behave like g.
Projection, continued
• Spectral and finite element approximations
– Spectral functions: functions, ĝ(x; θ), in which each parameter in θ influences ĝ(x; θ) for all x ∈ X. Example (n parameters):
ĝ(x; θ) = ∑_{i=0}^{n−1} θ_i·H_i(x), θ = (θ_0, ..., θ_{n−1})
H_i(x) = x^i — ordinary polynomial (not computationally efficient)
H_i(x) = T_i(φ(x)), where T_i : [−1, 1] → [−1, 1] is the i-th order Chebyshev polynomial (Chebyshev: Russian mathematician) and φ : X → [−1, 1].
[Margin note] In a spectral function each parameter θ_i influences ĝ globally: change θ_3 and ĝ changes everywhere. That is sometimes not convenient — a spectral approximation may need a huge n.
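[Code note] A minimal sketch of a spectral approximation with Chebyshev basis functions, in Python. The model here is hypothetical (not from the slides): h(x, y) = (1 + x²)·y − 1, whose exact policy rule is the Runge function g(x) = 1/(1 + x²). Because this h is linear in y, collocation on a grid of Chebyshev zeros reduces to a single linear solve:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Hypothetical model: h(x, y) = (1 + x^2)*y - 1 = 0, so the exact policy
# rule is g(x) = 1/(1 + x^2). Approximate g with a spectral function
# ghat(x; theta) = sum_i theta_i * T_i(x) on X = [-1, 1].
n = 21                                      # number of basis functions / grid points
j = np.arange(n)
x = np.cos((2 * j + 1) * np.pi / (2 * n))   # zeros of T_n: the collocation grid

# Collocation: choose theta so h(x_j, ghat(x_j; theta)) = 0 at each grid point.
# h is linear in y here, so this is one linear system V @ theta = rhs.
V = C.chebvander(x, n - 1)                  # V[j, i] = T_i(x_j)
rhs = 1.0 / (1.0 + x**2)
theta = np.linalg.solve(V, rhs)

# Check the residual R(x; theta) = h(x, ghat(x; theta)) off the grid as well.
xx = np.linspace(-1, 1, 201)
ghat = C.chebval(xx, theta)
resid = (1.0 + xx**2) * ghat - 1.0
assert np.max(np.abs(resid)) < 1e-5
```

In a nonlinear model the same collocation conditions would be solved with a nonlinear equation solver instead of `np.linalg.solve`.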
Projection, continued
– Finite element approximations: functions, ĝ(x; θ), in which each parameter in θ influences ĝ(x; θ) over only a subinterval of x ∈ X (grid points 1, 2, 3, 4, 5, 6, 7 partition X).
[Margin note] E.g., with a piecewise linear ĝ, θ_5 only influences ĝ on the subintervals adjacent to grid point 5. With small enough subintervals you can maybe get away with a small number of parameters on each piece.
Projection, continued
• 'Close to zero': two methods
• Collocation: for n values of x : x_1, x_2, ..., x_n ∈ X, choose the n elements of θ = (θ_0, ..., θ_{n−1}) so that
R̂(x_i; θ) ≡ h(x_i, ĝ(x_i; θ)) = 0, i = 1, ..., n
– how you choose the grid of x's matters…
• Weighted Residual: for m > n values of x : x_1, x_2, ..., x_m ∈ X, choose the n θ_i's so that
∑_{j=1}^{m} w_ij·h(x_j, ĝ(x_j; θ)) = 0, i = 1, ..., n
[Handwritten notes]
• What we want: g. We don't know what g is, but we do know what g satisfies: for all x ∈ X, h(x, g(x)) = 0, i.e., R(x; g) = 0. We also know what h is (see the earlier example, the model of an economy where h is constructed from derivatives of U(c, l)).
• In general there is no hope of determining g exactly (only in a small number of examples can we); we have to approximate, using ĝ. But we can get pretty close: look for a ĝ such that R̂(x; θ) ≈ 0 for x ∈ X.
• Spectral example with two parameters, ĝ(x; θ) = θ_0 + θ_1·φ(x): the parameters θ_0, θ_1 are to be chosen to make R̂ ≈ 0. The natural method is collocation — pick two grid points x_1, x_2 and solve
R̂(x_1; θ_0, θ_1) = 0
R̂(x_2; θ_0, θ_1) = 0
— i.e., find θ_0, θ_1 to solve the above equations. (Collocation: n grid points for a spectral ĝ; for finite elements, grid points on each subinterval.)
Example of Importance of Grid Points
• Here is an example, taken from a related problem, the problem of interpolation.
– You get to evaluate a function on a set of grid points that you select, and you must guess the shape of the function between the grid points.
• Consider the (Runge) function, named after Carl Runge:
f(k) = 1/(1 + k²), k ∈ [−5, 5]
• Next slide shows what happens when you select 11 equally‐spaced grid points and interpolate by fitting a 10th order polynomial.
– As you increase the number of grid points on a fixed interval grid, oscillations in tails grow more and more violent.
• Chebyshev approximation theorem: distribute more points in the tails (by selecting zeros of Chebyshev polynomial) and get convergence in sup norm.
.
[Handwritten notes] Interpolation: a situation where I know g — that makes it different from what we have now. It is similar because (suppose) it is computationally very expensive to evaluate g at any point: try to approximate it by a function that is fast to evaluate,
f(x) ≈ p(x; a) = a_0 + a_1·x + ... + a_10·x^10
— 11 parameters, a = (a_0, ..., a_10). Pick values for a_0, ..., a_10 so that the polynomial matches f at the grid points:
f(x_1) − p(x_1; a) = 0
f(x_2) − p(x_2; a) = 0
⋮
f(x_11) − p(x_11; a) = 0.
To find the a's, stack these as Y = X·a (a 'regression' with as many parameters as observations), so
a = (X′X)^{−1}·X′Y = X^{−1}·Y
— a kind of least squares with residual identically zero. Approximating a function this way can produce disturbing results: how you select the grid points matters!
How You Select the Grid Points Matters!
[Figure: 10th order polynomial interpolation of the Runge function; with equally spaced grid points the fitted polynomial oscillates violently between the grid points in the tails.]
Figure from Christiano‐Fisher, JEDC, 2000.
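[Code note] The figure's message is easy to reproduce numerically. A Python sketch comparing 11 equally spaced nodes with 11 Chebyshev-zero nodes for degree-10 interpolation of the Runge function:

```python
import numpy as np

# The Runge function on [-5, 5].
f = lambda k: 1.0 / (1.0 + k**2)

def interp_error(nodes):
    # Fit the degree-10 interpolating polynomial through the 11 nodes and
    # measure the worst-case (sup norm) error on a fine grid.
    coefs = np.polyfit(nodes, f(nodes), len(nodes) - 1)
    grid = np.linspace(-5, 5, 1001)
    return np.max(np.abs(np.polyval(coefs, grid) - f(grid)))

equi = np.linspace(-5, 5, 11)                   # equally spaced nodes
j = np.arange(11)
cheb = 5 * np.cos((2 * j + 1) * np.pi / 22)     # zeros of T_11, scaled to [-5, 5]

# Equally spaced interpolation oscillates violently in the tails;
# Chebyshev nodes tame the oscillation.
assert interp_error(equi) > 1.0                 # large tail oscillations
assert interp_error(cheb) < 0.5                 # much better
assert interp_error(cheb) < interp_error(equi)
```

Increasing the number of equally spaced nodes makes the tail oscillations worse, while adding Chebyshev nodes drives the sup-norm error to zero.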
[Handwritten notes] Chebyshev interpolation in practice:
• T_i is the i-th Chebyshev polynomial (the i-th basis function); its zeros (e.g., T_2 has zeros at ±0.71 ≈ ±1/√2) all lie in [−1, 1] and bunch up near the endpoints — this is the 'magic' behind the sup-norm convergence.
• To interpolate the Runge function on [−5, 5]: ① read off the zeros of the 11th order Chebyshev polynomial; ② map these zeros onto the interval [−5, 5]; ③ use the mapped zeros as the interpolation grid, as before.
[Handwritten notes] Notation: for f : X → ℝ and f_n : X → ℝ,
|f − f_n| ≡ { |f(x) − f_n(x)| : x ∈ X }
is the set of all pointwise differences, and the sup norm
‖f − f_n‖ ≡ max_{x∈X} |f(x) − f_n(x)|
is the biggest error made in using f_n to approximate f. The statement
lim_{n→∞} ‖f − f_n‖ = 0
is 'uniform convergence'.
[Handwritten notes] Integration: ∫_X f(x) dx. For many, many cases this can be done analytically, e.g.
∫_a^b x² dx = (1/3)(b³ − a³)
(symbolic algebra). In economics, though, integration is mostly not analytically tractable. Bayesian econometrics: one constantly has to integrate functions that are not analytically tractable, so most of the time ∫ f(x) dx must be approximated. But suppose f were a polynomial — then ∫_a^b f(x) dx is easy. So if f_n → f, where f_n is an nth order Chebyshev interpolation of f, then
∫ f(x) dx ≈ ∫ f_n(x) dx.
This is quadrature integration. Chebyshev is one method. There are many.
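[Code note] A quick numerical illustration of quadrature — Gauss-Legendre here, one of the 'many' methods, applied to the Runge function, whose integral on [−1, 1] has a closed form:

```python
import numpy as np

# Gauss-Legendre quadrature approximates an integral on [-1, 1] by a
# weighted sum of function evaluations at well-chosen nodes.
nodes, weights = np.polynomial.legendre.leggauss(20)

f = lambda x: 1.0 / (1.0 + x**2)
approx = np.sum(weights * f(nodes))

# Exact value: integral of 1/(1+x^2) on [-1, 1] is 2*arctan(1) = pi/2.
exact = np.pi / 2
assert abs(approx - exact) < 1e-8
```

Twenty function evaluations already reproduce the analytic answer to high precision, which is the point: replace an intractable integral with a short weighted sum.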

[Handwritten notes] A natural alternative is the Riemann definition of the integral: approximate ∫ f(x) dx by summing f over a fine grid. Note also that if ‖f − f_n‖ = max_{x∈X} |f(x) − f_n(x)| is small, then the integrals of f and f_n are close as well.
Projection, continued
• [Slide repeated] 'Close to zero': two methods — collocation (solve R̂(x_i; θ) = 0 at n grid points x_1, ..., x_n) and weighted residual (∑_{j=1}^{m} w_ij·h(x_j, ĝ(x_j; θ)) = 0, i = 1, ..., n, for m > n grid points; with weights built from the basis functions this is the Galerkin method).
[Margin note] The weighted residual method evaluates R̂ at many x ∈ X, not just at n grid points.
Perturbation
• Projection uses the 'global' behavior of the functional equation to approximate solution.
– Problem: requires finding zeros of non‐linear equations. Iterative methods for doing this are a pain.
– Advantage: can easily adapt to situations where the policy rule is not continuous or simply non‐differentiable (e.g., occasionally binding zero lower bound on interest rate).
• Perturbation method uses Taylor series expansion (computed using implicit function theorem) to approximate model solution.
– Advantage: can implement procedure using non‐iterative methods.
– Possible disadvantages:
• Global properties of Taylor series expansion not necessarily very good.
• Does not work when there are important non‐differentiabilities (e.g., occasionally binding zero lower bound on interest rate).
Taylor Series Expansion
• Let f : ℝ → ℝ be k+1 times differentiable on the open interval and continuous on the closed interval between a and x.
– Then,
f(x) = P_k(x) + R_k(x)
– where P_k is the kth order Taylor series expansion about x = a:
P_k(x) = f(a) + f′(a)(x − a) + (1/2!)·f″(a)(x − a)² + ... + (1/k!)·f^(k)(a)(x − a)^k
– and the remainder is
R_k(x) = (1/(k+1)!)·f^(k+1)(ξ)·(x − a)^(k+1), for some ξ between x and a.
– Question: is the Taylor series expansion a good approximation for f?
[Margin note] R_k(x) measures how far P_k is from f at x.
Taylor Series Expansion
• It’s not as good as you might have thought.

• The next slide exhibits the accuracy of the 
Taylor series approximation to the Runge
function.
– In a small neighborhood of the point where the 
approximation is computed (i.e., 0), higher order 
Taylor series approximations are increasingly 
accurate.
– Outside that small neighborhood, the quality of 
the approximation deteriorates with higher order 
approximations.
Taylor Series Expansions about 0 of Runge Function
[Figure: 5th, 10th, and 25th order Taylor series expansions of the Runge function 1/(1 + x²) about 0, plotted for x ∈ [−1.5, 1.5]. Increasing order of approximation leads to improved accuracy in a neighborhood of zero and reduced accuracy further away.]
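[Code note] The figure's pattern is easy to check numerically. The Taylor series of 1/(1 + x²) about 0 is ∑ (−1)^i x^(2i); a Python sketch:

```python
import numpy as np

def runge(x):
    return 1.0 / (1.0 + x**2)

def taylor_runge(x, order):
    # Taylor series of 1/(1+x^2) about 0: sum of (-1)^i * x^(2i) for 2i <= order.
    i = np.arange(order // 2 + 1)
    return np.sum((-1.0)**i * x**(2 * i))

# Inside the radius of convergence (|x| < 1), higher order helps...
err5  = abs(taylor_runge(0.5, 5)  - runge(0.5))
err25 = abs(taylor_runge(0.5, 25) - runge(0.5))
assert err25 < err5

# ...but outside it (|x| > 1), higher order makes things worse.
err5_out  = abs(taylor_runge(1.5, 5)  - runge(1.5))
err25_out = abs(taylor_runge(1.5, 25) - runge(1.5))
assert err25_out > err5_out
```

The series converges only for |x| < 1 (the distance to the singularities at ±i), which is exactly the pattern in the figure.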
Taylor Series Expansion
• Another example: the log function
– It is often used in economics.
– Surprisingly, Taylor series expansion does not provide a great global approximation.
• Approximate log(x) by its kth order Taylor series approximation at the point, x = a:
log(x) ≈ log(a) + ∑_{i=1}^{k} (−1)^(i+1)·(1/i)·((x − a)/a)^i
– This expression diverges as k → ∞ for x such that |x − a|/a ≥ 1.
Taylor Series Expansion of Log Function About x = 1
[Figure: 2nd, 5th, 10th, and 25th order Taylor series approximations of log(x) about x = 1, plotted for x ∈ [1, 3]. The Taylor series approximation deteriorates infinitely for x > 2 as the order of approximation increases.]
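[Code note] The divergence for |x − a|/a ≥ 1 can be checked directly from the formula above; a Python sketch:

```python
import math

def taylor_log(x, k, a=1.0):
    # k-th order Taylor expansion of log(x) about x = a:
    # log(a) + sum_{i=1}^{k} (-1)^(i+1) * (1/i) * ((x-a)/a)^i
    return math.log(a) + sum(
        (-1.0)**(i + 1) / i * ((x - a) / a)**i for i in range(1, k + 1))

# For |x - a|/a < 1 (here x = 1.5), the approximation improves with order...
assert abs(taylor_log(1.5, 10) - math.log(1.5)) < abs(taylor_log(1.5, 2) - math.log(1.5))

# ...but for x > 2 (|x - a|/a > 1), it deteriorates as the order grows.
assert abs(taylor_log(2.5, 25) - math.log(2.5)) > abs(taylor_log(2.5, 5) - math.log(2.5))
```

At x = 2.5 the terms ((x − 1))^i/i grow without bound, so adding terms makes the approximation worse, just as in the figure.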
Taylor Series Expansion
• In general, cannot expect Taylor series 
expansion to converge to actual function, 
globally. 

– There are some exceptions, e.g., the Taylor's series expansion of f(x) = e^x, cos(x), or sin(x) about x = 0 converges to f(x) even for x far from 0.

– Problem: in general it is difficult to say for what 
values of x the Taylor series expansion gives a 
good approximation.            

Taylor Versus Weierstrass
• Problems with the Taylor series expansion do not represent a problem with polynomials per se as approximating functions.
• Weierstrass approximation theorem: for every continuous function, f(x), defined on [a, b], and for every ε > 0, there exists a finite‐ordered polynomial, p(x), on [a, b] such that
|f(x) − p(x)| < ε, for all x ∈ [a, b]
• Weierstrass — polynomials may approximate well, even if sometimes the Taylor series expansion is not very good.
• Weierstrass theorem asserts that there exists some sequence of polynomials the limit of which approximates a function well.
– We used the Runge function to illustrate two sequences that do not approximate well: interpolation on a fixed-interval (equally spaced) grid and Taylor series expansion about zero.
– Interpolation with Chebyshev zeroes gives an excellent approximation.
[Margin note] One can find such a p by Chebyshev interpolation: the interpolating polynomials p_k on grids corresponding to Chebyshev zeros converge, p_k → p, as k → ∞.
[Handwritten notes] Chebyshev interpolation theorem: for every ε > 0, no matter how small, there is a K such that for all k > K,
max_x |f(x) − p_k(x)| < ε,
where f is a continuous function and p_k is its kth order Chebyshev interpolation. That is, p_k → f uniformly as k → ∞.


[Handwritten notes] The perturbation method uses a Taylor approximation to the function, g, that we are trying to find — the g satisfying h(x, g(x)) = 0 for all x ∈ X. Approximating g by its Taylor expansion is what gives the perturbation method its name.
• Ok, we’re done with the digression on the 
Taylor series expansion.

• Now, back to the discussion of the 
perturbation method.
– It approximates a solution using the Taylor series 
expansion.
Perturbation Method
• Suppose there is a point, x* ∈ X, where we know the value taken on by the function, g, that we wish to approximate:
g(x*) = g*, some x*
• Use the implicit function theorem to approximate g in a neighborhood of x*
• Note:
R(x; g) = 0 for all x ∈ X
implies
R^(j)(x; g) ≡ (d^j/dx^j) R(x; g) = 0 for all j, all x ∈ X.
[Handwritten notes] The insight:
R(x; g) = h(x, g(x)) = 0, for all x ∈ X,
means every derivative of R is also zero. Differentiating once,
R^(1)(x; g) = h_1(x, g(x)) + h_2(x, g(x))·g′(x) = 0, for all x ∈ X.
Two unknowns here: g(x) and g′(x). Now suppose there exists x* ∈ X with g(x*) known. Then
R^(1)(x*; g) = h_1(x*, g(x*)) + h_2(x*, g(x*))·g′(x*) = 0,
so
g′(x*) = −h_1(x*, g(x*)) / h_2(x*, g(x*))
— and h_2(x*, g(x*)) had better not be zero! This delivers the first order approximation
g(x) ≈ g(x*) + g′(x*)·(x − x*).
Differentiating again,
R^(2)(x; g) = h_11(x, g(x)) + 2·h_12(x, g(x))·g′(x) + h_22(x, g(x))·[g′(x)]² + h_2(x, g(x))·g″(x) = 0, for all x ∈ X.
Setting R^(2)(x*; g) = 0 and using the known g(x*) and g′(x*) gives g″(x*), and the second order approximation
g(x) ≈ g(x*) + g′(x*)·(x − x*) + (1/2)·g″(x*)·(x − x*)².
Go on forever. As you go to higher order derivatives it's sort of a mess, because there are a lot of derivatives.
* Interesting: to find g^(j)(x*), j = 1, 2, 3, ..., you always solve a linear equation in g^(j)(x*), using the previously computed g^(s)(x*), s < j.
[Handwritten notes] The basic ideas carry over to dynamic models (the toy example is static, as you saw in the simple economic model I used to motivate h(x, g(x)) = 0 as an equilibrium condition). Two things do not carry over to dynamic models:
• The existence of an x* with g(x*) known: true here, but not in dynamic models.
• Another thing: the implicit function theorem does not apply in dynamic models. The implicit function theorem addresses a situation like this: h(x, g(x)) = 0, where h is known and g is not known but is of interest. IFT: if, at an x* of interest,
① g(x*) is known, and
② h_2(x*, g(x*)) ≠ 0,
then (i) g is differentiable in a neighborhood of x*, and (ii)
g′(x) = −h_1(x, g(x)) / h_2(x, g(x)).
In dynamic economics the condition involves g evaluated at its own output — h(g(g(x)), g(x), x) = 0 — so the theorem does not apply directly. When we do perturbations in a dynamic setting, we cross our fingers and assume g is differentiable, and then compute derivatives like in (ii) of the IFT.
Perturbation, cnt'd
• Differentiate R with respect to x and evaluate the result at x*:
R^(1)(x*) = (d/dx) h(x, g(x)) |_{x=x*} = h_1(x*, g*) + h_2(x*, g*)·g′(x*) = 0
→ g′(x*) = −h_1(x*, g*) / h_2(x*, g*)   [the implicit function theorem formula]
• Do it again!
R^(2)(x*) = (d²/dx²) h(x, g(x)) |_{x=x*} = h_11(x*, g*) + 2·h_12(x*, g*)·g′(x*) + h_22(x*, g*)·[g′(x*)]² + h_2(x*, g*)·g″(x*) = 0
→ Solve this linearly for g″(x*).
Perturbation, cnt'd
• Preceding calculations deliver (assuming enough differentiability, appropriate invertibility, a high tolerance for painful notation!), recursively:
g′(x*), g″(x*), ..., g^(n)(x*)
• Then, have the following Taylor's series approximation:
g(x) ≈ ĝ(x)
ĝ(x) = g* + g′(x*)·(x − x*) + (1/2)·g″(x*)·(x − x*)² + ... + (1/n!)·g^(n)(x*)·(x − x*)^n

Perturbation, cnt'd
• Check….
• Study the graph of R(x; ĝ)
– over x ∈ X to verify that it is everywhere close to zero (or, at least in the region of interest).
[Margin note] If R(x; ĝ) ≈ 0 over a big enough region, great! The approximation may be inaccurate when ĝ is evaluated outside that region.
Example: a Circle
• Function:
h(x, y) = x² + y² − 4 = 0.
• For each x except x = −2, 2, there are two distinct y that solve h(x, y) = 0:
y = g_1(x) ≡ √(4 − x²), y = g_2(x) ≡ −√(4 − x²).
• The perturbation method does not require that the function g that solves h(x, g(x)) = 0 be unique.
– When you specify the value of the function, g, at the point of the expansion, you select the function whose Taylor series expansion is delivered by the perturbation method.
Example of Implicit Function Theorem
h(x, y) = x² + y² − 4 = 0.
[Figure: the circle of radius 2 centered at the origin, with the first order approximation
g(x) ≈ g* − (x*/g*)·(x − x*)
drawn tangent to the upper branch at (x*, g*).]
g′(x*) = −h_1(x*, g*)/h_2(x*, g*) = −x*/g*
— h_2 had better not be zero! (At x* = ±2, g* = 0, so h_2 = 2g* = 0 and the formula fails there.)
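[Code note] A numerical check of the circle example: compute g′(x*) and g″(x*) from the implicit-function formulas and compare the resulting Taylor approximation with the exact upper branch (Python sketch):

```python
import math

# h(x, y) = x^2 + y^2 - 4. Pick the branch with g(x*) = g* > 0 at x* = 1.
xs = 1.0
gs = math.sqrt(4.0 - xs**2)            # g* = sqrt(3)

# Implicit function theorem derivatives (h1 = 2x, h2 = 2y, so h2 != 0 here):
g1 = -xs / gs                          # g'(x*)  = -h1/h2 = -x*/g*
g2 = -4.0 / (4.0 - xs**2)**1.5         # g''(x*), from differentiating once more

def g_taylor(x):
    # Second order Taylor approximation of g about x*.
    return gs + g1 * (x - xs) + 0.5 * g2 * (x - xs)**2

# Near x*, the approximation tracks the upper semicircle closely.
x = 1.2
exact = math.sqrt(4.0 - x**2)
assert abs(g_taylor(x) - exact) < 5e-3
```

Starting instead from g* = −√3 would deliver the Taylor expansion of the lower branch g_2 — the expansion point selects the branch.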
-
[Handwritten notes] Introduce uncertainty in an interesting way.
• x is a random variable:
x = x̄ + ε, where ε = V with probability 1/2 and ε = −V with probability 1/2,
so Eε = (1/2)V + (1/2)(−V) = 0 and Var(ε) = (1/2)V² + (1/2)(−V)² = V².
• y is chosen before ε is known, but after x̄, V are known. Timing:
① x̄, V observed. ② y is chosen. ③ x is realized. ④ payoff h(x, y).
• Since x is either x̄ + V or x̄ − V, it is in general impossible for both h(x̄ + V, y) and h(x̄ − V, y) to be zero. The natural thing with uncertainty is
E h(x, y) = 0, i.e., (1/2)·h(x̄ + V, y) + (1/2)·h(x̄ − V, y) = 0.
Here V is a parameter that is fixed and known, and x̄ ∈ X.

[Handwritten notes] The solution is a function y = g(x̄) satisfying
(1/2)·h(x̄ + V, g(x̄)) + (1/2)·h(x̄ − V, g(x̄)) = 0, for all x̄ ∈ X.
[A DSGE model has this structure.] A clever trick is used here. The reason a clever trick is needed is that, in general, there is no x̄* such that g(x̄*) is known. We need a 'trick' to get the perturbation method to work — to provide us with an approximation.

[Handwritten notes] Introduce a small change, with a parameter, σ. The parameter σ is always a scalar, even when things like x, y are extended to vectors and h(x, y) = 0 to systems. Write
(1/2)·h(x̄ + σV, g(x̄, σ)) + (1/2)·h(x̄ − σV, g(x̄, σ)) = 0.
Things to note: σ = 1 is the original model; σ = 0 is the model with zero uncertainty; σ ∈ (0, 1) creates a sequence of models, starting with σ = 0 and ending with our model, σ = 1.
[Handwritten notes] Within this (admittedly natural) modification of the model, we can apply the perturbation method. Why? It turns out that when σ = 0 (no randomness), it is easy to find an x̄* such that g(x̄*, 0) is known. Now we are in a position to apply the perturbation method, although it is a little more complicated, because g has two arguments.

[Handwritten notes]
R(x̄, σ; g) ≡ (1/2)·h(x̄ + σV, g(x̄, σ)) + (1/2)·h(x̄ − σV, g(x̄, σ)) = 0, for all x̄, σ.
We know g(x̄*, 0), and we seek the approximation
g(x̄, σ) ≈ g(x̄*, 0) + g_1(x̄*, 0)·(x̄ − x̄*) + g_2(x̄*, 0)·(σ − 0).
Get g_1(x̄*, 0) by using R_1(x̄, σ; g) = 0:
R_1(x̄, σ; g) = (1/2)·h_1(x̄ + σV, g(x̄, σ)) + (1/2)·h_1(x̄ − σV, g(x̄, σ)) + [(1/2)·h_2(x̄ + σV, g(x̄, σ)) + (1/2)·h_2(x̄ − σV, g(x̄, σ))]·g_1(x̄, σ) = 0.
Evaluate at x̄ = x̄*, σ = 0:
R_1(x̄*, 0; g) = h_1(x̄*, g(x̄*, 0)) + h_2(x̄*, g(x̄*, 0))·g_1(x̄*, 0) = 0
⇒ g_1(x̄*, 0) = −h_1 / h_2.
Now let's get the coefficient on σ, g_2(x̄*, 0), from R_2(x̄, σ; g) = 0:
R_2(x̄, σ; g) = (1/2)·h_1(x̄ + σV, g(x̄, σ))·V − (1/2)·h_1(x̄ − σV, g(x̄, σ))·V + [(1/2)·h_2(x̄ + σV, g(x̄, σ)) + (1/2)·h_2(x̄ − σV, g(x̄, σ))]·g_2(x̄, σ) = 0.
Evaluate at x̄ = x̄*, σ = 0: the two h_1·V terms cancel, leaving
h_2(x̄*, g(x̄*, 0))·g_2(x̄*, 0) = 0 ⇒ g_2(x̄*, 0) = 0.
So, to a first order approximation, g is the same as before we introduced uncertainty: certainty equivalence. And note that the same cancellation at σ = 0 works for any x̄, so g_2(x̄, 0) = 0 holds not just at x̄* but in a neighborhood of x̄*, for any V > 0.
[Handwritten notes] What is the intuition behind certainty equivalence? Consider
E h(x̄ + ε, y) = 0.
Approximate h by a linear function H (H is a linear approximation of h):
H(x̄ + ε, y) = A·(x̄ + ε) + B·y ≈ h(x̄ + ε, y).
Then
E H(x̄ + ε, y) = A·x̄ + B·y = 0 ⇒ y = −A·x̄ / B
— only Ex = x̄ matters. But h is not a linear function, so Ex is not the only thing that enters; Eε², etc., also enter. E.g., with a quadratic term:
E[A·(x̄² + 2x̄ε + ε²) + B·y] = A·(x̄² + Eε²) + B·y.
No certainty equivalence, because V matters.
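[Code note] The g_σ = 0 result can be seen in a tiny numerical example. Take a hypothetical h(x, y) = exp(x) − y³ (chosen so the expectation condition solves in closed form, not from the slides): the effect of turning on uncertainty is second order in σ (Python sketch):

```python
import math

# Hypothetical h(x, y) = exp(x) - y^3, with x = xbar + sigma*eps and
# eps = +V or -V with probability 1/2 each. Then E h(x, y) = 0 reads
#   (1/2)*h(xbar + sigma*V, y) + (1/2)*h(xbar - sigma*V, y) = 0,
# which solves to y(sigma) = (cosh(sigma*V) * exp(xbar))**(1/3).
xbar, V = 0.0, 1.0

def y(sigma):
    return (math.cosh(sigma * V) * math.exp(xbar)) ** (1.0 / 3.0)

# The first order effect of uncertainty is zero: y(sigma) - y(0) shrinks
# like sigma^2, not sigma. That is certainty equivalence, and it is why a
# small variance justifies replacing E h with h evaluated at the mean.
d1 = y(0.1) - y(0.0)
d2 = y(0.2) - y(0.0)
ratio = d2 / d1
assert 3.5 < ratio < 4.5       # quadratic scaling: doubling sigma roughly quadruples the gap
```

If the scaling were linear in σ (no certainty equivalence at first order), the ratio would be close to 2 instead of 4.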
Outline

• A Toy Example to Illustrate the basic ideas.
– Functional form characterization of model solution.
– Projections and Perturbations.

• Neoclassical model. Done!
– Projection methods
– Perturbation methods

• Stochastic Simulations and Impulse Responses
– Focus on perturbation solutions of order two.
– The need for pruning.
Neoclassical Growth Model
(with no hours worked)
• Objective:
E_0 ∑_{t=0}^{∞} β^t u(c_t), u(c_t) = (c_t^(1−γ) − 1)/(1 − γ)
• Constraints:
c_t + exp(k_{t+1}) ≤ f(k_t, a_t), t = 0, 1, 2, ...
f(k_t, a_t) = exp(αk_t + (1 − α)a_t) + (1 − δ)·exp(k_t)
a_t = ρ·a_{t−1} + ε_t, Eε_t = 0, Eε_t² = V
[Handwritten notes] Here k_t is the log of capital, K_t = exp(k_t), and investment is I_t = K_{t+1} − (1 − δ)K_t. Why logs?
① Economics is addicted to logs.
② In the special case u(c) = log c, δ = 1, the solution can be verified analytically to be
K_{t+1} = αβ·exp((1 − α)a_t)·K_t^α.
(cont'd) Notice that this means
k_{t+1} = log(αβ) + α·k_t + (1 − α)·a_t
— a rule linear in (k_t, a_t), which creates hope that perturbation works well.

[Handwritten notes] Exercise. The efficiency condition in the neoclassical model is
u′(c_t) = β·E_t[ u′(c_{t+1})·f_K(k_{t+1}, a_{t+1}) ]
— the cost of investing a unit today equals its expected discounted payoff tomorrow. Verify that in the case u(c) = log c, δ = 1, the rule K_{t+1} = αβ·exp((1 − α)a_t)·K_t^α satisfies the efficiency condition. Then show that when u(c, l) = log c + ψ·log(1 − l) and δ = 1, optimal hours are constant — l does not change with (a_t, k_t) — and capital still follows a rule of the same form.
→ Long and Plosser, 'Real Business Cycles', JPE, 1983. Finance people attribute this result to Merton.
Efficiency Condition

E_t { u′( f(k_t, a_t) − exp(k_{t+1}) )                       [marginal utility of c_t]
    − β·u′( f(k_{t+1}, ρa_t + σε_{t+1}) − exp(k_{t+2}) )     [marginal utility of c_{t+1}]
      × f_K(k_{t+1}, ρa_t + σε_{t+1}) } = 0.                 [period t+1 marginal product of capital]

• Here, k_t, a_t are given numbers; ε_{t+1} is a random variable; k_{t+1} is the time t choice variable.
• Parameter, σ, indexes a set of models, with the model of interest corresponding to σ = 1.
Solution
• A policy rule,
k_{t+1} = g(k_t, a_t, σ).
• With the property:
R(k_t, a_t, σ; g) ≡ E_t { u′( f(k_t, a_t) − exp(g(k_t, a_t, σ)) )
− β·u′( f(g(k_t, a_t, σ), ρa_t + σε_{t+1}) − exp(g(g(k_t, a_t, σ), ρa_t + σε_{t+1}, σ)) )
× f_K(g(k_t, a_t, σ), ρa_t + σε_{t+1}) } = 0,
• for all a_t, k_t, σ.
Projection Methods
• Let ĝ(k_t, a_t, σ; θ)
– be a function with finite parameters (could be either spectral or finite element, as before).
• Choose parameters, θ, to make
R(k_t, a_t, σ; ĝ)
– as close to zero as possible, over a range of values of the state.
– use weighted residuals or Collocation.
Occasionally Binding Constraints
• Suppose we add the non‐negativity constraint on 
investment:
exp(g(k_t, a_t, σ)) − (1 − δ)·exp(k_t) ≥ 0
• Express problem in Lagrangian form and optimum is 
characterized in terms of equality conditions with a 
multiplier and with a complementary slackness condition 
associated with the constraint.

• Conceptually straightforward to apply preceding method. 
For details, see Christiano‐Fisher, ‘Algorithms for Solving 
Dynamic Models with Occasionally Binding Constraints’, 
2000, Journal of Economic Dynamics and Control.
– This paper describes alternative strategies, based on 
parameterizing the expectation function, that may be easier, 
when constraints are occasionally binding constraints.
Perturbation Approach
• Straightforward application of the perturbation approach, as in the simple 
example, requires knowing the value taken on by the policy rule at a point.

• The overwhelming majority of models used in macro do have this 
property. 

– In these models, can compute non‐stochastic steady state without any 
knowledge of the policy rule, g.
– Non‐stochastic steady state is k* such that
k* = g(k*, 0, 0)
(second argument: a = 0, the nonstochastic steady state of the shock; third argument: σ = 0, no uncertainty)
– and
k* = (1/(1 − α))·log( αβ / (1 − β(1 − δ)) ).
Perturbation
• Error function:
R(k_t, a_t, σ; g) ≡ E_t { u′( f(k_t, a_t) − exp(g(k_t, a_t, σ)) )
− β·u′( f(g(k_t, a_t, σ), ρa_t + σε_{t+1}) − exp(g(g(k_t, a_t, σ), ρa_t + σε_{t+1}, σ)) )
× f_K(g(k_t, a_t, σ), ρa_t + σε_{t+1}) } = 0
– for all values of k_t, a_t, σ.
• So, all order derivatives of R with respect to its arguments are zero (assuming they exist!).
Four (Easy to Show) Results About Perturbations
• Taylor series expansion of policy rule:
g(k_t, a_t, σ) ≈ k* + [ g_k·(k_t − k*) + g_a·a_t + g_σ·σ ]   (linear component of policy rule)
+ (1/2)·[ g_kk·(k_t − k*)² + g_aa·a_t² + g_σσ·σ² ] + g_ka·(k_t − k*)·a_t + g_kσ·(k_t − k*)·σ + g_aσ·a_t·σ + ...   (second and higher order terms)
– g_σ = 0: to a first order approximation, 'certainty equivalence'
– All terms found by solving linear equations, except the coefficient on the past endogenous variable, g_k, which requires solving for eigenvalues
– To second order approximation: slope terms certainty equivalent — g_kσ = g_aσ = 0
– Quadratic, higher order terms computed recursively.
First Order Perturbation
• Working out the following derivatives and evaluating at k_t = k*, a_t = 0, σ = 0:
R_k(k_t, a_t, σ; g) = R_a(k_t, a_t, σ; g) = R_σ(k_t, a_t, σ; g) = 0
• Implies:
R_k : u″·(f_k − e^g·g_k) − β·[ u′·f_Kk·g_k + u″·(f_k·g_k − e^g·g_k²)·f_K ] = 0   ['problematic term': quadratic in g_k]
R_a : u″·(f_a − e^g·g_a) − β·[ u′·(f_Kk·g_a + f_Ka·ρ) + u″·(f_k·g_a + f_a·ρ − e^g·(g_k·g_a + g_a·ρ))·f_K ] = 0   [linear in g_a, given g_k]
R_σ : every term is proportional to g_σ (the terms involving ε_{t+1} vanish in expectation), so g_σ = 0   [source of certainty equivalence in the linear approximation]
• Absence of arguments in these functions reflects that they are evaluated at k_t = k*, a_t = 0, σ = 0.
Technical notes for following slide
• Start from R_k = 0, substitute f_k = e^g·f_K, and divide through by u″·e^g:
(f_K − g_k) − β·(u′/u″)·e^{−g}·f_Kk·g_k − β·f_K·(f_K·g_k − g_k²) = 0
• Simplify this further using:
β·f_K = 1   (steady state equation)
f_K = α·K^(α−1)·exp((1 − α)a) + 1 − δ, K ≡ exp(k)
f_k = α·exp(αk + (1 − α)a) + (1 − δ)·exp(k) = exp(k)·f_K = e^g·f_K   (since exp(k) = e^g in steady state)
f_Kk = α(α − 1)·exp((α − 1)k + (1 − α)a), so f_KK = α(α − 1)·K^(α−2)·exp((1 − α)a) = f_Kk·e^{−g}
• to obtain polynomial on next slide.
First Order, cont'd
• Rewriting the R_k = 0 term:
(1/β) − [ 1 + (1/β) + (u′·f_KK)/(u″·f_K) ]·g_k + g_k² = 0
• There are two solutions, 0 < g_k < 1 and g_k > 1.
– Theory (see Stokey‐Lucas) tells us to pick the smaller one.
– In general, could be more than one eigenvalue less than unity: multiple solutions.
• Conditional on the solution for g_k, g_a is solved for linearly using the R_a = 0 equation.
• These results all generalize to the multidimensional case.
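[Code note] The quadratic above can be checked against the one case with a known exact answer (log utility, δ = 1), where the true policy rule is linear in k_t with slope g_k = α. A Python sketch, assuming the quadratic as written on the slide:

```python
import math

# First order perturbation of the neoclassical model, specialized to the
# log-utility, full-depreciation (delta = 1) case, where the exact rule is
# k_{t+1} = log(alpha*beta) + alpha*k_t + (1-alpha)*a_t, so g_k = alpha.
alpha, beta = 0.36, 0.99

# Nonstochastic steady state (a = 0): beta*f_K = 1, f_K = alpha*K^(alpha-1).
K = (alpha * beta) ** (1.0 / (1.0 - alpha))
c = K**alpha - K                          # steady state consumption
fK = alpha * K ** (alpha - 1.0)
fKK = alpha * (alpha - 1.0) * K ** (alpha - 2.0)

# Log utility: u'(c) = 1/c, u''(c) = -1/c^2, so u'/u'' = -c.
b = 1.0 + 1.0 / beta + (-c) * fKK / fK    # bracketed coefficient in the quadratic

# Quadratic from R_k = 0:  g_k^2 - b*g_k + 1/beta = 0. Two roots; theory
# says to pick the one inside the unit circle.
disc = math.sqrt(b**2 - 4.0 / beta)
roots = sorted([(b - disc) / 2.0, (b + disc) / 2.0])
gk = roots[0]

assert roots[0] < 1.0 < roots[1]          # exactly one stable root
assert abs(gk - alpha) < 1e-10            # matches the known analytic solution
```

The product of the two roots is 1/β > 1, so at most one root can lie inside the unit circle — consistent with the uniqueness discussion on the slide.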
Numerical Example
• Parameters taken from Prescott (1986):
β = 0.99, γ = 2 (20), α = 0.36, δ = 0.02, ρ = 0.95, V = 0.01²
• Second order approximation (values in parentheses for γ = 20):
ĝ(k_t, a_{t−1}, ε_t, σ) = k* + g_k·(k_t − k*) + g_a·a_t + g_σ·σ
+ (1/2)·[ g_kk·(k_t − k*)² + g_aa·a_t² + g_σσ·σ² ] + g_ka·(k_t − k*)·a_t + g_kσ·(k_t − k*)·σ + g_aσ·a_t·σ
with k* = 3.88, g_k = 0.98 (0.996), g_a = 0.06 (0.07), g_σ = 0, g_kk = 0.014 (0.00017), g_aa = 0.067 (0.079), g_σσ = 0.000024 (0.00068), g_ka = −0.035 (−0.028), g_kσ = g_aσ = 0.
• Following is a graph that compares the policy rules implied by the first and second order perturbation.
• The graph itself corresponds to the baseline parameterization, and results are reported in parentheses for risk aversion equal to 20.
‘If initial capital is 20 percent away from steady state,  then capital
choice differs by 0.03 (0.035) percent between the two approximations.’

‘If shock is 6 standard deviations away from its mean, then capital 
choice differs by 0.14 (0.18) percent between the two approximations’

[Figure: two panels plotting 100·(k_{t+1}(2nd order) − k_{t+1}(1st order)). Left panel (γ = 2): against 100·(k_t − k*), the percent deviation of initial capital from steady state. Right panel (γ = 20): against 100·a_t, the percent deviation of the initial shock from steady state. Both horizontal axes run from −20 to 20.]
Numbers in parentheses at top correspond to γ = 20.
Conclusion
• For modest US‐sized fluctuations and for 
aggregate quantities, it is reasonable to work 
with first order perturbations.

• First order perturbation: linearize (or, log‐
linearize) equilibrium conditions around non‐
stochastic steady state and solve the resulting 
system. 
– This approach assumes ‘certainty equivalence’. Ok, as 
a first order approximation.
Solution by Linearization
• (log) Linearized Equilibrium Conditions:
E_t [ α_0·z_{t+1} + α_1·z_t + α_2·z_{t−1} + β_0·s_{t+1} + β_1·s_t ] = 0
(z_t = list of endogenous variables determined at t)
• Exogenous shocks:
s_t − P·s_{t−1} − ε_t = 0.
• Posit Linear Solution:
z_t = A·z_{t−1} + B·s_t
• To satisfy equil conditions, A and B must satisfy:
α_0·A² + α_1·A + α_2·I = 0, F ≡ (β_0 + α_0·B)·P + β_1 + α_1·B + α_0·A·B = 0
• If there is exactly one A with eigenvalues less than unity in absolute value, that's the solution. Otherwise, multiple solutions.

• Conditional on A, solve linear system for B. 
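[Code note] A scalar sketch of the last two bullets in Python (the coefficients here are hypothetical; in the scalar case the matrix quadratic reduces to an ordinary quadratic in A, and B is a single linear solve):

```python
import numpy as np

# Scalar version of solution by linearization. Equilibrium condition:
#   E_t[ a0*z_{t+1} + a1*z_t + a2*z_{t-1} + b0*s_{t+1} + b1*s_t ] = 0,
# with exogenous shock s_t = P*s_{t-1} + eps_t. Posit z_t = A*z_{t-1} + B*s_t.
a0, a1, a2 = 1.0, -2.1, 1.0        # hypothetical coefficients
b0, b1, P = 0.5, 0.3, 0.9

# A solves the quadratic a0*A^2 + a1*A + a2 = 0; pick the root inside the
# unit circle (if there is exactly one, that's the solution).
roots = np.roots([a0, a1, a2])
stable = [r for r in roots if abs(r) < 1]
assert len(stable) == 1
A = float(np.real(stable[0]))

# Conditional on A, B solves the linear equation
#   (b0 + a0*B)*P + b1 + a1*B + a0*A*B = 0.
B = -(b0 * P + b1) / (a0 * P + a1 + a0 * A)

# Verify: substitute the posited rule, take E_t (so E_t eps_{t+1} = 0).
z_lag, s = 0.7, -0.2               # arbitrary state
z = A * z_lag + B * s
Ez_next = A * z + B * P * s        # E_t z_{t+1}, using E_t s_{t+1} = P*s_t
resid = a0 * Ez_next + a1 * z + a2 * z_lag + b0 * P * s + b1 * s
assert abs(resid) < 1e-12
```

In the vector case A comes from an eigenvalue decomposition of the matrix quadratic rather than `np.roots`, but the logic — one stable solve for A, then a linear solve for B — is the same.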
