
Ordinary Differential Equations

Part 1

Ananda Dasgupta

PH3105, Autumn Semester 2017


ODEs in science

▶ ODEs are among the most important mathematical structures in science.
▶ Dynamical systems in physics are described by second order equations of motion

      m d²x/dt² = F(x, dx/dt, t)

▶ This can be recast as two coupled first order equations

      dx/dt = p/m
      dp/dt = F(x, p/m, t)

▶ More generally

      q̇_i = ∂H/∂p_i
      ṗ_i = −∂H/∂q_i
ODEs in science

▶ ODEs abound in other scientific disciplines as well!
▶ In biology, for example, we have the famous predator-prey model
  ▶ also known as the Lotka-Volterra model
▶ In this model the prey population x and the predator population y obey

      dx/dt = αx − βxy
      dy/dt = δxy − γy

▶ Reaction rates in chemistry: for example, the rate of the reaction 2H₂ + 2NO → N₂ + 2H₂O obeys

      d[H₂]/dt = −k[H₂][NO]² = d[NO]/dt
The initial value problem
First order differential equations

▶ First order differential equation (Initial Value Problem):

      dy/dt = f(y, t) subject to y(t₀) = y₀

▶ The Euler algorithm:

      y(t + h) ≈ y(t) + h f(y(t), t)

▶ or more concisely

      y_{n+1} = y_n + h f(y_n, t_n)

  where t_n ≡ t₀ + nh, y_n ≡ y(t_n).
▶ Local error (error per step) is O(h²).
▶ Global error in this method is O(h).
▶ For a system of first order ODEs

      dy_i/dt = f_i(y₁, . . . , y_n; t),  i = 1, 2, . . . , n

  we have

      y_i(t + h) ≈ y_i(t) + h f_i(y₁(t), . . . , y_n(t); t)
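
As an aside (not part of the original slides), the Euler update is a one-liner in code. Below is a minimal Python sketch; the function name euler_solve and the decay test problem are illustrative choices, not the course's own code.

    import numpy as np

    def euler_solve(f, y0, t0, t_end, h):
        """Integrate dy/dt = f(y, t) from t0 to t_end with fixed step h."""
        ts, ys = [t0], [y0]
        t, y = t0, y0
        while t < t_end - 1e-12:
            y = y + h * f(y, t)        # y_{n+1} = y_n + h f(y_n, t_n)
            t = t + h
            ts.append(t)
            ys.append(y)
        return np.array(ts), np.array(ys)

    # Example: dy/dt = -y, y(0) = 1; exact solution is exp(-t).
    ts, ys = euler_solve(lambda y, t: -y, 1.0, 0.0, 1.0, 0.001)
    print(ys[-1] - np.exp(-1.0))       # final error shrinks like O(h)

Halving h should roughly halve the final error, consistent with the O(h) global error quoted above.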
Improved integration methods

▶ The Euler algorithm uses the derivative at time t_n as an approximation for the rate of change over the entire interval from t_n to t_n + h.
▶ Wouldn't it be better if we used the derivative at the midpoint t_n + h/2?
▶ Easy to do if the derivative f(y, t) = f(t) depends only on t.
▶ Then

      y_{n+1} ≡ y(t_n + h) ≈ y(t_n) + h f(t_n + h/2)

▶ This is better:

      RHS = y(t_n) + h [f(t_n) + (h/2) ḟ(t_n) + O(h²)]
          = y(t_n) + h ẏ(t_n) + (h²/2) ÿ(t_n) + O(h³)
          = y(t_n + h) + O(h³)

▶ Local error O(h³), global error O(h²).
▶ What if f(y, t) depends explicitly on y (as it usually would)?
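
For the f(y, t) = f(t) case the two error orders are easy to confirm numerically; here is a small sketch (the cosine test integrand and helper names are illustrative choices):

    import numpy as np

    def sweep(step, F, exact, h):
        y, n = 0.0, int(round(1.0 / h))
        for k in range(n):
            y = step(F, y, k * h, h)
        return abs(y - exact)

    euler_step = lambda F, y, t, h: y + h * F(t)          # slope at the start
    mid_step   = lambda F, y, t, h: y + h * F(t + h / 2)  # slope at the midpoint

    # dy/dt = cos t, y(0) = 0, exact y(1) = sin 1
    for h in (0.1, 0.05):
        print(h, sweep(euler_step, np.cos, np.sin(1.0), h),
                 sweep(mid_step,   np.cos, np.sin(1.0), h))
    # halving h halves the Euler error but quarters the midpoint error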
The Midpoint Euler method (explicit version)

▶ Slope at the beginning: p_n = f(y_n, t_n)
▶ Approximate f(y(t_n + h/2), t_n + h/2) by

      q_n = f(y_n + (h/2) p_n, t_n + h/2)

▶ Taylor series in several variables says that

      q_n = [f + (h/2)(p_n ∂f/∂y + ∂f/∂t)
              + (h²/8)(p_n² ∂²f/∂y² + 2 p_n ∂²f/∂y∂t + ∂²f/∂t²)]|_(y_n, t_n) + O(h³)

  But

      ẏ_n = f(y_n, t_n) = p_n
      ÿ_n = [ẏ ∂f/∂y + ∂f/∂t]|_(y_n, t_n) = [p_n ∂f/∂y + ∂f/∂t]|_(y_n, t_n)
      y⃛_n = [p_n² ∂²f/∂y² + 2 p_n ∂²f/∂y∂t + ÿ_n ∂f/∂y + ∂²f/∂t²]|_(y_n, t_n)

  so that

      q_n = ẏ_n + (h/2) ÿ_n + (h²/8)(y⃛_n − (∂f/∂y) ÿ_n)|_(y_n, t_n) + O(h³)

▶ On the other hand,

      y_{n+1} ≡ y(t_n + h) = y_n + h ẏ_n + (h²/2!) ÿ_n + (h³/3!) y⃛_n + O(h⁴)
              = y_n + h q_n + (h³/24)(y⃛_n + 3 (∂f/∂y) ÿ_n)|_(y_n, t_n) + O(h⁴)

▶ The midpoint method has a local error of O(h³) - global error O(h²).
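
In code, one explicit midpoint step is just the two slope evaluations; a minimal Python sketch (the name midpoint_step is an illustrative choice):

    def midpoint_step(f, y, t, h):
        """One explicit midpoint (second order Runge-Kutta) step for dy/dt = f(y, t)."""
        p = f(y, t)                        # slope at the beginning
        q = f(y + h / 2 * p, t + h / 2)    # approximate slope at the midpoint
        return y + h * q                   # local error O(h^3)

Using this in place of the Euler update in the earlier driver drops the global error from O(h) to O(h²).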
Generalization : the Runge-Kutta method

▶ Instead of the derivative at the midpoint, we could use

      (p_n + q_n)/2 = (1/2) f(y_n, t_n) + (1/2) f(y_n + h p_n, t_n + h)

  - aka the modified Euler method.
▶ A more general approximation:

      β₁ p_n + β₂ q_n

  where p_n = f(y_n, t_n) = ẏ_n and q_n = f(y_n + αh p_n, t_n + αh).
▶ Taylor series says

      q_n = p_n + αh (f_y ẏ_n + f_t)|_(y_n, t_n) + (α²h²/2)(f_yy p_n² + 2 f_yt p_n + f_tt)|_(y_n, t_n)
          = ẏ_n + αh ÿ_n + (α²h²/2)(y⃛_n − f_y ÿ_n)|_(y_n, t_n) + O(h³)
Generalization : the Runge-Kutta method
I The approximation
y (tn + h) = yn + h (β1 pn + β2 qn )
h2
+ (1 − β1 − β2 ) hẏn + (1 − 2αβ2 ) ÿn
2
h3 h ... ... i
y − 3α2 β2 ( y n − fy ÿn )(y ,t ) + O h4

+
6 n n

I Minimizing error requires


β1 + β2 = 1
2αβ2 = 1
giving a local error of O h3 .


I Possibilities :
I α = 21 , β1 = 0, β2 = 1 - the midpoint formula.
I α = 1, β1 = β2 = 12 - the corrected Euler method, aka Heun’s
method.
I α = 32 , β1 = 14 , β2 = 34 - Ralstone’s method.
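
The whole two-stage family fits in a few lines of Python; the three named schemes are exactly the special cases listed above (a sketch, with illustrative function names):

    def rk2_step(f, y, t, h, alpha, beta1, beta2):
        """One step of the two-stage Runge-Kutta family; needs
        beta1 + beta2 = 1 and 2*alpha*beta2 = 1 for O(h^3) local error."""
        p = f(y, t)
        q = f(y + alpha * h * p, t + alpha * h)
        return y + h * (beta1 * p + beta2 * q)

    midpoint = lambda f, y, t, h: rk2_step(f, y, t, h, 1/2, 0.0, 1.0)
    heun     = lambda f, y, t, h: rk2_step(f, y, t, h, 1.0, 1/2, 1/2)
    ralston  = lambda f, y, t, h: rk2_step(f, y, t, h, 2/3, 1/4, 3/4)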
The 4th order Runge-Kutta algorithm

▶ Approximate the derivative in the interval (t_n, t_n + h) by the weighted average

      (1/6)(p_n + 2q_n + 2r_n + s_n)

▶ The approximants are

      p_n = f(y_n, t_n)
      q_n = f(y_n + (h/2) p_n, t_n + h/2)
      r_n = f(y_n + (h/2) q_n, t_n + h/2)
      s_n = f(y_n + h r_n, t_n + h)

▶ Thus

      y_{n+1} ≈ y_n + (h/6)(p_n + 2q_n + 2r_n + s_n)

▶ This has a local error of O(h⁵) and a global error of O(h⁴).
▶ Both simple and accurate - arguably the most widely used method.
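
A minimal Python sketch of one RK4 step, following the p, q, r, s notation above (the function name rk4_step is an illustrative choice):

    def rk4_step(f, y, t, h):
        """One classical fourth order Runge-Kutta step for dy/dt = f(y, t)."""
        p = f(y, t)
        q = f(y + h / 2 * p, t + h / 2)
        r = f(y + h / 2 * q, t + h / 2)
        s = f(y + h * r, t + h)
        return y + h / 6 * (p + 2 * q + 2 * r + s)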
How big should the step size be?

▶ The local error in RK4 is O(h⁵).
▶ How big must h be for a desired accuracy ε?
▶ One approach - the step size can be changed depending on the error at each step.
▶ But ... how to estimate the error?
▶ Use one RK4 step with step-size h and two RK4 steps with step-size h/2 to get two estimates for y(t_n + h) - y_(1) and y_(2), respectively.
▶ The difference Δ = y_(2) − y_(1) is an estimate for the error.
▶ In fact, this also yields a better estimate for y(t_n + h), namely y_(2) + Δ/15!
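
A Python sketch of this step-doubling estimate, reusing rk4_step from the sketch above; the Δ/15 correction is the improved estimate mentioned in the last bullet (the function name is illustrative):

    def step_with_error(f, y, t, h):
        """Advance y by h, returning (improved estimate, error estimate)."""
        y1 = rk4_step(f, y, t, h)              # one full step
        ym = rk4_step(f, y, t, h / 2)          # two half steps ...
        y2 = rk4_step(f, ym, t + h / 2, h / 2)
        delta = y2 - y1                        # error estimate
        return y2 + delta / 15, delta          # Richardson-improved value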
How big should the step size be?

▶ If |Δ| < ε, accept the step.
▶ If |Δ| > ε, change the step-size to

      h (ε/|Δ|)^(1/5)

  and repeat.
▶ Warning: guard against the step-size becoming too small!
▶ Once the step size decreases, it stays small - may make the method very slow.
▶ A modification by Press et al - at each step,
  ▶ accept the result if |Δ| < ε.
  ▶ in each step, define the new step-size as

        R h (ε/|Δ|)^η

    where R ∼ 0.9, and

        η = 0.25 where Δ > ε
            0.20 where Δ < ε
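
Putting the pieces together, a sketch of an adaptive driver in the spirit of Press et al, built on step_with_error from the previous sketch; R = 0.9 and the two exponents follow the slide, while the h_min guard and all names are illustrative choices:

    def adaptive_rk4(f, y0, t0, t_end, h0, eps, h_min=1e-12, R=0.9):
        """Integrate dy/dt = f(y, t) with step-doubling error control."""
        t, y, h = t0, y0, h0
        while t < t_end:
            h = min(h, t_end - t)                       # do not overshoot the end
            y_new, delta = step_with_error(f, y, t, h)
            err = abs(delta)
            if err < eps:                               # accept the step,
                t, y = t + h, y_new
                h = R * h * (eps / max(err, 1e-300)) ** 0.20   # then grow h
            else:                                       # reject: shrink and retry
                h = R * h * (eps / err) ** 0.25
            if h < h_min:                               # guard against tiny steps
                raise RuntimeError("step size underflow")
        return y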
Second order ODEs : The Runge-Kutta-Nystrom method

For the second order equation

      ÿ = f(y, ẏ, t)

we can use

      y_{n+1} = y_n + h [ẏ_n + (1/3)(A_n + B_n + C_n)]
      ẏ_{n+1} = ẏ_n + (1/3)(A_n + 2B_n + 2C_n + D_n)

where

      A_n = (h/2) f(y_n, ẏ_n, t_n)
      B_n = (h/2) f(y_n + (h/2) β_n, ẏ_n + A_n, t_n + h/2), where β_n = ẏ_n + A_n/2
      C_n = (h/2) f(y_n + (h/2) β_n, ẏ_n + B_n, t_n + h/2)
      D_n = (h/2) f(y_n + δ_n, ẏ_n + 2C_n, t_n + h), where δ_n = h (ẏ_n + C_n)
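
A Python sketch of one Runge-Kutta-Nystrom step, transcribing the formulas above (the function name rkn_step is an illustrative choice):

    def rkn_step(f, y, ydot, t, h):
        """One RKN step for the second order equation d2y/dt2 = f(y, dy/dt, t)."""
        A = h / 2 * f(y, ydot, t)
        beta = ydot + A / 2
        B = h / 2 * f(y + h / 2 * beta, ydot + A, t + h / 2)
        C = h / 2 * f(y + h / 2 * beta, ydot + B, t + h / 2)
        delta = h * (ydot + C)
        D = h / 2 * f(y + delta, ydot + 2 * C, t + h)
        y_new = y + h * (ydot + (A + B + C) / 3)
        ydot_new = ydot + (A + 2 * B + 2 * C + D) / 3
        return y_new, ydot_new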
System of equations

▶ A system of differential equations

      ẏ_i(t) = f_i(y₁, . . . , y_n; t),  i = 1, 2, . . . , n

  can be handled by the RK4 method.
▶ However, care has to be taken to evaluate all the derivatives together.
▶ The n initial slopes p₁, . . . , p_n must be evaluated at the values of y₁, . . . , y_n at the beginning of the interval.
▶ These values must be used to predict the values of all the y_i at the middle of the interval to get q₁, . . . , q_n,
▶ and so on ...
▶ Using numpy helps greatly here - see the sketch below!
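
Since numpy arrays add and scale componentwise, the scalar rk4_step from earlier works on a whole system unchanged. A sketch using the Lotka-Volterra model from the beginning of the slides as the test system (the parameter values and names are illustrative):

    import numpy as np

    def rk4_step(f, y, t, h):                  # same as the scalar version
        p = f(y, t)                            # all n slopes evaluated together
        q = f(y + h / 2 * p, t + h / 2)        # all midpoints predicted together
        r = f(y + h / 2 * q, t + h / 2)
        s = f(y + h * r, t + h)
        return y + h / 6 * (p + 2 * q + 2 * r + s)

    alpha, beta, delta, gamma = 1.0, 0.5, 0.2, 0.8    # illustrative values

    def lotka_volterra(u, t):
        x, y = u                               # prey, predator
        return np.array([alpha * x - beta * x * y,
                         delta * x * y - gamma * y])

    u, t, h = np.array([2.0, 1.0]), 0.0, 0.01
    for _ in range(1000):
        u = rk4_step(lotka_volterra, u, t, h)
        t += h
    print(t, u)                                # populations at t = 10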