
PY 502, Computational Physics, Fall 2024

Numerical Solutions of Classical Equations of Motion


Anders W. Sandvik, Department of Physics, Boston University

1 Introduction

Classical equations of motion, i.e., Newton's laws, govern the dynamics of systems ranging from
very large, such as solar systems and galaxies, to very small, such as molecules in liquids and gases
(in cases where quantum mechanical fluctuations can be neglected, which is often the case). In
between these extremes, Newton's equations of motion apply, literally, to "everything that moves".

Exact analytical solutions of the equations of motion exist only for simple systems, of the types
that are discussed in elementary classical mechanics courses, and therefore numerical integration
methods are very important in practice. Here we will discuss some commonly used differential
equation solvers and use them to study the dynamics of mechanical systems, including ones that
exhibit chaotic dynamics. We will limit the discussion to a single moving body, although the
methods can be easily generalized to many-body systems as well—dynamics of many-body systems
will be discussed later in connection with molecular dynamics simulations.

While some of the numerical schemes that we will discuss here are particularly suitable for
integrating classical equations of motion, we will also describe methods, such as the classic Runge-Kutta
algorithm, that are more generally applicable to a large class of ordinary differential equations. The
discussion of systems with chaotic dynamics, although here introduced in the context of classical
mechanics, is also of relevance more broadly in studies of nonlinear dynamics.

2 Basic algorithms for equations of motion

Consider a single object (here regarded as a point particle) with mass m moving in one dimension.
With its time-dependent position denoted x(t), the differential equation governing its dynamics is
ẍ(t) = (1/m) F[x(t), ẋ(t), t],    (1)
where F is the total force acting on the particle, and ẋ and ẍ are the first and second time derivatives
of x. We have indicated that the force may depend on x, ẋ, and t. These dependencies typically
come from a position-dependent static potential, a velocity-dependent damping (friction),
and a time-dependent driving force, but there are other natural possibilities as well, e.g., a
position-dependent friction.

To study the system numerically, it is convenient to rewrite the second-order differential equation
(1) as two coupled first-order equations. Giving the velocity its standard symbol v(t), we have

ẋ(t) = v(t),
v̇(t) = (1/m) F[x(t), v(t), t].    (2)


Figure 1: Time dependence of the position x and the energy E = (1/2)kx² + (1/2)mv² of a harmonic
oscillator with k = m = 1 (which gives an oscillation period 2π) integrated using the Euler method
with two different time steps; ∆t = 10⁻² and 10⁻³.

To integrate this set of equations, we discretize the time-axis as t = t0 , t1 , . . ., with a constant time
step tn+1 − tn = ∆t . The initial values x0 = x(t0 ) and v0 = v(t0 ) are used to start the integration
process.

2.1 Euler algorithm

The simplest way to advance the time from tn to tn+1 is to use the first-order approximation;

xn+1 = xn + ∆t vn ,
vn+1 = vn + ∆t an , (3)

where an = F(xn, vn, tn)/m is the acceleration. Clearly, the error made in each step of this
algorithm is O(∆t²). The method, which is called Euler's forward method, is in general not very
useful in practice. For example, in systems with no damping or driving force, the energy should
be conserved. However, with the Euler method the energy typically diverges with time, whereas in
most higher-order methods the energy errors are bounded. Fig. 1 shows results obtained with the
Euler method for the energy and the position of a harmonic oscillator with k = m = 1 (F = −kx).
Even for a time step as small as 10⁻³, the energy error is as large as ≈ 5% at t = 50 (corresponding
to less than 9 oscillations); it grows faster than linearly with t.
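The Euler update (3) is short enough to sketch directly. The following Python fragment (function name and driver values are illustrative, not from the notes) reproduces the qualitative behavior of Fig. 1, where the energy of the undamped oscillator drifts upward instead of being conserved:

```python
def euler_oscillator(x0, v0, dt, n_steps, k=1.0, m=1.0):
    """Integrate x'' = -(k/m) x with the forward Euler method, Eq. (3)."""
    x, v = x0, v0
    for _ in range(n_steps):
        a = -(k / m) * x                  # a_n = F(x_n)/m with F = -kx
        x, v = x + dt * v, v + dt * a     # x_{n+1}, v_{n+1} from Eq. (3)
    return x, v

# The energy E = (1/2)k x^2 + (1/2)m v^2 should stay at 0.5 for x0 = 1, v0 = 0,
# but the Euler method steadily pumps energy into the oscillator.
x, v = euler_oscillator(x0=1.0, v0=0.0, dt=0.01, n_steps=5000)   # t = 50
energy = 0.5 * x**2 + 0.5 * v**2
```

For this linear force with k = m = 1 one can show that each Euler step multiplies the energy by exactly 1 + ∆t², which is the faster-than-linear drift seen in Fig. 1.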

There are several ways to proceed to derive more accurate, higher-order integration schemes; we
will here discuss manipulations leading to a few practically useful formulas.

2.2 Leapfrog and Verlet algorithms

To simplify the discussion, we will first assume that there is no damping, i.e., the
force and the acceleration are velocity independent. We begin by writing the Taylor expansion of
xn+1 = x(tn + ∆t) to second order in ∆t;

x(tn+1) = x(tn) + ∆t v(tn) + (1/2)∆t² a(xn, tn) + O(∆t³).    (4)

initial values

v(-1/2)   x(0)   v(1/2)   x(1)   v(3/2)   x(2)   v(5/2)   x(3)

Figure 2: Position and velocity grids used in the leapfrog method. The position is calculated at
integer multiples of the time step, tn = n∆t, n = 0, 1, . . ., and the velocity is evaluated at the times
tn−1/2 = (n − 1/2)∆t between these points. The integration starts with given n = 0 values.
Noting that v(tn) + (∆t/2)a(xn, tn) = v(tn+1/2) with an error of order ∆t², we can rewrite this as

x(tn+1) = x(tn) + ∆t v(tn + ∆t/2) + O(∆t³).    (5)

We thus need a formula that propagates the velocity on a time grid with points tn+1/2 = tn + ∆t/2,
i.e., between the integer-labeled time points tn = t0 + n∆t used for the position. If we use the first-
order expansion of the velocity, v(tn+1/2) = v(tn−1/2) + ∆t a(tn−1/2) + O(∆t²), the error remains
O(∆t³) in Eq. (5), but we have a problem since this requires the acceleration, and hence the position
x (on which the force depends), on the grid points tn+1/2 used only for the velocity. However, it
appears intuitively clear, by symmetry, that we should actually use the acceleration at tn+1 to
propagate the velocity from tn+1/2 to tn+3/2 (and we will confirm below that this is the
correct way), i.e.,

v(tn+3/2) = v(tn+1/2) + ∆t a(xn+1, tn+1).    (6)

The scheme combining Eqs. (5) and (6) is often called the leapfrog method;

vn+1/2 = vn−1/2 + ∆t an,
xn+1 = xn + ∆t vn+1/2.    (7)

The leapfrog grid is illustrated in Fig. 2.

We normally have initial conditions in the form of x0 and v0. In order to start the leapfrog method
we need v−1/2, which we can get, up to an error ∼ ∆t², using v−1/2 = v0 − (∆t/2)a0.

It should be noted here that when we discuss the step error, we normally have in mind the error
in x(t). The error in v(t) is normally larger, exemplified by the O(∆t²) velocity error in the above
derivation.
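In Python, the leapfrog scheme (7), including the half-step start-up for v−1/2, might look as follows for the harmonic oscillator (names are illustrative):

```python
def leapfrog_oscillator(x0, v0, dt, n_steps, k=1.0, m=1.0):
    """Integrate x'' = -(k/m) x with the leapfrog scheme, Eq. (7)."""
    x = x0
    v_half = v0 - 0.5 * dt * (-(k / m) * x0)    # start-up: v_{-1/2} = v_0 - (dt/2) a_0
    for _ in range(n_steps):
        v_half += dt * (-(k / m) * x)           # v_{n+1/2} = v_{n-1/2} + dt a_n
        x += dt * v_half                        # x_{n+1} = x_n + dt v_{n+1/2}
    return x, v_half
```

With x0 = 1, v0 = 0, and ∆t = 0.01, the computed x(50) agrees with the exact cos(50) to roughly 10⁻⁴, in line with the bounded errors discussed below.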

It turns out that the position x in the leapfrog method in fact has a single-step error of order ∆t⁴,
i.e., smaller than what might have been expected from the initial expansion (5). This can be seen
most easily by deriving the method in another way, starting from two different Taylor expansions:

xn+1 = xn + ∆t vn + (1/2)∆t² an + (1/6)∆t³ ȧn + O(∆t⁴),
xn−1 = xn − ∆t vn + (1/2)∆t² an − (1/6)∆t³ ȧn + O(∆t⁴).    (8)

3
Adding these two equations gives

xn+1 = 2xn − xn−1 + ∆t² an + O(∆t⁴),    (9)

which is called the Verlet algorithm. It does not contain any velocities explicitly; the next x-value
is obtained from two preceding x values.

The Verlet algorithm is in fact completely equivalent to the leapfrog method (7). To see this,
consider first the difference xn − xn−1 that is contained in Eq. (9). To cubic order in an expansion
about the half-point tn−1/2 we have

xn − xn−1 = [xn−1/2 + (∆t/2)vn−1/2 + (1/2)(∆t/2)² an−1/2 + (1/6)(∆t/2)³ ȧn−1/2 + . . .]    (10)
          − [xn−1/2 − (∆t/2)vn−1/2 + (1/2)(∆t/2)² an−1/2 − (1/6)(∆t/2)³ ȧn−1/2 + . . .]    (11)
          = ∆t vn−1/2 + O(∆t³),    (12)

which we can use in Eq. (9) to obtain the form

xn+1 = xn + ∆t vn−1/2 + ∆t² an + O(∆t⁴).    (13)

Treating the difference vn+1/2 − vn−1/2 in the same way as done for the similar position difference
in Eq. (12), we have

vn+1/2 − vn−1/2 = ∆t an + O(∆t³),    (14)

which confirms the assertion previously used in Eq. (6), now also establishing the size of the error.
Using Eq. (14) in Eq. (13) we obtain

xn+1 = xn + ∆t vn+1/2 + O(∆t⁴).    (15)

Comparing Eqs. (14) and (13) with Eqs. (7), we see that we have derived forms identical to the
leapfrog algorithm. Thus, the Verlet and leapfrog algorithms give identical results for x(t), both
with step error O(∆t⁴). The step error in the velocity is O(∆t³). In the Verlet form (9) the velocity
does not appear explicitly, but it can still be obtained if needed by using Eq. (12) in the form

vn+1/2 = (xn − xn−1)/∆t + O(∆t²).    (16)

Here the step error is larger than in the leapfrog method, but this is normally not a problem
because the error does not propagate or accumulate from step to step. If need be, three x values
can be used to calculate the velocity with an O(∆t³) error at each step. In Sec. 2.3 we will consider the
accumulated errors over an entire long time interval, which can be much larger than the individual
step errors.

An important feature of the leapfrog algorithm is that it is time-reversible, i.e., if we calculate
xn+1 from xn and vn+1/2 according to Eqs. (7), and then reverse the time direction to go back to
xn, we use the same velocity vn+1/2 as in the forward step, up to the change of its sign. Therefore,
in the absence of numerical round-off errors, we arrive back exactly at the original xn. The same
can be seen in the case of the Verlet formulation. This time-reversal symmetry is violated in the
Euler method, Eqs. (3), where we use vn to go from xn to xn+1, but −vn+1 to go backwards from
xn+1 to xn. If we carry out several leapfrog integration steps backwards in time, t = tN, . . . , t0,
after having completed a calculation from t0 to tN, we should arrive back at the original initial
conditions at t = t0. This way the time-reversibility can in fact be used as a check of the sensitivity
of a calculation to round-off errors (which will slightly break the symmetry).

Figure 3: Time dependence of the energy of a harmonic oscillator with k = m = 1, integrated using
the leapfrog method with ∆t = 10⁻¹ (thinner curve) and 10⁻² (thicker curve).

An important consequence of the time-reversibility is that the errors are bounded for a system with
periodic motion. This follows because integrating backward or forward in time for a full period T
involves the acceleration at exactly the same x points (assuming now that the period is exactly
a multiple of the time step). Hence, the error δ in some quantity, e.g., the energy, has to be the
same at t = ±T ; δ(T ) = δ(−T ). Integrating forward from t = −T to 0, starting from the point
reached in a previous integration from t = 0 to −T , we would expect an additional error similar to
δ(T ), because the initial conditions of these two integrations only differ by the very small amount
δ(−T ), so that the total error after a backward and a forward integration should be ≈ 2δ(T ). To be
consistent with the time-reversibility, according to which this second integration in fact must bring
us back to the original starting point and hence zero error, we must have δ(±T ) = 0. The error
only vanishes completely at times t − t0 = T, 2T, . . ., but the important point is that there can be
no steady error increase, which does affect methods with no time-reversal symmetry (as shown for
the Euler method in Fig. 1). The general form of the long-time error scaling is discussed in detail
in Sec. 2.3.

The bounded error of the leapfrog method when applied to systems with periodic motion is illus-
trated in Fig. 3, which shows results for the energy of the same harmonic oscillator as the one
studied in Sec. 2.1 using the Euler method. The energy here oscillates with half the period 2π
of the system. The vanishing of the error at every half period in this case is a
consequence of the symmetric potential, which was not assumed in the discussion above. Note also
how small the deviations are from the true energy E = 1/2 (compare with Fig. 1), even for ∆t as
large as 0.1.

There is yet a third equivalent formulation of the Verlet method, called the velocity Verlet algorithm.

It is obtained from the original Verlet algorithm as follows: Adding xn+1 on both sides of Eq. (9),

2xn+1 = xn+1 + 2xn − xn−1 + ∆t² an + O(∆t⁴),    (17)

we can define the velocity using a two-step difference;

vn = (xn+1 − xn−1)/(2∆t),    (18)

which when used in (17) gives the position in terms of x and v at the previous step only;

xn+1 = xn + ∆t vn + (1/2)∆t² an.    (19)

In order to obtain an equation for the velocity, we first write the original Verlet equation (9) for
xn instead of xn+1;

xn = 2xn−1 − xn−2 + ∆t² an−1,    (20)

which we add to (9). Rearranging the result in the following way;

xn+1 − xn−1 = xn − xn−2 + ∆t²(an−1 + an),    (21)

and using the velocity definition (18), we obtain an equation for vn;

vn = vn−1 + (1/2)∆t(an−1 + an).    (22)

Shifting the time step by one and again writing down Eq. (19) for the position, we arrive at the
velocity Verlet algorithm:

xn+1 = xn + ∆t vn + (1/2)∆t² an,
vn+1 = vn + (1/2)∆t(an+1 + an).    (23)

This formulation of the Verlet algorithm is completely equivalent to (9) and (7) as far as the
propagation of the position is concerned. It may be preferable to the leapfrog method in cases
where there is some reason to use the same time grid for x and v, but note that more operations
are required at each step of the velocity Verlet algorithm (however, the number of calls to the
force/acceleration function is the same, which is typically the part dominating the processor time).
It should also be noted that although the algorithms give identical results (up to round-off errors)
for xn, the accuracy in the velocity is four times higher in the leapfrog method because it is there
defined using x points separated by a single time step, instead of two steps in the velocity Verlet
method. However, nothing of course prohibits us from also calculating vn+1/2 in terms of xn and
xn+1 in the velocity Verlet method, or vn in terms of xn−1 and xn+1 in the leapfrog method. In
view of this, all results are completely equivalent in the two formulations.
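A sketch of the velocity Verlet step (23) in Python (illustrative names, harmonic-oscillator force as a test case); note that only one new force evaluation is needed per step, since an+1 can be reused as an in the next step:

```python
def velocity_verlet_oscillator(x0, v0, dt, n_steps, k=1.0, m=1.0):
    """Integrate x'' = -(k/m) x with the velocity Verlet scheme, Eq. (23)."""
    x, v = x0, v0
    a = -(k / m) * x
    for _ in range(n_steps):
        x += dt * v + 0.5 * dt**2 * a     # x_{n+1} from Eq. (23)
        a_new = -(k / m) * x              # a_{n+1}, the only new force call
        v += 0.5 * dt * (a + a_new)       # v_{n+1} from Eq. (23)
        a = a_new
    return x, v
```

For x0 = 1, v0 = 0, and ∆t = 0.01, the energy 0.5(x² + v²) stays very close to the exact value 0.5, as expected from the bounded errors of the Verlet family.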

2.3 Error build-up in the Verlet/leapfrog method

As we have seen above, the single-step error in the Verlet/leapfrog algorithm is O(∆t⁴) for the
position and O(∆t²) for the velocity. We are clearly also very interested in the accumulated errors
after a large number of steps have been carried out.


Figure 4: A function that is evaluated on a grid with spacing ∆t (solid circles) can be approximated
by an interpolating polynomial between those points (dashed curve), provided that the function is
smooth on the scale ∆t .

To study the accumulated error in the position x calculated using the Verlet/leapfrog method, we
use the original Verlet form of the algorithm, Eq. (9). We introduce a symbol for the deviation δn
of the calculated value xn from the exact solution xn^ex,

xn = xn^ex + δn,    (24)

and insert this in Eq. (9). After rearranging, this gives

δn−1 − 2δn + δn+1 = −(xn−1^ex − 2xn^ex + xn+1^ex) + ∆t² an + O(∆t⁴).    (25)

Under the assumption that the true solution, the acceleration (i.e., the force), and the deviation
are all smooth functions on a time scale set by ∆t (which is true for ∆t smaller than some ∆t^max if
the true solution and the force are well-behaved functions), we can imagine constructing high-order
polynomials that go through all the (tn, δn) and (tn, xn^ex) points, as illustrated in Fig. 4. We can
then use these interpolating polynomials to work in continuum time and use the second derivative
of a function [f(t) = δ(t) or x^ex(t)] in place of the discretized versions appearing in Eq. (25),

d²f(tn)/dt² = (1/∆t)(d/dt)[f(tn+1/2) − f(tn−1/2)] = (1/∆t²)(fn−1 − 2fn + fn+1), ∆t → 0,    (26)

to obtain from (25)

δ̈(tn) = −ẍ^ex(tn) + a(tn) + O(∆t²),    (27)

even though ∆t is fixed and finite. The error introduced by this "quasi-continuum" approach is
given by the order of the imagined interpolating polynomials and will clearly be much smaller than
the O(∆t⁴) error of the Verlet formula we are working with (under the assumption of smoothness
on the scale set by ∆t). By definition of the exact solution, ẍ^ex(tn) = a(tn), and hence in Eq. (25)
we are left with

δ̈(tn) ∼ ∆t².    (28)

We can write an expression for the error δ(T) after a large number of integration steps in terms of
the second derivative (T = tN − t0; we will set t0 = 0):

δ(T) = ∫_0^T dt δ̇(t) + δ(0) = ∫_0^T dt [ ∫_0^t dt′ δ̈(t′) + δ̇(0) ] + δ(0).    (29)

By definition of the initial conditions, the error δ(0) = 0. Since the Verlet formula is completely
symmetric with respect to reversal of the time direction (n → −n), the Taylor expansion of the
error around t0 can have no contribution of first order (or any odd order), and hence also δ̇(0) = 0.
With the second derivative bounded by the quadratic form (28), the integral (29) clearly can scale
no worse than T²∆t². Hence, the error in x after a finite number of steps scales in the worst case
as ∆t² in the time discretization and as T² in the total integration time. Since there is an
unknown prefactor in the ∆t dependence of the error in Eq. (28) (the sign can even be mixed), the
T-scaling can be much better in practice, as we have seen explicitly in Fig. 3 in the case of the
harmonic oscillator. Note again that the above arguments hold only when ∆t is sufficiently small
for the solution to be smooth on this scale. Since the velocity is defined as a discrete derivative
of the coordinate, its error will clearly also scale as ∆t² for any t.
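The predicted ∆t² scaling of the accumulated position error is easy to verify numerically. In the following sketch (assumptions: harmonic oscillator with k = m = 1, so the exact solution is cos t), halving the time step should reduce the error at fixed T by about a factor of four:

```python
import math

def leapfrog_error(dt, T):
    """Accumulated error |x_num(T) - cos(T)| for x'' = -x, x(0) = 1, v(0) = 0."""
    x = 1.0
    v_half = 0.0 - 0.5 * dt * (-1.0)      # start-up: v_{-1/2} = v_0 - (dt/2) a_0
    for _ in range(round(T / dt)):
        v_half += dt * (-x)               # leapfrog velocity update
        x += dt * v_half                  # leapfrog position update
    return abs(x - math.cos(T))

# quadratic scaling: error(dt) / error(dt/2) should be close to 4
ratio = leapfrog_error(0.1, 10.0) / leapfrog_error(0.05, 10.0)
```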

2.4 Verlet/leapfrog methods for damped systems

In the derivation of the Verlet algorithm, and its leapfrog formulation, we assumed that the force
function contains no explicit velocity dependence. With a velocity dependent force, F(x, v, t), the
problem is that the acceleration an = a(xn, vn, tn) in (7) should be evaluated at the time points
corresponding to the position xn, at which we do not have the velocity. To circumvent this problem,
we first separate the damping term from the rest of the force terms and write

a(x, v, t) = (1/m)[F(x, t) − G(v)].    (30)

If we simply approximate a(xn, vn, tn) by [F(xn, tn) − G(vn−1/2)]/m, we make an error of order ∆t.
Since a is multiplied by ∆t² to obtain xn+1, the resulting algorithm acquires a contribution to the
x error scaling as ∆t³; an order lower than the Verlet algorithm without damping term. To remedy
this, we can use a scheme based on an intermediate estimate x̂n+1 of the position, obtained
using the above approximation G(vn−1/2) in place of G(vn). We can subsequently correct for the
O(∆t³) error in x̂n+1 by doing a second step, where we use a better approximation of the velocity;
vn = (x̂n+1 − xn−1)/(2∆t). The error in this velocity, and hence in G(vn), is O(∆t²), i.e., one order
smaller than in the first approximation and sufficient to render xn+1 calculated with it accurate to
the same order as in the original leapfrog algorithm. To summarize this modified algorithm, these
are the steps to be performed in the leapfrog version of the Verlet algorithm with a damping term:

v̂n+1/2 = vn−1/2 + ∆t[F(xn, tn) − G(vn−1/2)]/m,
x̂n+1 = xn + ∆t v̂n+1/2,
vn = (x̂n+1 − xn−1)/(2∆t),    (31)
vn+1/2 = vn−1/2 + ∆t[F(xn, tn) − G(vn)]/m,
xn+1 = xn + ∆t vn+1/2.

This algorithm requires more work than the simple leapfrog method without damping. However,
in cases where the processor time is dominated by evaluating the function F , the differences are in
practice only minor.
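The five steps of Eq. (31) translate directly into code. The sketch below (illustrative names; a damped oscillator F = −x, G(v) = γv with γ = 0.2 serves as a test case) carries x_{n−1} along so that the improved velocity vn can be formed:

```python
def damped_leapfrog_step(x, x_prev, v_half, t, dt, F, G, m=1.0):
    """One step of the damped leapfrog scheme, Eq. (31).
    x = x_n, x_prev = x_{n-1}, v_half = v_{n-1/2}; returns (x_{n+1}, v_{n+1/2})."""
    v_hat = v_half + dt * (F(x, t) - G(v_half)) / m   # predictor with G(v_{n-1/2})
    x_hat = x + dt * v_hat                            # intermediate estimate of x_{n+1}
    v_n = (x_hat - x_prev) / (2.0 * dt)               # improved velocity at t_n
    v_new = v_half + dt * (F(x, t) - G(v_n)) / m      # corrected v_{n+1/2}
    return x + dt * v_new, v_new

# test case: F = -x, G(v) = 0.2 v, x(0) = 1, v(0) = 0
F = lambda x, t: -x
G = lambda v: 0.2 * v
dt, x = 0.01, 1.0
x_prev = 1.0 - 0.5 * dt**2                     # x_{-1} = x_0 - dt v_0 + (dt^2/2) a_0
v_half = -0.5 * dt * (F(1.0, 0.0) - G(0.0))    # v_{-1/2} = v_0 - (dt/2) a_0
for n in range(2000):                          # integrate to t = 20
    x_new, v_half = damped_leapfrog_step(x, x_prev, v_half, n * dt, dt, F, G)
    x_prev, x = x, x_new
```

The oscillation decays with the envelope e^(−γt/2), so by t = 20 the energy 0.5(x² + v²) has dropped by roughly e^(−γt) ≈ e^(−4).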

In the important special case of linear damping, i.e.,

G(vn) = γvn = γ(xn+1 − xn−1)/(2∆t) + O(∆t²),    (32)

the algorithm can be simplified. Starting from the Verlet form (9) we can write

xn+1 = 2xn − xn−1 + (∆t²/m)[Fn − γ(xn+1 − xn−1)/(2∆t)] + O(∆t⁴),    (33)

which we can rearrange as

xn+1(1 + γ∆t/2m) = 2xn − xn−1(1 − γ∆t/2m) + (∆t²/m)Fn + O(∆t⁴),    (34)

or

xn+1 = xn + [(1 − γ∆t/2m)(xn − xn−1) + ∆t²Fn/m] / (1 + γ∆t/2m)
     = xn + [∆t(1 − γ∆t/2m)vn−1/2 + ∆t²Fn/m] / (1 + γ∆t/2m).    (35)

In this expression we can identify the velocity vn+1/2 and obtain the leapfrog algorithm in the
presence of linear damping;

vn+1/2 = [(1 − γ∆t/2m)vn−1/2 + ∆t Fn/m] / (1 + γ∆t/2m),
xn+1 = xn + ∆t vn+1/2.    (36)
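For linear damping, the scheme (36) avoids the predictor step entirely. A Python sketch (illustrative names, same damped-oscillator test case as above) is:

```python
def leapfrog_linear_damping(x0, v0, dt, n_steps, force, gamma, m=1.0):
    """Leapfrog scheme for linear damping G(v) = gamma*v, Eq. (36)."""
    x = x0
    v_half = v0 - 0.5 * dt * (force(x0, 0.0) - gamma * v0) / m   # v_{-1/2}
    c = 0.5 * gamma * dt / m                                     # gamma*dt/2m
    for n in range(n_steps):
        v_half = ((1.0 - c) * v_half + dt * force(x, n * dt) / m) / (1.0 + c)
        x += dt * v_half
    return x, v_half

# damped oscillator F = -x, gamma = 0.2: amplitude decays as exp(-gamma*t/2)
x, v_half = leapfrog_linear_damping(1.0, 0.0, 0.01, 2000, lambda x, t: -x, 0.2)
```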

Using the velocity Verlet algorithm (23) with damping, we have a problem analogous to that in
the leapfrog method; to evaluate vn+1 we need an+1, which itself depends explicitly on vn+1. We
can proceed in a way similar to what we did above, first obtaining an estimate v̂n+1 based on
approximating an+1 = a(xn+1, vn+1, tn+1) by a(xn+1, vn + an∆t, tn+1) and then refining;

xn+1 = xn + ∆t vn + (1/2)∆t² an,
v̂n+1 = vn + (1/2)∆t[an + a(xn+1, vn + ∆t an, tn+1)],    (37)
vn+1 = vn + (1/2)∆t[an + a(xn+1, v̂n+1, tn+1)].
Also in this case the algorithm can be further simplified in the case of linear damping. The
expression corresponding to Eq. (21) in the presence of linear damping is

xn+1 − xn−1 = xn − xn−2 + (∆t²/m)(Fn−1 + Fn) − (γ∆t/2m)(xn+1 − xn−1 + xn − xn−2),    (38)

which leads to the following algorithm [the formula for the position in the original velocity Verlet
algorithm with no damping, Eq. (23), remains unchanged]:

xn+1 = xn + ∆t vn + (1/2)∆t² an,
vn+1 = [vn(1 − ∆tγ/2m) + (∆t/2m)(Fn + Fn+1)] / (1 + ∆tγ/2m).    (39)

2.5 The Runge-Kutta method

The Runge-Kutta (RK) method can be considered the most classic of all high-order schemes. There
is a whole range of methods of this type, of different orders, but the RK name is most commonly
associated with the fourth-order variant [discretization error O(∆t⁵)]. We will here outline one way
of constructing this algorithm, though without proving the error scaling. We will first give a
complete proof of the much simpler second-order RK method.

For simplicity, before considering the equations of motion, we will consider the slightly easier case
of a single first-order differential equation,

ẋ(t) = f[x(t), t].    (40)

The second-order RK method for this equation corresponds to the mid-point rule of function inte-
gration: If we know the value of the function f[x(t), t] at the mid-point of an interval [tn, tn+1], its
integral is known up to an error O(∆t³):

∫_{tn}^{tn+1} f[x(t), t] dt = ∆t f[x(tn+1/2), tn+1/2] + O(∆t³).    (41)

In the case of a function f(t), we could simply evaluate it at tn+1/2, but in the present case we do
not know the value of xn+1/2 = x(tn+1/2) and hence we need to approximate it. We only need an
approximant linear in ∆t to keep the error of the integral cubic, so we can simply use the Euler
formula for (40):

x̂n+1/2 = xn + (∆t/2) f(xn, tn) + O(∆t²),    (42)

where the hat is again used to indicate an intermediate step of the calculation. We then arrive at the
second-order RK formula,

xn+1 = xn + ∆t f(x̂n+1/2, tn+1/2) + O(∆t³),    (43)

which is typically written in the algorithm form

k1 = ∆t f(xn, tn),
k2 = ∆t f(xn + k1/2, tn+1/2),    (44)
xn+1 = xn + k2.
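As a minimal illustration of the mid-point formula (44) (names illustrative), applied to ẋ = −x, whose exact solution is e^(−t):

```python
def rk2_step(x, t, dt, f):
    """One second-order Runge-Kutta (mid-point) step, Eq. (44), for x' = f(x, t)."""
    k1 = dt * f(x, t)
    k2 = dt * f(x + 0.5 * k1, t + 0.5 * dt)
    return x + k2

x, dt = 1.0, 0.01
for n in range(100):                             # integrate x' = -x to t = 1
    x = rk2_step(x, n * dt, dt, lambda x, t: -x)
# x is now close to exp(-1) = 0.36788, with an O(dt^2) accumulated error
```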

In the case of the equations of motion for x and v, we use the Euler formula for both quantities
in the first step to obtain approximants for xn+1/2 and vn+1/2 and then use these to obtain better
estimates in the same way as we did above, in Sec. 2.4, giving

k1 = ∆t a(xn , vn , tn ),
l1 = ∆t vn ,
k2 = ∆t a(xn + l1 /2, vn + k1 /2, tn+1/2 ),
l2 = ∆t (vn + k1 /2), (45)
vn+1 = vn + k2 ,
xn+1 = xn + l2 .

Figure 5: Illustration of the scheme used to estimate the values of the function f at the true (t, x)
points (tn+1/2 , xn+1/2 ) and (tn+1 , xn+1 ) (here shown as open circles) based on the known point
z0 = (tn , xn ) (large solid circle). A point z1 is first generated using f (z0 ) as the derivative. The
points z2 and z3 are obtained using f (z1 ) as the derivative.

This algorithm is rarely used in practice, since the Verlet/leapfrog method is both simpler and
more accurate. The algorithm does, however, serve as a good warm-up for studying the significantly
more complicated fourth-order RK algorithm.

We again first consider the single equation (40). Again using results from simple function integra-
tion, we know that if the values of the function f[x(t), t] are evaluated at the time points tn, tn+1/2,
and tn+1, we can apply Simpson's formula to obtain the integral over the range [tn, tn+1] up to an
error O(∆t⁵), i.e.,

xn+1 = xn + (∆t/6)(fn + 4fn+1/2 + fn+1) + O(∆t⁵).    (46)

We hence need to find approximations for fn+1/2 and fn+1 up to errors O(∆t⁴). The somewhat
obscure scheme used for this purpose is illustrated in Fig. 5. As in the second-order derivation
above, we begin by using an Euler formula to obtain an approximation for xn+1/2. This time there
will be two such estimates, and we use a superscript to distinguish between them;

x̂¹n+1/2 = xn + (∆t/2) f(xn, tn),
x̂²n+1/2 = xn + (∆t/2) f(x̂¹n+1/2, tn+1/2).    (47)

Since f is the derivative of x, what we have done is to first extrapolate xn+1/2 from xn using
derivative information at the initial point (xn, tn), denoted z0 in Fig. 5. The second extrapolation
is based on derivative information at the estimated point (x̂¹n+1/2, tn+1/2), denoted z1 in the figure,
but still using (xn, tn) as the point of departure. This second point at tn+1/2 is denoted z2 in the
figure. The idea of this procedure is, roughly, that if the function f is smooth on the scale ∆t, the
actual point xn+1/2 should be between z1 and z2. In the RK method the desired function value
fn+1/2 is approximated by the average of the function at these two estimated points;

fn+1/2 ≈ (1/2)[f(x̂¹n+1/2, tn+1/2) + f(x̂²n+1/2, tn+1/2)].    (48)

We will not prove here the remarkable fact that this approximation has the required accuracy, i.e.,
an O(∆t⁴) error. In order to estimate xn+1, the function value at (x̂²n+1/2, tn+1/2) (i.e., z2) is used
to obtain an approximation of the derivative,

x̂n+1 = xn + ∆t f(x̂²n+1/2, tn+1/2).    (49)

This is in the spirit that the derivative should be evaluated at the mid-point between (xn, tn) and
(xn+1, tn+1). Not knowing the actual point, we use the estimate z2 for this. Again, it can be proven
that this leads to an error O(∆t⁴) in xn+1. The fourth-order RK algorithm resulting from the above
procedures is traditionally written as

k1 = ∆t f(xn, tn),
k2 = ∆t f(xn + k1/2, tn+1/2),
k3 = ∆t f(xn + k2/2, tn+1/2),
k4 = ∆t f(xn + k3, tn+1),
xn+1 = xn + (1/6)(k1 + 2k2 + 2k3 + k4).    (50)

The correctness of this scheme up to an error of order O(∆t⁵) can be proven using a Taylor expansion.
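In code, the scheme (50) is only a few lines (Python, illustrative names). Testing again on ẋ = −x shows that even a large step ∆t = 0.1 gives x(1) accurate to roughly 10⁻⁶:

```python
def rk4_step(x, t, dt, f):
    """One classic fourth-order Runge-Kutta step, Eq. (50), for x' = f(x, t)."""
    k1 = dt * f(x, t)
    k2 = dt * f(x + 0.5 * k1, t + 0.5 * dt)
    k3 = dt * f(x + 0.5 * k2, t + 0.5 * dt)
    k4 = dt * f(x + k3, t + dt)
    return x + (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

x, dt = 1.0, 0.1
for n in range(10):                              # integrate x' = -x to t = 1
    x = rk4_step(x, n * dt, dt, lambda x, t: -x)
```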

Before adapting the RK algorithm to equations of motion, we write down a more general form of
the RK scheme for two coupled equations;

ẋ(t) = f(x, y, t),
ẏ(t) = g(x, y, t),    (51)

for which the RK algorithm generalizes to

k1 = ∆t f(xn, yn, tn),
l1 = ∆t g(xn, yn, tn),
k2 = ∆t f(xn + k1/2, yn + l1/2, tn+1/2),
l2 = ∆t g(xn + k1/2, yn + l1/2, tn+1/2),
k3 = ∆t f(xn + k2/2, yn + l2/2, tn+1/2),
l3 = ∆t g(xn + k2/2, yn + l2/2, tn+1/2),
k4 = ∆t f(xn + k3, yn + l3, tn+1),
l4 = ∆t g(xn + k3, yn + l3, tn+1),
xn+1 = xn + (1/6)(k1 + 2k2 + 2k3 + k4),
yn+1 = yn + (1/6)(l1 + 2l2 + 2l3 + l4),    (52)

which in turn easily generalizes to any number of coupled equations.

In the case of the equations of motion, we make the identification x → v, y → x, f → a, g → v
(note that v corresponds to a function g(x, y, t) = x in the notation used above). We then obtain

k1 = ∆t a(xn, vn, tn),
l1 = ∆t vn,
k2 = ∆t a(xn + l1/2, vn + k1/2, tn+1/2),
l2 = ∆t (vn + k1/2),
k3 = ∆t a(xn + l2/2, vn + k2/2, tn+1/2),
l3 = ∆t (vn + k2/2),
k4 = ∆t a(xn + l3, vn + k3, tn+1),
l4 = ∆t (vn + k3),
vn+1 = vn + (1/6)(k1 + 2k2 + 2k3 + k4),
xn+1 = xn + (1/6)(l1 + 2l2 + 2l3 + l4).    (53)
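A direct transcription of (53) into Python (illustrative names), tested here on the harmonic oscillator a(x, v, t) = −x:

```python
def rk4_motion_step(x, v, t, dt, a):
    """One fourth-order Runge-Kutta step, Eq. (53), for x' = v, v' = a(x, v, t)."""
    k1 = dt * a(x, v, t)
    l1 = dt * v
    k2 = dt * a(x + 0.5 * l1, v + 0.5 * k1, t + 0.5 * dt)
    l2 = dt * (v + 0.5 * k1)
    k3 = dt * a(x + 0.5 * l2, v + 0.5 * k2, t + 0.5 * dt)
    l3 = dt * (v + 0.5 * k2)
    k4 = dt * a(x + l3, v + k3, t + dt)
    l4 = dt * (v + k3)
    return (x + (l1 + 2 * l2 + 2 * l3 + l4) / 6.0,
            v + (k1 + 2 * k2 + 2 * k3 + k4) / 6.0)

x, v, dt = 1.0, 0.0, 0.1
for n in range(100):                      # integrate to t = 10
    x, v = rk4_motion_step(x, v, n * dt, dt, lambda x, v, t: -x)
# x and v now closely track the exact cos(10) and -sin(10)
```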
The RK method is clearly not time-reversal symmetric, and hence the errors are unbounded even
for periodic motion. Fig. 6 shows the time dependence of the energy obtained using the RK method
for the same harmonic oscillator that was previously studied with the Euler (Fig. 1) and leapfrog
(Fig. 3) methods. Comparing the RK and leapfrog methods at ∆t = 0.1, it can be seen that the
errors are considerably smaller in the case of the RK method within the time range shown. However,
the error grows linearly with time, and hence for long times the leapfrog method will in fact perform
better in this case, although its single-step error scaling is worse. However, for aperiodic motion, or
motion with a very long period, there may be no practical advantage to the bounded error of the
leapfrog method (the error can still grow large within a period), and then the RK method is often
preferable.

An advantage of the RK method is that it can be used with a variable time step. This is not
possible in the Verlet/leapfrog method, because data at tn and tn+1/2 are needed to advance the
time to tn+1 . Using different time-steps tn+1 − tn and tn − tn−1 would clearly ruin the symmetry
of the scheme and lead to a worse error scaling. In the RK method, only data at tn are needed to
advance to tn+1 , and hence ∆t can be chosen arbitrarily in each step. There are adaptive methods
that use this property of the Runge-Kutta scheme to automatically adjust the time step to keep
the error within specified limits.

2.6 Motion in more than one dimension

The methods we have discussed above generalize very easily to motion in more than one dimension.
The equations of motion become vector equations;

ẋ⃗(t) = v⃗(t),
v̇⃗(t) = (1/m) F⃗[x⃗(t), v⃗(t), t],    (54)

which separate into equations for the components xα, vα (α = 1, 2, 3 in three dimensions). These
equations are coupled only through the force function, the components Fα of which typically depend
on all components of x⃗ and, in the case of damped motion, v⃗.


Figure 6: Time dependence of the energy of a harmonic oscillator with k = m = 1 integrated using
the 4th-order Runge-Kutta method with time steps ∆t = 0.2 and 0.1.

As a simple example, we consider planetary motion, which takes place in a single plane (in the case
of a single planet considered here), i.e., we study the equations of motion in two dimensions. For
a planet of mass m moving in the gravitational field of a star with mass M, the force experienced
by m is given by

F⃗(r) = −(GMm/r³) r⃗,    (55)

where r⃗ is the position vector of the planet with respect to the star and r = |r⃗|. In principle this is
a two-body problem, but a result of elementary mechanics is that it can be reduced to a one-body
problem for a reduced mass µ = Mm/(m + M). We will here for simplicity assume that M ≫ m
and treat the star as a stationary body, using the original m for the mass of the planet. We then
have the equations of motion for the x and y coordinates of the planet:

ẋ = vx,
v̇x = −GM x/r³,
ẏ = vy,    (56)
v̇y = −GM y/r³,

where r = √(x² + y²). The leapfrog algorithm (7) for these equations is

vx(n + 1/2) = vx(n − 1/2) − ∆t GM x(n)[x²(n) + y²(n)]^(−3/2),
x(n + 1) = x(n) + ∆t vx(n + 1/2),
vy(n + 1/2) = vy(n − 1/2) − ∆t GM y(n)[x²(n) + y²(n)]^(−3/2),    (57)
y(n + 1) = y(n) + ∆t vy(n + 1/2).

This scheme of course generalizes to other types of forces. The Runge-Kutta method can also easily
be used in more than one dimension.
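As a sketch (Python, illustrative names, units chosen so that GM = 1), the scheme (57) can be applied to a circular orbit of radius 1, which has speed 1 and period 2π:

```python
def planet_leapfrog(x, y, vx_h, vy_h, dt, n_steps, GM=1.0):
    """Leapfrog scheme (57) for planetary motion in the xy plane."""
    for _ in range(n_steps):
        r3 = (x * x + y * y) ** 1.5
        vx_h -= dt * GM * x / r3          # vx(n+1/2)
        vy_h -= dt * GM * y / r3          # vy(n+1/2)
        x += dt * vx_h                    # x(n+1)
        y += dt * vy_h                    # y(n+1)
    return x, y, vx_h, vy_h

dt = 0.001
x, y, vx_h, vy_h = planet_leapfrog(
    1.0, 0.0,
    0.0005,                               # vx(-dt/2) = 0 - (dt/2) a_x(0), a_x(0) = -1
    1.0,                                  # vy(-dt/2) ~ vy(0), since a_y(0) = 0
    dt, 6283)                             # about one period, T = 2*pi
```

The orbital radius stays equal to 1 up to small bounded oscillations, reflecting the bounded energy error of the leapfrog method.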
