Internal Error Propagation in Explicit Runge--Kutta Methods
Item Type
Article
Authors
Ketcheson, David I.; Loczi, Lajos; Parsani, Matteo
Citation
Ketcheson, D. I., Lóczi, L., & Parsani, M. (2014). Internal Error Propagation in Explicit Runge–Kutta Methods. SIAM Journal on Numerical Analysis, 52(5), 2227–2249.
Eprint version
Publisher's Version/PDF
DOI
10.1137/130936245
Publisher
Society for Industrial & Applied Mathematics (SIAM)
Journal
SIAM Journal on Numerical Analysis
Rights
Archived with thanks to SIAM Journal on Numerical Analysis
Download date
14/06/2020 00:42:01
Link to Item
http://hdl.handle.net/10754/333640
© 2014 Society for Industrial and Applied Mathematics
Downloaded 11/03/14 to 109.171.137.210. Redistribution subject to SIAM license or copyright; see http://www.siam.org/journals/ojsa.php
SIAM J. NUMER. ANAL.
Vol. 52, No. 5, pp. 2227–2249
INTERNAL ERROR PROPAGATION IN EXPLICIT RUNGE–KUTTA
METHODS∗
DAVID I. KETCHESON† , LAJOS LÓCZI† , AND MATTEO PARSANI‡
Abstract. In practical computation with Runge–Kutta methods, the stage equations are not
satisfied exactly, due to roundoff errors, algebraic solver errors, and so forth. We show by example
that propagation of such errors within a single step can have catastrophic effects for otherwise
practical and well-known methods. We perform a general analysis of internal error propagation,
emphasizing that it depends significantly on how the method is implemented. We show that for a
fixed method, essentially any set of internal stability polynomials can be obtained by modifying the
implementation details. We provide bounds on the internal error amplification constants for some
classes of methods with many stages, including strong stability preserving methods and extrapolation
methods. These results are used to prove error bounds in the presence of roundoff or other internal
errors.
Key words. Runge–Kutta methods, internal stability, roundoff error, strong stability preservation, extrapolation, ordinary differential equations
AMS subject classifications. 65L05, 65L06, 65L70, 65G50
DOI. 10.1137/130936245
1. Error propagation in Runge–Kutta methods. Runge–Kutta (RK) methods are used to approximate the solution of initial value ODEs:
(1.1)    U′(t) = F(t, U(t)),    U(t0) = U0,
often resulting from the semidiscretization of partial differential equations (PDEs).
An s-stage RK method approximates the solution of (1.1) as follows:
(1.2a)    Yi = Un + τ Σ_{j=1}^{s} aij F(tn + cj τ, Yj)    (1 ≤ i ≤ s),
(1.2b)    Un+1 = Un + τ Σ_{j=1}^{s} bj F(tn + cj τ, Yj).
Here Un is a numerical approximation to U (tn ), τ = tn+1 − tn is the step size, and
the stage values Yj are approximations to the solution at times tn + cj τ .
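As a concrete illustration of (1.2), the following minimal sketch (our illustration, not code from the article) performs one explicit RK step in Butcher form; the names `erk_step` and its arguments are our own:

```python
import numpy as np

def erk_step(A, b, c, F, t_n, U_n, tau):
    """One step of an explicit RK method in Butcher form (1.2).

    A is strictly lower triangular for an explicit method, so each
    stage Y_i depends only on previously computed stage derivatives.
    """
    s = len(b)
    K = np.zeros((s, len(U_n)))            # K[j] = F(t_n + c_j*tau, Y_j)
    for i in range(s):
        Y_i = U_n + tau * sum(A[i, j] * K[j] for j in range(i))
        K[i] = F(t_n + c[i] * tau, Y_i)
    return U_n + tau * sum(b[j] * K[j] for j in range(s))
```

For example, with the classical fourth-order coefficients and F(t, U) = U, one step of size τ = 0.1 reproduces exp(0.1) to within the expected O(τ⁵) local error.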
∗ Received by the editors September 9, 2013; accepted for publication (in revised form) April
8, 2014; published electronically September 11, 2014. The authors were supported by award
FIC/2010/05 2000000231 made by KAUST.
http://www.siam.org/journals/sinum/52-5/93624.html
† King Abdullah University of Science and Technology, Thuwal 23955-6900, Saudi Arabia
(david.ketcheson@kaust.edu.sa, lajos.loczi@kaust.edu.sa).
‡ Computational Aerosciences Branch, NASA Langley Research Center, Hampton, VA 23681
(parsani.matteo@gmail.com). This author was supported in part by an appointment to the NASA
postdoctoral program at Langley Research Center, administered by Oak Ridge Associated Universities.
Copyright © by SIAM. Unauthorized reproduction of this article is prohibited.
Most analysis of RK methods assumes that the stage equations (1.2a) are solved
exactly. In practice, perturbed solution and stage values are computed:
(1.3a)    Ỹi = Ũn + τ Σ_{j=1}^{s} aij F(tn + cj τ, Ỹj) + r̃i    (1 ≤ i ≤ s),
(1.3b)    Ũn+1 = Ũn + τ Σ_{j=1}^{s} bj F(tn + cj τ, Ỹj) + r̃s+1.
The internal errors (or stage residuals) r̃j include errors due to
• roundoff,
• finite accuracy of an iterative algebraic solver (for implicit methods).
The perturbed equations (1.3) are also used to study accuracy by taking Ỹ, Ũ to be exact solution values to the ODE or PDE system, in which case the stage residuals include
• temporal truncation errors,
• spatial truncation errors,
• perturbations due to imposition of boundary conditions.
Such analysis is useful for explaining the phenomenon of order reduction due to stiffness [8] or imposition of boundary conditions [4, 1]. The theory of BSI-stability and
B-convergence has been developed to understand these phenomena, and the relevant
method property is usually the stage order [8].
The study of both kinds of residuals (due to roundoff or truncation errors) is
referred to as internal stability [26, 24, 23, 22, 29]. We focus on the issue of amplification of roundoff errors in explicit RK (ERK) schemes, although we will see that some
of our results and techniques are applicable to other internal stability issues. Since
roundoff errors are generally much smaller than truncation errors, their propagation
within a single step is not usually important. However for ERK methods with a large
number of stages, the constants appearing in the propagation of internal errors can be
so large that amplification of roundoff becomes an issue [27, 26, 28]. Amplification of
roundoff errors in methods with many stages is increasingly important because there
now exist several classes of practical RK methods that use many stages, including
Runge–Kutta–Chebyshev (RKC) methods [29], extrapolation methods [10], deferred
correction methods [5], some strong stability preserving (SSP) methods [9], and other
stabilized ERK methods [19, 20]. Furthermore, these methods are naturally implemented not in the Butcher form (1.2) but in a modified Shu–Osher form [7, 12, 9]:
(1.4)    Yi = vi Un + Σ_{j=1}^{s} (αij Yj + τ βij F(tn + cj τ, Yj))    (1 ≤ i ≤ s + 1),
         Un+1 = Ys+1.
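A step in the Shu–Osher form (1.4) can be sketched as follows (our illustration; we assume α and β are stored as (s+1)×s arrays and, as for explicit methods, that stage i references only stages j < i):

```python
import numpy as np

def shu_osher_step(alpha, beta, v, c, F, t_n, U_n, tau):
    """One step of an explicit RK method in Shu-Osher form (1.4).

    alpha, beta have shape (s+1, s); stage i uses only stages j < i.
    The last computed stage Y[s] is U_{n+1}.
    """
    s = alpha.shape[1]
    Y = []
    for i in range(s + 1):
        Y_i = v[i] * U_n
        for j in range(i):
            Y_i = Y_i + alpha[i, j] * Y[j] \
                  + tau * beta[i, j] * F(t_n + c[j] * tau, Y[j])
        Y.append(Y_i)
    return Y[s]
```

Applied to F(t, U) = U with the two-stage SSP2 coefficients of section 2.1.2, a step of size τ = 0.1 returns (1 + z + z²/2)Un = 1.105 Un, as expected from its stability polynomial.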
As we will see, propagation of roundoff errors in these schemes should be based on
the perturbed equations
(1.5)    Ỹi = vi Ũn + Σ_{j=1}^{s} (αij Ỹj + τ βij F(tn + cj τ, Ỹj)) + r̃i    (1 ≤ i ≤ s + 1),
         Ũn+1 = Ỹs+1,
rather than on (1.3), because internal error propagation (in contrast to traditional
error propagation) depends on the form used to implement the method. Through an
example in section 1.2 we will see that even when methods (1.2) and (1.4) are equivalent, the corresponding perturbed methods (1.3) and (1.5) may propagate internal
errors in drastically different ways. Thus the residuals in (1.5) and in (1.3) will in
general be different. In section 2, we elaborate on this difference and derive, for the
first time, completely general expressions for the internal stability polynomials.
We emphasize here that the difference between (1.4) and (1.2) is distinct from the
reordering of step sizes that was used to improve internal stability in [26]. Methods
(1.4) and (1.2) can have different internal stability properties even when they are
algebraically equivalent stage for stage.
In section 2.2, we introduce the maximum internal amplification factor, a simple
characterization of how a method propagates internal errors. Although we follow
tradition and use the term internal stability, it should be emphasized that this topic
does not relate to stability in the usual sense, as there is no danger of unbounded
blow-up of errors, only their substantial amplification. In this sense, the maximum
internal amplification factor is similar to a condition number in that it is an upper
bound on the factor by which errors may be amplified. In section 2.4 we show that
for a fixed ERK method, essentially any set of internal stability polynomials can be
obtained by modifying the implementation.
In sections 3 and 4, we analyze internal error propagation for SSP and extrapolation methods, respectively. Theorem 3.1 shows that SSP methods exhibit no internal
error amplification when applied under the usual assumption of forward Euler contractivity. Additional results in these sections provide bounds on the internal amplification
factor for general initial value problems. Much of our analysis follows along the lines
of what was done in [29] for RKC methods. First we determine closed-form expressions for the stability polynomials and internal stability polynomials of these methods.
Then we derive bounds and estimates for the maximum internal amplification factor.
Using these bounds, we prove error bounds in the presence of roundoff error for whole
families of methods where the number of stages may be arbitrarily large.
1.1. Preliminaries. In this subsection we define the basic setting and notation
for our work. We consider the initial value problem (1.1), where U : [t0 , T ] → Rm
and F : R × Rm → Rm . To shorten the notation, we will sometimes omit the first
argument of F , writing F (U ) when there is no danger of confusion.
The RK method (1.2) and its properties are fully determined by the matrix A =
[aij ] ∈ Rs×s and column vector b = [bj ] ∈ Rs , which are referred to as the Butcher
coefficients [3].
Let us define

(1.6)    α = ( α1:s  0 ; αs+1  0 ),    β = ( β1:s  0 ; βs+1  0 ),    v = ( v1:s ; vs+1 ) = (Is+1 − α)1,

where (rows separated by semicolons)

α1:s = ( α11 ⋯ α1s ; ⋮ ⋱ ⋮ ; αs1 ⋯ αss ),    β1:s = ( β11 ⋯ β1s ; ⋮ ⋱ ⋮ ; βs1 ⋯ βss ),

αs+1 = (αs+1,1, …, αs+1,s),    βs+1 = (βs+1,1, …, βs+1,s),    v1:s = (v1, …, vs)⊤,
where 1 is the column vector of length s+1 with all entries equal to unity, and Ik is the
k × k identity matrix. We always assume that (Is − α1:s )−1 exists; methods without
this property are not well defined [9]. The methods (1.2) and (1.4) are equivalent
under the conditions
(1.7)    A = (Is − α1:s)⁻¹ β1:s,    b⊤ = βs+1 + αs+1 (Is − α1:s)⁻¹ β1:s.
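The conversion (1.7) is easy to carry out numerically; the sketch below (our illustration, not code from the article) recovers A and b from a Shu–Osher form stored as (s+1)×s arrays:

```python
import numpy as np

def shu_osher_to_butcher(alpha, beta):
    """Recover Butcher coefficients from a Shu-Osher form via (1.7).

    Rows 0..s-1 of alpha, beta give alpha_{1:s}, beta_{1:s};
    the last row gives alpha_{s+1}, beta_{s+1}.
    """
    s = alpha.shape[1]
    a1s, b1s = alpha[:s, :], beta[:s, :]
    asp1, bsp1 = alpha[s, :], beta[s, :]
    X = np.linalg.inv(np.eye(s) - a1s)     # (I_s - alpha_{1:s})^{-1}
    A = X @ b1s
    b = bsp1 + asp1 @ (X @ b1s)
    return A, b
```

For the two-stage SSP2 method written in its usual Shu–Osher form, this yields the familiar A = [[0, 0], [1, 0]] and b = (1/2, 1/2).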
We assume that all methods satisfy the conditions for stage consistency of order
one, i.e.,
(1.8a)    vi = 1 − Σ_j αij,
(1.8b)    c = A1 = (Is − α1:s)⁻¹ β1:s 1.
Finally, define
(1.9)     Y = (Y1, …, Ys+1)⊤,    F(Y) = (F(Y1), …, F(Ys), 0)⊤,
(1.10)    α = α ⊗ Im,    β = β ⊗ Im,    v = v ⊗ Im,
where ⊗ denotes the Kronecker product. Method (1.4) can also then be written
(1.11)    Y = vUn + αY + τβF(Y),    Un+1 = Ys+1.
Recall that m denotes the dimension of U and s denotes the number of stages; boldface
symbols are used for vectors and matrices with dimension(s) of size m(s+1) whenever
m ≥ 2. When considering scalar problems (m = 1), we use nonbold symbols for
simplicity.
When studying internal error amplification over a single step, we will sometimes
omit the tilde over Un to emphasize that we do not consider propagation of errors
from previous steps.
Remark 1.1. The Butcher representation of an RK method is the particular
Shu–Osher representation obtained by setting αij to zero for all i, j and setting
β = β0 := ( A  0 ; b⊤  0 ).
1.2. An example. Here we present an example demonstrating the effect of internal error amplification. We consider the following initial value problem (problem
D2 of the nonstiff DETEST suite [13]), whose solution traces an ellipse with eccentricity 0.3:
(1.12a)    x″(t) = −x/r³,    x(0) = 0.7,    x′(0) = 0,
(1.12b)    y″(t) = −y/r³,    y(0) = 0,    y′(0) = √(13/7),
(1.12c)    r² = x² + y².
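For reference, problem (1.12) written as a first-order system can be coded as below (our illustration; the initial value y′(0) = √(13/7) is our reading of the standard DETEST D2 data for eccentricity 0.3):

```python
import numpy as np

# Problem D2 of the nonstiff DETEST suite as a first-order system
# U = (x, y, x', y').  Assumption: the printed initial value is
# y'(0) = sqrt((1+e)/(1-e)) with e = 0.3, i.e. sqrt(13/7).
def F(t, U):
    x, y, xp, yp = U
    r3 = (x * x + y * y) ** 1.5
    return np.array([xp, yp, -x / r3, -y / r3])

U0 = np.array([0.7, 0.0, 0.0, np.sqrt(13.0 / 7.0)])
```

This right-hand side can be passed to any of the RK step routines sketched elsewhere in this article.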
We note that very similar results would be obtained with many other initial value
problems. We first compute the solution at t = 20 using Fehlberg’s 5(4) RK pair,
Fig. 1. Internal instability of a 12th-order extrapolation method applied to problem (1.12). (a) Global error versus input tolerance; (b) number of steps versus global error.
which is not afflicted by any significant internal amplification of error. Results, shown
in Figure 1, are typical and familiar to any student of numerical analysis. As the
tolerance is decreased, the step size controller uses smaller steps and achieves smaller
local—and global—errors, at the cost of an increased amount of work. Eventually,
the truncation errors become so small that the accumulation of roundoff errors is
dominant and the overall error cannot be decreased further.
Next we perform the same computation using a 12th-order extrapolation method
based on the first-order explicit Euler method [15]; this method has 67 stages. The
(embedded) error estimator for the extrapolation method is based on the 11th-order
diagonal extrapolation entry. This pair is naturally implemented in a certain Shu–
Osher form (see Algorithm 1 in section 4). The results, also shown in Figure 1, are
similar to those of Fehlberg’s method for large tolerances, although the number of
steps required is much smaller for this 12th-order method. However, for tolerances
less than 10⁻⁹, the extrapolation method fails completely. The step size controller rejects every step and continually reduces the step size; the integration cannot be completed to the desired tolerance in a finite number of steps, even though that tolerance is six orders of magnitude larger than the roundoff of a 64-bit calculation!
Finally, we perform the same computation using an alternative implementation
of the 12th-order extrapolation method. The Butcher form (1.2) is used for this
implementation; it seems probable that no extrapolation method has ever previously
been implemented in this (unnatural) way. The results are again shown in Figure 1; for
large tolerances they are identical to the Shu–Osher implementation. For tolerances
below 10−9 , the Butcher implementation is able to complete the integration, albeit
using an excessively large number of steps, and with errors much larger than those
achieved by Fehlberg’s method at the same tolerance.
What is the cause for the surprising behavior of the extrapolation method? We
will return to and explain this example after describing the relevant theory.
2. Internal error amplification.
2.1. Internal stability functions. Internal stability can be understood by considering the linear, scalar (m = 1) initial value problem
(2.1)    U′(t) = λU(t),    λ ∈ C,    U(t0) = U0.
We will consider the application of an RK method in Shu–Osher form (1.11); recall
that the Butcher form (1.2) is included as a special case. Application of the RK
method (1.11) to the test problem (2.1) yields
(2.2a)    Y = v1:s Un + α1:s Y + zβ1:s Y,
(2.2b)    Un+1 = vs+1 Un + αs+1 Y + zβs+1 Y,
where z = τ λ. By solving (2.2a) for Y and substituting into (2.2b) we obtain
(2.3)    Un+1 = P(z)Un,
where P (z) is the stability function, which in Shu–Osher variables takes the form
(2.4)    P(z) := vs+1 + (αs+1 + zβs+1)(Is − α1:s − zβ1:s)⁻¹ v1:s.
If the stage equations are satisfied exactly, then P (z) completely determines the
behavior of the numerical scheme for linear problems. However, it is known that RK
methods with many stages may exhibit loss of accuracy even when |P (z)| ≤ 1, due
to propagation of errors within a single time step [27, 26, 28]. In order to investigate
the internal stability of an RK method we apply the perturbed scheme (1.5), which
for problem (2.1) yields
(2.5a)    Ỹ = v1:s Ũn + α1:s Ỹ + zβ1:s Ỹ + r̃1:s,
(2.5b)    Ũn+1 = vs+1 Ũn + αs+1 Ỹ + zβs+1 Ỹ + r̃s+1.
Let ǫn = Ũn − Un and d = [dj] = [Ỹj − Yj]. Subtracting (2.2) from (2.5) gives
(2.6)    d = v1:s ǫn + α1:s d + zβ1:s d + r̃1:s,
(2.7)    ǫn+1 = vs+1 ǫn + αs+1 d + zβs+1 d + r̃s+1.
By solving (2.6) for d and substituting the resulting expression in (2.7), one arrives
at the error formula
(2.8)    ǫn+1 = P(z)ǫn + Q(z; α, β)r̃1:s + r̃s+1.
The stability function P (z) has already been defined in (2.4), and the internal stability
functions Q(z; α, β) are
(2.9)    Q(z; α, β) ≡ (Q1(z), Q2(z), …, Qs(z)) := (αs+1 + zβs+1)(Is − α1:s − zβ1:s)⁻¹.
Note that for convenience we have omitted the last component, Qs+1 (z), which is
always equal to 1. We will often suppress the explicit dependence of Q on α, β to
keep the notation simple. Using (1.7), we can obtain the expression
(2.10)    Q(z) = zb⊤(I − zA)⁻¹(I − α1:s)⁻¹ + αs+1(I − α1:s)⁻¹.
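Both P(z) and the internal stability functions Qj(z) can be evaluated directly from (2.4) and (2.9); the following sketch (our illustration, with the same (s+1)×s storage convention as before) does so for a given Shu–Osher form:

```python
import numpy as np

def stability_functions(alpha, beta, v, z):
    """Evaluate P(z) from (2.4) and Q(z) = (Q_1,...,Q_s) from (2.9)
    for a Shu-Osher form; alpha, beta shape (s+1, s), v length s+1."""
    s = alpha.shape[1]
    a1s, b1s = alpha[:s, :], beta[:s, :]
    asp1, bsp1 = alpha[s, :], beta[s, :]
    R = np.linalg.inv(np.eye(s) - a1s - z * b1s)   # resolvent factor
    Q = (asp1 + z * bsp1) @ R                      # (2.9)
    P = v[s] + Q @ v[:s]                           # (2.4)
    return P, Q
```

For the two-stage SSP2 Shu–Osher form of section 2.1.2 at z = 0.1 this returns P = 1.105 and Q2 = (1 + z)/2 = 0.55, matching the hand computation there.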
We will refer to ǫ as "error," though its exact interpretation depends on what U and Ũ refer to. If r̃ represents roundoff error, then (2.8) indicates the effect of roundoff
on the overall solution. The one-step error is given by the sum of two terms: one
governed by P (z), accounting for propagation of errors committed in previous steps,
and one governed by Q(z), accounting for propagation of the internal errors within
the current step. In particular, Qj (z) governs the propagation of the perturbation
r̃j , appearing in stage j. Clearly we must have |P (z)| ≤ 1 for stable propagation of
errors, but if the magnitude of |Qj (z)|r̃j is larger than the magnitude of the desired
tolerance, then the second term is also important.
Note that Qj (z) herein is denoted by Qsj (z) in [29]. We are mostly interested in
ERK methods, for which Qj (z) is a polynomial of degree at most s − j and the first
component of r̃ is zero, since no error is made in setting Y1 = Un .
Remark 2.1. For a method in Butcher form (i.e., with α = 0), (2.10) reads
(2.11)    QB(z) := Q(z; 0, β0) = zb⊤(I − zA)⁻¹.
Formulas (2.10) and (2.11) differ in an important way: we have QB (z) → 0 as z → 0,
so that the effects of internal errors vanish in the limit of infinitesimal step size. On the
other hand, (2.10) does not have this property, so internal errors may still be amplified
by a finite amount, no matter how small the step size is. As we will see, this explains
the different behavior of the two extrapolation implementations in section 1.2.
2.1.1. Local defects. Equation (2.8) can also be used to study the discretization error. If we take Ũn = U(tn), Ỹi = U(tn + ci τ), Ũn+1 = U(tn+1), then ǫn+1 is the global error and (2.8) describes how the stage errors contribute to it. We have

r̃1:s = Σ_{k=0}^{p} (τᵏ/k!) θk u_h^{(k)}(tn) + O(τ^{p+1}),
where
(2.12)    θ0 = 1 − v − α1,    θk = (1/k!)((Is − α1:s)cᵏ − kAc^{k−1})    (1 ≤ k ≤ p).
Note that here c ∈ Rs+1 with cs+1 = 1 and ck denotes the vector with jth entry equal
to ckj . Substituting the above into (2.8), we obtain
(2.13)    ǫn+1 = P(z)ǫn + Σ_{k=2}^{p} (τᵏ/k!) u_h^{(k)}(tn) Q(z)θk + O(τ^{p+1}).
For a method of order p, it can be shown that τ k Q(z)θk = O(z p+1 ), where p is the
classical order of the method, so the expected rate of convergence will be observed
in the limit z → 0. On the other hand, in problems arising from semidiscretization
of a PDE, it often does not make sense to consider the limit z → 0 but only the
limit τ → 0. In that case, it can be shown only that τ k Q(z)θk = O(τ p̃+1 ), where p̃
is the stage order of the method; for all explicit methods, p̃ = 1. This difference is
responsible for the phenomenon of order reduction [23].
If the stage equations are solved exactly, then the stage and solution values computed using the Shu–Osher form (1.5) are identically equal to those computed using
the Butcher form (1.3). By comparing the Butcher and Shu–Osher forms, one finds
that in general the residuals are related by
B
SO
= (Is − α1:s )−1 r̃1:s
,
r̃1:s
SO
SO
B
.
= αs+1 (Is − α1:s )−1 r̃1:s
+ r̃s+1
r̃s+1
Here r̃B , r̃SO denote the residuals in (1.3) and (1.5), respectively. Thus if the stage
equations are solved exactly, the product of Q(z) with the residuals is independent of
the form used for implementation.
If we wish to study the overall error in the presence of roundoff (i.e., the combined
effect of discretization error and roundoff error), we may take Ũn to be the solution given by the RK method in the presence of roundoff and Un = U(tn). This leads to
(2.14a)    ǫn+1 = P(z)ǫn + Q(z)r̃ + Σ_{k=2}^{p} (τᵏ/k!) u_h^{(k)}(tn) Q(z)θk + O(τ^{p+1})
(2.14b)         = P(z)ǫn + Q(z)r̃ + O(τ^{p+1})
for a method of order p, where now r̃ denotes only the roundoff errors. The effect of roundoff becomes significant when the last two terms in (2.14b) have similar
magnitude.
2.1.2. Internal stability polynomials and implementation: An example.
A given RK method can be rewritten in infinitely many different Shu–Osher forms;
this rewriting amounts to algebraic manipulation of the stage equations and has no
effect on the method if one assumes the stage equations are solved exactly. However,
the internal stability of a method depends on the particular Shu–Osher form used to
implement it.
For example, consider the two-stage, second-order optimal SSP method [9]. It is
often written and implemented in the following modified Shu–Osher form:
(2.15a)    Y1 = Un,
(2.15b)    Y2 = Y1 + τF(Y1),
(2.15c)    Un+1 = Y3 = (1/2)Un + (1/2)(Y2 + τF(Y2)).
Applying (2.15) to the test problem (2.1) and introducing perturbations in the stages
we have
Ỹ1 = Un,
Ỹ2 = (1 + z)Un + r̃2,
Ũn+1 = (1/2)Un + (1/2)(1 + z)Ỹ2 + r̃3.
Substituting the equation for Ỹ2 into that for Ũn+1 yields

Ũn+1 = (1 + z + z²/2)Un + ((1 + z)/2)r̃2 + r̃3,
from which we can read off the stability polynomial P(z) = 1 + z + z²/2 and the second-stage internal stability polynomial
Q2(z) = (1 + z)/2.
However, in the Butcher form, the equation for Un+1 is written as
Un+1 = Un + (τ/2)(F(Y1) + F(Y2)),
which leads by a similar analysis to
Q2 (z) = z/2.
More generally, the equation for Un+1 can be written
Un+1 = (1/2 + β31)Un + (1/2 − β31)Y2 + β31 τF(Y1) + (τ/2)F(Y2)
with an arbitrary parameter β31 ∈ R. By choosing a large value of β31 , the internal
stability of the implementation can be made arbitrarily poor. For this simple method
it is easy to see what a reasonable choice of implementation is, but for methods with
very many stages it is far from obvious. In section 2.4 we study this further.
Note that the stability polynomial P (z) is independent of the choice of Shu–Osher
form.
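The implementation dependence of Q2 is easy to confirm numerically. The sketch below (our illustration) evaluates (2.9) at z = 0.1 for the Shu–Osher form (2.15) and for the Butcher form of the same method, recovering Q2(z) = (1 + z)/2 and Q2(z) = z/2, respectively:

```python
import numpy as np

def q2(alpha, beta, z):
    # (2.9): Q(z) = (alpha_{s+1} + z*beta_{s+1}) (I - alpha_{1:s} - z*beta_{1:s})^{-1};
    # return its second component, which multiplies the stage-2 residual.
    s = alpha.shape[1]
    R = np.linalg.inv(np.eye(s) - alpha[:s] - z * beta[:s])
    return ((alpha[s] + z * beta[s]) @ R)[1]

z = 0.1
# Shu-Osher form (2.15)
alpha_so = np.array([[0., 0.], [1., 0.], [0., .5]])
beta_so  = np.array([[0., 0.], [1., 0.], [0., .5]])
# Butcher form of the same method: alpha = 0, beta_{1:s} = A, beta_{s+1} = b
alpha_b = np.zeros((3, 2))
beta_b  = np.array([[0., 0.], [1., 0.], [.5, .5]])

q2_so = q2(alpha_so, beta_so, z)   # (1 + z)/2 = 0.55
q2_b  = q2(alpha_b, beta_b, z)     # z/2 = 0.05
```

The same routine can be used to explore the β31-parameterized family above, for which Q2(z) = 1/2 − β31 + z/2 grows without bound as β31 is increased.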
2.2. Bounds on the amplification of internal errors. We are interested in
bounding the amount by which the residuals r̃ may be amplified within one step,
under the assumption that the overall error propagation is stable. In the remainder
of this section, we introduce some basic definitions and straightforward results that
are useful in obtaining such bounds. It is typical to perform such analysis in the
context of the autonomous linear system of ODEs [29]:
(2.16)    U′(t) = LU(t) + g(t),    U(t0) = U0,
where L ∈ Rm×m , g : R → Rm . Results based on such analysis are typically useful
in the context of more complicated problems, whereas analyzing nonlinear problems
directly does not usually yield further insight [26, 22, 29, 4].
Application of the perturbed RK method (1.5) to problem (2.16) leads to (2.14b)
but with z = τ L. Taking norms of both sides one may obtain
(2.17)    ‖ǫn+1‖ ≤ ‖P(z)‖ ‖ǫn‖ + Σ_{j=1}^{s+1} ‖Qj(z)‖ ‖r̃j‖ + O(τ^{p+1}).
It is thus natural to introduce the following.
Definition 2.2 (maximum internal amplification factor). The maximum internal amplification factor of an s-stage RK method (1.4) with respect to a set S ⊂ C is
(2.18)    M ≡ M(α, β, S) := max_{j=1,2,…,s} sup_{z∈S} |Qj(z)|,
where Qj (z) is defined in (2.10). When the set S is not specified, it is taken to be
the absolute stability region of the method S = {z ∈ C : |P (z)| ≤ 1} with P (z) given
by (2.4).
In order to control numerical errors, the usual strategy is to reduce the step size.
To understand the behavior of the error for very small step sizes, it is therefore useful
to consider the value
M0 := M(α, β, {0}) = max_{j=1,2,…,s} |Qj(0)|.
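A crude numerical estimate of M can be obtained by sampling |Qj(z)| over a grid of points and keeping only those inside the stability region; the sketch below (our illustration; a fine two-dimensional grid over the complex plane would be needed for sharp values) implements this idea:

```python
import numpy as np

def amplification_factor(alpha, beta, v, grid):
    """Crude estimate of M from (2.18): max_j sup_{z in S} |Q_j(z)|,
    sampling S = {z : |P(z)| <= 1} over the given points z."""
    s = alpha.shape[1]
    a1s, b1s = alpha[:s, :], beta[:s, :]
    asp1, bsp1 = alpha[s, :], beta[s, :]
    M = 0.0
    for z in grid:
        R = np.linalg.inv(np.eye(s) - a1s - z * b1s)
        Q = (asp1 + z * bsp1) @ R
        P = v[s] + Q @ v[:s]
        if abs(P) <= 1.0:                  # keep points inside S only
            M = max(M, float(np.max(np.abs(Q))))
    return M
```

For the two-stage SSP2 method in Butcher form, sampling the real interval [−2, 0] already recovers a value near 1, attained at the left endpoint of the stability interval.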
To go further, we need to make an assumption about σ(L), the spectrum of
L. The next theorem provides bounds on the error in the presence of roundoff or
iterative solution errors, when L is normal. Similar results could be obtained when L
is nonnormal by considering pseudospectra.
Theorem 2.3. Let an RK method of order p with coefficients α, β be applied to (2.16), where L is a normal matrix and τσ(L) ⊂ S. Let ǫn = Ũn − U(tn), where Ũn satisfies the perturbed RK equations (1.5). Then

(2.19)    ‖ǫn+1‖ ≤ ‖ǫn‖ + (s + 1)M(α, β, S) max_{1≤j≤s+1} ‖r̃j‖ + O(τ^{p+1}).
The factor s + 1 can be replaced by s for explicit methods, since then r̃1 = 0.
Proof. Use (2.17) and the fact that, since L is normal, ‖Qj(τL)‖ = max_{λ∈σ(L)} |Qj(τλ)|.
Remark 2.4. Combining the above with analysis along the lines of [29], one
obtains error estimates for application to PDE semidiscretizations. The amplification
of spatial truncation errors does not depend on the implementation, as can be seen by working out the product Q(z)β.
As an example, in Table 1 we list approximate maximum internal amplification
factors for some common RK methods. All these methods, like most RK methods,
have relatively small factors so that their internal stability is generally not a concern.
2.3. Understanding the example from section 1.2. Using the theory of the
last section, we can fully explain the results of section 1.2. First, in Table 2, we give
the values of M and M0 for the three methods. Observe that Fehlberg’s method, like
most methods with few stages, has very small amplification constants. Meanwhile,
the Euler extrapolation method, with 67 stages, has a very large M. However, when
implemented in Butcher form, it necessarily has M0 = 0.
For the extrapolation method in Shu–Osher form, the local error will generally be at least M0 · εmachine ≈ 10⁻¹⁰ for double precision calculations. Therefore, the pair will fail when the requested tolerance is below this level, which is just what we observe.
Table 1
Approximate maximum internal amplification factors for some RK methods. For RK pairs, the amplification factor of the higher-order method is listed.

Method                                          M       M0
Three-stage, third-order SSP [25]               1.7     0
Three-stage, third-order Heun [11]              3.2     0
Classic four-stage, fourth-order method [17]    1.7     0
Merson 4(3) pair [18]                           5.6     0
Fehlberg 5(4) pair [6]                          5.4     0
Bogacki–Shampine 5(4) pair [2]                  7.0     0
Prince–Dormand order 8 [21]                     138.8   0
10-stage, fourth-order SSP [14]                 2.4     0.6
10-stage, first-order RKC [29]                  10.0    10.0
18-stage, second-order RKC [29]                 27.8    22.6
Table 2
Maximum internal amplification factors for methods used in the example in section 1.2.

Method                                      M           M0
Fehlberg 5(4)                               5.4         0
Euler extrapolation 12(11) (Shu–Osher)      3.4 · 10⁵   1.3 · 10⁵
Euler extrapolation 12(11) (Butcher)        1.7 · 10⁵   0
The extrapolation method in Butcher form also begins to be afflicted by amplification
of roundoff errors at this point (since it has a similar value of M). Reducing the step
size has the effect of reducing the amplification constant, but only at a linear rate
(i.e., in order to reduce the local error by a factor of 10, roughly 10 times as many
steps are required). This is observed in Figure 1(b). Meanwhile, the global error does
not decrease at all, since the number of steps taken (hence the number of roundoff
errors committed) is increasing just as fast as the local error is decreasing. This is
observed in Figure 1(a).
2.4. Effect of implementation on internal stability polynomials. In section 2.1.2, we gave an example showing how the internal stability functions of a
method could be modified by the choice of Shu–Osher implementation. It is natural
to ask just how much control over these functions is possible.
Given an s-stage ERK method, let Q^B_1, Q^B_2, …, Q^B_s denote the internal stability polynomials corresponding to the Butcher form implementation. Let dj denote the degree of Q^B_j and let wj denote the coefficient of z^{dj} in Q^B_j. Let Q^SO_1, Q^SO_2, …, Q^SO_s be any set of polynomials with the same degrees dj and leading coefficients wj, but otherwise arbitrary. Then it is typically possible to find a Shu–Osher implementation of the given method such that the internal stability polynomials are Q^SO_1, Q^SO_2, …, Q^SO_s.
How can this be done? Comparing (2.10) and (2.11), it is clear that, except for
the constant terms, the internal stability polynomials corresponding to a given Shu–
Osher implementation are linear combinations of the internal stability polynomials of
the Butcher implementation:
(2.20)    Q^SO(z) = Q^B(z)γ + αs+1 γ.
Here γ = (I − α1:s )−1 can be any lower-triangular matrix with unit entries on the
main diagonal. Given QB (z), one can choose γ to obtain the desired polynomials
QSO (z) except for the constant terms and then choose αs+1 to obtain the desired
constant terms. We have added the qualifier typically above because it is necessary
that successive subsets of the Q^B_j span the appropriate polynomial spaces.
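Identity (2.20) is straightforward to check numerically; the sketch below (our illustration) does so for the two-stage SSP2 method of section 2.1.2, using γ = (I − α1:s)⁻¹:

```python
import numpy as np

# Verify (2.20): Q^SO(z) = Q^B(z) gamma + alpha_{s+1} gamma, at z = 0.1,
# for the two-stage SSP2 method.
z = 0.1
A = np.array([[0., 0.], [1., 0.]])         # Butcher matrix
b = np.array([.5, .5])
QB = z * b @ np.linalg.inv(np.eye(2) - z * A)      # (2.11)

alpha_1s = np.array([[0., 0.], [1., 0.]])          # Shu-Osher form (2.15)
alpha_s1 = np.array([0., .5])
gamma = np.linalg.inv(np.eye(2) - alpha_1s)
QSO = QB @ gamma + alpha_s1 @ gamma                # (2.20)
```

The result QSO = (0.605, 0.55) agrees with direct evaluation of (2.9) for the Shu–Osher form, and its second component is Q2(z) = (1 + z)/2 from section 2.1.2.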
3. Internal amplification factors for explicit strong stability preserving
methods. In this section we prove bounds on and estimates of the internal amplification factors for optimal explicit SSP RK methods of orders 2 and 3. These methods
have extraordinarily good internal stability properties.
We begin with a result showing that no error amplification occurs when an SSP
method is applied to a contractive problem. Recall that an SSP RK method with SSP
coefficient C can be written in the canonical Shu–Osher form, where α = Cβ, αij ≥ 0
for all i, j, and the row sums of α are no greater than 1 [9].
Theorem 3.1. Suppose that F is contractive with respect to some seminorm ‖·‖:

(3.1)    ‖U + τF(U) − (Ũ + τF(Ũ))‖ ≤ ‖U − Ũ‖    for all U, Ũ and for 0 ≤ τ ≤ τ0.

Let an ERK method with SSP coefficient C > 0 be given in canonical Shu–Osher form α, β and applied with step size τ ≤ Cτ0. Then

(3.2)    ‖ǫn+1‖ ≤ ‖ǫn‖ + Σ_{j=1}^{s+1} ‖r̃j‖.
Proof. The proof proceeds along the usual lines of results for SSP methods. From (1.4) and (1.5), we have

Yi = vi Un + Σ_{j=1}^{i−1} αij (Yj + (τ/C)F(Yj)),
Ỹi = vi Ũn + Σ_{j=1}^{i−1} αij (Ỹj + (τ/C)F(Ỹj)) + r̃i.

Taking the difference of these two equations, we have

di = vi ǫn + Σ_{j=1}^{i−1} αij (dj + (τ/C)(F(Ỹj) − F(Yj))) + r̃i,

where d and ǫ are defined in section 2.1. Applying ‖·‖ and using convexity, (3.1), and (1.8a), we find

‖di‖ ≤ vi ‖ǫn‖ + Σ_{j=1}^{i−1} αij ‖dj + (τ/C)(F(Ỹj) − F(Yj))‖ + ‖r̃i‖
     ≤ vi ‖ǫn‖ + Σ_{j=1}^{i−1} αij ‖dj‖ + ‖r̃i‖
     ≤ max(‖ǫn‖, max_{1≤j≤i−1} ‖dj‖) + ‖r̃i‖.

Applying the last inequality successively for i = 1, 2, …, s + 1 shows that ‖di‖ ≤ ‖ǫn‖ + Σ_{j=1}^{i} ‖r̃j‖. In particular, for i = s + 1 this gives (3.2).
The above result is the only one in this work that applies directly to nonlinear
problems. It is useful since it can be applied under the same assumptions that make
SSP methods useful in general. On the other hand, SSP methods are very often
applied under circumstances in which the contractivity assumption is not justified—
for instance, in the time integration of WENO semidiscretizations. In the remainder
of this section, we develop bounds on the amplification factor for some common SSP
methods. Such bounds can be applied even when the contractivity condition is not
fulfilled.
3.1. Optimal second-order SSP methods. Here we study the family of optimal second-order SSP RK methods [9], corresponding to the most natural implementation. For any number of stages s ≥ 2, the optimal method is
(3.3)
Y_1 = U_n,
Y_j = Y_{j-1} + \frac{\tau}{s-1} F(Y_{j-1}),    2 \le j \le s,
U_{n+1} = \frac{1}{s} U_n + \frac{s-1}{s}\left(Y_s + \frac{\tau}{s-1} F(Y_s)\right).
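Method (3.3) is short enough to state in code. The following Python sketch (our illustration; the function and variable names are not from the paper) advances one step of (3.3), optionally injecting stage perturbations as in the perturbed scheme, and checks two facts: that for F(u) = λu one step multiplies u by P(z) = 1/s + ((s−1)/s)(1 + z/(s−1))^s, and that for the contractive problem F(u) = −u the stage errors are not amplified, as Theorem 3.1 guarantees.

```python
def ssprk2_step(F, u, tau, s, pert=None):
    """One step of the optimal s-stage, second-order SSP method (3.3).

    pert, if given, holds s+1 stage perturbations r_1, ..., r_{s+1}
    (modeling roundoff), injected as in the perturbed scheme.
    """
    r = pert if pert is not None else [0.0] * (s + 1)
    y = u + r[0]                                   # Y_1
    for i in range(1, s):                          # Y_2, ..., Y_s: Euler substeps
        y = y + tau / (s - 1) * F(y) + r[i]
    # convex combination closing the step
    return u / s + (s - 1) / s * (y + tau / (s - 1) * F(y)) + r[s]

# 1) Linear test problem F(u) = lam*u: one step multiplies u by
#    P(z) = 1/s + (s-1)/s * (1 + z/(s-1))**s,  with z = lam*tau.
s, lam, tau = 5, -1.3, 0.4
z = lam * tau
P = 1 / s + (s - 1) / s * (1 + z / (s - 1)) ** s
assert abs(ssprk2_step(lambda u: lam * u, 1.0, tau, s) - P) < 1e-13

# 2) Theorem 3.1: for the contractive problem F(u) = -u (step sizes here
#    satisfy the contractivity restriction), stage errors accumulate at
#    most additively: |eps_{n+1}| <= |eps_n| + sum_j |r_j|.
r = [(-1) ** i * 1e-8 for i in range(s + 1)]
clean = ssprk2_step(lambda u: -u, 1.0, tau, s)
dirty = ssprk2_step(lambda u: -u, 1.0, tau, s, pert=r)
assert abs(dirty - clean) <= sum(abs(ri) for ri in r) + 1e-18
```

The second check exercises exactly the mechanism of the proof above: each stage is a convex combination of contractive substeps, so no perturbation is ever magnified.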
Theorem 3.2. The maximum internal amplification factor for the method (3.3) satisfies

M_s^{SSP2} \le \frac{s+1}{s}.
Proof. It is convenient to define

(3.4)    \nu(z) = 1 + \frac{z}{s-1}.

Then the stability and internal stability functions are

P(z) = \frac{1}{s} + \frac{s-1}{s}\,\nu(z)^s,    Q_j(z) = \frac{s-1}{s}\,\nu(z)^{s-j+1}    (2 \le j \le s).

(For brevity, we omit the details of the derivation here, but we will give a detailed proof of the analogous SSP3 case in Lemma 3.3.) We have

S_s = \left\{ z \in \mathbb{C} : \left|\nu(z)^s + \frac{1}{s-1}\right| \le \frac{s}{s-1} \right\}.

Let z \in S_s be given; we will show that |Q_j(z)| \le \frac{s+1}{s} for each j. We see that \nu(z)^s lies in a disk of radius s/(s-1) centered at -1/(s-1). Hence

S_s \subset S'_s := \left\{ z \in \mathbb{C} : |\nu(z)^s| \le \frac{s+1}{s-1} \right\}.

For |\nu| \le 1 the desired result is immediate. For |\nu| > 1, we have

\max_{j=2,\ldots,s} \sup_{z \in S_s} |Q_j(z)| = \max_{j=2,\ldots,s} \sup_{z \in S_s} \frac{s-1}{s} |\nu(z)|^{s-j+1}
    \le \max_{j=1,\ldots,s} \sup_{z \in S'_s} \frac{s-1}{s} |\nu(z)|^{s-j+1}
    \le \sup_{z \in S'_s} \frac{s-1}{s} |\nu(z)|^s \le \frac{s-1}{s} \cdot \frac{s+1}{s-1} = \frac{s+1}{s}.

Some scaled and shifted stability regions are depicted in Figure 2.
Combining the above result with Theorem 2.3, one obtains full error estimates
for application to linear systems and linear PDEs.
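The bound of Theorem 3.2 is easy to probe numerically: sample points z, keep those with |P(z)| ≤ 1 (i.e., z ∈ S_s), and record the largest internal stability polynomial value found. A small Python sketch (ours, not from the paper):

```python
s = 5
nu = lambda z: 1 + z / (s - 1)
P = lambda z: 1 / s + (s - 1) / s * nu(z) ** s            # stability function
Q = lambda z, j: (s - 1) / s * nu(z) ** (s - j + 1)       # j = 2, ..., s

worst = 0.0
for re in range(-120, 11):                 # grid over [-12, 1] x [-6, 6]
    for im in range(-60, 61):
        z = complex(re / 10, im / 10)
        if abs(P(z)) <= 1:                 # z lies in the stability region S_s
            worst = max(worst, max(abs(Q(z, j)) for j in range(2, s + 1)))
print(worst)
assert worst <= (s + 1) / s + 1e-12        # Theorem 3.2: never above (s+1)/s
```

On this grid the recorded maximum exceeds 1 (amplification does occur inside S_s) but stays safely below the proven bound (s + 1)/s = 1.2.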
Fig. 2. Scaled and shifted stability regions (in the sense of (3.4)) of the optimal second-order SSP RK methods for s ∈ {2, 3, 5, 8}. The boundary curves on the plot are given by \left\{ \nu \in \mathbb{C} : \left|\frac{s-1}{s}\nu^s + \frac{1}{s}\right| = 1 \right\}. For any s, the corresponding region has s-fold rotational symmetry.
3.2. Optimal third-order SSP methods. In this section we give various upper and lower bounds on the maximum internal amplification factor for the optimal third-order SSP RK methods with s ≡ s_n := n² (n ≥ 2) stages that were proposed in [14]. The optimal third-order SSP RK method with s = n² stages can be written as follows:
(3.5)
Y_1 = U_n,
Y_j = Y_{j-1} + \frac{\tau}{n^2-n} F(Y_{j-1}),    2 \le j \le s, \; j \ne k_n,
Y_{k_n} = \frac{n}{2n-1} Y_{m_n} + \frac{n-1}{2n-1} Y_{k_n-1} + \frac{\tau}{n(2n-1)} F(Y_{k_n-1}),
U_{n+1} = Y_{s+1} = Y_s + \frac{\tau}{n^2-n} F(Y_s),

where n ≥ 2, k_n := \frac{n(n+1)}{2} + 1 and m_n := \frac{(n-1)(n-2)}{2} + 1.
Just as in the proof of Theorem 3.2, it is convenient to apply some normalization and introduce the scaling and shift

(3.6)    \nu_n(z) := 1 + \frac{z}{n^2-n}    (n \ge 2, \; z \in \mathbb{C}).
The following lemma (with a proof in [16]) shows that the stability polynomial and the internal stability polynomials of this method can simply be expressed in terms of ν_n.
Lemma 3.3. For any n ≥ 2, the stability function of the optimal third-order SSP RK method with s = n² stages is

P(z) = \frac{n-1}{2n-1}\,\nu_n(z)^{n^2} + \frac{n}{2n-1}\,\nu_n(z)^{(n-1)^2},

while the internal stability functions are

Q_j(z) =
  \frac{n-1}{2n-1}\,\nu_n(z)^{n^2-j+1} + \frac{n}{2n-1}\,\nu_n(z)^{(n-1)^2-j+1},    2 \le j \le m_n,
  \frac{n-1}{2n-1}\,\nu_n(z)^{n^2-j+1},    m_n+1 \le j \le k_n-1,
  \nu_n(z)^{n^2-j+1},    k_n \le j \le n^2.

This means that M_s^{SSP3} \equiv M_s^{SSP3}(S_s) = \max_{j=2,\ldots,s} \sup_{z \in S_s} |Q_j(z)| with s = n² (n ≥ 2) and

S_s = \left\{ z \in \mathbb{C} : \left|\frac{n-1}{2n-1}\,\nu_n(z)^{n^2} + \frac{n}{2n-1}\,\nu_n(z)^{(n-1)^2}\right| \le 1 \right\}.
First, the triangle inequality shows that {z ∈ \mathbb{C} : |ν_n(z)| ≤ 1} ⊂ S_s; the set {z ∈ \mathbb{C} : |ν_n(z)| ≤ 1} appears as the unit disk in Figure 3. By taking into account the explicit forms of the Q_j polynomials provided by Lemma 3.3, we see, again by the triangle inequality, that |Q_j(z)| ≤ 1 for all z ∈ \mathbb{C} with |ν_n(z)| ≤ 1. The following trivial corollary is established.
Fig. 3. Scaled and shifted stability regions (see (3.6)) of the optimal third-order SSP RK methods with n² stages for n ∈ {2, 3, 5, 8}. The boundary curves on the plot are given by \left\{ \nu \in \mathbb{C} : \left|\frac{n-1}{2n-1}\nu^{n^2} + \frac{n}{2n-1}\nu^{(n-1)^2}\right| = 1 \right\}. For any n, the corresponding region has (2n − 1)-fold rotational symmetry.
Table 3
Maximum internal amplification factors for the first few optimal third-order SSP RK methods with s = n² stages. The values in the table have been rounded up.

n  | M_s^{SSP3}
2  | 1.575
3  | 1.794
4  | 1.956
5  | 2.091
6  | 2.209
7  | 2.314
8  | 2.411
9  | 2.501
10 | 2.585
Corollary 3.4. We have

M_s^{SSP3}(\alpha, \beta, D_C) \le 1,

where C = n² − n and D_C is the disk {z ∈ \mathbb{C} : |z + C| ≤ C}.
As a consequence, in order to obtain estimates of M_s^{SSP3}(S_s), it is enough to bound |Q_j| only on the "petals" in Figure 3, that is, for z ∈ S_s with |ν_n(z)| ≥ 1. In the rest of this section, we just summarize our results: most of the proofs are technical; hence, due to space limitations, they are given in [16].
For any ϱ ≥ 1 and n ≥ 2 we define

\mu_n^-(\varrho) := -1 - \frac{n\,\varrho^{(n-1)^2}}{2n-1}\left(1 - \left(1 - \frac{1}{n}\right)\varrho^{2n-1}\right)

and denote by ν_n^* the unique root of μ_n^- in the interval [1, +∞). Then we prove that

M_s^{SSP3} = (\nu_n^*)^{(n^2-n)/2}    (s = n², n ≥ 2).

By computing ν_n^* numerically, M_s^{SSP3} can easily be determined for small values of n; see Table 3.
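The entries of Table 3 can be reproduced with a few lines of code. The sketch below (our illustration, standard library only; the function names are ours) locates ν_n^* by bisection on [1, 2] — a valid bracket here, since μ_n^−(1) < 0 < μ_n^−(2) for these n — and raises it to the power (n² − n)/2.

```python
def mu_minus(rho, n):
    """mu_n^-(rho) = -1 - n*rho^((n-1)^2)/(2n-1) * (1 - (1 - 1/n)*rho^(2n-1))."""
    return -1.0 - n * rho ** ((n - 1) ** 2) / (2 * n - 1) \
                * (1.0 - (1.0 - 1.0 / n) * rho ** (2 * n - 1))

def amplification_ssp3(n, tol=1e-13):
    """M_s^{SSP3} = (nu_n^*)^((n^2-n)/2), nu_n^* the root of mu_n^- in [1, 2]."""
    lo, hi = 1.0, 2.0                      # mu_minus(1,n) < 0 < mu_minus(2,n)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mu_minus(mid, n) < 0.0:
            lo = mid
        else:
            hi = mid
    return lo ** ((n * n - n) // 2)

for n in range(2, 7):
    print(n, amplification_ssp3(n))        # compare with Table 3 (rounded up)
```

For n = 2, for instance, the root condition reduces to ϱ⁴ − 2ϱ − 3 = 0, giving ν_2^* ≈ 1.5747 and M = (ν_2^*)¹ ≈ 1.575, in agreement with the table.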
For larger n values the following inequalities hold.
Copyright © by SIAM. Unauthorized reproduction of this article is prohibited.
Downloaded 11/03/14 to 109.171.137.210. Redistribution subject to SIAM license or copyright; see http://www.siam.org/journals/ojsa.php
2242
DAVID I. KETCHESON, LAJOS LÓCZI, AND MATTEO PARSANI
Theorem 3.5. For any n ≥ 9 and with s = n², the maximum internal amplification factor for the optimal s-stage, third-order SSP RK method satisfies

\frac{9}{10}\sqrt{\frac{n}{\ln(n)}} < \left(1 + \frac{\ln(n)}{n^2} - \frac{\ln(\ln(n))}{n^2}\right)^{(n^2-n)/2} < M_s^{SSP3} < \left(1 + \frac{\ln(n)}{n^2} - \frac{\ln(\ln(n))}{8n^2}\right)^{(n^2-n)/2} < \frac{\sqrt{n}}{\sqrt[16]{\ln(n)}}.
Remark 3.6. As a trivial consequence, for s = n² with n ≥ 4 we see that

M_s^{SSP3} < \sqrt[4]{s} = \sqrt{n}.
We illustrate the chain of inequalities in Theorem 3.5 for s = 10², s = 10⁴, and s = 10¹²:

1.875 < 1.927 < M_{10^2}^{SSP3} ≈ 2.585 < 2.661 < 3.002,
4.193 < 4.587 < M_{10^4}^{SSP3} ≈ 5.757 < 8.887 < 9.090,
242.135 < 269.038 < M_{10^{12}}^{SSP3} ≈ 302.551 < 848.642 < 848.647.
Moreover, if we assume that the asymptotic expansion of ν_n^* starts as

\nu_n^* \sim 1 + \frac{\ln(n)}{n^2} - \frac{\ln(\ln(n))}{n^2} - \frac{\ln(\ln(n))}{n^2\ln(n)} + \frac{2\ln(\ln(n))}{n^2\ln^2(n)}    (n \to +\infty),

a formula obtained as a result of a heuristic argument but not proved rigorously in [16], then we would have, for example,

\lim_{n\to+\infty} \frac{M_{n^2}^{SSP3}}{\sqrt{n/\ln(n)}} = 1.
4. Internal amplification factors of extrapolation methods. In this section we give values, estimates, and bounds for the maximum internal amplification factor (2.18) for two classes of extrapolation methods: explicit Euler (EE) extrapolation and explicit midpoint (EM) extrapolation, both of which can be interpreted as ERK methods. These methods are described in section 4.1. The internal stability polynomials are given explicitly in section 4.2. A brief summary of our findings and techniques is presented in section 4.3. Detailed bounds are provided in sections 4.4 and 4.5. Again, due to space limitations, in this work we only list the main theorems. For complete proofs of all results, see [16].

Throughout this section, we write M_p^{EE} (M_p^{EM}) to denote the maximum internal amplification factor of the order p EE (EM) extrapolation method.
4.1. The extrapolation algorithm. The extrapolation algorithm [10, section II.9] consists of two parts. In the first part, a base method and a step-number sequence are chosen. In general, the step-number sequence is a strictly monotone increasing sequence of positive integers n_j (j = 1, 2, ...). The base method is applied to compute multiple approximations to the ODE solution at time t_{n+1} based on the solution value at t_n: the first approximation, denoted by T_{1,1}, is obtained by dividing the interval
Algorithm 1. EE extrapolation.
for m = 1 → p do                                  ⊲ Compute first-order approximations
    Y_{m,0} = U_n
    for j = 1 → m do
        Y_{m,j} = Y_{m,j−1} + (τ/m) F(Y_{m,j−1})
    end for
    T_{m,1} = Y_{m,m}
end for
for k = 2 → p do                                  ⊲ Extrapolate to get higher order
    for j = k → p do
        T_{j,k} = T_{j,k−1} + (T_{j,k−1} − T_{j−1,k−1}) / (j/(j−k+1) − 1)   ⊲ Aitken–Neville formula for extrapolation to order k
    end for
end for
U_{n+1} = T_{p,p}                                 ⊲ New solution value
Algorithm 2. EM extrapolation.
r = p/2
for m = 1 → r do                                  ⊲ Compute second-order approximations
    Y_{m,0} = U_n
    Y_{m,1} = Y_{m,0} + (τ/(2m)) F(Y_{m,0})       ⊲ Initial Euler step
    for j = 2 → 2m do
        Y_{m,j} = Y_{m,j−2} + (τ/m) F(Y_{m,j−1})  ⊲ Midpoint steps
    end for
    T_{m,1} = Y_{m,2m}
end for
for k = 2 → r do                                  ⊲ Extrapolate to get higher order
    for j = k → r do
        T_{j,k} = T_{j,k−1} + (T_{j,k−1} − T_{j−1,k−1}) / (j²/(j−k+1)² − 1)  ⊲ Aitken–Neville formula for extrapolation to order 2k
    end for
end for
U_{n+1} = T_{r,r}                                 ⊲ New solution value
t_{n+1} − t_n into n_1 step(s); the second approximation, T_{2,1}, is obtained by dividing it into n_2 steps; and so on. In the second part of the extrapolation algorithm, these T_{m,1} values (that is, the low-order approximations) are combined by using the Aitken–Neville interpolation formula to get a higher-order approximation to the ODE solution at time t_{n+1}.

Here, the base method is the EE method (Algorithm 1) or the EM rule (Algorithm 2), and the step-number sequence is chosen to be the harmonic sequence n_j := j, being the "most economic" [10, formula (9.8)]. In the EM case, we also assume that the desired order of accuracy p is even. We do not consider the effects of smoothing [10].
4.2. The internal stability polynomials and the absolute stability region. By analyzing the perturbed scheme along the lines of Algorithms 1 and 2, we get the internal stability polynomials Q_{p,m,ℓ}; this time it is natural to use two indices, say, m and ℓ, to label them, with the dependence on p also indicated. The following results are proved in [16].
Lemma 4.1. For any p ≥ 2 and z ∈ \mathbb{C}, the internal stability polynomials of the EE extrapolation method are given by

Q_{p,m,\ell}^{EE}(z) = \frac{(-1)^{m+p}\, m^{p-1}}{(p-m)!\,(m-1)!}\left(1 + \frac{z}{m}\right)^{\ell-1}    (1 \le m \le p, \; 1 \le \ell \le m).
Lemma 4.2. For any even p ≥ 2 and z ∈ \mathbb{C}, the internal stability polynomials of the EM extrapolation method are given by

Q_{p,m,\ell}^{EM}(z) = \frac{2(-1)^{m+r}\, m^{2r}}{(r-m)!\,(r+m)!}\, q_{m,2m-\ell+1}(z)    (1 \le m \le r, \; 1 \le \ell \le 2m)

with r = p/2, and q_{m,j} is defined by q_{m,0}(z) := 0, q_{m,1}(z) := 1, and

q_{m,j}(z) := \frac{z}{m}\, q_{m,j-1}(z) + q_{m,j-2}(z)    (2 \le j \le 2m).
Finally, let S_p denote the absolute stability region of the EE or EM extrapolation method of order p. Then one easily verifies (see [16]) that

S_p = \left\{ z \in \mathbb{C} : \left|\sum_{k=0}^{p} \frac{z^k}{k!}\right| \le 1 \right\}.
4.3. Estimates of the maximum internal amplification factors. In sections 4.4 and 4.5 we give estimates of the absolute value of the internal stability polynomials Q_{p,m,ℓ}^{EE} and Q_{p,m,ℓ}^{EM} over three regions, starting with the entire stability region S_p. We know that S_p may include regions far into the right half-plane. In order to achieve tighter practical bounds, we also focus on the value of the internal stability polynomials over S_p ∩ \mathbb{C}_{-,0}, where \mathbb{C}_{-,0} denotes the set of complex numbers with negative or zero real part. Finally, to understand the internal stability of the methods with very small step sizes, we investigate a third quantity, M_p^{EE}({0}) and M_p^{EM}({0}), as well.
Using the explicit expressions for the internal stability polynomials, one can employ Mathematica to compute exact values of M_p(S) for small to moderate values of p, for both extrapolation methods and for the sets S = S_p, S = S_p ∩ \mathbb{C}_{-,0}, or S = {0}. Decimal approximations of these exact values are given in tables in the following sections. (In this work we present only two tables; the remaining ones can be found in [16].) The values for Euler extrapolation corroborate the behavior observed with this method in earlier sections. The values for midpoint extrapolation show that it has much better internal stability.
Then we give upper (and sometimes lower) bounds on the amplification factors for
arbitrary p values. For p sufficiently large, some improved estimates are also provided.
In [16] we indicate certain conjectured asymptotically optimal growth rates as well.
Although neither of the proved upper bounds is tight, these theorems again suggest
that midpoint extrapolation is much more internally stable than EE extrapolation.
We remark that all the bounds on M_p^{EE} and M_p^{EM} in the following sections are based on bounds on S_p and S_p ∩ \mathbb{C}_{-,0} of the form

z \in S_p \implies |z| \le c_1 p

or

z \in S_p \cap \mathbb{C}_{-,0} \implies |z| \le c_2 p

with suitable constants c₁ > 0 and c₂ > 0 presented in [16].
We make use of the Lambert W-function (ProductLog in Mathematica): for x ≥ −1/e, there is a unique W(x) ≥ −1 such that

(4.1)    x = W(x)\, e^{W(x)}.
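Outside Mathematica, W is easy to evaluate on [−1/e, +∞) by bisection, since w ↦ we^w is increasing there. The sketch below (ours, standard library only) recovers W(1/e) ≈ 0.2785, the constant whose reciprocal 1/W(1/e) ≈ 3.5911 underlies the decimal constants such as 9.34 and 3.592 in the bounds below.

```python
import math

def lambert_w(x, tol=1e-14):
    """Principal branch of W on [-1/e, +inf): the unique w >= -1 with w*e^w = x."""
    lo, hi = -1.0, 1.0
    while hi * math.exp(hi) < x:        # grow the bracket if needed
        hi *= 2.0
    while hi - lo > tol:                # bisection: w*e^w is increasing on [-1, inf)
        mid = 0.5 * (lo + hi)
        if mid * math.exp(mid) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

w = lambert_w(math.exp(-1))             # W(1/e)
assert abs(w * math.exp(w) - math.exp(-1)) < 1e-12   # defining equation (4.1)
assert abs(w - 0.278464) < 1e-5
assert 1 / w < 3.592                    # hence the constant 3.592 in Theorem 4.7
```

(Readers with SciPy installed can use `scipy.special.lambertw` instead; the hand-rolled version above keeps the snippet dependency-free.)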
4.4. Bounds on M_p^{EE}(S).

4.4.1. The case S = S_p. The first theorem gives an upper estimate of the maximum internal amplification factor over the whole absolute stability region and for all p ≥ 2.

Theorem 4.3. We have

M_2^{EE}(S_2) \le \frac{13e^2}{10\sqrt{\pi}} < 5.42,

while for any p ≥ 3

M_p^{EE}(S_p) \le \frac{\left(\frac{2.6}{W(1/e)}\right)^p}{5.2\pi\sqrt{p-1}} < \frac{9.34^p}{5.2\pi\sqrt{p-1}}.
In [16] we show how one can reduce the constant 9.34 to get the following asymptotically better bound.

Theorem 4.4. For any ε > 0 there is a p(ε) ∈ \mathbb{N}_+ such that for all p ≥ p(ε)

M_p^{EE}(S_p) \le \frac{\left(\frac{2+\varepsilon}{W(1/e)}\right)^p}{(4+2\varepsilon)\pi\sqrt{p-1}} < \frac{(7.19 + 3.6\varepsilon)^p}{(4+2\varepsilon)\pi\sqrt{p-1}}.
In the corresponding section of [16], two additional tables are presented: one with the exact values of M_p^{EE}(S_p) for small p (2 ≤ p ≤ 14), and its extension to the range 15 ≤ p ≤ 20 in the form of upper bounds on M_p^{EE}(S_p). These tables contain much better upper estimates than Theorem 4.3. In [16] we also propose a heuristic argument about the conjectured asymptotically optimal upper estimate of M_p^{EE}(S_p) being ≈ \frac{6.868^p}{2\pi p}.
4.4.2. The case S = S_p ∩ \mathbb{C}_{-,0}. The maximum internal amplification factors with z restricted to S_p ∩ \mathbb{C}_{-,0} play an important role in practical computations. The first few exact values of M_p^{EE}(S) are displayed in Table 4 for 2 ≤ p ≤ 14. In [16], we give the details on how these values were obtained. Moreover, by using some simple estimates, we extend Table 4 to the range 15 ≤ p ≤ 20 in the form of upper bounds on M_p^{EE}(S).
An estimate valid for all p ≥ 3 is cited next.

Theorem 4.5. For any p ≥ 3,

M_p^{EE}(S_p \cap \mathbb{C}_{-,0}) \le \frac{\left(\frac{1.95}{W(1/e)}\right)^p}{3.9\pi\sqrt{p-1}} < \frac{7.01^p}{3.9\pi\sqrt{p-1}}.
The counterpart of Theorem 4.4 is the following result.
Table 4
Maximum internal amplification factors for the first few EE extrapolation methods of order p with respect to the absolute stability region of the method in the left half of the complex plane. For p = 3 or p ≥ 6, the exact maximum values are rounded up. (The exact values corresponding to p ∈ {4, 5} are simple algebraic numbers.)

p  | M_p^{EE}(S_p ∩ \mathbb{C}_{-,0})
2  | \sqrt{2(1+\sqrt{2})} ≈ 2.198
3  | 6.192
4  | 51/2 = 25.5
5  | \left(47+\sqrt{65}\right)^{3/2}/\sqrt{18} ≈ 96.305
6  | 190.163
7  | 631.328
8  | 2549.961
9  | 11631.367
10 | 46860.486
11 | 98425.587
12 | 336910.368
13 | 1.444 · 10⁶
14 | 6.561 · 10⁶
Theorem 4.6. For any ε > 0 there is a p(ε) ∈ \mathbb{N}_+ such that for all p ≥ p(ε)

M_p^{EE}(S_p \cap \mathbb{C}_{-,0}) \le \frac{\left(\frac{1+\frac{1}{e}+\varepsilon}{W(1/e)}\right)^p}{\left(2+\frac{2}{e}+2\varepsilon\right)\pi\sqrt{p-1}} < \frac{(4.92 + 3.6\varepsilon)^p}{\left(2+\frac{2}{e}+2\varepsilon\right)\pi\sqrt{p-1}}.
Similarly to the previous section, in [16] we also sketch a heuristic argument about the conjectured asymptotically optimal upper estimate of M_p^{EE}(S_p ∩ \mathbb{C}_{-,0}), now ≈ 0.342 · \frac{3.885^p}{p}.
4.4.3. The case S = {0}. In this case M_p^{EE}(\{0\}) = \max_{1\le m\le p} \frac{m^p}{(p-m)!\,m!}. The following theorem gives a two-sided estimate of M_p^{EE}(\{0\}).
Theorem 4.7. For any p ≥ 4, we have

0.117 \cdot \frac{3.577^p}{p} < 2 \cdot \frac{\left(e\sqrt[4]{3}\right)^p}{\sqrt{3}\,e^2\,p} \le M_p^{EE}(\{0\}) \le \frac{\left(\frac{1}{W(1/e)}\right)^p}{2\pi\sqrt{p-1}} < \frac{3.592^p}{2\pi\sqrt{p-1}}.
One can also find the first few exact values of M_p^{EE}(\{0\}) for 2 ≤ p ≤ 20 in [16], along with the p → +∞ asymptotic growth rate of the numbers M_p^{EE}(\{0\}). In addition, by taking into account the obvious relation

M_p^{EE}(S_p) \ge M_p^{EE}(S_p \cap \mathbb{C}_{-,0}) \ge M_p^{EE}(\{0\}),

Theorem 4.7 also provides us with some lower estimates of M_p^{EE}(S_p) or M_p^{EE}(S_p ∩ \mathbb{C}_{-,0}).
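The closed form at z = 0 from section 4.4.3 is easy to tabulate; by the relation above, each value is also a lower bound for the corresponding entry of Table 4. A short sketch (ours, standard library only):

```python
import math

def m_ee_zero(p):
    """M_p^{EE}({0}) = max_{1<=m<=p} m^p / ((p-m)! m!)  (section 4.4.3)."""
    return max(m ** p / (math.factorial(p - m) * math.factorial(m))
               for m in range(1, p + 1))

assert m_ee_zero(2) == 2.0      # attained at m = 2
assert m_ee_zero(3) == 4.5      # attained at m = 3
assert m_ee_zero(4) == 13.5     # attained at m = 3
# lower bound for the corresponding Table 4 entry, e.g. 13.5 <= 25.5 for p = 4
assert m_ee_zero(4) <= 25.5
```

Even restricted to z = 0 the growth is exponential in p, which is the content of the two-sided estimate in Theorem 4.7.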
4.5. Bounds on M_p^{EM}(S). Turning to midpoint extrapolation, in this section p ≥ 2 denotes an even integer and we let r = p/2. The corresponding section of [16] contains all the details, proofs, and tables mentioned below.
4.5.1. The case S = S_p. A theorem for all p and a result with an asymptotically better bound are presented first.

Theorem 4.8. For any 1 ≤ r = p/2 ≤ 8,

M_p^{EM}(S_p) < \frac{2}{\pi} \cdot \frac{4.74^p}{\sqrt{p}},
Table 5
Maximum internal amplification factors for the first few EM extrapolation methods of order p with respect to the absolute stability region of the method in the left half of the complex plane. For p ≥ 4, the exact maximum values are rounded up. The algebraic degrees of the entries corresponding to p = 2, 4, 6, 8 and returned by Mathematica are 4, 38, 5, 7, respectively.

p | M_p^{EM}(S_p ∩ \mathbb{C}_{-,0})
2 | \sqrt{2(1+\sqrt{2})} ≈ 2.198
4 | 7.332
6 | 25.378
8 | 88.755
while for any r ≥ 9 we have

M_p^{EM}(S_p) < \frac{4.986^p}{\pi\sqrt{p-1}}.
Theorem 4.9. For any ε > 0 there is a p(ε) ∈ \mathbb{N}_+ such that for all even p ≥ p(ε)

M_p^{EM}(S_p) < \frac{(3.539 + \varepsilon)^p}{\pi\sqrt{p-1}}.
The exact values of M_p^{EM}(S_p) for even 2 ≤ p ≤ 8, together with some further upper bounds on M_p^{EM}(S_p) extended up to p = 20, are also given in our cited work.
4.5.2. The case S = S_p ∩ \mathbb{C}_{-,0}. As for the estimates in the left half-plane, the exact values of M_p^{EM}(S) for even 2 ≤ p ≤ 8 are contained in Table 5. As an extension, [16] presents some upper bounds up to p = 20, including the estimate M_{10}^{EM}(S_{10} ∩ \mathbb{C}_{-,0}) ≤ 836.
Concerning larger p values, we have the following two theorems.

Theorem 4.10. For any even p ≥ 12,

M_p^{EM}(S_p \cap \mathbb{C}_{-,0}) < \frac{3.423^p}{\pi\sqrt{p-1}}.
Theorem 4.11. For any ε > 0 there is a p(ε) ∈ \mathbb{N}_+ such that for all even p ≥ p(ε)

M_p^{EM}(S_p \cap \mathbb{C}_{-,0}) < \frac{(2.157 + \varepsilon)^p}{\pi\sqrt{p-1}}.
4.5.3. The case S = {0}. In [16] we list the exact values of M_p^{EM}(\{0\}) for 2 ≤ p ≤ 20, as well as a p → +∞ asymptotic formula M_p^{EM}(\{0\}) \sim 1.1524 \cdot \frac{1.509^p}{p}.
5. Conclusion. Roundoff is not usually a significant source of error in traditional
RK methods with typical error tolerances and double-precision arithmetic. However,
when very high accuracy is required, it is natural to use very high order methods,
which necessitates the use of large numbers of stages and can lead to substantial
amplification of roundoff errors. This amplification is problematic precisely when
high precision is desired. In fact, traditional error bounds and estimates that neglect
roundoff error become useless in this situation.
In the past, internal error amplification had been a practical issue only in rare cases. However, current trends toward the use of higher-order methods and higher-accuracy computations suggest that it may be a more common concern in the future.
The analysis and bounds given in this work can be used to accurately estimate at what
tolerance roundoff will become important in a given computation. More generally, the
maximum internal amplification factor that we have defined provides a single useful
metric for deciding whether internal stability may be a concern in a given computation.
We have emphasized that internal amplification depends on implementation and
that the choice of implementation can be used to modify the internal stability polynomials. However, it is not yet clear whether dramatic improvements in internal stability
can be achieved in this manner for methods of interest.
For implicit RK methods, internal stability may become important even when
the number of stages is moderate. The numerical solution of the stage equations is
usually performed iteratively and stopped when the stage errors are estimated to be
“sufficiently small.” If the amplification factor is large, then the one-step error also
may be large even if the stage errors (and truncation error) are driven iteratively
to small values. A study of the amplification of solver errors for practical implicit
methods constitutes interesting future work.
Acknowledgment. We are indebted to Yiannis Hadjimichael for input into the
bounds on the internal stability polynomials for third-order SSP methods. We thank
Umair bin Waheed for running computations that led to the example in section 1.2.
We also thank the referees for their suggestions that improved the presentation of the
material and prompted us to include Theorem 3.1.
REFERENCES
[1] S. Abarbanel, D. Gottlieb, and M. H. Carpenter, On the removal of boundary errors
caused by Runge–Kutta integration of nonlinear partial differential equations, SIAM J.
Sci. Comput., 17 (1996), pp. 777–782.
[2] P. Bogacki and L. F. Shampine, An efficient Runge–Kutta (4, 5) pair, Comput. Math. Appl.,
32 (1996), pp. 15–28.
[3] J. C. Butcher, Numerical Methods for Ordinary Differential Equations, Wiley, New York,
2008.
[4] M. H. Carpenter, D. Gottlieb, S. Abarbanel, and W.-S. Don, The theoretical accuracy of
Runge–Kutta time discretizations for the initial boundary value problem: A study of the
boundary error, SIAM J. Sci. Comput., 16 (1995), pp. 1241–1252.
[5] A. Dutt, L. Greengard, and V. Rokhlin, Spectral deferred correction methods for ordinary
differential equations, BIT, 40 (2000), pp. 241–266.
[6] E. Fehlberg, Klassische Runge–Kutta-Formeln fünfter und siebenter Ordnung mit
Schrittweiten-Kontrolle, Computing, 4 (1969), pp. 93–106.
[7] L. Ferracina and M. N. Spijker, An extension and analysis of the Shu–Osher representation
of Runge–Kutta methods, Math. Comp., 74 (2004), pp. 201–220.
[8] R. Frank, J. Schneid, and C. W. Ueberhuber, Stability properties of implicit Runge–Kutta
methods, SIAM J. Numer. Anal., 22 (1985), pp. 497–514.
[9] S. Gottlieb, D. I. Ketcheson, and C.-W. Shu, Strong Stability Preserving Runge–Kutta and
Multistep Time Discretizations, World Scientific, Singapore, 2011.
[10] E. Hairer, S. P. Nørsett, and G. Wanner, Solving Ordinary Differential Equations I:
Nonstiff Problems, 2nd ed., Springer Ser. Comput. Math., Springer, New York, 2010.
[11] K. Heun, Neue Methoden zur approximativen Integration der Differentialgleichungen einer
unabhängigen Veränderlichen, Z. Math. Phys, 45 (1900), pp. 23–38.
[12] I. Higueras, Representations of Runge–Kutta methods and strong stability preserving methods,
SIAM J. Numer. Anal., 43 (2005), pp. 924–948.
[13] T. E. Hull, W. H. Enright, B. M. Fellen, and A. E. Sedgwick, Comparing numerical
methods for ordinary differential equations, SIAM J. Numer. Anal., 9 (1972), pp. 603–637.
[14] D. I. Ketcheson, Highly efficient strong stability-preserving Runge–Kutta methods with low-storage implementations, SIAM J. Sci. Comput., 30 (2008), pp. 2113–2136.
[15] D. I. Ketcheson and U. bin Waheed, A Theoretical Comparison of High Order Explicit
Runge–Kutta, Extrapolation, and Deferred Correction Methods, CAMCoS, 9 (2014),
pp. 175–200.
[16] D. I. Ketcheson, L. Lóczi, and M. Parsani, Propagation of Internal Errors in Explicit Runge–Kutta Methods and Internal Stability of SSP and Extrapolation Methods,
arXiv:1309.1317, 2013.
[17] W. Kutta, Beitrag zur näherungweisen Integration totaler Differentialgleichungen, Z. Math.
Phys., 46 (1901), pp. 435–453.
[18] R. H. Merson, An operational method for the study of integration processes, in Proceedings of
Conference on Data Processing and Automatic Computing Machines at Weapons Research
Establishments Salisbury, South Australia, 1957, pp. 1–25.
[19] J. Niegemann, R. Diehl, and K. Busch, Efficient low-storage Runge–Kutta schemes with optimized stability regions, J. Comput. Phys., 231 (2012), pp. 364–372.
[20] M. Parsani, D. I. Ketcheson, and W. Deconinck, Optimized explicit Runge–Kutta schemes
for the spectral difference method applied to wave propagation problems, SIAM J. Sci.
Comput., 35 (2013), pp. A957–A986.
[21] P. J. Prince and J. R. Dormand, High order embedded Runge–Kutta formulae, J. Comput.
Appl. Math., 7 (1981), pp. 67–75.
[22] J. M. Sanz-Serna and J. G. Verwer, Stability and convergence at the PDE/stiff ODE interface, Appl. Numer. Math., 5 (1989), pp. 117–132.
[23] J. M. Sanz-Serna, J. G. Verwer, and W. H. Hundsdorfer, Convergence and order reduction
of Runge–Kutta schemes applied to evolutionary problems in partial differential equations,
Numer. Math., 50 (1986), pp. 405–418.
[24] L. F. Shampine, Stability of explicit Runge–Kutta methods, Comput. Math. Appl., 10 (1984),
pp. 419–432.
[25] C.-W. Shu and S. Osher, Efficient implementation of essentially non-oscillatory shock-capturing schemes, J. Comput. Phys., 77 (1988), pp. 439–471.
[26] P. J. van der Houwen and B. P. Sommeijer, On the Internal Stability of Explicit, m-Stage
Runge–Kutta Methods for Large m-Values, ZAMM Z. Angew. Math. Mech., 60 (1980),
pp. 479–485.
[27] J. G. Verwer, A class of stabilized three-step Runge–Kutta methods for the numerical integration of parabolic equations, J. Comput. Appl. Math., 3 (1977), pp. 155–166.
[28] J. G. Verwer, Explicit Runge–Kutta methods for parabolic partial differential equations,
Appl. Numer. Math., 22 (1996), pp. 359–379.
[29] J. G. Verwer, W. H. Hundsdorfer, and B. P. Sommeijer, Convergence properties of the
Runge–Kutta–Chebyshev method, Numer. Math., 57 (1990), pp. 157–178.