Advanced Numerical Methods Analysis

3 Formulation and analysis of a class of direct implicit integration methods
Higinio Ramos
Key words: error analysis, stability analysis, Falkner implicit methods, direct integration methods, second-order initial-value problems, predictor-corrector modes
3.1 Introduction
H. Ramos
Grupo de Computación Cientı́fica. Universidad de Salamanca, Escuela Politécnica Superior de
Zamora, Campus Viriato. 49022-Zamora. Spain
Tel.: +034-80-545000
Fax: +034-80-545002
e-mail: higra@usal.es
The Falkner methods [9] are a class of schemes that may be used for this purpose. The explicit Falkner method of $k$ steps consists of a pair of formulas that can be written in the form [5]
$$y_{n+1} = y_n + h\, y'_n + h^2 \sum_{j=0}^{k-1} \beta_j \nabla^j f_n\,, \qquad (3.2)$$
$$y'_{n+1} = y'_n + h \sum_{j=0}^{k-1} \gamma_j \nabla^j f_n\,, \qquad (3.3)$$
where $h$ is the stepsize, $y_n$ and $y'_n$ are approximations to the values of the solution and its derivative at $x_n = x_0 + n h$, $f_n = f(x_n, y_n)$, and $\nabla^j f_n$ is the standard notation for the backward differences. The coefficients $\beta_j$ and $\gamma_j$ can be obtained using the generating functions
$$G_\beta(t) = \sum_{j=0}^{\infty} \beta_j\, t^j = \frac{t + (1-t)\log(1-t)}{(1-t)\log^2(1-t)}\,,$$
$$G_\gamma(t) = \sum_{j=0}^{\infty} \gamma_j\, t^j = \frac{-t}{(1-t)\log(1-t)}\,,$$
which have been obtained in a similar way to those for the Störmer or Cowell methods (see [14], p. 291).
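The coefficients $\beta_j$ and $\gamma_j$ can be recovered directly from these generating functions by Taylor expansion. The following sketch (my own illustration, not code from the chapter) does this with sympy; the first few expected values, $\beta_0 = 1/2$, $\beta_1 = 1/6$, $\beta_2 = 1/8$ and $\gamma_0 = 1$, $\gamma_1 = 1/2$, $\gamma_2 = 5/12$, follow from expanding the formulas above.

```python
# Sketch: recover the Falkner coefficients beta_j and gamma_j by expanding
# the generating functions G_beta(t) and G_gamma(t) around t = 0.
import sympy as sp

t = sp.symbols('t')
n = 6  # number of coefficients to compute

G_beta = (t + (1 - t) * sp.log(1 - t)) / ((1 - t) * sp.log(1 - t)**2)
G_gamma = -t / ((1 - t) * sp.log(1 - t))

def maclaurin_coeffs(G, n):
    """Coefficients of t^0, ..., t^(n-1) in the Maclaurin expansion of G."""
    s = sp.series(G, t, 0, n).removeO()
    return [s.coeff(t, j) for j in range(n)]

print("beta :", maclaurin_coeffs(G_beta, n))   # 1/2, 1/6, 1/8, ...
print("gamma:", maclaurin_coeffs(G_gamma, n))  # 1, 1/2, 5/12, 3/8, ...
```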
The implicit Falkner method of $k$ steps consists of two formulas that may be written as [5]
$$y_{n+1} = y_n + h\, y'_n + h^2 \sum_{j=0}^{k} \beta_j^* \nabla^j f_{n+1}\,, \qquad (3.4)$$
$$y'_{n+1} = y'_n + h \sum_{j=0}^{k} \gamma_j^* \nabla^j f_{n+1}\,. \qquad (3.5)$$
Note that the formulas in (3.3) and (3.5) are respectively the Adams-Bashforth and Adams-Moulton schemes for the problem $(y')' = f(x, y)$, which are used to advance the values of the derivative. All the above formulas are of multistep type, specifically $k$-step formulas, and so $k$ initial values must be provided in order to proceed with the methods (Runge-Kutta methods are commonly used to obtain the starting values). In this paper, a rigorous analysis of the errors involved in the formulation of the implicit procedures in predictor-corrector (P-C) modes with fixed step size is made (for a variable step size version of these methods see [30], and for an adapted version of the methods in [23] in the case of oscillatory systems see [21] and [20]; a new technique for changing the step size in the Adams methods has recently appeared in [2]; other approaches concerning different formulations of Falkner-type methods have appeared in [24] and [25]).
The implicit formulas in (3.4)-(3.5) may be used to generate methods for solving the I.V.P. in (3.1). A well-known procedure of this type is the Wilson method [32], which belongs to the Newmark family and is commonly used in molecular dynamics calculations. This method uses the two-step formula in (3.4), given by
$$y_{n+1} = y_n + h\, y'_n + \frac{h^2}{6}\left(2 y''_n + y''_{n+1}\right) \qquad (3.6)$$
to obtain the positions, while the formula to update the velocities is the two-step
method in (3.5) given by
$$y'_{n+1} = y'_n + \frac{h}{2}\left(y''_n + y''_{n+1}\right)\,. \qquad (3.7)$$
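A pair such as (3.6)-(3.7) can be advanced one step as in the following sketch (my own illustration, not code from the chapter): the implicit position formula (3.6) is solved here by a simple fixed-point iteration on $y_{n+1}$, and the velocity then follows explicitly from (3.7).

```python
# Sketch: one step of the pair (3.6)-(3.7) for y'' = f(x, y).
# The position formula is implicit, so it is solved by fixed-point iteration.
import numpy as np

def wilson_type_step(f, x_n, y_n, dy_n, h, tol=1e-12, max_iter=50):
    """Advance (y, y') from x_n to x_n + h using formulas (3.6) and (3.7)."""
    f_n = f(x_n, y_n)                               # y''_n
    x_new = x_n + h
    y_new = y_n + h * dy_n + 0.5 * h**2 * f_n       # crude starting guess
    for _ in range(max_iter):
        y_next = y_n + h * dy_n + h**2 / 6.0 * (2.0 * f_n + f(x_new, y_new))  # (3.6)
        if np.max(np.abs(y_next - y_new)) < tol:
            y_new = y_next
            break
        y_new = y_next
    dy_new = dy_n + 0.5 * h * (f_n + f(x_new, y_new))                          # (3.7)
    return y_new, dy_new

# usage: one step of size 0.1 for y'' = -y, y(0) = 1, y'(0) = 0
y1, dy1 = wilson_type_step(lambda x, y: -y, 0.0, np.array([1.0]), np.array([0.0]), 0.1)
```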
Another scheme that uses a Falkner formula is a method due to Gear [11], which consists of the two implicit formulas given by
$$y_{n+1} = y_n + h\, y'_n + \frac{h^2}{12}\left(y''_{n+1} + 6 y''_n - y''_{n-1}\right)\,, \qquad (3.8)$$
$$y'_{n+1} = y'_n + \frac{h}{12}\left(5 y''_{n+1} + 8 y''_n - y''_{n-1}\right)\,, \qquad (3.9)$$
where the second formula is the three-step method in (3.5), but the first of these formulas is not the three-step one in (3.4).
In recent years there seems to be a competition to find the numerical method that performs best in solving different types of problems. This is not the goal of this article; in some cases the proposed methods will do better and in other cases they will perform worse than other specialized methods. The question is rather how these general-purpose methods can be formulated and why they do or do not work, developing an analysis of the errors involved and of the stability characteristics. In any case, we can say that the methods presented here may constitute a reliable alternative to other methods in the literature for solving special second-order problems.
In the application of P-C modes, P indicates the application of the explicit method, in
our case the predictor given by the formula in (3.2), and C indicates the application
of the implicit method, that is, the corrector given by the formula in (3.4). For the
derivative, we will use P' to indicate the application of the explicit formula in (3.3), and C' to indicate the application of the implicit formula in (3.5). Finally, E refers to the evaluation of the function $f$ in order to use this value either for corrections or in the next step.
Two different implementations for the explicit methods were considered in [23],
and for the implicit methods three implementations will be considered here. They
are summarized in what follows.
The first formulation of the explicit Falkner method on each step for solving the problem in (3.1) consists of:
1. Evaluate $y_{n+1}$ using the formula in (3.2).
2. Evaluate $y'_{n+1}$ using the formula in (3.3).
3. Evaluate $f_{n+1} = f(x_{n+1}, y_{n+1})$.
The second formulation of the explicit Falkner method on each step for solving the problem in (3.1) reads:
1. Evaluate $y_{n+1}$ using the formula in (3.2).
2. Evaluate $f_{n+1} = f(x_{n+1}, y_{n+1})$.
3. Evaluate $y'_{n+1}$ using the formula in (3.5),
which can be accomplished due to the absence of the derivative in the function $f = f(x, y(x))$. Thus, having obtained the value $y_{n+1}$, it is straightforward to obtain $f_{n+1}$ to be used in the formula in (3.5). Note that in this way the "implicit formula" in (3.5) is no longer implicit, resulting in an explicit formulation of the method.
For the PP'E mode the accumulated truncation error is of order $O(h^k)$, while for the PEC' mode it is of order $O(h^{k+1})$ (for details see [23]). Thus, the second formulation of the explicit Falkner method (the PEC' mode) provides a better performance.
The first choice for the implementation of the implicit Falkner method is given by
Obviously, many more choices could have been considered, taking formulas with different numbers of steps and, therefore, different orders. Specifically, we have considered a predictor with $k+1$ steps and a corrector with $k$ steps, but we have not obtained any improvement over the methods with $k$ steps in both the predictor and the corrector. Alternatively, further corrections could have been considered, obtaining methods of the form $P'P(EC)^nE$, $P(EC'C)^nE$ or $P(EC)^nEC'$. We have carried out different experiments, and in the latter case the interval of stability varies, resulting in a slightly larger interval, but with so little difference that the extra computational effort is not worthwhile. We have considered only P-C methods with the same number of steps in all the formulas (as was done by Shampine and Gordon [26]), where only one correction is performed, as is usually done in the Adams methods in P-C mode (see [18]).
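To fix ideas, the following sketch (my own reading of the PECEC' mode named FI[3]k later in the chapter, not code from the chapter) shows how one predictor-corrector step of the implicit Falkner pair can be organized; it reuses backward_differences from the previous sketch, and beta_star denotes the coefficients $\beta^*_0,\dots,\beta^*_k$ of (3.4). Storing the value of $f$ evaluated at the corrected $y_{n+1}$ for the next step is an assumption of this sketch (see Remark 2 below on the role of the final evaluation).

```python
# Sketch of one PECEC'-type step of the k-step implicit Falkner pair:
#   P : predict y_{n+1} with the explicit formula (3.2)
#   E : evaluate f at the predicted value
#   C : correct y_{n+1} with the implicit formula (3.4)
#   E : re-evaluate f at the corrected value
#   C': obtain y'_{n+1} with the implicit formula (3.5)
def falkner_pc_step(f, x_n, y_n, dy_n, f_hist, h, beta, beta_star, gamma_star):
    """f_hist = [f_n, ..., f_{n-k+1}]; beta has k entries, beta_star and gamma_star k+1."""
    x_new = x_n + h
    nab_n = backward_differences(f_hist)
    y_pred = y_n + h * dy_n + h**2 * sum(b * d for b, d in zip(beta, nab_n))        # P
    f_pred = f(x_new, y_pred)                                                       # E
    nab_p = backward_differences([f_pred] + f_hist)
    y_corr = y_n + h * dy_n + h**2 * sum(b * d for b, d in zip(beta_star, nab_p))   # C
    f_corr = f(x_new, y_corr)                                                       # E
    nab_c = backward_differences([f_corr] + f_hist)
    dy_corr = dy_n + h * sum(g * d for g, d in zip(gamma_star, nab_c))              # C'
    return y_corr, dy_corr, f_corr
```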
Consider the formula that results from approximating the function $f$ in the exact relation
$$y(x_{n+1}) = y(x_n) + h\, y'(x_n) + \int_{x_n}^{x_{n+1}} f(x, y(x))\,(x_{n+1} - x)\, dx \qquad (3.10)$$
by the interpolating polynomial on the grid points $x_{n-(k-1)}, \dots, x_n$, and also consider the formula in (3.2). Using the localization hypothesis that $y(x_j) = y_j$, $j = n-(k-1), \dots, n$, and following a procedure similar to that for the Adams formulas [18], we obtain that the local truncation error for the method in (3.2) is given by
$$L_P[y(x_n); h] = y(x_{n+1}) - y^P_{n+1} = h^{k+2}\, y^{(k+2)}(\xi)\, \beta_k\,. \qquad (3.11)$$
Similarly, for the formula in (3.3) the local truncation error reads
$$L_{P'}[y'(x_n); h] = y'(x_{n+1}) - y'^{\,P}_{n+1} = h^{k+1}\, y^{(k+2)}(\psi)\, \gamma_k\,, \qquad (3.12)$$
for the implicit formula in (3.4) it is
$$L_C[y(x_n); h] = y(x_{n+1}) - y^C_{n+1} = h^{k+3}\, y^{(k+3)}(\xi)\, \beta^*_{k+1}\,, \qquad (3.13)$$
and similarly, for the formula in (3.5) the local truncation error is
$$L_{C'}[y'(x_n); h] = y'(x_{n+1}) - y'^{\,C}_{n+1} = h^{k+2}\, y^{(k+3)}(\psi)\, \gamma^*_{k+1}\,. \qquad (3.14)$$
Assuming that we have to integrate the problem in (3.1) on the interval [x0 , xN ] and
that we know in advance the starting values needed to apply the numerical scheme,
we proceed by analyzing the accumulated errors on each successive application of
the method along the grid points on the integration interval. We use a superscript P
to indicate the application of the explicit method and a superscript C to indicate the
application of the implicit one, regardless of whether the formula is for the solution
or for the derivative.
Assuming the localization hypothesis and using the formulas for the local truncation errors in (3.11) and (3.12), for the first step ($n = 0$) we have that the differences between the true values, $y(x_1)$, $y'(x_1)$, and the approximate ones, $y^P_1$, $y'^{\,P}_1$, are given respectively by the local truncation errors, that is,
$$y(x_1) - y^P_1 = h^{k+2}\, y^{(k+2)}(\xi_1)\, \beta_k\,, \qquad y'(x_1) - y'^{\,P}_1 = h^{k+1}\, y^{(k+2)}(\psi_1)\, \gamma_k\,, \qquad (3.15)$$
where, for convenience, in the sequel the $\xi_j$ and $\psi_j$ will denote appropriate internal points.
After evaluating $f(x_1, y^P_1)$, using the Mean Value Theorem we can write
$$f(x_1, y(x_1)) - f^P_1 = \frac{\partial f}{\partial y}(x_1, \xi_1)\left(y(x_1) - y^P_1\right). \qquad (3.16)$$
Then, the application of the corrector in (3.4) gives
$$\begin{aligned}
y(x_1) - y^C_1 &= y(x_0) + h\, y'(x_0) + h^2 \sum_{j=0}^{k} \beta^*_j \nabla^j f(x_1, y(x_1)) + h^{k+3}\, y^{(k+3)}(\xi_1)\, \beta^*_{k+1}\\
&\quad - \left( y_0 + h\, y'_0 + h^2 \sum_{j=0}^{k} \beta^*_j \nabla^j f^P_1 \right)\\
&= h^{k+3}\, y^{(k+3)}(\xi_1)\, \beta^*_{k+1} + O(h^{k+4})\,. \qquad (3.17)
\end{aligned}$$
After evaluating $f(x_1, y^C_1)$, for the next step ($n = 1$) we have that
$$y'(x_2) - y'^{\,P}_2 = y'(x_1) + h \sum_{j=0}^{k-1} \gamma_j \nabla^j f(x_1, y(x_1)) + h^{k+1}\, y^{(k+2)}(\psi_2)\, \gamma_k - \left( y'^{\,P}_1 + h \sum_{j=0}^{k-1} \gamma_j \nabla^j f^C_1 \right)\,, \qquad (3.18)$$
$$y(x_2) - y^P_2 = h^{k+2}\, \beta_k\, y^{(k+2)}(\xi_2) + h^{k+2}\, y^{(k+2)}(\xi_1)\, \gamma_k + O(h^{k+3})\,, \qquad (3.20)$$
$$+\, O(h^{k+4})\,. \qquad (3.21)$$
Repeating the procedure along the nodes on the integration interval we determine that the accumulated error at the final point $x_N$ is given by
$$y(x_N) - y^C_N = h^{k+2}\, \gamma_k \sum_{j=1}^{N-1} (N-j)\, y^{(k+2)}(\psi_j) + h^{k+3}\, \beta^*_{k+1} \sum_{j=1}^{N} y^{(k+3)}(\xi_j) + O(h^{k+4})\,, \qquad (3.22)$$
Assuming that the derivatives $y^{(k+2)}(x)$ and $y^{(k+3)}(x)$ are continuous, after using the Mean Value Theorem, the above formulas may be rewritten as follows:
$$y(x_N) - y^C_N = \frac{1}{2}\, h^{k}\, \gamma_k\, y^{(k+2)}(\psi)\,(x_N - x_0)(x_N - x_1) + h^{k+2}\, \beta^*_{k+1}\, y^{(k+3)}(\xi)\,(x_N - x_0) + O(h^{k+4}) \qquad (3.24)$$
and
$$y'(x_N) - y'^{\,P}_N = h^{k}\, \gamma_k\, (x_N - x_0)\, y^{(k+2)}(\psi) + O(h^{k+2})\,. \qquad (3.25)$$
In this mode, for the first step (n = 0), after the application of the predictor P and
the corrector C we have the same results as before, that is,
and
$$y(x_1) - y^C_1 = h^{k+3}\, y^{(k+3)}(\xi_1)\, \beta^*_{k+1} + O(h^{k+4})\,.$$
Now the difference from the previous mode lies in the application of the corrector C' to obtain the approximation for the derivative. We have
$$\begin{aligned}
y'(x_1) - y'^{\,C}_1 &= y'(x_0) + h \sum_{j=0}^{k} \gamma^*_j \nabla^j f(x_1, y(x_1)) + h^{k+2}\, y^{(k+3)}(\psi_1)\, \gamma^*_{k+1} - \left( y'^{\,C}_0 + h \sum_{j=0}^{k} \gamma^*_j \nabla^j f^P_1 \right)\\
&= h^{k+2}\, y^{(k+3)}(\psi_1)\, \gamma^*_{k+1} + O(h^{k+3})\,. \qquad (3.26)
\end{aligned}$$
After the evaluation of $f(x_1, y^C_1)$ and the application of the predictor P to obtain an estimate $y^P_2$ for $y(x_2)$, the application of the corrector C results in
$$y(x_2) - y^C_2 = h^{k+3}\, \beta^*_{k+1}\left( y^{(k+3)}(\xi_1) + y^{(k+3)}(\xi_2) \right) + h^{k+3}\, y^{(k+3)}(\psi_1)\, \gamma^*_{k+1} + O(h^{k+4})\,, \qquad (3.27)$$
Repeating the procedure along the nodes on the integration interval we obtain that the accumulated error at the final point $x_N$ is given by
$$y(x_N) - y^C_N = h^{k+3}\, \gamma^*_{k+1} \sum_{j=1}^{N-1} (N-j)\, y^{(k+3)}(\psi_j) + h^{k+3}\, \beta^*_{k+1} \sum_{j=1}^{N} y^{(k+3)}(\xi_j) + O(h^{k+4})\,, \qquad (3.29)$$
Assuming that y(k+3) (x) is continuous, the above formulas for the accumulated
errors at the final point xN may be written as
$$y(x_N) - y^C_N = \frac{1}{2}\, h^{k+1}\, \gamma^*_{k+1}\, y^{(k+3)}(\psi)\,(x_N - x_0)(x_N - x_1) + h^{k+2}\, \beta^*_{k+1}\, y^{(k+3)}(\xi)\,(x_N - x_0) + O(h^{k+4})\,, \qquad (3.31)$$
$$y'(x_N) - y'^{\,C}_N = h^{k+1}\, \gamma^*_{k+1}\, (x_N - x_0)\, y^{(k+3)}(\psi) + O(h^{k+3})\,. \qquad (3.32)$$
Now the difference from the previous mode lies in the application of the corrector C' to obtain the values $y'^{\,C}_n$, using $f(x_n, y^C_n)$ instead of $f(x_n, y^P_n)$. The first difference with respect to the previous mode is observed in the first step ($n = 0$) and turns out to be
$$y'(x_1) - y'^{\,C}_1 = y'(x_0) + h \sum_{j=0}^{k} \gamma^*_j \nabla^j f(x_1, y(x_1)) + h^{k+2}\, y^{(k+3)}(\psi_1)\, \gamma^*_{k+1} - \left( y'^{\,C}_0 + h \sum_{j=0}^{k} \gamma^*_j \nabla^j f^C_1 \right). \qquad (3.33)$$
Note that the only difference between the formulas in (3.26) and (3.33) is that $f^P_1$ has been replaced by $f^C_1$, but this does not change the principal term of the errors. This means that the principal terms in the errors for the final formulas are the same as in the previous mode, and so the formulas in (3.31) and (3.32) remain valid in this mode. This, however, does not mean that the errors are equal, since the remaining terms are different. But since the principal terms are the same, the errors will be similar.
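The predicted accumulated orders can be checked empirically. The sketch below (my own illustration) estimates the observed order from the end-point errors obtained with step sizes $h$ and $h/2$; solve is a placeholder for any routine that integrates a problem with known solution and returns the global error at $x_N$.

```python
# Sketch: empirical order estimate from two runs with step sizes h and h/2.
import numpy as np

def empirical_order(solve, h):
    """solve(h) must return the end-point global error for step size h."""
    e1, e2 = solve(h), solve(h / 2.0)
    return np.log2(e1 / e2)   # expected to approach k+1 for the modes analysed above
```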
For comparison purposes we have included in Table 3.1 the principal terms of the accumulated errors for the different implementations of the explicit and implicit Falkner methods. These terms will be denoted by $PT(y(x_N) - y_N)$ for the solution and $PT(y'(x_N) - y'_N)$ for the derivative, where the approximate values of the solution and the derivative at the final point are denoted by $y_N$ and $y'_N$, respectively. The formulations of the explicit methods with $k$ steps are named FEABk and FEAMk as in [23], while the implicit methods with $k$ steps formulated in P-C modes are named FI[1]k, FI[2]k and FI[3]k, corresponding to the P'PECE, PEC'CE and PECEC' modes, respectively, of the above section.
Table 3.1 Principal terms of the accumulated errors for the different implementations of the explicit and implicit Falkner methods.

Remark 1. Note that the above methods may be applied to a system of second-order differential equations of the form
$$\begin{cases} \mathbf{y}''(x) = \mathbf{f}(x, \mathbf{y}(x))\,, & x \in [x_0, x_N]\,,\\ \mathbf{y}(x_0) = \mathbf{y}_0\,, \quad \mathbf{y}'(x_0) = \mathbf{y}'_0\,, \end{cases}$$
where $\mathbf{y} : [x_0, x_N] \rightarrow \mathbb{R}^m$ and $\mathbf{f} : [x_0, x_N] \times \mathbb{R}^m \rightarrow \mathbb{R}^m$, using a componentwise implementation.
3.4 Stability
When the methods are applied to the usual scalar test equation $y'' = -\mu^2 y$ of (3.34), the resulting difference equations can be written in the compact form
$$Y_n = M[i]\, Y_{n-1}\,, \qquad (3.35)$$
where $Y_n$ is the $(k+1)$-vector given by $Y_n = \left( y_{n+1}, y_n, \dots, y_{n-(k-2)}, h\, y'_{n+1} \right)^T$, and $M[i]$, $i = 1, 2, 3$, are $(k+1) \times (k+1)$ matrices whose elements depend on $H^2$ and on the coefficients of the method, where $H = \mu h$. The matrices $M[i]$ are called the stability matrices, and the index $i$ in $M[i]$ is used to denote each implementation according to its appearance in the above section, that is, $i = 1$ stands for the first P-C mode, $i = 2$ for the second P-C mode, and $i = 3$ for the last one.
For the matrix $M[1]$ we have
$$M[1]_{1,\,l+1} = (-1)^l H^2 \sum_{j=0}^{k} \left( \binom{j}{l+1} + H^2 \sum_{s=l}^{k-1} \beta_s \binom{s}{l} \right) \beta^*_j\,, \qquad l = 1, \dots, k-1\,,$$
$$M[1]_{j\,l} = \delta_{j,\,l+1}\,, \qquad j = 2, \dots, k\,, \quad l = 1, \dots, k+1\,,$$
$$M[1]_{k+1,\,l+1} = (-1)^{l+1} H^2 \sum_{j=l}^{k-1} \binom{j}{l} \gamma_j\,, \qquad l = 0, \dots, k-1\,,$$
$$M[1]_{k+1,\,k+1} = 1\,,$$
where $\delta_{j,\,l+1}$ is the Kronecker delta and the notation $\binom{j}{l}$ refers to the binomial coefficients.
For the matrices M[2] and M[3] all the entries are the same as for M[1] except for
the last row, which for M[2] reads
$$M[2]_{k+1,\,1} = H^2 \sum_{j=0}^{k} \left( j - 1 + H^2 \sum_{s=0}^{k-1} \beta_s \right) \gamma^*_j\,,$$
$$M[2]_{k+1,\,l+1} = (-1)^l H^2 \sum_{j=0}^{k} \left( \binom{j}{l+1} + H^2 \sum_{s=l}^{k-1} \beta_s \binom{s}{l} \right) \gamma^*_j\,, \qquad l = 1, \dots, k-1\,,$$
$$M[2]_{k+1,\,k+1} = 1 - H^2 \sum_{j=0}^{k} \gamma^*_j\,,$$
and for $M[3]$
$$M[3]_{k+1,\,l+1} = (-1)^l H^2 \sum_{j=0}^{k} \left[ \binom{j}{l+1} - H^2 \sum_{t=0}^{k} \left( \binom{t}{l+1} + H^2 \sum_{s=l}^{k-1} \beta_s \binom{s}{l} \right) \beta^*_t \right] \gamma^*_j\,, \qquad l = 1, \dots, k-1\,,$$
$$M[3]_{k+1,\,k+1} = 1 - H^2 \sum_{j=0}^{k} \left( 1 - H^2 \sum_{s=0}^{k} \beta^*_s \right) \gamma^*_j\,.$$
and finally, for the third implementation the matrix $M[3]$, here written out for $k = 3$, is given by
$$\begin{pmatrix}
\dfrac{19(19H^2 - 132)H^2}{4320} + 1 & \dfrac{H^2}{10} - \dfrac{19H^4}{432} & \dfrac{H^2(19H^2 - 28)}{1440} & 1 - \dfrac{19H^2}{180}\\[2mm]
1 & 0 & 0 & 0\\[1mm]
0 & 1 & 0 & 0\\[1mm]
\dfrac{2508H^4 - 361H^6 - 13440H^2}{11520} & \dfrac{95H^6 - 216H^4 + 1200H^2}{5760} & \dfrac{28H^4 - 19H^6 - 160H^2}{3840} & \dfrac{19H^4 - 180H^2}{480} + 1
\end{pmatrix}$$
The behaviour of the numerical solution will depend on the eigenvalues of the
matrices M[i], and the stability properties of the methods will be characterized by
the spectral radius, ρ (M[i]). According to the terminology introduced by Coleman
and Ixaru [7] we have:
• $(0, H_s)$ is an interval of stability for a given method if for all $H \in (0, H_s)$ we have $|r_i| < 1$, where the $r_i$, $i = 1, \dots, k+1$, are the eigenvalues of the stability matrix.
• $(0, H_p)$ is an interval of periodicity for a given method if for all $H \in (0, H_p)$ the eigenvalues of the stability matrix, $r_i$, $i = 1, \dots, k+1$, satisfy
A crucial difference between A-stability and P-stability is that for A-stable methods the stability matrix satisfies $(M[i])^n \to 0$ as $n \to \infty$ because $\rho(M[i]) < 1$, whereas for P-stable methods this is not possible because $\rho(M[i]) = 1$ (see [10]). Therefore, A-stable methods damp out the initial errors, whereas for P-stable methods the initial errors do not diminish as the integration progresses in time.
If we let the value $\mu$ in (3.34) be complex, then instead of intervals we obtain stability or periodicity regions in the complex $H$-plane.
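In practice the end-point $H_s$ of a stability interval can be located numerically by scanning the spectral radius of the stability matrix. The following sketch (my own illustration) does this for a user-supplied routine stability_matrix(H), which is a placeholder for any of the matrices $M[i]$ evaluated at a real value $H$; the scanning step dH and the tolerance are arbitrary choices.

```python
# Sketch: locate the right end-point H_s of the primary stability interval (0, H_s)
# by scanning the spectral radius rho(M(H)) over increasing real values of H.
import numpy as np

def stability_interval_end(stability_matrix, H_max=3.0, dH=1e-4, tol=1e-12):
    """stability_matrix(H) must return the (k+1)x(k+1) stability matrix for real H."""
    H = dH
    while H < H_max:
        rho = max(abs(np.linalg.eigvals(stability_matrix(H))))
        if rho > 1.0 + tol:
            return H - dH          # last scanned H with spectral radius <= 1
        H += dH
    return H_max
```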
Consider, for example, Gear's method (3.8)-(3.9) used in P-C mode. Taking as predictor the two-step explicit Falkner formula
$$y_{n+1} = y_n + h\, y'_n + \frac{h^2}{6}\left(4 y''_n - y''_{n-1}\right)$$
to obtain $y^P_{n+1}$, after evaluating $f(x_{n+1}, y^P_{n+1})$ and applying the formula in (3.8) to obtain $y^C_{n+1}$, we can proceed in two ways:
1. We can evaluate $f(x_{n+1}, y^C_{n+1})$ with the best approximation we have for $y_{n+1}$ and then use the formula in (3.9) to obtain the approximate value $y'_{n+1}$. We denote this formulation by $P_2ECEC'$, where the subindex 2 corresponds to the order of the predictor.
2. Or we can use the previous evaluation $f(x_{n+1}, y^P_{n+1})$ in the formula in (3.9) to obtain the approximate value $y'_{n+1}$. This formulation is denoted by $P_2ECC'$.
For the first choice the stability matrix results in
$$\begin{pmatrix}
\dfrac{H^4}{18} - \dfrac{7H^2}{12} + 1 & -\dfrac{1}{72}H^2(H^2 - 6) & 1 - \dfrac{H^2}{12}\\[2mm]
1 & 0 & 0\\[1mm]
\dfrac{-10H^6 + 105H^4 - 468H^2}{432} & \dfrac{H^2\left(5(H^2 - 6)H^2 + 72\right)}{864} & \dfrac{5(H^2 - 12)H^2}{144} + 1
\end{pmatrix}$$
and the stability interval is given by (0, 1.906123). Although at first glance one could intuitively suppose that the first procedure would be the best, this is not true. Not only does the first procedure require more computational cost (two evaluations of the function $f$ per step versus one evaluation in the second case), but it also has worse stability properties.
and the stability interval is given by (0, 0.585417). In the second case the stability
matrix is given by
$$\begin{pmatrix}
\dfrac{19H^4}{288} - \dfrac{7H^2}{12} + 1 & \dfrac{1}{144}H^2(12 - 5H^2) & \dfrac{H^4}{96} & 1 - \dfrac{H^2}{12}\\[2mm]
1 & 0 & 0 & 0\\[1mm]
0 & 1 & 0 & 0\\[1mm]
\dfrac{1}{288}H^2(95H^2 - 312) & \dfrac{1}{144}H^2(12 - 25H^2) & \dfrac{5H^4}{96} & 1 - \dfrac{5H^2}{12}
\end{pmatrix}$$
for which there are neither stability nor periodicity intervals. This time the first procedure is the best concerning the stability characteristics.
In Figure 3.1 the stability regions are depicted (left: taking as predictor the two-step explicit Falkner method and using $f(x_{n+1}, y^P_{n+1})$ in the formula in (3.9); right: taking as predictor the three-step explicit Falkner method and using $f(x_{n+1}, y^C_{n+1})$ in the formula in (3.9)).
Fig. 3.1 Absolute stability regions for different formulations of Gear's method in P-C mode (left: $P_2ECC'$; right: $P_3ECEC'$).
These unexpected results show the importance of determining the stability regions in order to obtain appropriate results when applying a numerical method. The numerical results in the following section will confirm this assertion (Tables 3.3 and 3.5 provide a confirmation of the theoretical expectations on the stability of the derived methods).
For the methods presented in the previous sections we have found that they are neither A-stable nor P-stable up to $k = 14$ (it is also known that there are no A-stable Adams methods for $y' = f(x, y)$). There exists only one interval of periodicity, corresponding to the third mode when $k = 1$, this interval being $[0, \sqrt{6}]$. In Table 3.2 the intervals of stability are presented from $k = 1$ up to $k = 14$, where $k$ refers to the number of steps, for the different formulations, named FI[1]k, FI[2]k and FI[3]k respectively, according to the notation used in the above sections.
All these values have been obtained with the help of the Mathematica program. We have considered only the primary stability intervals, although secondary stability intervals may exist for some methods, as for the second P-C mode with $k = 2$, for which $[1, 2]$ is an interval where $|r_i| < 1$, $i = 1, 2, 3$.
k    FI[1]k           FI[2]k           FI[3]k
1    ∅                (0, 2)           ∅
2    ∅                ∅                ∅
3    (0, 0.853006)    ∅                ∅
4    (0, 0.930949)    (0, 0.534947)    (0, 1.108998)
5    ∅                (0, 0.925569)    (0, 1.409664)
6    ∅                ∅                ∅
7    (0, 0.401364)    ∅                ∅
8    (0, 0.285341)    (0, 0.274630)    (0, 0.480033)
9    ∅                (0, 0.521074)    (0, 0.821956)
10   ∅                ∅                ∅
11   (0, 0.100705)    ∅                ∅
12   (0, 0.071103)    (0, 0.185094)    (0, 0.300133)
13   ∅                (0, 0.362275)    (0, 0.390144)
14   ∅                ∅                ∅
Table 3.2 Intervals of stability for the different formulations of the Falkner implicit method in P-C
modes.
We have included different plots showing the stability intervals. Figure 3.2 shows the absolute value of the eigenvalues of M[2] and M[3] for $k = 1$ versus $H$, and the resulting stability interval $(0, H_s)$. For the matrix M[3] we have included the interval of periodicity $(0, H_p)$. Figure 3.3 shows the absolute value of the eigenvalues of M[1] and M[2] for $k = 3$ versus $H$. This time, for the M[2] mode there is no primary stability interval, although a secondary interval of stability exists, $(1.731413, 1.895953)$. Figure 3.4 shows the absolute value of the eigenvalues of M[1] and M[3] for $k = 5$, showing in the case of M[1] a secondary stability interval given by $(0.626085, 0.752697)$. Finally, Figure 3.5 shows two eigenvalues limiting the stability regions in the case of M[2] and M[3] when $k = 9$. We have included as a reference in all the plots the horizontal line at a distance of one unit above the horizontal axis.
Fig. 3.2 Absolute value of the eigenvalues |ri |, i = 1, 2 of the stability matrices for the P-C modes:
M[2] (left) and M[3] (right) when k = 1.
Fig. 3.3 Absolute value of the eigenvalues |ri |, i = 1, 2, 3, 4 of the stability matrices for the P-C
modes: M[1] (left) and M[2] (right) when k = 3.
Fig. 3.4 Absolute values of the eigenvalues |ri |, i = 1, . . ., 6 of the stability matrices for the P-C
modes: M[1] (left) and M[3] (right) when k = 5.
In Figure 3.6 the stability regions for the methods FI[i]4, $i = 1, 2, 3$, are shown.
Fig. 3.5 Absolute value of two eigenvalues limiting the stability region for each of the stability
matrices for the P-C modes: M[2] (left) and M[3] (right) when k = 9.
Fig. 3.6 Stability regions for each of the methods FI[i]4 , i = 1, 2, 3 (from left to right).
Remark 2. In the application of the P-C modes we may choose whether or not to evaluate the final E in order to update the value of $f_{n+1}$ for the next step. If we do not make this further evaluation, the mode FI[1]k turns out to be P'PEC instead of P'PECE, while the modes FI[2]k and FI[3]k turn out to be the same, namely PEC'C, and will be denoted by $\widetilde{FI}[i]k$, $i = 1, 2$. We make the observation that the order of the accumulated truncation errors is the same whether or not the final E is performed, but the stability characteristics differ markedly between the two cases. In fact, if we do not make the last evaluation, the stability regions are smaller. In Figure 3.7 we include a plot with the stability regions for the modes FI[2]4 and $\widetilde{FI}[2]4$.
3.5 Numerical examples

In this section we apply the above formulations in P-C mode to some examples, to illustrate the performance of the implicit Falkner methods. The numerical results obtained with these methods are compared with the results obtained with other numerical methods. The predictor-corrector formulations of the implicit Falkner methods in this paper will be denoted by FI[j]k, where $j$ refers to the implementations 1, 2 or 3 in the above sections, and $k$ refers to the number of steps of the method. That is, 1 refers to the P'PECE mode, 2 refers to the PEC'CE mode, and 3 refers to the PECEC' mode.
Fig. 3.7 Stability regions for the methods FI[2]4 and $\widetilde{FI}[2]4$.
3.5.1 Example 1.
As a first example we take the classical problem used to test absolute stability, with
particular initial values
are included.
We observe that the time needed by the implicit methods is larger than for the explicit ones, as expected. Nevertheless, the accuracies obtained by the explicit eighth-order methods FEAB8 and FEAM8 are not acceptable. Concerning the FEAB7 and FEAM7 methods, for $N = 3000$ the errors are enormous, while for $N = 4000$ they are acceptable. These results are related to the stability intervals (see Table 2 in [23]), since when $N = 4000$ the value $h\mu = 0.3926$ is inside the absolute stability intervals of the methods FEAB7 and FEAM7, and it is not in the other cases.
In the case of the implicit methods there are only two situations in which the value of $h\mu$ is within the range of the primary interval of absolute stability: for FI[1]7 and for FI[3]8 when $N = 4000$ (see Table 3.2). Why do the errors seem to behave well in some of the other cases? There are two possible reasons. First, there may be a secondary stability interval of the form $[a, b]$ with $a > 0$, where the eigenvalues of the stability matrix are less than unity. Nevertheless, this situation does not occur with the data in Table 3.3 for this example. On the other hand, it may occur that for some specific data the eigenvalues of the stability matrix, although larger than one in modulus, are very close to 1. In this situation the errors grow very slowly in each step, indicating that for these values of $N$ the instability does not yet affect the results. This is the case for the data in Table 3.3. For example, for the method FI[2]8 the largest eigenvalue is 1.0000079 when $N = 3000$, and 1.00000024 when $N = 4000$, which explains the not-so-bad behavior of the method.
3.5.2 Example 2.
The next example has appeared several times in the literature (see [4], [22]). The problem consists in
This problem has been solved in the interval $[0, 100]$ with the methods FEAM4 and $\widetilde{FI}[2]4$, taking $w = 20$. Figure 3.8 shows the efficiency of different methods that have appeared in the literature; the horizontal axis stands for the number of function evaluations required and the vertical axis stands for the decimal logarithm of the maximal global error of the solution. The methods used for comparison are
• RK4: the classical RK method of order four
• RK5: the classical RK method of order five presented in [13]
Table 3.3 Maximum absolute error of the solution for different implementations of the explicit and
implicit Falkner methods in P-C mode for solving the problem in (3.36)
3.5.3 Example 3.
We consider the following nonlinear problem studied by Chawla and Rao in [3].
For this example, as there is no known analytical solution, we perform the computation on the interval $[0, 20\pi]$ and measure the absolute error at the final point
[Fig. 3.8 shows the efficiency curves; horizontal axis: number of function evaluations ($\times 10^4$); vertical axis: $\log_{10}$(maximal global error); methods compared: RK4, RK5, EFRK4, SIMOS4, FRK5a, FRK5b, FEAM4, $\widetilde{FI}[2]4$.]
Fig. 3.8 Numerical results for Problem (3.37) with different Runge-Kutta type methods and with the methods FEAM4 and $\widetilde{FI}[2]4$.
taking the value y(20π ) = 0.000392823991 (see [29]). In Table 3.4 we present the
absolute global error at x = 20π with the number of function evaluations, F.Ev., for
the following methods
• Method 1: the well-known Runge-Kutta-Nyström method in [8]
• Method 2: the explicit eighth-order method with maximal interval of periodicity in [29]
• Method 3: the sixth-order hybrid method of Chawla and Rao in [3]
• Method 4: the P-C eighth-order method FI[3]8 in this paper
• Method 5: the P-C eighth-order method $\widetilde{FI}[3]8$ in this paper
We note that while the errors are similar for all the methods, the methods FI[3]8 and $\widetilde{FI}[3]8$ produce the best performances, the formulation $\widetilde{FI}[3]8$ being the one that needs the smallest number of function evaluations.
Table 3.4 Data for the problem in (3.38) with different methods
3.5.4 Example 4.
The next example consists of the mildly stiff problem given by the linear system (see [15], [16])
The eigenvalues of the matrix of the system have moduli $|\lambda_1| = 2500$ and $|\lambda_2| = 1$. We have solved this problem in $[0, 2\pi]$ considering the methods FEAMk and FI[3]k for $k = 5, 6, 7, 8$, taking different numbers of steps, $N = 200, 300, 400$. The results appear in Table 3.5, where we have included the errors measured in the norm $\|\cdot\|_\infty$ and the CPU time needed. We have considered the methods and numbers of steps in Table 3.5 for this problem to highlight how important it is to choose the method and the stepsize that suit the stability requirements in order to solve the problem properly.
3.5.5 Example 5.
The last example consists of the system that models the motion of two bodies, given by (see [26])
$$y_1''(t) = \frac{-y_1(t)}{r^3}\,, \qquad y_2''(t) = \frac{-y_2(t)}{r^3}\,, \qquad (3.40)$$
$$y_1(0) = 1\,, \quad y_1'(0) = 0\,, \quad y_2(0) = 0\,, \quad y_2'(0) = 1\,,$$
Table 3.5 Maximum absolute error of the solution for different implementations of the explicit and
implicit Falkner methods in P-C mode for solving the problem in (3.39)
with $r = \sqrt{y_1^2 + y_2^2}$, and exact solution $y_1(t) = \cos t$, $y_2(t) = \sin t$.
This problem has been used as a test in many articles presenting methods for solving initial value problems. We have solved the problem in the interval $[0, 7]$, as was done in [31], using the methods FEABk and $\widetilde{FI}[2]k$ for $k = 2, \dots, 10$, taking a number of steps $N = 112$ in all cases, which results in a stepsize $h = 0.0625$. The numerical results appear in Table 3.6.
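For reference, the componentwise form of Remark 1 applied to this problem, together with the end-point error measure used in Tables 3.6-3.7, can be coded as in the following sketch (my own illustration; the solver call is omitted, as any of the step routines sketched earlier could be used):

```python
# Sketch: right-hand side of the two-body system (3.40) and the end-point error.
import numpy as np

def two_body(t, y):
    """y = (y1, y2); returns y'' = f(t, y) for problem (3.40)."""
    r3 = np.hypot(y[0], y[1]) ** 3
    return -y / r3

y0, dy0 = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # initial positions and velocities
t_end = 7.0
exact = np.array([np.cos(t_end), np.sin(t_end)])        # exact solution y1 = cos t, y2 = sin t
# err = np.linalg.norm(y_num - exact)                   # Euclidean norm of the error vector
```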
Finally, in Table 3.7 we show the data obtained with the adapted Runge-Kutta method in [27], the classical Runge-Kutta method, and the methods FEAM4 and $\widetilde{FI}[2]3$. We have calculated at the endpoint of the integration interval the Euclidean norm of the error vector, with components defined as the difference between the numerical and the exact solutions. We note that the methods FEAM4 and $\widetilde{FI}[2]3$ provide better results, even without considering that the RK methods need more evaluations of the function, and thus more computational effort.
Table 3.6 Data for the problem (3.40) with the methods FEAMk and $\widetilde{FI}[2]k$ for $k = 2, \dots, 10$.
h | FEAM4 | $\widetilde{FI}[2]3$ | Simos' method | Classical RK method

Table 3.7 Comparison of the Euclidean norms of the end-point global errors obtained by using Simos' method [27], the classical fourth-order RK method and the methods FEAM4 and $\widetilde{FI}[2]3$.
3.6 Conclusions
Falkner methods are commonly used for solving the special second-order initial-value problem, particularly when the values of the derivatives are also needed (particular cases of these methods are the well-known Velocity-Verlet scheme, Beeman's method and Wilson's method). In these approaches two formulas are needed for advancing the solution $y(x)$ and the derivative $y'(x)$. When the formu-
We have considered three different modes to implement the implicit Falkner formulas and have made an analysis of the accumulated truncation errors. Considering the expressions of these errors for the solution and the derivative, we observe that the modes FI[2]k and FI[3]k show the best performance. The resulting errors in this case are both of order $O(h^{k+1})$. In view of Table 3.1 we note that the errors of the explicit methods FEAMk are also of order $O(h^{k+1})$. On the other hand, if we do not perform the last evaluation of $f$ with the implicit Falkner method in P-C modes, the accumulated errors are also of order $O(h^{k+1})$, but the stability characteristics are compromised. This fact could suggest that the explicit method FEAMk would be the best choice, because explicit methods are in general less time consuming than implicit ones. Nevertheless, the stability is a factor that should be considered (compare the different performances in Table 3.5). We can say that if the stability requirements are fulfilled then the FEAMk should be used, but otherwise the $\widetilde{FI}[2]k$ or even the FI[3]k should be used. Some numerical examples are included to compare the numerical performance of the different implementations, showing that the proposed methods may be competitive with other methods in the literature. The numerical results agree with the theoretical analysis.
Acknowledgements The author is pleased to acknowledge a referee for carefully reading the
manuscript.
References
1. Beeman, D.: Some multistep methods for use in molecular dynamics calculations. J. Comput.
Phys. 20, 130–139 (1976)
2. Calvo, M., Montijano, J.I., Rández, L.: A new stepsize change technique for Adams methods.
Applied Mathematics and Nonlinear Sciences 1, 547–558 (2016)
3. Chawla, M.M., Rao, P.S.: Numerov-type method with minimal phase-lag for the integration
of second order periodic initial-value problems II. Explicit method. J. Comput. Appl. Math.
15, 329–337 (1986)
4. Chen, Z., You, X., Shu, X., Zhang, M.: A New Family of Phase-Fitted and Amplification-
Fitted Runge-Kutta Type Methods for Oscillators. Journal of Applied Mathematics (2012)
doi:10.1155/2012/236281
5. Collatz, L.: The Numerical Treatment of Differential Equations. Springer, Berlin (1966)
6. Coleman, J.P., Mohamed, J.: De Vogelaere’s methods with automatic error control. Comput.
Phys. Comm. 17, 283–300 (1979)
7. Coleman, J.P., Ixaru, L.Gr.: P-stability and exponential-fitting methods for y′′ = f (x, y). IMA
J. Numer. Anal. 16, 179–199 (1996)
8. Dormand, J.R., El-Mikkawy, M., Prince, P.J.: High order embedded Runge-Kutta-Nystrom
formulae. IMA J. Numer. Anal. 7, 423–430 (1987)
9. Falkner, V.M.: A method of numerical solution of differential equations. Phil. Mag. S. 7, 621–
640 (1936)
10. Franco, J.M., Gómez, I.: Accuracy and linear stability of RKN methods for solving second-
order stiff problems. Appl. Numer. Math. 59, 959–975 (2009)
11. Gear, C.W.: Argonne National Laboratory, Report no. ANL-7126, 1966
12. Gladwell, I., Thomas, R.: Stability properties of the Newmark, Houbolt and Wilson θ methods. I. J. Numer. and Anal. Meth. in Geom. 4, 143–158 (1980)
13. Hairer, E., Norsett, S.P., Wanner, G.: Solving Ordinary Differential Equations I. Springer,
Berlin (1987)
14. Henrici, P.: Discrete Variable Methods in Ordinary Differential Equations. John Wiley, New
York (1962)
15. Kramarz, L.: Stability of collocation methods for the numerical solution of y” = f (x, y). BIT
20, 215–222 (1980)
16. Krishnaiah, U.A.: Adaptive methods for periodic initial value problems of second order differential equations. J. of Comput. and Appl. Math. 8, 101–104 (1982)
17. Krogh, F.T.: Issues in the Design of a Multistep Code. JPL Technical Report (1993)
http://hdl.handle.net/2014/34958
18. Lambert, J.D.: Numerical Methods for Ordinary Differential Systems. John Wiley, England
(1991)
19. Lambert, J.D., Watson, I.A.: Symmetric multistep methods for periodic initial value problems.
J. Inst.Math. Appl. 18, 189–202 (1976)
20. Li, J.: A family of improved Falkner-type methods for oscillatory systems. Applied Mathematics and Computation 293, 345–357 (2017)
21. Li, J., Wu, X.: Adapted Falkner-type methods solving oscillatory second-order differential
equations. Numerical Algorithms 62, 355–381 (2013)
22. Paternoster, B.: Runge-Kutta-Nyström methods for ODEs with periodic solutions based on
trigonometric polynomials. Applied Numerical Mathematics 28, 401–412 (1998)
23. Ramos, H., Lorenzo, C.: Review of explicit Falkner methods and its modifications for solving
special second-order IVPs. Comput. Phys. Comm. 181, 1833–1841 (2010)
24. Ramos, H., Mehta, S., Vigo-Aguiar, J.: A unified approach for the development of k-step block Falkner-type methods for solving general second-order initial-value problems in ODEs. J. of Comput. and Appl. Math. 318, 550–564 (2017)
25. Ramos, H., Singh, G., Kanwar, V., Bhatia, S.: An efficient variable step-size rational Falkner-
type method for solving the special second-order IVP. Applied Mathematics and Computation
291, 39–51 (2016)
26. Shampine, L.F., Gordon, M.K.: Computer solution of Ordinary Differential Equations. The
initial Value Problem. Freeman, San Francisco, CA (1975)
27. Simos, T.E.: An exponentially-fitted Runge-Kutta method for the numerical integration of initial-value problems with periodic or oscillating solutions. Computer Physics Communications 115, 1–8 (1998)
28. Toxvaerd, S.: A New Algorithm for Molecular Dynamics Calculations. J. Comput. Phys. 47,
444–451 (1982)
29. Tsitouras, Ch., Simos, T.E.: Explicit high order methods for the numerical integration of periodic initial-value problems. Applied Mathematics and Computation 95, 15–26 (1998)
30. Vigo-Aguiar, J., Ramos, H.: Variable stepsize implementation of multistep methods for y′′ =
f (x, y, y′ ). J. of Comput. and Appl. Math. 192, 114–131 (2006)
31. Vanden Berghe, G., De Meyer, H., Van Daele, M., Van Hecke, T.: Exponentially fitted Runge-
Kutta methods. J. of Comput. and Appl. Math. 125, 107–115 (2000)
32. Wilson, E.L.: A computer program for the dynamic stress analysis of underground structures. SESM Report No. 68–1, Division Structural Engineering and Structural Mechanics, University of California, Berkeley (1968)