
Chapter 3

Formulation and analysis of a class of direct implicit integration methods for special second-order I.V.P.s in predictor-corrector modes

Higinio Ramos
Abstract A detailed analysis of different formulations of a class of explicit direct integration methods in predictor-corrector modes for solving special second-order initial-value problems has been carried out (Comput. Phys. Comm. 181 (2010) 1833–1841), showing that an adequate combination of the formulas involved leads to an increase in the order of the method. In this paper we consider different formulations of the implicit direct integration methods in predictor-corrector modes. An analysis of the accumulated truncation errors is made and the stability analysis is addressed, including the intervals of stability. Some numerical examples are presented to show the performance of the different formulations. These methods may constitute a reliable alternative to other methods in the literature for solving special second-order problems.

Key words: Error analysis, stability analysis, Falkner implicit methods, direct integration methods, second-order initial-value problems, predictor-corrector modes

3.1 Introduction

Second-order differential equations appear frequently in the applied sciences. Examples are the movement of a mass under the action of a force, problems of orbital dynamics, or, in general, any problem involving Newton's law.
Among the general procedures for direct integration of the so-called special second-order initial-value problem (I.V.P.)

H. Ramos
Grupo de Computación Científica, Universidad de Salamanca, Escuela Politécnica Superior de Zamora, Campus Viriato, 49022 Zamora, Spain
Tel.: +034-80-545000
Fax: +034-80-545002
e-mail: higra@usal.es


    y''(x) = f(x, y(x)) , \quad y(x_0) = y_0 , \quad y'(x_0) = y'_0 ,     (3.1)

the Falkner methods [9] are a class of schemes that may be used for this purpose.

Although it is possible to integrate a second-order I.V.P. by reducing it to a first-order system and applying one of the methods available for such systems, it seems more natural to provide numerical methods that integrate the problem directly. The advantage of this procedure lies in the fact that such methods are able to exploit special information about the ODEs, resulting in an increase in efficiency. For instance, it is well known that Runge-Kutta-Nyström methods for (3.1) involve a real improvement as compared to standard Runge-Kutta methods for a given number of stages ([13], p. 285), although the computational cost remains high because of the number of function evaluations. On the other hand, a linear k-step method for first-order ODEs becomes a 2k-step method for (3.1) ([13], p. 461), increasing the computational work. In the words of F.T. Krogh [17], "the direct integration of second order systems requires about half the number of function evaluations required for integrating the equivalent first order system".

The explicit Falkner method of k steps consists of a pair of formulas that can be written in the form [5]

    y_{n+1} = y_n + h y'_n + h^2 \sum_{j=0}^{k-1} \beta_j \nabla^j f_n ,     (3.2)

    y'_{n+1} = y'_n + h \sum_{j=0}^{k-1} \gamma_j \nabla^j f_n ,     (3.3)

where h is the stepsize, y_n and y'_n are approximations to the values of the solution and its derivative at x_n = x_0 + n h, f_n = f(x_n, y_n), and \nabla^j f_n is the standard notation for the backward differences. The coefficients \beta_j and \gamma_j can be obtained using the generating functions

    G_\beta(t) = \sum_{j \ge 0} \beta_j t^j = \frac{t + (1-t)\log(1-t)}{(1-t)\log^2(1-t)} ,

    G_\gamma(t) = \sum_{j \ge 0} \gamma_j t^j = \frac{-t}{(1-t)\log(1-t)} ,

which have been obtained similarly to those for the Störmer or Cowell methods (see [14], p. 291).

The implicit Falkner method of k steps consists of two formulas that may be written as [5]

    y_{n+1} = y_n + h y'_n + h^2 \sum_{j=0}^{k} \beta_j^* \nabla^j f_{n+1} ,     (3.4)

    y'_{n+1} = y'_n + h \sum_{j=0}^{k} \gamma_j^* \nabla^j f_{n+1} ,     (3.5)

with generating functions for the coefficients given respectively by

    G_{\beta^*}(t) = \sum_{j \ge 0} \beta_j^* t^j = \frac{t + (1-t)\log(1-t)}{\log^2(1-t)} ,

    G_{\gamma^*}(t) = \sum_{j \ge 0} \gamma_j^* t^j = \frac{-t}{\log(1-t)} .
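For reference, the first few coefficients can be obtained by Taylor-expanding the four generating functions about t = 0. The following sketch does this symbolically; the use of Python/SymPy and the helper name falkner_coefficients are choices of this presentation, not part of the original formulation.

```python
# A minimal sketch, assuming SymPy is available, that expands the generating
# functions of the explicit and implicit Falkner coefficients about t = 0.
import sympy as sp

t = sp.symbols('t')
L = sp.log(1 - t)

generating_functions = {
    'beta':       (t + (1 - t)*L) / ((1 - t)*L**2),   # explicit beta_j
    'gamma':      -t / ((1 - t)*L),                   # explicit gamma_j
    'beta_star':  (t + (1 - t)*L) / L**2,             # implicit beta*_j
    'gamma_star': -t / L,                             # implicit gamma*_j
}

def falkner_coefficients(G, n=6):
    """Return the first n Taylor coefficients of G about t = 0."""
    s = sp.series(G, t, 0, n).removeO()
    return [sp.nsimplify(s.coeff(t, j)) for j in range(n)]

for name, G in generating_functions.items():
    print(name, falkner_coefficients(G))
# As a sanity check, 'gamma' should start 1, 1/2, 5/12, 3/8, ... (the
# Adams-Bashforth coefficients) and 'gamma_star' should start
# 1, -1/2, -1/12, -1/24, ... (the Adams-Moulton coefficients).
```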

Note that the formulas in (3.3) and (3.5) are respectively the Adams-Bashforth and Adams-Moulton schemes for the problem (y')' = f(x, y), which are used to follow the values of the derivative. All the above formulas are of multistep type, specifically k-step formulas, and so k initial values must be provided in order to proceed with the methods (Runge-Kutta methods are commonly used to obtain the starting values). In this paper, a rigorous analysis of the errors involved in the formulation of the implicit procedures in predictor-corrector modes (P-C) with fixed step size is made (for a variable step size version of these methods see [30], and for an adapted version of the methods in [23] in case of oscillatory systems see [21] and [20]). A new technique for stepsize changing in the case of the Adams methods has recently appeared in [2]. Other approaches concerning different formulations of Falkner-type methods have appeared in [24] and [25].

The implicit formulas in (3.4)-(3.5) may be used to generate methods for solving the I.V.P. in (3.1). A well-known procedure of this type is the Wilson method [32], which is one of the Newmark family, and is commonly used in molecular dynamics calculations. This method uses the two-step formula in (3.4), given by

    y_{n+1} = y_n + h y'_n + \frac{h^2}{6} \left( 2 y''_n + y''_{n+1} \right) ,     (3.6)

to obtain the positions, while the formula to update the velocities is the two-step method in (3.5), given by

    y'_{n+1} = y'_n + \frac{h}{2} \left( y''_n + y''_{n+1} \right) .     (3.7)
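To make the use of this pair concrete, the sketch below (Python; the function name wilson_step, the fixed-point solver and its tolerances are choices made here rather than part of the method) advances the solution one step, resolving the implicit relation in (3.6) by simple fixed-point iteration.

```python
# A minimal sketch, assuming a scalar problem y'' = f(x, y): one step of the
# pair (3.6)-(3.7), with the implicit equation (3.6) solved by fixed-point
# iteration on y_{n+1}.
def wilson_step(f, x_n, y_n, dy_n, h, tol=1e-12, max_iter=50):
    f_n = f(x_n, y_n)                         # y''_n
    x_next = x_n + h
    y_next = y_n + h*dy_n + 0.5*h**2*f_n      # crude starting guess (Taylor)
    for _ in range(max_iter):
        f_next = f(x_next, y_next)
        y_new = y_n + h*dy_n + (h**2/6.0)*(2.0*f_n + f_next)   # formula (3.6)
        if abs(y_new - y_next) < tol:
            y_next = y_new
            break
        y_next = y_new
    f_next = f(x_next, y_next)
    dy_next = dy_n + 0.5*h*(f_n + f_next)                      # formula (3.7)
    return y_next, dy_next
```

The iteration converges for h small enough (roughly when h^2 |∂f/∂y|/6 < 1); the predictor-corrector formulations discussed below avoid this iteration altogether.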
Another scheme that uses a Falkner formula is a method due to Gear [11], which consists of the two implicit formulas given by

    y_{n+1} = y_n + h y'_n + \frac{h^2}{12} \left( y''_{n+1} + 6 y''_n - y''_{n-1} \right) ,     (3.8)

    y'_{n+1} = y'_n + \frac{h}{12} \left( 5 y''_{n+1} + 8 y''_n - y''_{n-1} \right) ,     (3.9)

where the second formula is the three-step method in (3.5), but the first of these formulas is not the three-step one in (3.4).

The application of any of such procedures to (3.1) results in an implicit system that must be solved at each step, involving a great computational cost; in practice, however, an explicit formulation in predictor-corrector mode is frequently used. In this way, the implicit methods in (3.4)-(3.5) are adequately combined with the explicit ones in (3.2)-(3.3) so as to avoid having to solve an algebraic system at each step. This P-C formulation may also be used as a starting method, as for example in [6], where the one-step implicit Falkner method in predictor-corrector mode is used to provide the starting values in the application of De Vogelaere's method. Other examples of such procedures may be found in [1], [12] or [28].

In recent years there seems to be a competition to find the numerical method that performs best in solving different types of problems. This is not the goal of this article; in some cases the proposed methods will do better and in other cases they will perform worse than other specialized methods. The question is rather how these general-purpose methods can be formulated and why they work or do not, developing an analysis of the errors involved and the stability characteristics. In any case, we can say that the methods presented here may constitute a reliable alternative to other methods in the literature for solving special second-order problems.

The paper is organized as follows. In the following section different formulations of the explicit and implicit Falkner methods are presented. The analysis of the explicit methods was done in [23], while the analysis of the implicit formulas formulated in P-C mode will be done here. Section 3.3 is devoted to the analysis of the local truncation errors and accumulated truncation errors. Section 3.4 deals with the stability analysis, which is of vital importance in the application of the methods. In Section 3.5 some examples are presented to show the performance of the different formulations. In the final section some conclusions put an end to the article.

3.2 Formulations of Falkner methods in P-C mode

In the application of P-C modes, P indicates the application of the explicit method, in our case the predictor given by the formula in (3.2), and C indicates the application of the implicit method, that is, the corrector given by the formula in (3.4). For the derivative, we will use P' to indicate the application of the explicit formula in (3.3), and C' to indicate the application of the implicit formula in (3.5). Finally, E refers to the evaluation of the function f in order to use this value either for corrections or in the next step.
Two different implementations for the explicit methods were considered in [23], and for the implicit methods three implementations will be considered here. They are summarized in what follows.

3.2.1 Explicit methods

3.2.1.1 PP’E mode

The first formulation of the explicit Falkner method on each step for solving the problem in (3.1) consists of:
1. Evaluate yn+1 using the formula in (3.2)
2. Evaluate y′n+1 using the formula in (3.3)
3. Evaluate fn+1 = f (xn+1 , yn+1 )

3.2.1.2 PEC' mode

The second formulation of the explicit Falkner method on each step for solving the
problem in (3.1) reads
1. Evaluate yn+1 using the formula in (3.2)
2. Evaluate fn+1 = f (xn+1 , yn+1 )
3. Evaluate y′n+1 using the formula in (3.5)
which can be accomplished due to the absence of the derivative in the function f = f(x, y(x)). Thus, having obtained the value y_{n+1} it is straightforward to obtain f_{n+1} to be used in the formula in (3.5). Note that in this way the "implicit formula" in (3.5) is no longer implicit, resulting in an explicit formulation of the method.

For the PP'E mode the accumulated truncation error is of order O(h^k), while for the PEC' mode it is of order O(h^{k+1}) (for details see [23]). Thus, the second formulation of the explicit Falkner method (PEC' mode) provides a better performance.
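A compact sketch of one step in PEC' mode is given below (Python; the helper backward_differences and the function name pec_step are labels chosen for this illustration). It assumes that the coefficients β_0, ..., β_{k-1} and γ*_0, ..., γ*_k have already been computed, for instance from the generating functions of Section 3.1.

```python
def backward_differences(F):
    """F = [f_{m-k+1}, ..., f_m], newest value last.
    Returns [∇^0 f_m, ∇^1 f_m, ..., ∇^{k-1} f_m]."""
    diffs = [F[-1]]
    work = list(F)
    while len(work) > 1:
        work = [work[i + 1] - work[i] for i in range(len(work) - 1)]
        diffs.append(work[-1])
    return diffs

def pec_step(f, x_n, y_n, dy_n, F, h, beta, gamma_star):
    """One step of the explicit Falkner method in PEC' mode.
    F = [f_{n-k+1}, ..., f_n] holds the k most recent f-values."""
    k = len(F)
    d = backward_differences(F)                    # ∇^j f_n, j = 0, ..., k-1
    # P: predict y_{n+1} with formula (3.2)
    y_next = y_n + h*dy_n + h**2 * sum(beta[j]*d[j] for j in range(k))
    # E: evaluate f at the new point
    f_next = f(x_n + h, y_next)
    # C': formula (3.5) for the derivative, explicit here since f_{n+1} is known
    d_new = backward_differences(F + [f_next])     # ∇^j f_{n+1}, j = 0, ..., k
    dy_next = dy_n + h * sum(gamma_star[j]*d_new[j] for j in range(k + 1))
    return y_next, dy_next, f_next
```

In a driver, f_next would be appended to the history F (discarding the oldest entry) before taking the next step, and the k starting values would be produced, as noted above, by a Runge-Kutta method.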

3.2.2 Implicit methods

3.2.2.1 P’PECE mode

The first choice for the implementation of the implicit Falkner method is given by

1. Evaluate y'_{n+1} using the formula in (3.3) to obtain y'^P_{n+1}
2. Evaluate y_{n+1} using the formula in (3.2) to obtain y^P_{n+1}
3. Evaluate f_{n+1} = f(x_{n+1}, y^P_{n+1})
4. Evaluate y_{n+1} using the formula in (3.4) to obtain y^C_{n+1}
5. Evaluate f_{n+1} = f(x_{n+1}, y^C_{n+1})

3.2.2.2 PEC'CE mode

The second possibility is given by


1. Evaluate y_{n+1} using the formula in (3.2) to obtain y^P_{n+1}
2. Evaluate f_{n+1} = f(x_{n+1}, y^P_{n+1})
3. Evaluate y'_{n+1} using the formula in (3.5) to obtain y'^C_{n+1}
4. Evaluate y_{n+1} using the formula in (3.4) to obtain y^C_{n+1}
5. Evaluate f_{n+1} = f(x_{n+1}, y^C_{n+1})

3.2.2.3 PECEC' mode

The last implementation consists of:

1. Evaluate y_{n+1} using the formula in (3.2) to obtain y^P_{n+1}
2. Evaluate f_{n+1} = f(x_{n+1}, y^P_{n+1})
3. Evaluate y_{n+1} using the formula in (3.4) to obtain y^C_{n+1}
4. Evaluate f_{n+1} = f(x_{n+1}, y^C_{n+1})
5. Evaluate y'_{n+1} using the formula in (3.5) to obtain y'^C_{n+1}
Note that in the above formulations, once we have obtained the final value for y_{n+1} given by y^C_{n+1} we have to evaluate f_{n+1} = f(x_{n+1}, y^C_{n+1}), which will be used in the next step. This evaluation is indicated by the last E in the different modes.
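For comparison with the explicit PEC' sketch given in Section 3.2.1.2, one step of the implicit method in PECEC' mode could read as follows (again a hedged sketch in Python; backward_differences is the helper defined there, and beta, beta_star, gamma_star hold the coefficients of (3.2), (3.4) and (3.5)).

```python
def pecec_step(f, x_n, y_n, dy_n, F, h, beta, beta_star, gamma_star):
    """One step of the implicit Falkner method in PECEC' mode.
    F = [f_{n-k+1}, ..., f_n]; beta has k entries, while beta_star and
    gamma_star have k + 1 entries."""
    k = len(F)
    x_next = x_n + h
    # P: predict y_{n+1} with the explicit formula (3.2)
    d = backward_differences(F)
    y_p = y_n + h*dy_n + h**2 * sum(beta[j]*d[j] for j in range(k))
    # E: evaluate f at the predicted value
    f_p = f(x_next, y_p)
    # C: correct y_{n+1} with the implicit formula (3.4), using f_p for f_{n+1}
    d_p = backward_differences(F + [f_p])
    y_c = y_n + h*dy_n + h**2 * sum(beta_star[j]*d_p[j] for j in range(k + 1))
    # E: the final evaluation, reused both in C' and in the next step
    f_c = f(x_next, y_c)
    # C': update the derivative with the implicit formula (3.5), using f_c
    d_c = backward_differences(F + [f_c])
    dy_c = dy_n + h * sum(gamma_star[j]*d_c[j] for j in range(k + 1))
    return y_c, dy_c, f_c
```

In the PEC'CE mode the derivative would be corrected with f_p instead of f_c and the final evaluation would come last, while in the P'PECE mode the derivative is advanced with the explicit formula (3.3) before the prediction.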

Obviously, many more choices could have been considered, taking formulas with different numbers of steps and therefore different orders. Specifically, we have considered a predictor with k + 1 steps and a corrector with k steps, but we have not obtained any improvement over the methods with k steps in both the predictor and the corrector. One could also consider further corrections, obtaining methods of the forms P'P(EC)^n E, P(EC'C)^n E or P(EC)^n EC'. We have carried out different experiments, and in the latter case the interval of stability varies, resulting in a slightly larger interval, but with so little difference that the extra computational effort is not worthwhile. We have considered only P-C methods with the same number of steps in all the formulas (as was done by Shampine and Gordon [26]), where only one correction is performed, as is usually done for the Adams methods in P-C mode (see [18]).

3.3 Error analysis

3.3.1 Local truncation errors

Consider the formula obtained after approximating the function f in the exact relation

    y(x_{n+1}) = y(x_n) + h y'(x_n) + \int_{x_n}^{x_{n+1}} f(x, y(x)) (x_{n+1} - x) \, dx     (3.10)

by the interpolating polynomial on the grid points x_{n-(k-1)}, ..., x_n, and also consider the formula in (3.2). Using the localization hypothesis that y(x_j) = y_j, j = n-(k-1), ..., n, and following a procedure similar to that for the Adams formulas [18], we obtain that the local truncation error for the method in (3.2) is given by

    L_P[y(x_n); h] = y(x_{n+1}) - y^P_{n+1} = h^{k+2} y^{(k+2)}(\xi) \beta_k ,     (3.11)

where \xi is an internal point of the smallest interval containing x_{n-(k-1)}, ..., x_n.

Similarly, for the formula in (3.3) the local truncation error reads

    L_{P'}[y'(x_n); h] = y'(x_{n+1}) - y'^P_{n+1} = h^{k+1} y^{(k+2)}(\psi) \gamma_k ,     (3.12)

where, as before, \psi refers to an internal point.

For the formula in (3.4) the local truncation error is given by

    L_C[y(x_n); h] = y(x_{n+1}) - y^C_{n+1} = h^{k+3} y^{(k+3)}(\xi) \beta^*_{k+1} ,     (3.13)

and similarly, for the formula in (3.5) the local truncation error is

    L_{C'}[y'(x_n); h] = y'(x_{n+1}) - y'^C_{n+1} = h^{k+2} y^{(k+3)}(\psi) \gamma^*_{k+1} ,     (3.14)

where we have denoted by \xi or \psi the internal points; if they have to be different, we will use \xi_j or \psi_j, as in the following section.

3.3.2 Accumulated truncation errors for the implicit Falkner method in P-C modes

3.3.2.1 P’PECE mode

Assuming that we have to integrate the problem in (3.1) on the interval [x_0, x_N] and that we know in advance the starting values needed to apply the numerical scheme, we proceed by analyzing the accumulated errors on each successive application of the method along the grid points on the integration interval. We use a superscript P to indicate the application of the explicit method and a superscript C to indicate the application of the implicit one, regardless of whether the formula is for the solution or for the derivative.

Assuming the localization hypothesis and using the formulas for the local truncation errors in (3.11) and (3.12), for the first step (n = 0) we have that the differences between the true values, y(x_1), y'(x_1), and the approximated ones, y^P_1, y'^P_1, are given respectively by the local truncation errors, that is,

    y'(x_1) - y'^P_1 = h^{k+1} y^{(k+2)}(\psi_1) \gamma_k ,     (3.15)

    y(x_1) - y^P_1 = h^{k+2} y^{(k+2)}(\xi_1) \beta_k ,     (3.16)

where, for convenience, in the sequel the \xi_j and \psi_j will denote appropriate internal points.

After evaluating f(x_1, y^P_1), using the Mean Value Theorem we can write

    f(x_1, y(x_1)) - f^P_1 = \frac{\partial f}{\partial y}(x_1, \xi_1) \left( y(x_1) - y^P_1 \right) ,

where f^P_1 = f(x_1, y^P_1). Assuming the localization hypothesis, we have

    y(x_1) - y^C_1 = y(x_0) + h y'(x_0) + h^2 \sum_{j=0}^{k} \beta^*_j \nabla^j f(x_1, y(x_1)) + h^{k+3} y^{(k+3)}(\xi_1) \beta^*_{k+1}
                     - \left( y_0 + h y'_0 + h^2 \sum_{j=0}^{k} \beta^*_j \nabla^j f^P_1 \right)
                   = h^{k+3} y^{(k+3)}(\xi_1) \beta^*_{k+1} + O(h^{k+4}) .     (3.17)

After evaluating f(x_1, y^C_1), for the next step (n = 1) we have that

    y'(x_2) - y'^P_2 = y'(x_1) + h \sum_{j=0}^{k-1} \gamma_j \nabla^j f(x_1, y(x_1)) + h^{k+1} y^{(k+2)}(\psi_2) \gamma_k
                       - \left( y'^P_1 + h \sum_{j=0}^{k-1} \gamma_j \nabla^j f^C_1 \right) ,     (3.18)

where we have used the values of the previous step, y'^P_1 and y^C_1, and have used the notation f^C_1 = f(x_1, y^C_1). After some calculations, using the formulas in (3.15) and (3.17), it results that

    y'(x_2) - y'^P_2 = h^{k+1} \gamma_k \left( y^{(k+2)}(\psi_1) + y^{(k+2)}(\psi_2) \right) + O(h^{k+2}) .     (3.19)

Similarly, for the predictor we have

    y(x_2) - y^P_2 = h^{k+2} \beta_k y^{(k+2)}(\xi_2) + h^{k+2} y^{(k+2)}(\xi_1) \gamma_k + O(h^{k+3}) .     (3.20)

And finally, for the corrector we get that

    y(x_2) - y^C_2 = y(x_1) + h y'(x_1) + h^2 \sum_{j=0}^{k} \beta^*_j \nabla^j f(x_1, y(x_1)) + h^{k+3} y^{(k+3)}(\xi_2) \beta^*_{k+1}
                     - \left( y^C_1 + h y'^P_1 + h^2 \sum_{j=0}^{k} \beta^*_j \nabla^j f^P_2 \right)
                   = h^{k+3} \beta^*_{k+1} \left( y^{(k+3)}(\xi_1) + y^{(k+3)}(\xi_2) \right) + h^{k+2} y^{(k+2)}(\psi_1) \gamma_k + O(h^{k+4}) .     (3.21)

Repeating the procedure along the nodes on the integration interval we determine that the accumulated error at the final point x_N is given by

    y(x_N) - y^C_N = h^{k+2} \gamma_k \sum_{j=1}^{N-1} (N - j) y^{(k+2)}(\psi_j) + h^{k+3} \beta^*_{k+1} \sum_{j=1}^{N} y^{(k+3)}(\xi_j) + O(h^{k+4}) ,     (3.22)

and for the derivative we have

    y'(x_N) - y'^P_N = h^{k+1} \gamma_k \sum_{j=1}^{N} y^{(k+2)}(\psi_j) + O(h^{k+2}) .     (3.23)

Assuming that the derivatives y^{(k+2)}(x) and y^{(k+3)}(x) are continuous, after using the Mean Value Theorem, the above formulas may be rewritten as follows:

    y(x_N) - y^C_N = \frac{1}{2} h^k \gamma_k y^{(k+2)}(\psi) (x_N - x_0)(x_N - x_1) + h^{k+2} \beta^*_{k+1} y^{(k+3)}(\xi) (x_N - x_0) + O(h^{k+4})     (3.24)

and

    y'(x_N) - y'^P_N = h^k \gamma_k (x_N - x_0) y^{(k+2)}(\psi) + O(h^{k+2}) .     (3.25)

3.3.2.2 PEC'CE mode

In this mode, for the first step (n = 0), after the application of the predictor P and the corrector C we have the same results as before, that is,

    y(x_1) - y^P_1 = h^{k+2} y^{(k+2)}(\xi_1) \beta_k ,

and

    y(x_1) - y^C_1 = h^{k+3} y^{(k+3)}(\xi_1) \beta^*_{k+1} + O(h^{k+4}) .

The difference with respect to the previous mode now lies in the application of the corrector C' to obtain the approximation for the derivative. We have

    y'(x_1) - y'^C_1 = y'(x_0) + h \sum_{j=0}^{k} \gamma^*_j \nabla^j f(x_1, y(x_1)) + h^{k+2} y^{(k+3)}(\psi_1) \gamma^*_{k+1}
                       - \left( y'^C_0 + h \sum_{j=0}^{k} \gamma^*_j \nabla^j f^P_1 \right)
                     = h^{k+2} y^{(k+3)}(\psi_1) \gamma^*_{k+1} + O(h^{k+3}) .     (3.26)

After the evaluation of f(x_1, y^C_1) and the application of the predictor P to obtain an estimate y^P_2 for y(x_2), the application of the corrector C results in

    y(x_2) - y^C_2 = h^{k+3} \beta^*_{k+1} \left( y^{(k+3)}(\xi_1) + y^{(k+3)}(\xi_2) \right) + h^{k+3} y^{(k+3)}(\psi_1) \gamma^*_{k+1} + O(h^{k+4}) ,     (3.27)

while the application of the corrector C' produces

    y'(x_2) - y'^C_2 = h^{k+2} \gamma^*_{k+1} \left( y^{(k+3)}(\psi_1) + y^{(k+3)}(\psi_2) \right) + O(h^{k+3}) .     (3.28)

Repeating the procedure along the nodes on the integration interval we obtain that the accumulated error at the final point x_N is given by

    y(x_N) - y^C_N = h^{k+3} \gamma^*_{k+1} \sum_{j=1}^{N-1} (N - j) y^{(k+3)}(\psi_j) + h^{k+3} \beta^*_{k+1} \sum_{j=1}^{N} y^{(k+3)}(\xi_j) + O(h^{k+4}) ,     (3.29)

while for the derivative we have the accumulated error given by

    y'(x_N) - y'^C_N = h^{k+2} \gamma^*_{k+1} \sum_{j=1}^{N} y^{(k+3)}(\psi_j) + O(h^{k+3}) .     (3.30)

Assuming that y^{(k+3)}(x) is continuous, the above formulas for the accumulated errors at the final point x_N may be written as

    y(x_N) - y^C_N = \frac{1}{2} h^{k+1} \gamma^*_{k+1} y^{(k+3)}(\psi) (x_N - x_0)(x_N - x_1) + h^{k+2} \beta^*_{k+1} y^{(k+3)}(\xi) (x_N - x_0) + O(h^{k+4})     (3.31)

for the solution, and

    y'(x_N) - y'^C_N = h^{k+1} \gamma^*_{k+1} (x_N - x_0) y^{(k+3)}(\psi) + O(h^{k+3})     (3.32)

for the derivative.

3.3.2.3 PECEC' mode

The difference with respect to the previous mode now lies in the application of the corrector C' to obtain the values y'^C_n, using f(x_n, y^C_n) instead of f(x_n, y^P_n). The first difference from the previous mode is observed in the first step (n = 0) and turns out to be

    y'(x_1) - y'^C_1 = y'(x_0) + h \sum_{j=0}^{k} \gamma^*_j \nabla^j f(x_1, y(x_1)) + h^{k+2} y^{(k+3)}(\psi_1) \gamma^*_{k+1}
                       - \left( y'^C_0 + h \sum_{j=0}^{k} \gamma^*_j \nabla^j f^C_1 \right)
                     = h^{k+2} y^{(k+3)}(\psi_1) \gamma^*_{k+1} + O(h^{k+3}) .     (3.33)

Note that the only difference between the formulas in (3.26) and (3.33) is that f^P_1 has been substituted by f^C_1, but this does not change the principal term of the errors. This means that the principal terms in the errors for the final formulas are the same as in the previous mode, and so the formulas in (3.31) and (3.32) remain valid in this mode. This, however, does not mean that the errors are equal, since the remaining terms are different. But since the principal terms are the same the errors will be similar.

For comparison purposes we have included in Table 3.1 the principal terms of the accumulated errors for the different implementations of the explicit and implicit Falkner methods. These terms will be denoted by PT(y(x_N) - y_N) for the solution, and PT(y'(x_N) - y'_N) for the derivative, where the approximated values of the solution and the derivative at the final point are indicated by y_N and y'_N respectively. The formulations of the explicit methods with k steps are named FEABk and FEAMk as in [23], while the implicit methods with k steps formulated in P-C modes are named FI[1]k, FI[2]k and FI[3]k, corresponding to the P'PECE, PEC'CE and PECEC' modes respectively in the above section.

Method   PT(y(x_N) - y_N)                                                          PT(y'(x_N) - y'_N)
FEABk    (1/2) h^k y^{(k+2)}(\psi)(x_N - x_0)(x_N - x_1) \gamma_k                  h^k y^{(k+2)}(\psi)(x_N - x_0) \gamma_k
FEAMk    (1/2) h^{k+1} y^{(k+3)}(\psi)(x_N - x_0)(x_N - x_1) \gamma^*_{k+1}        h^{k+1} y^{(k+3)}(\psi)(x_N - x_0) \gamma^*_{k+1}
           + h^{k+1} y^{(k+2)}(\xi)(x_N - x_0) \beta_k
FI[1]k   (1/2) h^k y^{(k+2)}(\psi)(x_N - x_0)(x_N - x_1) \gamma_k                  h^k y^{(k+2)}(\psi)(x_N - x_0) \gamma_k
FI[2]k   (1/2) h^{k+1} y^{(k+3)}(\psi)(x_N - x_0)(x_N - x_1) \gamma^*_{k+1}        h^{k+1} y^{(k+3)}(\psi)(x_N - x_0) \gamma^*_{k+1}
FI[3]k   (same principal terms as FI[2]k)                                          (same as FI[2]k)

Table 3.1 Principal terms of the accumulated errors for the different implementations of the explicit and implicit Falkner methods.

Remark 1. Note that the above methods may be applied to a system of second-order differential equations of the form

    y''(x) = f(x, y(x)) , \quad x ∈ [x_0, x_N] , \qquad y(x_0) = y_0 , \quad y'(x_0) = y'_0 ,

where y : [x_0, x_N] → IR^m and f : [x_0, x_N] × IR^m → IR^m, using a componentwise implementation.

3.4 Stability

In the context of ordinary differential equations, the concept of stability refers to the extent to which a numerical scheme is appropriate for solving an initial-value problem. Roughly speaking, a given method can be said to be stable if small changes in the data result in small changes in the solution obtained.

A procedure commonly used to study stability (zero-stability) consists in writing the difference equations of the method as a one-step recurrence in a space of higher dimension and using an adequate norm to bound the finite powers of the resulting matrices. For the different combinations of the implicit method in (3.4) used to get the above P-C modes, a procedure similar to that in [23] may be considered to obtain zero-stability. However, zero-stability is only a minimal condition for a numerical method; for second-order equations the so-called P-stability is of concern, together with A-stability.

In order to determine whether a numerical method will produce reasonable results with a given value of h > 0, we need a notion of stability that is different from zero-stability. The stability properties are analyzed by using the linear test equation introduced by Lambert and Watson [19],

    y''(x) = -\mu^2 y(x) , \quad with \mu > 0 .     (3.34)


The application of the above predictor-corrector formulations of the implicit Falkner method to this problem yields the following recursion

    Y_n = M[i] Y_{n-1} ,     (3.35)

where Y_n is the (k+1)-vector given by Y_n = ( y_{n+1}, y_n, ..., y_{n-(k-2)}, h y'_{n+1} )^T, and M[i], i = 1, 2, 3, are (k+1) × (k+1) matrices whose elements depend on H^2 and the coefficients of the method, where H = \mu h. The matrices M[i] are called the stability matrices, and the index i in M[i] is used to denote each implementation according to its appearance in the above section, that is, i = 1 stands for the first P-C mode, i = 2 for the second P-C mode, and i = 3 for the last one.

The entries of each of the stability matrices are given by

    M[1]_{1,1} = 1 + H^2 \sum_{j=0}^{k} \left( j - 1 + H^2 \sum_{s=0}^{k-1} \beta_s \right) \beta^*_j ,

    M[1]_{1,l+1} = (-1)^l H^2 \sum_{j=0}^{k} \left( \binom{j}{l+1} + H^2 \sum_{s=l}^{k-1} \beta_s \binom{s}{l} \right) \beta^*_j ,   l = 1, ..., k-1 ,

    M[1]_{j,l} = \delta_{j\,l+1} ,   j = 2, ..., k ,   l = 1, ..., k+1 ,

    M[1]_{k+1,l+1} = (-1)^{l+1} H^2 \sum_{j=l}^{k-1} \gamma_j \binom{j}{l} ,   l = 0, ..., k-1 ,

    M[1]_{k+1,k+1} = 1 ,

where \delta_{j\,l+1} is the Kronecker delta and notations of the type \binom{j}{l} refer to the binomial coefficients.

For the matrices M[2] and M[3] all the entries are the same as for M[1] except for the last row, which for M[2] reads

    M[2]_{k+1,1} = H^2 \sum_{j=0}^{k} \left( j - 1 + H^2 \sum_{s=0}^{k-1} \beta_s \right) \gamma^*_j ,

    M[2]_{k+1,l+1} = (-1)^l H^2 \sum_{j=0}^{k} \left( \binom{j}{l+1} + H^2 \sum_{s=l}^{k-1} \beta_s \binom{s}{l} \right) \gamma^*_j ,   l = 1, ..., k-1 ,

    M[2]_{k+1,k+1} = 1 - H^2 \sum_{j=0}^{k} \gamma^*_j ,

and, similarly, for M[3] the last row results in

    M[3]_{k+1,1} = H^2 \sum_{j=0}^{k} \left( j - 1 + H^2 \sum_{s=0}^{k} \left( 1 - s - H^2 \sum_{t=0}^{k-1} \beta_t \right) \beta^*_s \right) \gamma^*_j ,

    M[3]_{k+1,l+1} = (-1)^l H^2 \sum_{j=0}^{k} \left( \binom{j}{l+1} - H^2 \sum_{t=0}^{k} \left( \binom{t}{l+1} + H^2 \sum_{s=l}^{k-1} \beta_s \binom{s}{l} \right) \beta^*_t \right) \gamma^*_j ,   l = 1, ..., k-1 ,

    M[3]_{k+1,k+1} = 1 - H^2 \sum_{j=0}^{k} \left( 1 - H^2 \sum_{s=0}^{k} \beta^*_s \right) \gamma^*_j .

For example, when k = 3 the equation in (3.35) in the case of M[1] reads

    Y_n = \begin{pmatrix}
      \frac{19(19H^2-132)H^2}{4320} + 1  &  \frac{H^2}{10} - \frac{19H^4}{432}  &  \frac{H^2(19H^2-28)}{1440}  &  1 - \frac{19H^2}{180} \\
      1 & 0 & 0 & 0 \\
      0 & 1 & 0 & 0 \\
      -\frac{23H^2}{12}  &  \frac{4H^2}{3}  &  -\frac{5H^2}{12}  &  1
    \end{pmatrix} Y_{n-1} ,

while the same equation in the case of M[2] results in

    Y_n = \begin{pmatrix}
      \frac{19(19H^2-132)H^2}{4320} + 1  &  \frac{H^2}{10} - \frac{19H^4}{432}  &  \frac{H^2(19H^2-28)}{1440}  &  1 - \frac{19H^2}{180} \\
      1 & 0 & 0 & 0 \\
      0 & 1 & 0 & 0 \\
      \frac{19H^4}{64} - \frac{7H^2}{6}  &  \frac{5H^2(4-3H^2)}{96}  &  \frac{H^2(9H^2-8)}{192}  &  1 - \frac{3H^2}{8}
    \end{pmatrix} Y_{n-1} ,

and finally, for the third implementation the matrix M[3] is given by

    \begin{pmatrix}
      \frac{19(19H^2-132)H^2}{4320} + 1  &  \frac{H^2}{10} - \frac{19H^4}{432}  &  \frac{H^2(19H^2-28)}{1440}  &  1 - \frac{19H^2}{180} \\
      1 & 0 & 0 & 0 \\
      0 & 1 & 0 & 0 \\
      \frac{2508H^4-361H^6-13440H^2}{11520}  &  \frac{95H^6-216H^4+1200H^2}{5760}  &  \frac{28H^4-19H^6-160H^2}{3840}  &  \frac{19H^4-180H^2}{480} + 1
    \end{pmatrix} .

The behaviour of the numerical solution will depend on the eigenvalues of the matrices M[i], and the stability properties of the methods will be characterized by the spectral radius, \rho(M[i]). According to the terminology introduced by Coleman and Ixaru [7] we have:
• (0, H_s) is an interval of stability for a given method if for all H ∈ (0, H_s) it holds that |r_i| < 1, where r_i, i = 1, ..., k+1, are the eigenvalues of the stability matrix.
• (0, H_p) is an interval of periodicity for a given method if for all H ∈ (0, H_p) the eigenvalues of the stability matrix, r_i, i = 1, ..., k+1, satisfy

    r_1 = e^{i\theta(H)} ,   r_2 = e^{-i\theta(H)} ,   |r_i| ≤ 1 , i > 2 ,

where \theta(H) is a real function.


In particular, if the interval of stability is (0, ∞) the method is A-stable, and if the
interval of periodicity is (0, ∞) the method is P-stable. Note that A-stability is more
restrictive than P-stability, that is, A-stability implies P-stability.

The practical significance of the interval of stability is that, for a given \mu in (3.34), there is no explosion of the error in the numerical solution when 0 < h < H_s/\mu [7]. The interval of periodicity defines the stepsizes which can be used in order for the approximation of the solution of problems with highly oscillatory or periodic solutions to be of the same order as the algebraic order of the method. When 0 < h < H_p/\mu the numerical solution defined by (3.35) is also periodic, as is the exact solution of the test model (3.34) for all non-trivial initial conditions on y and y'.

A crucial difference between A-stability and P-stability is that for A-stable methods the stability matrix satisfies (M[i])^n → 0 as n → ∞ because \rho(M[i]) < 1, but for P-stable methods this is not possible because \rho(M[i]) = 1 (see [10]). Therefore, A-stable methods damp the initial errors, whereas for P-stable methods the initial errors do not diminish as the integration progresses in time.
If we let the value \mu in (3.34) be complex, instead of intervals we obtain stability or periodicity regions in the H-complex plane.
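As a rough numerical check of these definitions (a sketch in Python/NumPy written for this presentation, not part of the original analysis), one can transcribe the 4 × 4 matrix M[1] displayed above for k = 3 and scan H for the spectral radius; the scan should stop near H_s ≈ 0.853006, the value reported for FI[1]3 in Table 3.2 below.

```python
# A minimal sketch, assuming NumPy: locate the primary stability interval of
# the P'PECE mode with k = 3 by scanning the spectral radius of M[1](H).
import numpy as np

def M1_k3(H):
    """Stability matrix M[1] for k = 3, transcribed from the 4x4 matrix above."""
    H2, H4 = H**2, H**4
    return np.array([
        [19*(19*H2 - 132)*H2/4320 + 1, H2/10 - 19*H4/432,
         H2*(19*H2 - 28)/1440, 1 - 19*H2/180],
        [1, 0, 0, 0],
        [0, 1, 0, 0],
        [-23*H2/12, 4*H2/3, -5*H2/12, 1],
    ])

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

H_s = 0.0
for H in np.arange(1e-3, 2.0, 1e-3):
    # a tiny tolerance absorbs roundoff for eigenvalues very close to |r| = 1
    if spectral_radius(M1_k3(H)) < 1.0 + 1e-9:
        H_s = H
    else:
        break
print("approximate primary stability interval: (0, %.3f)" % H_s)
```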

The importance of determining the stability characteristics will be shown by considering Gear's method in (3.8)-(3.9). This is an implicit method for which we need a predicted value in order to formulate it in P-C mode. Taking as predictor the two-step explicit Falkner method given by

    y_{n+1} = y_n + h y'_n + \frac{h^2}{6} \left( 4 y''_n - y''_{n-1} \right)

to obtain y^P_{n+1}, after evaluating f(x_{n+1}, y^P_{n+1}) and applying the formula in (3.8) to obtain y^C_{n+1} we can proceed in two ways:
1. we can evaluate f(x_{n+1}, y^C_{n+1}) with the best approximation we have for y_{n+1} and then use the formula in (3.9) to obtain the approximate value y'_{n+1}. We denote this formulation by P2ECEC', where the subindex 2 corresponds to the order of the predictor.
2. or we can use the previous evaluation f(x_{n+1}, y^P_{n+1}) in the formula in (3.9) to obtain the approximate value y'_{n+1}. The notation for this formulation is P2ECC'.
For the first choice the stability matrix results in

    \begin{pmatrix}
      \frac{H^4}{18} - \frac{7H^2}{12} + 1  &  -\frac{1}{72} H^2 (H^2 - 6)  &  1 - \frac{H^2}{12} \\
      1 & 0 & 0 \\
      \frac{-10H^6 + 105H^4 - 468H^2}{432}  &  \frac{H^2 \left( 5(H^2-6)H^2 + 72 \right)}{864}  &  \frac{5(H^2-12)H^2}{144} + 1
    \end{pmatrix} ,

for which there are neither stability nor periodicity intervals.


In the second case the stability matrix is given by

    \begin{pmatrix}
      \frac{H^4}{18} - \frac{7H^2}{12} + 1  &  -\frac{1}{72} H^2 (H^2 - 6)  &  1 - \frac{H^2}{12} \\
      1 & 0 & 0 \\
      \frac{1}{36} H^2 (10H^2 - 39)  &  \frac{1}{72} H^2 (6 - 5H^2)  &  1 - \frac{5H^2}{12}
    \end{pmatrix} ,

and the stability interval is given by (0, 1.906123). Although at first glance one could intuitively suppose that the first procedure would be the best, this is not true. Not only does the first procedure require more computational cost (two evaluations of the function f per step versus one evaluation in the second case), but it also has worse stability properties.

Nevertheless, if we consider as predictor the three-step explicit Falkner method given by

    y_{n+1} = y_n + h y'_n + \frac{h^2}{24} \left( 19 y''_n - 10 y''_{n-1} + 3 y''_{n-2} \right)

to obtain y^P_{n+1}, after evaluating f(x_{n+1}, y^P_{n+1}) and applying the formula in (3.8) to obtain y^C_{n+1} we can also proceed in two ways:
1. we can evaluate f(x_{n+1}, y^C_{n+1}) with the best approximation we have for y_{n+1} and then use the formula in (3.9) to obtain the approximate value y'_{n+1}. Now the notation for this formulation is P3ECEC'.
2. or we can use the previous value f(x_{n+1}, y^P_{n+1}) in the formula in (3.9) to obtain the approximate value y'_{n+1}. The notation for this formulation is P3ECC'.
Now in the first case the stability matrix is given by

    \begin{pmatrix}
      \frac{19H^4}{288} - \frac{7H^2}{12} + 1  &  \frac{1}{144} H^2 (12 - 5H^2)  &  \frac{H^4}{96}  &  1 - \frac{H^2}{12} \\
      1 & 0 & 0 & 0 \\
      0 & 1 & 0 & 0 \\
      \frac{-95H^6 + 840H^4 - 3744H^2}{3456}  &  \frac{H^2 (25H^4 - 60H^2 + 144)}{1728}  &  -\frac{5H^6}{1152}  &  \frac{5(H^2-12)H^2}{144} + 1
    \end{pmatrix}

and the stability interval is given by (0, 0.585417). In the second case the stability
matrix is given by
    \begin{pmatrix}
      \frac{19H^4}{288} - \frac{7H^2}{12} + 1  &  \frac{1}{144} H^2 (12 - 5H^2)  &  \frac{H^4}{96}  &  1 - \frac{H^2}{12} \\
      1 & 0 & 0 & 0 \\
      0 & 1 & 0 & 0 \\
      \frac{1}{288} H^2 (95H^2 - 312)  &  \frac{1}{144} H^2 (12 - 25H^2)  &  \frac{5H^4}{96}  &  1 - \frac{5H^2}{12}
    \end{pmatrix} ,

for which there are neither stability nor periodicity intervals. This time the first procedure is the best concerning the stability characteristics.
In Figure 3.1 the stability regions are depicted (left: taking as predictor the two-step explicit Falkner method and using f(x_{n+1}, y^P_{n+1}) in the formula in (3.9); right: taking as predictor the three-step explicit Falkner method and using f(x_{n+1}, y^C_{n+1}) in the formula in (3.9)).

Fig. 3.1 Absolute stability regions for different formulations of Gear's method in P-C mode (left: P2ECC'; right: P3ECEC').

These unexpected results show the importance of determining the stability regions in order to obtain appropriate results when applying a numerical method. The numerical results in the following section will confirm this assertion (Tables 3.3 and 3.5 provide a confirmation of the theoretical expectations on the stability of the derived methods).

For the methods presented in the previous sections we have obtained that they are neither A-stable nor P-stable up to k = 14 (it is also known that there are no A-stable Adams methods for y' = f(x, y)). There exists only one interval of periodicity, corresponding to the third mode when k = 1, this interval being [0, \sqrt{6}]. In Table 3.2 the intervals of stability are presented from k = 1 up to k = 14, where k refers to the number of steps, for the different formulations, named FI[1]k, FI[2]k, and FI[3]k respectively, according to the notation used in the above sections.

All these values have been obtained with the help of the Mathematica program. We have considered only the primary stability intervals, although for different methods there may exist secondary stability intervals, as for the second P-C mode with k = 2, for which [1, 2] is an interval where |r_i| < 1, i = 1, 2, 3.

k FI[1]k FI[2]k FI[3]k

1 ∅ (0, 2) ∅
2 ∅ ∅ ∅
3 (0, 0.853006) ∅ ∅
4 (0, 0.930949) (0, 0.534947) (0, 1.108998)
5 ∅ (0, 0.925569) (0, 1.409664)
6 ∅ ∅ ∅
7 (0, 0.401364) ∅ ∅
8 (0, 0.285341) (0, 0.274630) (0, 0.480033)
9 ∅ (0, 0.521074) (0, 0.821956)
10 ∅ ∅ ∅
11 (0, 0.100705) ∅ ∅
12 (0, 0.071103) (0, 0.185094) (0, 0.300133)
13 ∅ (0, 0.362275) (0, 0.390144)
14 ∅ ∅ ∅

Table 3.2 Intervals of stability for the different formulations of the Falkner implicit method in P-C
modes.

We have included different plots showing the stability intervals. Figure 3.2 shows the absolute value of the eigenvalues of M[2] and M[3] for k = 1 versus H, and the resulting stability interval (0, H_s). For the matrix M[3] we have included the interval of periodicity (0, H_p). Figure 3.3 shows the absolute value of the eigenvalues of M[1] and M[2] for k = 3 versus H. This time, for the M[2] mode there is no primary stability interval, although a secondary interval of stability exists, (1.731413, 1.895953).

Figure 3.4 shows the absolute value of the eigenvalues of M[1] and M[3] for k = 5, showing in the case of M[1] a secondary stability interval given by (0.626085, 0.752697).
Finally, Figure 3.5 shows two eigenvalues limiting the stability regions in the case of M[2] and M[3] when k = 9. We have included as a reference in all the plots the horizontal line at a distance of one unit above the horizontal axis.

Fig. 3.2 Absolute value of the eigenvalues |r_i|, i = 1, 2, of the stability matrices for the P-C modes: M[2] (left) and M[3] (right) when k = 1.

Fig. 3.3 Absolute value of the eigenvalues |r_i|, i = 1, 2, 3, 4, of the stability matrices for the P-C modes: M[1] (left) and M[2] (right) when k = 3.

Fig. 3.4 Absolute values of the eigenvalues |r_i|, i = 1, ..., 6, of the stability matrices for the P-C modes: M[1] (left) and M[3] (right) when k = 5.

Figure 3.6 shows the stability regions for the methods FI[i]4, i = 1, 2, 3.

Fig. 3.5 Absolute value of two eigenvalues limiting the stability region for each of the stability matrices for the P-C modes: M[2] (left) and M[3] (right) when k = 9.

Fig. 3.6 Stability regions for each of the methods FI[i]4, i = 1, 2, 3 (from left to right).

Remark 2. In the application of the P-C modes we may choose whether or not to perform the final E, that is, the evaluation that updates the value of f_{n+1} for the next step. If we do not make this further evaluation, the mode FI[1]k turns out to be P'PEC instead of P'PECE, while the modes FI[2]k and FI[3]k turn out to be the same, namely PEC'C, and will be denoted by FĨ[i]k, i = 1, 2. We remark that the order of the accumulated truncation errors is the same whether or not the final E is performed, but the stability characteristics differ markedly in the two cases. In fact, if we do not make the last evaluation, the stability regions are smaller. In Figure 3.7 we include a plot with the stability regions for the modes FI[2]4 and FĨ[2]4.

Fig. 3.7 Stability regions for the methods FI[2]4 and FĨ[2]4.

3.5 Numerical examples

In this section we apply the above formulations in P-C mode to some examples to illustrate the performance of the implicit Falkner method. The numerical results obtained with these methods are compared with the results obtained with other numerical methods. The predictor-corrector formulations of the implicit Falkner methods in this paper will be denoted by FI[j]k, where j refers to the implementations 1, 2 or 3 in the above sections, and k refers to the number of steps of the method. That is, number 1 refers to the P'PECE mode in the above sections, 2 refers to the PEC'CE mode, and 3 refers to the PECEC' mode.

3.5.1 Example 1.

As a first example we take the classical problem used to test absolute stability, with particular initial values,

    y''(x) = -\mu^2 y(x) , \quad y(0) = 0 , \quad y'(0) = 1 ,     (3.36)

whose exact solution is

    y(x) = \frac{\sin(\mu x)}{\mu} .

The problem has been integrated on the interval [0, 5\pi] with \mu = 100, considering the explicit and implicit implementations, using different values of N and k. Table 3.3 shows the results obtained with k = 7, 8 and N = 3000, 4000. The values of h\mu, the CPU times (in seconds) and the maximum absolute error on the integration interval,

    MaxErr = \max_{j \in \{0, ..., N\}} |y(x_j) - y_j| ,

are included.

We observe that the time needed by the implicit methods is larger than for the explicit ones, as expected. Nevertheless, the accuracies obtained by the explicit eighth-order methods FEAB8 and FEAM8 are not acceptable. Concerning the FEAB7 and FEAM7 methods, for N = 3000 the errors are enormous, while for N = 4000 they are acceptable. These results are related to the stability intervals (see Table 2 in [23]), since when N = 4000 the value h\mu = 0.3926 is inside the absolute stability intervals of the methods FEAB7 and FEAM7, and it is not in the other cases.

In the case of the implicit methods there are only two situations in which the value of h\mu is within the range of the primary interval of absolute stability: for FI[1]7 and for FI[3]8 when N = 4000 (see Table 3.2). Why do the errors seem to behave well in some of the other cases? There are two possible reasons. First, there may be a secondary stability interval of the form [a, b] with a > 0, where the eigenvalues of the stability matrix are less than unity. Nevertheless, this situation does not occur with the data in Table 3.3 for this example.

On the other hand, it may occur that for some specific data the eigenvalues of the stability matrix, although larger than one in modulus, are very close to 1. In this situation the errors grow very slowly in each step, indicating that for these values of N the instability does not affect the results yet. This is the case for the data in Table 3.3. For example, for the method FI[2]8 the largest eigenvalue is 1.0000079 when N = 3000, and 1.00000024 when N = 4000, which justifies the not-so-bad behavior of the method.

3.5.2 Example 2.

The next example has appeared several times in the literature (see [4], [22]). The problem consists in

    y''(t) = -w^2 y(t) + (w^2 - 1) \sin(t) , \quad y(0) = 1 , \quad y'(0) = w + 1 ,     (3.37)

whose exact solution is

    y(t) = \cos(wt) + \sin(wt) + \sin(t) .

This problem has been solved in the interval [0, 100] with the methods FEAM4 and FĨ[2]4, taking w = 20. Figure 3.8 shows the efficiency of different methods that have appeared in the literature; the horizontal axis stands for the number of function evaluations required and the vertical axis stands for the decimal logarithm of the maximal global error of the solution. The methods used for comparison are
• RK4: the classical RK method of order four
• RK5: the classical RK method of order five presented in [13]

Method   N steps   h\mu     CPU(s.)   MaxErr(y)

FEAB7    3000      0.5235   0.344     3.0150 × 10^50
         4000      0.3926   0.484     3.1289 × 10^-3
FEAB8    3000      0.5235   0.391     6.0959 × 10^252
         4000      0.3926   0.547     2.7399 × 10^85
FEAM7    3000      0.5235   0.359     3.2187 × 10^114
         4000      0.3926   0.500     2.7601 × 10^-4
FEAM8    3000      0.5235   0.422     3.0925 × 10^367
         4000      0.3926   0.547     1.3965 × 10^160
FI[1]7   3000      0.5235   0.594     1.9224 × 10^185
         4000      0.3926   0.813     3.2240 × 10^-3
FI[1]8   3000      0.5235   0.703     2.3554 × 10^387
         4000      0.3926   0.906     1.8158 × 10^259
FI[2]7   3000      0.5235   0.610     5.4653 × 10^-4
         4000      0.3926   0.813     4.9368 × 10^-5
FI[2]8   3000      0.5235   0.688     2.4507 × 10^-4
         4000      0.3926   0.921     1.6627 × 10^-5
FI[3]7   3000      0.5235   0.610     4.5078 × 10^-4
         4000      0.3926   0.812     4.3100 × 10^-5
FI[3]8   3000      0.5235   0.672     1.9418 × 10^-4
         4000      0.3926   0.922     1.4224 × 10^-5

Table 3.3 Maximum absolute error of the solution for different implementations of the explicit and implicit Falkner methods in P-C mode for solving the problem in (3.36).

• EFRK4: the four-stage exponentially fitted RK method of order four given in [31]
• SIMOS4: the RK method of order four presented in [27]
• FRK5a: the seven-stage RK method of order five given by (4.17) and (4.20) in [4]
• FRK5b: the seven-stage RK method of order five given by (4.17) and (4.28) in [4]
It is obvious from the graphs that the Falkner methods perform better.

Fig. 3.8 Numerical results for Problem (3.37) with different Runge-Kutta type methods and with the methods FEAM4 and FĨ[2]4.

3.5.3 Example 3.

We consider the following nonlinear problem studied by Chawla and Rao in [3]:

    y''(x) + 100 y = \sin(y) , \quad y(0) = 0 , \quad y'(0) = 1 .     (3.38)

For this example, as there is no known analytical solution, we perform the computation on the interval [0, 20\pi] and measure the absolute error at the final point,
taking the value y(20π ) = 0.000392823991 (see [29]). In Table 3.4 we present the
absolute global error at x = 20π with the number of function evaluations, F.Ev., for
the following methods
• Method 1: the well-known Runge-Kutta-Nyström method in [8]
• Method 2: the explicit eighth-order method with maximal interval of periodicity in [29]
• Method 3: the sixth-order hybrid method of Chawla and Rao in [3]
• Method 4: the P-C eighth-order method FI[3]8 in this paper
• Method 5: the P-C eighth-order method FĨ[3]8 in this paper

We note that while the errors are similar for all the methods, the methods FI[3]8 and FĨ[3]8 produce the best performance, the formulation FĨ[3]8 being the one that needs the smallest number of function evaluations.

Method     F.Ev.    MaxErr

Method 1   14000    3.4 × 10^-10
Method 2   16000    3.2 × 10^-10
Method 3   16000    3.8 × 10^-10
Method 4   12000    1.8 × 10^-10
Method 5    6000    4.1 × 10^-10

Table 3.4 Data for the problem in (3.38) with different methods.

3.5.4 Example 4.

The next example consists of the mildly stiff problem given by the linear system (see [15], [16])

    y''(x) = 2498 y + 4998 z ,     z''(x) = -2499 y - 4999 z ,     (3.39)

    y(0) = 2 , \quad y'(0) = 0 , \quad z(0) = -1 , \quad z'(0) = 0 ,

with the exact solution y = 2 \cos x, z = -\cos x.

The eigenvalues of the matrix of the system have moduli |\lambda_1| = 2500 and |\lambda_2| = 1. We have solved this problem in [0, 2\pi] considering the methods FEAMk and FI[3]k for k = 5, 6, 7, 8, taking different numbers of steps N = 200, 300, 400. The results appear in Table 3.5, where we have included the errors measured in the norm ‖·‖_∞ and the CPU time needed.

We have considered the methods and numbers of steps in Table 3.5 for this problem to highlight how important it is to choose the method and the stepsize that suit the stability requirements in order to solve the problem properly.

3.5.5 Example 5.

The last example consists in the system that models the motion of two bodies, given by (see [26])

    y_1''(t) = \frac{-y_1(t)}{r^3} ,     y_2''(t) = \frac{-y_2(t)}{r^3} ,     (3.40)

    y_1(0) = 1 , \quad y_1'(0) = 0 , \quad y_2(0) = 0 , \quad y_2'(0) = 1 ,

with r = \sqrt{y_1^2 + y_2^2}, and exact solution

    y_1(t) = \cos(t) , \quad y_2(t) = \sin(t) .

Method   N steps   CPU(s.)   MaxErr(y)          MaxErr(y')

FEAM5    300       0.125     1.6005 × 10^20     4.4213 × 10^21
         400       0.188     5.2427 × 10^-12    7.1552 × 10^-12
FI[3]5   200       0.093     2.5509 × 10^4      1.4710 × 10^6
         300       0.172     5.8347 × 10^-12    7.5593 × 10^-12
FEAM6    300       0.125     1.9647 × 10^52     4.8460 × 10^53
         400       0.187     9.3717 × 10^20     3.1892 × 10^22
FI[3]6   200       0.109     5.0378 × 10^9      2.3892 × 10^11
         300       0.187     1.8252 × 10^-13    3.4094 × 10^-13
FEAM7    300       0.125     3.2773 × 10^77     7.4364 × 10^78
         400       0.187     2.2564 × 10^58     7.1188 × 10^59
FI[3]7   200       0.110     2.8934 × 10^17     1.1389 × 10^19
         300       0.203     3.6781 × 10^-13    5.7642 × 10^-13
FEAM8    300       0.125     3.3732 × 10^97     7.1745 × 10^98
         400       0.188     3.8269 × 10^87     1.1378 × 10^89
FI[3]8   200       0.109     1.0663 × 10^26     1.4867 × 10^28
         300       0.219     1.0280 × 10^-13    1.0943 × 10^-12

Table 3.5 Maximum absolute error of the solution for different implementations of the explicit and implicit Falkner methods in P-C mode for solving the problem in (3.39).


This problem has been used as a test in many articles presenting methods for solving initial-value problems. We have solved the problem in the interval [0, 7], as was done in [31], using the methods FEABk and FĨ[2]k for k = 2, ..., 10, taking a number of steps N = 112 in all cases, which results in a stepsize h = 0.0625. The numerical results appear in Table 3.6.
Finally, in Table 3.7 we show the data obtained with the adapted Runge-Kutta method in [27], the classical Runge-Kutta method and the methods FEAM4 and FĨ[2]3. We have calculated at the endpoint of the integration interval the Euclidean norm of the error vector with components defined as the difference between the numerical and the exact solutions. We note that the methods FEAM4 and FĨ[2]3 provide better results, even without considering that the RK methods need more evaluations of the function, and thus more computational effort.

FEAMk:

k    MaxErr y_1        MaxErr y_2        MaxErr y_1'       MaxErr y_2'

2    1.1651 × 10^-3    1.1781 × 10^-3    1.2740 × 10^-3    8.5695 × 10^-3
3    1.7458 × 10^-5    1.9902 × 10^-5    1.9762 × 10^-5    1.6453 × 10^-5
4    3.9115 × 10^-6    3.9177 × 10^-6    4.2581 × 10^-6    2.8593 × 10^-6
5    5.6869 × 10^-8    7.3229 × 10^-8    7.3142 × 10^-8    5.6222 × 10^-8
6    1.3264 × 10^-8    1.3101 × 10^-8    1.4338 × 10^-8    9.6002 × 10^-9
7    2.3774 × 10^-10   2.9070 × 10^-10   2.9288 × 10^-10   2.1161 × 10^-10
8    4.5591 × 10^-11   4.4313 × 10^-11   4.8903 × 10^-11   3.2613 × 10^-11
9    9.9675 × 10^-13   1.1674 × 10^-12   1.1874 × 10^-12   8.1706 × 10^-13
10   1.5953 × 10^-13   1.5451 × 10^-13   1.7053 × 10^-13   1.1368 × 10^-13

FĨ[2]k:

2    5.3652 × 10^-4    5.4163 × 10^-4    5.8808 × 10^-4    3.9961 × 10^-4
3    3.1679 × 10^-6    1.8396 × 10^-6    2.6416 × 10^-6    1.4614 × 10^-6
4    8.5809 × 10^-7    8.3725 × 10^-7    9.2345 × 10^-7    6.2228 × 10^-7
5    1.4127 × 10^-8    1.0529 × 10^-8    1.3377 × 10^-8    8.3543 × 10^-8
6    1.7960 × 10^-9    1.6812 × 10^-9    1.8923 × 10^-9    1.2603 × 10^-9
7    5.3236 × 10^-11   4.2099 × 10^-11   5.2048 × 10^-11   3.2922 × 10^-11
8    4.1453 × 10^-12   3.6718 × 10^-12   4.2665 × 10^-12   2.7894 × 10^-12
9    1.9606 × 10^-13   1.5978 × 10^-13   1.9451 × 10^-13   1.2401 × 10^-13
10   2.1871 × 10^-14   2.0983 × 10^-14   2.3314 × 10^-14   1.5543 × 10^-14

Table 3.6 Data for the problem (3.40) with the methods FEAMk and FĨ[2]k for k = 2, ..., 10.

h        FEAM4          FĨ[2]3         Simos' method    Classical RK method

0.5      0.06           0.08           0.07             0.11
0.25     4.12 × 10^-3   5.54 × 10^-3   8.00 × 10^-3     1.10 × 10^-2
0.125    1.62 × 10^-4   1.80 × 10^-4   6.02 × 10^-4     8.29 × 10^-4
0.0625   5.53 × 10^-6   3.66 × 10^-6   4.05 × 10^-5     5.53 × 10^-5

Table 3.7 Comparison of the Euclidean norms of the end-point global errors obtained by using Simos' method [27], the classical fourth-order RK method and the methods FEAM4 and FĨ[2]3.

3.6 Conclusions

Falkner methods are commonly used for solving the special second-order initial-value problem, particularly when the values of the derivatives are also needed (particularizations of these methods are the well-known Velocity-Verlet, Beeman's method or Wilson's method). In these approaches two formulas are needed for advancing the solution y(x) and the derivative y'(x). When the formulas are implicit, they are usually formulated in predictor-corrector modes.

We have considered three different modes to implement the implicit Falkner formulas and have made an analysis of the accumulated truncation errors. Considering the expressions of these errors for the solution and the derivative, we observe that the modes FI[2]k and FI[3]k show the best performance. The resulting errors in this case are both of order O(h^{k+1}). In view of Table 3.1 we note that the errors of the explicit methods FEAMk are also of order O(h^{k+1}). On the other hand, if we do not perform the last evaluation of f with the implicit Falkner method in P-C modes, the accumulated errors are also of order O(h^{k+1}), but the stability characteristics are compromised. This fact could suggest that the explicit method FEAMk would be the best choice, because explicit methods are in general less time consuming than implicit ones. Nevertheless, stability is a factor that should be considered (compare the different performances in Table 3.5). We can say that if the stability requirements are fulfilled then the FEAMk should be used, but otherwise the FĨ[2]k or even the FI[3]k should be used. Some numerical examples are included to make a comparison of the numerical performance of the different implementations, showing that the proposed methods may be competitive with other methods in the literature. The numerical results agree with the theoretical analysis.

Acknowledgements The author is pleased to acknowledge a referee for carefully reading the
manuscript.

References

1. Beeman, D.: Some multistep methods for use in molecular dynamics calculations. J. Comput.
Phys. 20, 130–139 (1976)
2. Calvo, M., Montijano, J.I., Rández, L.: A new stepsize change technique for Adams methods.
Applied Mathematics and Nonlinear Sciences 1, 547–558 (2016)
3. Chawla, M.M., Rao, P.S.: Numerov-type method with minimal phase-lag for the integration
of second order periodic initial-value problems II. Explicit method. J. Comput. Appl. Math.
15, 329–337 (1986)
4. Chen, Z., You, X., Shu, X., Zhang, M.: A New Family of Phase-Fitted and Amplification-
Fitted Runge-Kutta Type Methods for Oscillators. Journal of Applied Mathematics (2012)
doi:10.1155/2012/236281
5. Collatz, L.: The Numerical Treatment of Differential Equations. Springer, Berlin (1966)
6. Coleman, J.P., Mohamed, J.: De Vogelaere’s methods with automatic error control. Comput.
Phys. Comm. 17, 283–300 (1979)
7. Coleman, J.P., Ixaru, L.Gr.: P-stability and exponential-fitting methods for y′′ = f (x, y). IMA
J. Numer. Anal. 16, 179–199 (1996)
8. Dormand, J.R., El-Mikkawy, M., Prince, P.J.: High order embedded Runge-Kutta-Nystrom
formulae. IMA J. Numer. Anal. 7, 423–430 (1987)
9. Falkner, V.M.: A method of numerical solution of differential equations. Phil. Mag. S. 7, 621–
640 (1936)
10. Franco, J.M., Gómez, I.: Accuracy and linear stability of RKN methods for solving second-
order stiff problems. Appl. Numer. Math. 59, 959–975 (2009)
11. Gear, C.W.: Argonne National Laboratory, Report no. ANL-7126, 1966

12. Gladwell, I., Thomas, R.: Stability properties of the Newmark, Houbolt and Wilson θ meth-
ods. I. J. Numer. and Anal. Meth. in Geom. 4, 143–158 (1980)
13. Hairer, E., Norsett, S.P., Wanner, G.: Solving Ordinary Differential Equations I. Springer,
Berlin (1987)
14. Henrici, P.: Discrete Variable Methods in Ordinary Differential Equations. John Wiley, New
York (1962)
15. Kramarz, L.: Stability of collocation methods for the numerical solution of y'' = f(x, y). BIT 20, 215–222 (1980)
16. Krishnaiah, U.A.: Adaptive methods for periodic initial value problems of second order dif-
ferential equations. J. of Comput. and Appl. Math. 8, 101–104 (1982)
17. Krogh, F.T.: Issues in the Design of a Multistep Code. JPL Technical Report (1993)
http://hdl.handle.net/2014/34958
18. Lambert, J.D.: Numerical Methods for Ordinary Differential Systems. John Wiley, England
(1991)
19. Lambert, J.D., Watson, I.A.: Symmetric multistep methods for periodic initial value problems.
J. Inst.Math. Appl. 18, 189–202 (1976)
20. Li, J.: A family of improved Falkner-type methods for oscillatory systems. Applied Mathematics and Computation 293, 345–357 (2017)
21. Li, J., Wu, X.: Adapted Falkner-type methods solving oscillatory second-order differential
equations. Numerical Algorithms 62, 355–381 (2013)
22. Paternoster, B.: Runge-Kutta-Nyström methods for ODEs with periodic solutions based on
trigonometric polynomials. Applied Numerical Mathematics 28, 401–412 (1998)
23. Ramos, H., Lorenzo, C.: Review of explicit Falkner methods and its modifications for solving
special second-order IVPs. Comput. Phys. Comm. 181, 1833–1841 (2010)
24. Ramos, H., Mehta, S., Vigo-Aguiar, J.: A unified approach for the development of k-step block Falkner-type methods for solving general second-order initial-value problems in ODEs. J. of Comput. and Appl. Math. 318, 550–564 (2017)
25. Ramos, H., Singh, G., Kanwar, V., Bhatia, S.: An efficient variable step-size rational Falkner-
type method for solving the special second-order IVP. Applied Mathematics and Computation
291, 39–51 (2016)
26. Shampine, L.F., Gordon, M.K.: Computer solution of Ordinary Differential Equations. The
initial Value Problem. Freeman, San Francisco, CA (1975)
27. Simos, T.E.: An exponentially-fitted Runge-Kutta method for the numerical integration of
initial-value problems with periodic or oscillating solutions. Computer Physics Communica-
tions 115 1–8 (1998)
28. Toxvaerd, S.: A New Algorithm for Molecular Dynamics Calculations. J. Comput. Phys. 47,
444–451 (1982)
29. Tsitouras, Ch., Simos, T.E.: Explicit high order methods for the numerical integration of peri-
odic initial-value problems. Applied Mathematics and Computation 95, 15–26 (1998)
30. Vigo-Aguiar, J., Ramos, H.: Variable stepsize implementation of multistep methods for y′′ =
f (x, y, y′ ). J. of Comput. and Appl. Math. 192, 114–131 (2006)
31. Vanden Berghe, G., De Meyer, H., Van Daele, M., Van Hecke, T.: Exponentially fitted Runge-
Kutta methods. J. of Comput. and Appl. Math. 125, 107–115 (2000)
32. Wilson, E.L.: A computer program for the dynamic stress analysis of underground structures.
SESM Report No. 68–1, Division Structural Engineering and Structural Mechanics, Univer-
sity of California, Berkeley (1968)
