
Journal of Computational and Applied Mathematics 79 (1997) 263-276

Variable-order, variable-step methods for second-order initial-value problems

M.S.H. Khiyal^a,* and R.M. Thomas^b

^a Directorate of Computers & Control, HQ PAEC, P.O. Box 3095, Islamabad 44000, Pakistan
^b Department of Mathematics, UMIST, P.O. Box 88, Manchester M60 1QD, United Kingdom

Received 10 November 1995

Abstract

We develop a variable-order, variable-step algorithm for solving second-order initial-value problems y" = f(t,y), y(0),
y'(0) given. This algorithm employs a family of implicit two-step methods, in which the differential equation is evaluated
at a number of off-step points. We have derived the order conditions for this family of methods in previously published
work and we have also shown that the remaining free parameters of the method may be chosen to give P-stability. Here,
the emphasis is on the combination of second-, fourth-, sixth- and eighth-order methods from this family in a variable-
order, variable-step algorithm. The aim is to choose particular P-stable methods from the family in such a way that
a saving can be made in the amount of computational effort involved when they are combined together.

Keywords: Second-order initial-value problems; Oscillation problems; Combination of efficient P-stable methods

AMS classification: 65L05

1. Introduction

We consider an extension of the class of direct hybrid methods proposed in [1] for solving the
second-order initial-value problem

    y'' = f(t, y),   y(0), y'(0) given.                                          (1)

The basic method has the form

    y_{n+1} - 2y_n + y_{n-1} = h²{β0(y''_{n+1} + y''_{n-1}) + γy''_n + β1(ȳ''_{n+α1} + ȳ''_{n-α1})
        + β2(ȳ''_{n+α2} + ȳ''_{n-α2}) + β3(ȳ''_{n+α3} + ȳ''_{n-α3})},           (2)

* Corresponding author.

0377-0427/97/$17.00 © 1997 Elsevier Science B.V. All rights reserved


PII S0377-0427(96)00170-7

    ȳ_{n±α1} = A± y_{n+1} + B± y_n + C± y_{n-1} + h²{s± y''_{n+1} + q± y''_n + u± y''_{n-1}},   (3)

    ȳ_{n±α2} = R± y_{n+1} + L± y_n + T± y_{n-1} + h²{Y± y''_{n+1} + V± y''_n + W± y''_{n-1}
        + X± ȳ''_{n-α1} + Z± ȳ''_{n+α1}},                                                       (4)

    ȳ_{n±α3} = D± y_{n+1} + E± y_n + G± y_{n-1} + h²{H± y''_{n+1} + K± y''_n + M± y''_{n-1}
        + N± ȳ''_{n-α1} + P± ȳ''_{n+α1} + Q± ȳ''_{n-α2} + S± ȳ''_{n+α2}},                       (5)

and

    y''_n = f(t_n, y_n),   y''_{n±1} = f(t_n ± h, y_{n±1}),   ȳ''_{n±α1} = f(t_n ± α1h, ȳ_{n±α1}),
    ȳ''_{n±α2} = f(t_n ± α2h, ȳ_{n±α2}),   ȳ''_{n±α3} = f(t_n ± α3h, ȳ_{n±α3}).

Here t_n = nh and we define t_{n±αi} = t_n ± αi h, i = 1, 2, 3 and n = 0, 1, 2, .... (The methods proposed
in [1] are given by Eqs. (2) with β3 = 0, (3) and (4).) The parameters may be chosen so that the
methods are eighth-order accurate and P-stable (P-stability was defined in [11]). The full set of
order conditions and the P-stability conditions are given in [9, 16].
Since such methods are implicit, it is necessary to solve at each step a system of nonlinear
(algebraic) equations. On solving these nonlinear equations using a modified Newton-type method,
seven function evaluations are required per iteration, in general. However, the number of function
evaluations may be reduced to four per iteration, by choosing the parameters appropriately (see
[9, 16] for details).

Notation. A method which requires m function evaluations per iteration will be called an m-
evaluation method.
When method (2)-(5) is applied to the nonlinear differential system (1), a nonlinear algebraic
system must be solved at each step. This may be solved by using a modified Newton iteration, in
which case the iteration matrix involves J, J², J³ and J⁴, where J is an approximation for the
Jacobian matrix of f with respect to y. For large systems, the formation of J², J³ and J⁴ is
undesirable, since matrix products are expensive, any sparsity in J will be weakened in its
powers, and any ill-conditioning in J will be magnified in its powers. In an attempt to avoid this
problem, Khiyal [9] and Thomas and Khiyal [16] derived eighth-order, P-stable, four-evaluation
methods of the form (2)-(5), for which the iteration matrix is a true perfect quartic. This last
property implies that at most one real matrix must be factorised at each step and the formation of
J², J³ and J⁴ is avoided.
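The saving can be made concrete with a small sketch: a correction with the perfect-quartic matrix (I - rh²J)⁴ is obtained by solving the single factor (I - rh²J) four times, so the powers of J are never formed. The scalar version below is ours, for illustration only; for a system the factor would be LU-factorised once per step and reused for the four solves.

```python
def solve_perfect_quartic(J, r, h, g):
    """Solve (1 - r*h^2*J)^4 * d = g using only the linear factor."""
    factor = 1.0 - r * h * h * J   # "factorised" once per step
    d = g
    for _ in range(4):             # four back-substitutions; no J^2, J^3, J^4
        d /= factor
    return d

# scalar check against forming the quartic explicitly
r, h, J, g = 0.8581, 0.1, -25.0, 1.0
d = solve_perfect_quartic(J, r, h, g)
assert abs((1.0 - r * h * h * J) ** 4 * d - g) < 1e-12
```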
By taking β3 = 0 in (2) and choosing the remaining parameters appropriately, several authors
(for example, Cash [1] and Chawla and Rao [5]) have derived sixth-order methods of the form
(2)-(5) which are P-stable. The methods proposed by Cash [1] require five function evaluations per
iteration, in general. Cash [2] and Voss and Serbin [18] show how the number of function
evaluations may be reduced to four per iteration. The method proposed in [2] is obtained by
choosing α2 = 0 and requiring the points (t_{n±α2}, ȳ_{n±α2}) to be coincident. Voss and Serbin [18] have
modified this method in an attempt to obtain a four-evaluation method with a true perfect cube
iteration matrix, thus avoiding the need to form J² and J³. (A minor typographical error in their

paper means that the method which they give is not actually sixth-order accurate. The correct
version of their method is given in [9].) For the method proposed in [5], the number of function
evaluations per iteration is reduced to three by requiring that ȳ_{n-α1} and ȳ_{n-α2} are independent of
y_{n+1}. This implies that f(t_{n-α1}, ȳ_{n-α1}) and f(t_{n-α2}, ȳ_{n-α2}) must be computed once per step rather than
once per iteration. (Note that the iteration matrix for their methods is not a true perfect cube.)
Finally, Thomas [15], Khiyal and Thomas [10] and Khiyal [9] have derived sixth-order, P-stable,
three-evaluation methods of the form (2)-(5), for which the iteration matrix is a true perfect cube.
By taking β2 = 0 and β3 = 0 in (2) and choosing the remaining parameters appropriately,
Cash [1] and Chawla [3] obtained families of fourth-order, P-stable methods. In general, these are
three-evaluation methods. However, Hairer [8], Costabile and Costabile [7], Chawla [4] and Cash
[2] derived particular two-evaluation methods from these families. Application of the modified
Newton method to the nonlinear algebraic equations which result when such fourth-order methods
are applied to (1) yields an iteration matrix involving J and J². To avoid the formation of J²,
Thomas [14], Voss and Serbin [18] and Khiyal and Thomas [10] derived fourth-order, P-stable,
two-evaluation methods for which the iteration matrix is a true perfect square.
In Section 2, we consider the methods of Thomas [14, 15], Khiyal and Thomas [10], Khiyal [9]
and Thomas and Khiyal [16] and show that there exist
(i) a second-order, P-stable, one-evaluation, single-step method with iteration matrix (I - rh²J),
(ii) fourth-order, P-stable, two-evaluation, two-step methods with iteration matrix (I - rh²J)²,
(iii) sixth-order, P-stable, three-evaluation, two-step methods with iteration matrix (I - rh²J)³,
(iv) eighth-order, P-stable, four-evaluation, two-step methods with iteration matrix (I - rh²J)⁴.
The important point to note is that the methods may be chosen so that the value of r is the same for
all four methods. The particular methods used in our variable-order, variable-step algorithm are
chosen in such a way that a saving can be made in the amount of computational effort involved
when they are combined together. Our choice of methods is discussed in Section 2.
In Section 3, we describe the step size selection and order selection strategies used in our
variable-order, variable-step code. To vary the order and step size, it is necessary to obtain an
estimate of the local error. Here, we do so by combining the methods of order 2m and 2m + 2
(m = 1, 2, 3) in a formula pair. Finally, some numerical results are presented in Section 4.

2. The P-stable methods

In this section, we introduce the P-stable methods used in our variable-order, variable-step
algorithm.

2.1. Eighth-order method

Any of the eighth-order, P-stable, four-evaluation, perfect quartic methods derived in [16] could
be used in our variable-order, variable-step algorithm. Rather than list all the possibilities, here we
choose just one of the methods (mnemonic SHK4E1), which is obtained by insisting that ȳ_{n-α1},
ȳ_{n-α2} and ȳ_{n-α3} are independent of y_{n+1} and, moreover, ȳ_{n-α1} ≡ y_{n-1} with α1 = 1. Thus,
ȳ''_{n-α1} ≡ y''_{n-1}, while ȳ''_{n-α2} and ȳ''_{n-α3} only have to be evaluated once per step (not per iteration). The

resulting method is given by Eqs. (2)-(5) with parameters given by

    α1 = 1,  A₋ = 0,  B₋ = 0,  C₋ = 1,  q₋ = 0,  s₋ = 0,  u₋ = 0,  R₋ = 0,  Y₋ = 0,
    Z₋ = 0,  D₋ = 0,  H₋ = 0,  P₋ = 0,  S₋ = 0,  L₋ = 1 - α2,  T₋ = α2,  E₋ = 1 - α3,
    G₋ = α3,
    β2 = (13 - 42α3²)/[840(α2⁴ - α2²)(α2² - α3²)],
    β3 = (13 - 42α2²)/[840(α3⁴ - α3²)(α3² - α2²)],
    β0 = 1/12 - β1 - β2α2² - β3α3²,
    γ = 1 - 2(β0 + β1 + β2 + β3),
    W₋ = (α2³ - α2)/6 - X₋,
    Q₋ = 17/[3360β3α3(α2² - α3²)],
    M₋ = (α3³ - α3)/6 - α2²Q₋ - N₋,
    Z₊ = 17(1 + α2 - α2²)/[3360β2(1 + α2)(α2² - α3²)(λ1 - 1)],
    S₊ = (λ4 - 12r)/[60β3α3(1 - α3²)],  s₊ = -r⁴/(β3S₊Z₊),  A₊ = λ1 - 12s₊,  C₊ = A₊ - 1,
    β3P₊ = -β2Z₊ - (λ2 - α2²)β3S₊/(λ1 - 1),  u₊ = s₊,
    q₊ = 1 - A₊ - s₊ - u₊,  B₊ = 2 - 2A₊,
    Y₊ = -Z₊ + (λ1 - λ2)/18,  R₊ = λ2 - 12(Y₊ + Z₊),  β3D₊ = 4r - β0 - β1A₊ - β2R₊,
    H₊ = -P₊ - α2²S₊ + (λ3 - D₊)/12,  L₊ = 1 + α2 - 2R₊,  T₊ = R₊ - α2,
    W₊ = Y₊ - W₋,  V₊ = (α2² + α2)/12 - R₊ - Y₊ - W₊ - Z₊ - X₊,
    V₋ = (α2² - α2)/12 - W₋ - X₋,  E₊ = 1 + α3 - 2D₊,
    N₊ = P₊ - N₋,  Q₊ = S₊ - Q₋,  G₊ = D₊ - α3,  X₊ = Z₊ - X₋,  M₊ = H₊ - M₋,
    K₋ = (α3² - α3)/2 - M₋ - N₋ - Q₋,
    K₊ = (α3² + α3)/2 - D₊ - H₊ - M₊ - N₊ - P₊ - Q₊ - S₊,
    (1 - α3){(13 - 22α3 - 51α3²) - α2(9 - 20α3) + α2²(29 + 42α3)} = 0.          (6)

Here,

    λ4 = (28 + 3α3)/[30(1 - α3)]
         + [17(α2² + α2 + 1)/(112(α2 + 1)) + (1 - α3)(A₊ + 30s₊)/168]/[β2(α3² - α2²)],
    λ3 = α2(α2 + α3)(51α3² + 13α2² + 29α2 - 35)/[(α2 + 1)(42α2² - 13)],
    λ2 = α2²(α2 + 1),
    λ1β1 = 3/260 - β0 - β2λ2 - β3λ3,
    r = [β0 + β1(A₊ + A₋) + β2(R₊ + R₋) + β3(D₊ + D₋)]/4.

To ensure that the method is P-stable, we take r ≥ R*, where R* is the largest root of the
polynomial equation

    r⁴ - r³ + r²/8 - r/360 + 1/60480 = 0.                                       (7)

(Note that R* = 0.8581 to four significant figures.) The free parameters of the method are r, β1, N₋,
X₋ and one of α2 and α3. It might appear that a further saving could be made by choosing either
α2 = 1 or α3 = 1. However, if we choose α2 = 1 the equation giving the value of Q₋ cannot be
satisfied. Similarly, if we choose α3 = 1 then, from the equation for β3, we have λ3 = λ2 and in this
case the equation for λ1β1 cannot be satisfied. When α3 ≠ 1, the last of Eqs. (6) may be regarded as
a quadratic for α2 in terms of α3. We take α3 = ½ and then choose α2 to be the root of the quadratic
which lies between 0 and 1. This gives α2 = (-1 + √2151)/100. The remaining free parameters are
chosen to be r = R*, β1 = 1 and N₋ = X₋ = 0.

2.2. Sixth-order method

The iteration matrix for the P-stable, perfect cube, sixth-order methods derived in [15], Khiyal
and Thomas [10] and Khiyal [9], and given by Eqs. (2)-(4) with β3 = 0, is (I - rh²J)³. To ensure
P-stability, r ≥ R⁺, where R⁺ is the largest root of the polynomial equation

    4r³ - 3r² + r/4 - 1/360 = 0.                                                (8)

(Note that R⁺ = 0.6564 to four significant figures.) We choose to take r = R* > R⁺; the resulting
sixth-order method is P-stable and has the same iteration matrix as the eighth-order method given
above.
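The two cut-off values R* and R⁺ can be reproduced numerically from (7) and (8); bisection is our choice of root-finder, and each polynomial has exactly one root in [0.5, 1]:

```python
def bisect(f, lo, hi, n=100):
    """Bisection for a root of f in [lo, hi]; assumes f(lo)*f(hi) < 0."""
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Eq. (7): quartic with largest root R*; Eq. (8): cubic with largest root R+
p7 = lambda x: x**4 - x**3 + x**2 / 8 - x / 360 + 1 / 60480
p8 = lambda x: 4 * x**3 - 3 * x**2 + x / 4 - 1 / 360
R_star = bisect(p7, 0.5, 1.0)
R_plus = bisect(p8, 0.5, 1.0)
assert abs(R_star - 0.8581) < 1e-4   # four significant figures, as quoted
assert abs(R_plus - 0.6564) < 1e-4
```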
Thomas and Khiyal [16] showed that the free parameters of the sixth-order methods can be
chosen so that the resulting method has other features in common with the eighth-order method
above. Following their approach, we choose a sixth-order method with the additional properties that
ȳ_{n-α2} is independent of y_{n+1} and ȳ''_{n-α1} is identically equal to y''_{n-1}. This means that f(t_{n-α2}, ȳ_{n-α2}) has
to be evaluated once per step for each of the sixth- and eighth-order methods. We choose α2 to be the
same for both methods and we also choose the coefficients in the expression (4) for ȳ_{n-α2} to be the
same for both methods. Hence, the same value of f(t_{n-α2}, ȳ_{n-α2}) may be used for both the sixth- and
eighth-order methods. Such a three-evaluation, sixth-order, P-stable, perfect cube method is given by

    y_{n+1} - 2y_n + y_{n-1} = h²{β0(y''_{n+1} + y''_{n-1}) + γy''_n + β1(ȳ''_{n+1} + y''_{n-1}) + β2(ȳ''_{n+α2} + ȳ''_{n-α2})},   (9)

    ȳ_{n+1} = A₊ y_{n+1} + (2 - 2A₊)y_n + (A₊ - 1)y_{n-1} + h²{s₊(y''_{n+1} + y''_{n-1}) + (1 - A₊ - 2s₊)y''_n},

    ȳ_{n-α2} = (1 - α2)y_n + α2 y_{n-1} + h²{V₋ y''_n + (W₋ + X₋)y''_{n-1}},

    ȳ_{n+α2} = R₊ y_{n+1} + (1 + α2 - 2R₊)y_n + (R₊ - α2)y_{n-1} + h²{Y₊ y''_{n+1} + V₊ y''_n + (W₊ + X₊)y''_{n-1} + Z₊ ȳ''_{n+1}},
where

    β0 = 1/12 - β1 - β2α2²,  γ = 1 - 2(β0 + β1 + β2),

    Z₊ = -400α2(1 - α2)²β1 [r³ - (3/4)r² + r/16 - 1/1440]/(1 + α2 - α2²),  s₊ = r³/(β2Z₊),

    A₊ = 1 - 12s₊ - (1 + α2 - α2²)/[20(1 - α2²)β1],

    β2R₊ = 3r - β0 - β1A₊,

    W₋ + X₋ = (α2³ - α2)/6,  V₋ = (α2² - α2)/12 - (W₋ + X₋),  Y₊ = (α2² + α2 - R₊)/12 - Z₊,

    W₊ + X₊ = Y₊ + Z₊ - (W₋ + X₋),  V₊ = (α2² + α2)/12 - R₊ - Y₊ - Z₊ - (W₊ + X₊),

with α1 = 1, α2 = (-1 + √2151)/100, β1 = 1 and r = R*.

2.3. Fourth-order m e t h o d

The iteration matrix for the fourth-order, two-evaluation, perfect square methods derived in [14]
and Khiyal and Thomas [10], and given by Eqs. (2) and (3) with β2 = 0 and β3 = 0, is (I - rh²J)², where
r = [β0 + β1(A₊ + A₋)]/2. We choose the parameters of the fourth-order method in such a way
that ȳ_{n-α1} is identically equal to y_{n-1}. (The eighth- and sixth-order methods given above also have
this property.) Also, we insist that the fourth-order method has the same iteration matrix as
the sixth- and the eighth-order methods. Such a two-evaluation, P-stable, fourth-order method is
given by

    y_{n+1} - 2y_n + y_{n-1} = h²{β0(y''_{n+1} + y''_{n-1}) + γy''_n + β1(ȳ''_{n+1} + y''_{n-1})},   (10)

    ȳ_{n+1} = A₊ y_{n+1} + (2 - 2A₊)y_n + (A₊ - 1)y_{n-1} + h²{s₊(y''_{n+1} + y''_{n-1}) + (1 - A₊ - 2s₊)y''_n},

where

    β0 = -11/12,  γ = 5/6,  s₊ = -r²,  α1 = 1,  β1 = 1,  A₊ = 2r + 11/12  and  r = R*.

2.4. Second-order m e t h o d

To obtain a suitable second-order method, we consider the class of Newmark [12] methods,
given by

    y_{n+1} = y_n + hy'_n + h²{η1 f(t_n, y_n) + η2 f(t_{n+1}, y_{n+1})},

    y'_{n+1} = y'_n + h{η3 f(t_n, y_n) + η4 f(t_{n+1}, y_{n+1})},

where η1, η2, η3 and η4 are parameters. This is a one-step method which involves the first derivative.
After a little manipulation, the two-step second-order P-stable method is

    y_{n+1} - 2y_n + y_{n-1} = h²{r0 y''_{n+1} + (1 - 2r0)y''_n + r0 y''_{n-1}}.

The iteration matrix for this method is I - r0h²J. Thus, we choose r0 = R* (> ¼) to ensure that the
iteration matrix is the same as for the fourth-order direct hybrid method.
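To illustrate how such a method is advanced in practice, the sketch below takes one step of this second-order scheme with a modified Newton iteration; in the scalar case the iteration matrix I - r0h²J is a single number. The explicit predictor and the convergence tolerance are our choices, not prescribed by the paper.

```python
import math

def step_second_order(f, dfdy, t, h, y_prev, y_curr, r0=0.8581, tol=1e-12):
    # One step of  y_{n+1} - 2y_n + y_{n-1}
    #   = h^2 [ r0 f_{n+1} + (1 - 2 r0) f_n + r0 f_{n-1} ],
    # solved for y_{n+1} by modified Newton with the frozen scalar
    # iteration "matrix" M = 1 - r0 h^2 J, formed once per step.
    fn = f(t, y_curr)
    fm = f(t - h, y_prev)
    J = dfdy(t, y_curr)                      # Jacobian approximation
    M = 1.0 - r0 * h * h * J
    y = 2.0 * y_curr - y_prev + h * h * fn   # explicit predictor (our choice)
    for _ in range(50):
        F = (y - 2.0 * y_curr + y_prev
             - h * h * (r0 * f(t + h, y) + (1.0 - 2.0 * r0) * fn + r0 * fm))
        d = -F / M
        y += d
        if abs(d) < tol:
            break
    return y

# test equation y'' = -y (so J = -1), exact solution cos t
f = lambda t, y: -y
dfdy = lambda t, y: -1.0
h = 0.1
y_new = step_second_order(f, dfdy, 0.0, h, math.cos(-h), 1.0)
assert abs(y_new - math.cos(h)) < 1e-3
```

For this linear test equation the frozen iteration matrix equals the true Newton matrix, so the iteration converges in one correction.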

3. Order and stepsize selection

Our variable-order, variable-step code (mnemonic VOVS) obtains at each step two approxima-
tions for the solution y(t_{n+1}). These approximations are obtained by using methods of orders 2m
and 2m + 2, where in our case m = 1, 2 or 3. At the end of the step, we form an error estimate. Thus,
if y^[2m]_{n+1} and y^[2m+2]_{n+1} denote the approximations obtained with the methods of orders 2m and 2m + 2,
respectively, then our local error estimate is given by

    Le^[2m]_{n+1} = y^[2m+2]_{n+1} - y^[2m]_{n+1}.

(The convergence tests and action on divergence of the iteration for the current formula pair are
essentially the same as discussed in [14] and were described in detail by Khiyal [9].) We accept or
M.S.H. Kh(val, R.M. Thomas/Journal of Computational and Applied Mathematics 79 (1997) 263-276 269

reject the step depending on whether or not ||Le^[2m]_{n+1}|| ≤ TOL. If the local error test is not satisfied
(i.e. ||Le^[2m]_{n+1}|| > TOL), we repeat the step with a smaller stepsize but do not consider the possibility
of changing the order, i.e., we repeat the step with the same formula pair but a smaller stepsize. At
the end of a successful step we can consider whether or not to use a different formula pair on the
next step. We adopt the approach proposed by Shampine and Gordon [13], in their variable-order,
variable-step code (based on the Adams methods) for solving first-order initial-value problems and
this order selection strategy is described in Section 3.1. Then in Section 3.2, we describe the stepsize
selection strategy used in our code.
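In outline, one step of the controller described above can be sketched as follows; the two solvers are stand-in callables, and halving on rejection is our simplification of the stepsize reduction detailed in Section 3.2:

```python
import math

def attempt_step(solve_low, solve_high, h, tol, max_halvings=10):
    """One step with a (2m, 2m+2) formula pair.

    solve_low(h) and solve_high(h) return the order-2m and order-(2m+2)
    approximations to y(t_n + h); the higher-order result is accepted.
    """
    for _ in range(max_halvings):
        y_low, y_high = solve_low(h), solve_high(h)
        le = abs(y_high - y_low)          # local error estimate Le
        if le <= tol:
            return y_high, h, le          # accept y^[2m+2]
        h *= 0.5                          # reject: same pair, smaller step
    raise RuntimeError("local error test failed repeatedly")

# stand-in pair: 'high' is exact, 'low' carries an O(h^3) defect
y, h_used, le = attempt_step(lambda h: math.cos(h) + h**3,
                             lambda h: math.cos(h), 0.5, 1e-6)
assert le <= 1e-6 and h_used == 0.5 / 64
```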

3.1. Order selection

If the local error test ||Le^[2m]_{n+1}|| ≤ TOL is satisfied, we accept as the approximation for y_{n+1} the
value of y^[2m+2]_{n+1}. Thus we set y_{n+1} = y^[2m+2]_{n+1}. Then we need to calculate y''_{n+1} as this is required for
the next step. It may be calculated in the same way as described in [14, 15] for the fourth- and
sixth-order methods. (The details of this approach for the eighth-order methods are given in [9].)
To decide whether to change the order of the method for the next step, we first calculate the stepsize
which could be used for the next step if we were to continue with the current pair. Then we
determine the stepsize which could be used if we were to use the formula pair consisting of methods
of orders 2m + 2 and 2m + 4. To do so, we perform one iteration with the method of order 2m + 4.
(Of course, we do not do this if m = 3 because we are already using the highest-order pair
available.) We use the results of the method of order 2m + 2 as a predictor to start the iteration for
the method of order 2m + 4. We have found that the higher-order method in a formula pair usually
converges after little more than one iteration and it is for this reason that we perform just one
iteration with the method of order 2m + 4. (We have also tested an alternative technique where the
method of order 2m + 4 is iterated to convergence, instead of performing just one iteration with
this method. In general, the same formula pair is chosen at each step but this version of the code is
very much more expensive.) Then we form the error estimate using the results obtained from the
2m + 2 and 2m + 4 order methods. In this case we denote the error estimate by Le^[2m+2]_{n+1} and we
have

    Le^[2m+2]_{n+1} = y^[2m+4]_{n+1} - y^[2m+2]_{n+1}.

To consider the possibility of reducing the order we also carry out one iteration of the method of
order 2m - 2. (Note that we do this only if m = 2 or 3, because for m = 1 we are already using the
lowest-order pair available.) Now, to start the iteration with the lower-order method, we take as
predictor the approximation obtained by the lower-order method of the current formula pair. After
doing one iteration with the method of order 2m - 2, we form the error estimate by using the
formula

    Le^[2m-2]_{n+1} = y^[2m]_{n+1} - y^[2m-2]_{n+1}.

(Instead of performing only one iteration with the method of order 2m - 2, we have also tested an
approach in which the method of order 2m - 2 is iterated to convergence. In our tests, the same
formula pair is selected at each step, with the same predicted stepsize to three decimal places,
whether we do one iteration with the method of order 2 m - 2 or iterate to convergence, but
iterating to convergence is much more expensive. In general, we have also tested several different
270 M.S.H. Khiyal, R.M. Thomas~Journal of Computational and Applied Mathematics 79 (1997) 263-276

predictors but have found that the approach described above is best. Further details are given in
[9].)
We change the order for the next step if it is predicted that the error will be reduced by doing so.
At each step, first we consider lowering the order from 2m to 2m - 2. W h e n m = 2, we lower the
order if
Le [2m-2]
.+1 ~< min(Le[2+~, L eo[2m+2]),
.+l

and when m = 3, we lower the order if


L e~2+m1- 21 < L_ee~2y~.

We also consider whether to raise the order. When m = 1, we raise the order if

t e n[+2l m +<"-2Tl1' ~_ In2 +m ]1,


and when m = 2, we raise the order if

L n+l e 2] ~[ 2
rnin(l~[2m]
m +lo[2m-
. . . . . . ~.~Vn + 1, ~ n + l
2]$
,~"

If the above criteria indicate that the order should be changed for the next step, but the stepsize
predicted for the next step is not at least double the old stepsize, we do not change the order but use
the current formula pair with the same stepsize on the next step.
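Taken together with the boundary cases at m = 1 and m = 3, these tests amount to a small decision function; the sketch below is our paraphrase of the selection logic (the requirement that the predicted stepsize at least double before an order change is applied separately):

```python
def select_pair(m, le_lower, le_current, le_higher):
    """Return the pair index for the next step; pair m has orders (2m, 2m+2).

    le_* are norms of the estimates Le^[2m-2], Le^[2m], Le^[2m+2];
    le_lower is None when m == 1 and le_higher is None when m == 3.
    """
    if m == 2 and le_lower <= min(le_current, le_higher):
        return m - 1                    # lowering is considered first
    if m == 3 and le_lower < le_current:
        return m - 1
    if m == 1 and le_higher <= le_current:
        return m + 1
    if m == 2 and le_higher <= min(le_current, le_lower):
        return m + 1
    return m                            # keep the current pair

assert select_pair(2, 1e-8, 1e-6, 1e-5) == 1   # lower the order
assert select_pair(1, None, 1e-6, 1e-8) == 2   # raise from the lowest pair
assert select_pair(3, 1e-5, 1e-6, None) == 3   # keep: lower pair looks worse
assert select_pair(2, 1e-4, 1e-6, 1e-8) == 3   # raise the order
```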
The extra cost involved in deciding whether to change the order, using the strategy described
above, is as follows:
(i) If m = 1 then we can only consider whether to raise the order from the fourth-order/second-
order pair to the sixth-order/fourth-order pair. This requires four extra function evaluations.
(Recall that the sixth-order method requires three function evaluations per iteration, plus one
function evaluation per step.)
(ii) If m = 2 then the current formula pair is the sixth-order/fourth-order pair. In this case, we
can consider whether to reduce or raise the order or leave it unchanged. Thus, we do one iteration
with the second-order method, which requires one extra function evaluation, and one iteration with
the eighth-order method, at a cost of five extra function evaluations. (Recall that the eighth-order
method requires four function evaluations per iteration, plus two function evaluations per step.
However, one of these two is also required by the sixth-order m e t h o d and so is already available.)
Hence, when m = 2 six extra function evaluations are required.
(iii) If m = 3 then the current formula pair is the eighth-order/sixth-order pair. Thus we can only
consider whether to reduce the order. This costs two extra function evaluations.
Thus, if we consider whether to change the current formula pair at the end of each successful step,
we incur a lot of extra expense. Moreover, we have tested such a code on a variety of test problems
and have found that changing the order too frequently can lead to large fluctuations in the stepsize,
with many failed steps, making the code very expensive. Instead we adopt a modified strategy, in
which we do not consider changing the order until five steps have been completed successfully with
the current formula pair. If, however, the stepsize has to be changed in the meantime, because of
our stepsize strategy, then we do not consider changing the order until five successful steps have
been completed with the new stepsize.

3.2. Stepsize selection

After selecting the order to be used, we rename it 2m, and the corresponding estimated error is
renamed Le^[2m]_{n+1}. The new stepsize is given by

    h_new = ρh (TOL/||Le^[2m]_{n+1}||)^{1/(2m+1)},

where ρ is a safety factor whose purpose is to avoid (expensive) failed steps. We do not allow the
stepsize to decrease by more than a factor ρ1 or increase by more than a factor ρ3. These
restrictions help to avoid large fluctuations in the stepsize caused by local changes in the error
estimate. Also, we do not increase the stepsize at all unless it can be increased by a factor of at least
ρ2, where ρ2 < ρ3. The purpose of this restriction is to avoid the extra function and Jacobian
evaluations involved in changing the stepsize unless a worthwhile increase is predicted. In
obtaining our numerical results in Section 4, we take these factors to be ρ = 2^{-1/(2m+1)}, ρ1 = 0.2,
ρ2 = 2.0 and ρ3 = 5.0. In a production code, additional safety features may be introduced.
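Written out, the rule and its caps are only a few lines; this is our rendering with the factors quoted above, not the authors' listing:

```python
def new_stepsize(h, le_norm, tol, m, rho1=0.2, rho2=2.0, rho3=5.0):
    """h_new = rho * h * (TOL / ||Le||)^(1/(2m+1)) with rho = 2^(-1/(2m+1));
    decreases are capped at rho1, increases at rho3, and increases smaller
    than rho2 are suppressed entirely."""
    p = 1.0 / (2 * m + 1)
    fac = 2.0 ** (-p) * (tol / le_norm) ** p
    fac = max(rho1, min(rho3, fac))
    if 1.0 < fac < rho2:
        fac = 1.0            # not worth new function/Jacobian evaluations
    return h * fac

assert new_stepsize(1.0, 1e-2, 1e-4, 1) == 0.2       # shrink capped at rho1
assert new_stepsize(0.1, 1e-12, 1e-4, 1) == 0.5      # growth capped at rho3
assert new_stepsize(0.1, 1e-4 / 6.75, 1e-4, 1) == 0.1  # modest gain ignored
```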

4. Numerical results

In this section we present some numerical results for a variable-order, variable-step code in
which the four methods derived in Section 2 are used. We use the mnemonic KT(2m, 2m + 2),
m = 1, 2 or 3, to denote the formula pair consisting of the methods of order 2m and 2m + 2.
Following Thomas [14, 15], we have tried a number of explicit scalar (nonstiff) test problems of the
form (1). They give similar results and so we restrict our attention to one oscillatory example.

Example 1.

    y'' + sinh(y) = 0,   y(0) = 1,   y'(0) = 0.

This is a pure oscillation problem whose solution has maximum amplitude unity and period
approximately six.
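The quoted period can be checked by brute force; the velocity Verlet integrator below is purely our verification device and not one of the methods of this paper:

```python
import math

def period_example1(h=1e-3):
    """Integrate y'' = -sinh(y), y(0) = 1, y'(0) = 0 and return the time
    of the second sign change of y', i.e. one full period."""
    y, v, t = 1.0, 0.0, 0.0
    a = -math.sinh(y)
    crossings = 0
    while crossings < 2:
        v_half = v + 0.5 * h * a       # velocity Verlet step
        y += h * v_half
        a = -math.sinh(y)
        v_new = v_half + 0.5 * h * a
        t += h
        if v != 0.0 and v * v_new < 0.0:
            crossings += 1             # v vanishes at t = T/2 and t = T
        v = v_new
    return t

T = period_example1()
assert 5.5 < T < 6.5                   # "period approximately six"
```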
To verify that our techniques work for systems, we use as a test problem a moderately stiff system
of two equations.

Example 2.

    y''_1 + sinh(y_1 + y_2) = 0,   y_1(0) = 1,      y'_1(0) = 0,
    y''_2 + 10⁴ y_2 = 0,           y_2(0) = 10⁻⁸,   y'_2(0) = 0.

For this example we have deliberately introduced coupling from the stiff (linear) equation to the
nonstiff (nonlinear) equation. Our intention here is that the rapidly oscillating component y_2
should be present only at the noise level, as otherwise we would expect to choose the stepsize to
resolve y_2.
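The stiffness can be made concrete by looking at the Jacobian of the system at the initial data (our check, not from the paper): the matrix is upper triangular, so its eigenvalues are the diagonal entries, and the two components oscillate at frequencies of roughly 1.24 and exactly 100.

```python
import math

# Jacobian of f(y) = (-sinh(y1 + y2), -1e4 * y2) at the initial point
y1, y2 = 1.0, 1.0e-8
c = math.cosh(y1 + y2)
J = [[-c,  -c],
     [0.0, -1.0e4]]
lams = (J[0][0], J[1][1])                   # triangular: diagonal eigenvalues
freqs = [math.sqrt(-lam) for lam in lams]   # y'' = lam*y oscillates at sqrt(-lam)
assert 1.0 < freqs[0] < 1.5                 # slow nonlinear component
assert abs(freqs[1] - 100.0) < 1e-9         # fast component, period ~0.063
```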
Both Examples 1 and 2 have been solved for t ∈ [0, 6]. The error at the end point is obtained by
comparing the computed solution with the solution obtained by using a fixed-step code with
a small stepsize for Example 1 and for the first equation of Example 2. For the second equation of
Example 2 we have used the exact solution. We denote the error at the end point by MAXERR
where, for the scalar equation, MAXERR = |Error at t = 6| and, for the system, MAXERR
= ||Error at t = 6||_∞.
We present some statistics on the performance of the methods. These are important when
comparing the cost of the methods. The notation used in the Tables throughout this section is as
follows:

FCN     number of evaluations of the right-hand side f of the differential equations
JCB     number of evaluations of the Jacobian ∂f/∂y
NOIT    number of iterations overall
NOSIT   number of iterations on steps where the iteration converges
NOSTP   number of steps overall
NSSTP   number of successful steps to complete the integration
NCSTP   number of steps where the stepsize is changed
NFSTP   number of failed steps
NFACT   number of LU factorisations of the iteration matrix
To obtain the starting values, we use the fourth-order, two-stage implicit Runge-Kutta method
with the Cooper and Butcher [6] iteration scheme in our variable-order, variable-step code,
because after the starting phase we use the fourth-order/second-order formula pair KT(2, 4) and
accept the results of the fourth-order method. Thus, we prefer to use a fourth-order method for the
starting phase.
Also, we present the results obtained by using each of the formula pairs KT(6, 8), KT(4, 6) and
KT(2, 4) in a variable-step code, to compare with those produced by our variable-order, variable-
step code VOVS. For all three of these formula pairs, the starting values are obtained by using the
fourth-order, two-stage implicit Runge-Kutta method with the Cooper and Butcher [6] iteration
scheme for the first two steps.
The results for Example 1 with TOL = 10⁻², 10⁻⁴, 10⁻⁶ and 10⁻⁸ are given in Table 1. First
consider TOL = 10⁻². In terms of function evaluations, the code VOVS is cheaper than the
formula pairs KT(6, 8) and KT(4, 6). Although VOVS requires a smaller number of steps overall, it
is more expensive than KT(2, 4) because more work is required on those steps where a change of
order is considered. Initially, we start with the formula pair KT(2, 4). The order is increased at the
point t = 4.142 where the old stepsize was 0.2899 and the stepsize predicted for the next step is
0.6561. After this point the code VOVS uses the formula pair KT(4, 6) up to the end point t = 6. We
would expect the code to choose the lower-order methods for the larger tolerances and for this
tolerance, VOVS does indeed use the lowest-order pair available for most of the time.
In Table 1 with TOL = 10⁻⁴, we see that the code VOVS is more expensive than all the other
methods. The order is raised twice, once at the point t = 0.6567 and then at t = 4.134. In this case
the code VOVS mostly uses the formula pair KT(4, 6). It starts with KT(2, 4) and then we find that
the order is raised (after exactly five successful steps) and KT(4, 6) is used until the point t = 4.134.
After this the code uses the formula pair KT(6, 8) until the end of the integration. Six successful
steps are required but the stepsize is changed once, which means that a reduction of order is not
considered. If we compare the performance of the formula pair KT(4, 6) with VOVS, we see that
both require the same number of successful steps but VOVS is more expensive in terms of function
evaluations. This is because when the code VOVS is using the KT(4, 6), deciding whether to change

Table 1
Results for Example 1

Methods    MAXERR         FCN    JCB   NOIT   NOSIT   NOSTP   NSSTP   NCSTP   NFSTP

TOL = 10⁻²
VOVS       1.95 × 10⁻¹    139    3     78     43      25      16      5       9
KT(6, 8)   2.14 × 10⁻²    188    4     53     25      22      13      2       9
KT(4, 6)   1.93 × 10⁰     151    2     56     29      20      12      6       8
KT(2, 4)   9.21 × 10⁻¹    129    3     89     47      27      17      6       10

TOL = 10⁻⁴
VOVS       1.88 × 10⁻²    383    2     142    79      40      30      9       10
KT(6, 8)   5.16 × 10⁻²    250    2     72     51      22      17      3       5
KT(4, 6)   2.45 × 10⁻²    338    2     126    74      42      30      12      12
KT(2, 4)   5.31 × 10⁻³    328    3     235    169     71      56      13      15

TOL = 10⁻⁶
VOVS       5.23 × 10⁻³    853    2     288    160     73      55      19      18
KT(6, 8)   1.93 × 10⁻²    642    3     182    101     44      29      14      15
KT(4, 6)   2.36 × 10⁻³    612    2     231    172     74      62      12      12
KT(2, 4)   3.40 × 10⁻⁴    834    2     610    539     188     174     16      14

TOL = 10⁻⁸
VOVS       5.25 × 10⁻⁴    1684   3     521    301     126     94      36      32
KT(6, 8)   1.04 × 10⁻³    1260   2     348    199     84      56      33      28
KT(4, 6)   3.10 × 10⁻⁴    1664   2     629    487     196     165     38      31
KT(2, 4)   3.81 × 10⁻⁵    2485   2     1854   1760    582     563     18      19

the order requires one iteration of both the second- and eighth-order methods. We find that the
code VOVS uses the KT(4, 6) for 16 successful steps between the point t = 0.6567 and t = 4.134 but
the stepsize is changed five times. A change of order is considered twice during these 16 successful
steps. This accounts for the extra cost of code VOVS over the formula pair KT(4, 6).
When obtaining the results of Table 1 with TOL = 10⁻⁶, the order is changed four times.
Initially, we start with the formula pair KT(2, 4) and change to KT(4, 6) at the point t = 0.2051
after exactly five successful steps. The predicted stepsize for KT(4, 6) is 0.1088 while the old stepsize
was 0.02085, a large increase. As a result, the step failed and the stepsize had to
be reduced to 0.06929. After six successful steps with the stepsize 0.06929, the order is again raised at
the point t = 0.6208. The pair KT(6, 8) is used for the next step with stepsize 0.1818. After
completing five successful steps with this formula pair, we consider whether to reduce the order of
the formula pair for the next step. This decision involves extra work, which is really wasted because
the same stepsize and formula pair are used for the next step. The next step is also successful and so
we consider again whether to change the formula pair for the next step. Again the extra work is
wasted because the code decides to continue with the same stepsize and formula pair. The next step
(the seventh) is also successful but this time, the code decide to reduce the order, in accordance with
the order changing strategy. From this point (t = 1.566), KT(4, 6) is used for 20 successful steps
although the step failed five times during this period and the stepsize had to be reduced. The order

is increased again at the point t = 3.631, and KT(6, 8) is then used until the end of the interval of
integration. The code uses this formula pair for 15 successful steps, but the order is not changed
because the stepsize has to be reduced six times due to error test failures. From Table 1 we see that
the code VOVS is more expensive than all the individual pairs but requires fewer steps overall, and
fewer successful steps, than all the pairs except KT(6, 8). The
code mostly uses either KT(6, 8) or KT(4, 6). The extra cost is incurred in deciding whether
to change the order, especially when KT(4, 6) is in use.
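To make the order-changing decision concrete, the following Python sketch illustrates the kind of rule described above: predicted stepsizes are formed for the lower-, current- and higher-order pairs, and the order is changed only when the gain is substantial. The function names, the safety factor and the doubling threshold are illustrative assumptions, not the actual VOVS implementation.

```python
# Sketch of an order-changing decision of the kind described above.
# All names and thresholds are illustrative assumptions.

def consider_order_change(h, err_current, err_lower, err_higher, tol,
                          p_current, p_lower, p_higher, safety=0.8):
    """Return ('lower'|'current'|'higher', predicted stepsize).

    Each error estimate is the one obtained (at extra cost) by iterating
    the corresponding formula; p is the order controlling the step."""
    def predict_step(err, p):
        # Standard stepsize prediction: h_new = safety * h * (tol/err)^(1/(p+1)).
        return safety * h * (tol / err) ** (1.0 / (p + 1))

    candidates = {
        "lower": predict_step(err_lower, p_lower),
        "current": predict_step(err_current, p_current),
        "higher": predict_step(err_higher, p_higher),
    }
    best = max(candidates, key=candidates.get)
    # Change order only if the winner's predicted step is more than
    # double the current stepsize; otherwise the extra work is wasted.
    if best != "current" and candidates[best] > 2.0 * h:
        return best, candidates[best]
    return "current", candidates["current"]
```

Under this rule, a decision that ends with "continue with the same stepsize and formula pair" costs the trial iterations but changes nothing, which is exactly the wasted work noted above.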
When TOL = 10^-8 (see Table 1), the order is changed four times and the code mostly uses either
KT(6, 8) or KT(4, 6). The code VOVS requires fewer steps overall, and fewer
successful steps, than all the pairs except KT(6, 8). In terms of function evaluations,
VOVS is cheaper than KT(2, 4).
From Table 1, we see that the code VOVS mostly uses the pair that gives the best results at
a particular tolerance. Thus, for tolerance 10^-2, KT(2, 4) is cheaper than the other formula pairs,
and VOVS mostly uses that formula pair. With TOL = 10^-6, KT(4, 6) is cheaper than KT(6, 8),
which is itself cheaper than KT(2, 4), in terms of function evaluations. In this case we find that
VOVS uses KT(4, 6) for 26 successful steps and KT(6, 8) for 22 successful steps, while KT(2, 4) is used
only for the first five successful steps. Again, with TOL = 10^-8, we notice that the code mostly uses
KT(6, 8), which is cheaper than the other pairs in Table 1 at this tolerance. This convinces
us that our strategy for varying the order does lead to the selection of the appropriate formula pair.
Next we consider Example 2 with TOL = 10^-2, for which the results are presented in Table 2.
Code VOVS uses KT(2, 4) for nine successful steps before the order is changed. The possibility of
changing the order during this period is considered (at a cost of one iteration of the sixth-order
method), but at no stage could the stepsize be increased by more than double the size of the current
step. When the code does finally change to KT(4, 6) at the point t = 4.142, divergence occurs and
the stepsize has to be halved. Next, there is an error test failure and the stepsize has to be reduced
again. Finally, the code carries out seven successful steps with KT(4, 6) to complete the integration.
After completing five of these steps, the code carries out, on each of the following steps, one iteration
with the second-order method and one iteration with the eighth-order method (to decide whether
to reduce or raise the order of the method for the next step), but at no stage could the stepsize be
increased by more than double the size of the current step, and so the order is not changed. This
increases the cost of VOVS considerably in terms of function evaluations when compared to the
pairs KT(4, 6) and KT(2, 4). (VOVS is only more expensive than KT(4, 6) in terms of Jacobian
evaluations and factorisations of the iteration matrix, not in terms of function evaluations.)
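The step-control behaviour just described (halving the stepsize when the iteration diverges, and reducing it by the usual error-based formula on an error test failure) can be sketched as follows. This is a schematic under stated assumptions, not the authors' code; `take_step`, the safety factor and the doubling cap are illustrative.

```python
# Schematic step-control loop for a single step: halve h on divergence
# of the iteration, apply the usual error-based reduction on an error
# test failure, and cap any increase at doubling. Illustrative only.

def attempt_step(take_step, h, tol, p, h_min=1e-12, safety=0.8):
    """take_step(h) -> (converged, err). Returns (accepted h, next h)."""
    while h > h_min:
        converged, err = take_step(h)
        if not converged:
            h *= 0.5  # iteration diverged: halve the stepsize
            continue
        if err > tol:
            # error test failure: reduce by the standard formula
            h *= safety * (tol / err) ** (1.0 / (p + 1))
            continue
        # successful step: predict the next stepsize, capped at doubling
        h_next = safety * h * (tol / err) ** (1.0 / (p + 1))
        return h, min(h_next, 2.0 * h)
    raise RuntimeError("stepsize underflow")
```

The cap of doubling mirrors the restriction quoted above on how much the stepsize may grow from one step to the next.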
For Example 2 with TOL = 10^-4, 10^-6 and 10^-8, for which the results are given in Table 2, we
find that the code VOVS mostly uses KT(4, 6) and never uses KT(6, 8). In general, KT(4, 6) is
cheaper than KT(2, 4), while KT(6, 8) is very expensive and is never used by VOVS for
Example 2. Thus, we can draw the same conclusion as for Example 1, namely that
the strategy for varying the order does, in general, lead to the selection of the best formula pair for
a particular tolerance.
Since the code VOVS incurs more overheads, it turns out to be more expensive than many of
the individual formula pairs. Note that we use the cubic interpolant to calculate the back
values even when the formula pair in use on the current step is KT(6, 8) or KT(4, 6). For
a production code we need higher-order interpolants that are more reliable. We believe that such
interpolants would improve the performance of our codes considerably.
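For illustration, one plausible form for a cubic interpolant supplying back values is a Hermite cubic matching values and derivatives at the two most recent points. This is a sketch only; the interpolant actually used by the code is not specified in this section, and the function below is an assumption for illustration.

```python
# Hypothetical cubic Hermite interpolant of the kind that could supply
# back values after a stepsize change. Illustrative assumption only.

def hermite_cubic(t0, y0, dy0, t1, y1, dy1, t):
    """Cubic matching y and y' at t0 and t1, evaluated at t."""
    h = t1 - t0
    s = (t - t0) / h
    # Standard Hermite basis functions on [0, 1]
    h00 = (1 + 2 * s) * (1 - s) ** 2
    h10 = s * (1 - s) ** 2
    h01 = s ** 2 * (3 - 2 * s)
    h11 = s ** 2 * (s - 1)
    return h00 * y0 + h10 * h * dy0 + h01 * y1 + h11 * h * dy1
```

A cubic interpolant of this kind is only fourth-order accurate, which is consistent with the remark above that higher-order interpolants would be needed when the formula pair in use is KT(4, 6) or KT(6, 8).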

Table 2
Results for Example 2

5. Conclusions

We have shown how an eighth-order, P-stable, perfect quartic, four-evaluation, two-step method
for solving second-order initial-value problems directly may be combined with a sixth-order,
P-stable, perfect cubic, three-evaluation method to give an estimate of the local error. We have also
discussed how this pair of formulae may be incorporated in a variable-order, variable-step code
together with a sixth-order/fourth-order pair and a fourth-order/second-order pair. The three pairs of
formulae have many features in common, which reduces the amount of computational effort
required. Our strategy for varying the order means that the code uses, in general, the best formula
pair for a given tolerance, although a production code would need better interpolants.
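The embedded error estimation summarised above can be illustrated schematically: the difference between two approximations of different orders at the end of a step estimates the local error of the lower-order formula. The toy Taylor-step example below stands in for the actual hybrid formulae, which are not reproduced here.

```python
# Schematic embedded-pair error estimation. The Taylor steps below are
# a toy stand-in for the actual sixth/eighth-order hybrid formulae.
import math

def local_error_estimate(y_low, y_high):
    # The difference between two approximations of different order
    # estimates the local error of the lower-order one.
    return abs(y_high - y_low)

# Toy check on y'' = -y, y(0) = 1, y'(0) = 0, whose solution is cos(t):
h = 0.1
y2 = 1 - h ** 2 / 2                # 2nd-order Taylor step
y4 = 1 - h ** 2 / 2 + h ** 4 / 24  # 4th-order Taylor step
est = local_error_estimate(y2, y4)       # = h^4 / 24
true_err = abs(math.cos(h) - y2)         # actual local error of y2
```

For this toy problem the estimate agrees with the true local error of the lower-order step to within a fraction of a percent, which is what makes the embedded pair usable for step control.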
