Article

On a Class of Optimal Fourth Order Multiple Root Solvers without Using Derivatives

Janak Raj Sharma, Sunil Kumar and Lorentz Jäntschi
1 Department of Mathematics, Sant Longowal Institute of Engineering and Technology, Longowal, Sangrur 148106, India
2 Department of Physics and Chemistry, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
3 Institute of Doctoral Studies, Babeş-Bolyai University, 400084 Cluj-Napoca, Romania
* Authors to whom correspondence should be addressed.
Symmetry 2019, 11(12), 1452; https://doi.org/10.3390/sym11121452
Submission received: 23 October 2019 / Revised: 19 November 2019 / Accepted: 20 November 2019 / Published: 26 November 2019
(This article belongs to the Special Issue Symmetry in Applied Mathematics)

Abstract: Many optimal-order multiple-root techniques involving derivatives have been proposed in the literature. On the contrary, optimal-order multiple-root techniques without derivatives are almost nonexistent. Motivated by this, we develop a family of optimal fourth-order derivative-free iterative schemes for computing multiple roots. The procedure is based on two steps, of which the first is a Traub–Steffensen iteration and the second is a Traub–Steffensen-like iteration. The theoretical results proved for particular cases of the family are symmetric to each other. This feature leads us to prove the general result that shows the fourth-order convergence. Efficacy is demonstrated on different test problems, which verify the efficient convergence behavior of the new methods. Moreover, the comparison of performance shows that the presented derivative-free techniques are good competitors of the existing optimal fourth-order methods that use derivatives.

1. Introduction

We consider derivative-free methods for finding the multiple root (say, $\alpha$) with multiplicity $m$ of a nonlinear equation $f(t) = 0$, i.e., $f^{(j)}(\alpha) = 0$ for $j = 0, 1, 2, \ldots, m-1$ and $f^{(m)}(\alpha) \neq 0$.
Several higher order methods, with or without the use of modified Newton’s method [1]
$t_{k+1} = t_k - m\,\frac{f(t_k)}{f'(t_k)},$  (1)
have been derived and analyzed in the literature (see, for example, [2,3,4,5,6,7,8,9,10,11,12,13,14,15] and the references cited therein). Such methods require evaluating derivatives of either the first order or of both the first and second order. Contrary to this, higher-order derivative-free methods for computing multiple roots are yet to be investigated. These methods are important for problems in which the derivative $f'$ is complicated to evaluate or is costly to obtain. The most basic derivative-free method is the Traub–Steffensen method [16], which uses the approximation
$f'(t_k) \approx \frac{f(t_k + \beta f(t_k)) - f(t_k)}{\beta f(t_k)}, \qquad \beta \in \mathbb{R} \setminus \{0\},$
or
$f'(t_k) \approx f[s_k, t_k],$
for the derivative $f'$ in the classical Newton method in Equation (1). Here, $s_k = t_k + \beta f(t_k)$ and $f[s,t] = \frac{f(s) - f(t)}{s - t}$ is a first-order divided difference. In this way, the modified Newton method in Equation (1) transforms into the modified Traub–Steffensen derivative-free method
$t_{k+1} = t_k - m\,\frac{f(t_k)}{f[s_k, t_k]}.$  (2)
The modified Traub–Steffensen method in Equation (2) is a noticeable improvement over the Newton method, because it preserves the convergence of order two without using any derivative.
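As a minimal illustration (the function name, tolerance, iteration cap and demonstration function are our own choices, not part of the paper), the modified Traub–Steffensen method in Equation (2) can be sketched in Python as follows.

```python
# Minimal sketch of the modified Traub-Steffensen method in Equation (2):
#   t_{k+1} = t_k - m*f(t_k)/f[s_k, t_k],  with  s_k = t_k + beta*f(t_k).
# All names and numerical settings here are illustrative choices.

def modified_traub_steffensen(f, t0, m=1, beta=0.01, tol=1e-12, max_iter=100):
    t = t0
    for _ in range(max_iter):
        ft = f(t)
        s = t + beta * ft                  # auxiliary point s_k
        if s == t:                         # f(t_k) is (numerically) zero
            return t
        dd = (f(s) - ft) / (s - t)         # divided difference f[s_k, t_k]
        t_new = t - m * ft / dd
        if abs(t_new - t) + abs(ft) < tol:
            return t_new
        t = t_new
    return t

if __name__ == "__main__":
    # Double zero of (t - 1.75)^2 (t - 1.72) at t = 1.75 (cf. Example 2).
    f = lambda t: (t - 1.75) ** 2 * (t - 1.72)
    print(modified_traub_steffensen(f, 2.5, m=2))  # ~1.75 (double-precision limited)
```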
In this work, we aim to design derivative-free multiple-root methods of high computational efficiency, i.e., methods of higher convergence order that use as few function evaluations as possible. Proceeding in this way, we introduce a class of derivative-free fourth-order methods that require three new pieces of information of the function $f$ per iteration, and hence possess optimal fourth-order convergence in the terminology of the Kung–Traub conjecture [17]. This conjecture states that multi-point iterative functions without memory based on $n$ function evaluations may attain at most the convergence order $2^{n-1}$. The methods achieving this convergence order are usually called optimal methods. The new iterative scheme uses the modified Traub–Steffensen iteration in Equation (2) in the first step and a Traub–Steffensen-like iteration in the second step. The methods are examined numerically on several practical problems of different kinds. The comparison of performance with existing techniques requiring derivative evaluations verifies the efficient character of the new methods in terms of accuracy and elapsed CPU time.
The rest of the paper is organized as follows. In Section 2, the fourth-order scheme is proposed and its convergence order is studied for particular cases. The main result for the general case is established in Section 3. Numerical tests demonstrating the applicability and efficiency of the methods are presented in Section 4, along with a comparison of performance with already established methods. Concluding remarks are drawn in Section 5.

2. Formulation of Method

To compute a multiple root with multiplicity $m \geq 1$, consider the following two-step iterative scheme:
$z_k = t_k - m\,\frac{f(t_k)}{f[s_k, t_k]}, \qquad t_{k+1} = z_k - H(x_k, y_k)\,\frac{f(t_k)}{f[s_k, t_k]},$  (3)
where $x_k = \left(\frac{f(z_k)}{f(t_k)}\right)^{1/m}$, $y_k = \left(\frac{f(z_k)}{f(s_k)}\right)^{1/m}$, and $H : \mathbb{C}^2 \to \mathbb{C}$ is analytic in a neighborhood of $(0, 0)$. Notice that this is a two-step scheme whose first step is the Traub–Steffensen iteration in Equation (2) and whose second step is a Traub–Steffensen-like iteration. The second step is weighted by the factor $H(x_k, y_k)$, so we call it the weight factor or, more appropriately, the weight function.
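As a minimal sketch of how one iteration of Equation (3) is organized (the function name, the default value of $\beta$, and the use of principal m-th roots are our own illustrative choices, not prescribed by the paper), the step can be written with the weight function passed in as a callable; the special cases NM1–NM3 introduced later correspond to particular choices of H.

```python
# One iteration of the family in Equation (3) with a user-supplied weight
# function H(x, y).  Illustrative sketch; principal m-th roots are used.

def scheme3_step(f, t, m, H, beta=0.01):
    ft = f(t)                              # assumed nonzero away from the root
    s = t + beta * ft                      # s_k = t_k + beta*f(t_k)
    fs = f(s)
    dd = (fs - ft) / (s - t)               # divided difference f[s_k, t_k]
    z = t - m * ft / dd                    # first step: Traub-Steffensen
    fz = f(z)
    x = (fz / ft) ** (1.0 / m)             # x_k = (f(z_k)/f(t_k))^(1/m)
    y = (fz / fs) ** (1.0 / m)             # y_k = (f(z_k)/f(s_k))^(1/m)
    return z - H(x, y) * ft / dd           # second step, weighted by H
```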
In the sequel, we study the convergence of the proposed iterative scheme in Equation (3). For clarity, the results are obtained separately for different cases based on the multiplicity m. First, for the case m = 1, the following theorem is proved:
Theorem 1.
Assume that $f : \mathbb{C} \to \mathbb{C}$ is an analytic function in a domain containing a zero (say, $\alpha$) of multiplicity $m = 1$. Suppose that the initial point $t_0$ is close enough to $\alpha$; then the convergence order of Equation (3) is at least 4, provided that $H_{00} = 0$, $H_{10} = 1$, $H_{01} = 0$, $H_{20} = 2$, $H_{11} = 1$ and $H_{02} = 0$, where $H_{ij} = \frac{\partial^{i+j}}{\partial x^i \partial y^j} H(x_k, y_k)\big|_{(x_k = 0,\, y_k = 0)}$, for $0 \leq i, j \leq 2$.
Proof. 
Assume that the error at the kth stage is $e_k = t_k - \alpha$. Using the Taylor expansion of $f(t_k)$ about $\alpha$, and keeping in mind that $f(\alpha) = 0$ and $f'(\alpha) \neq 0$, we have
$f(t_k) = f'(\alpha)\, e_k \big(1 + A_1 e_k + A_2 e_k^2 + A_3 e_k^3 + A_4 e_k^4 + \cdots\big),$  (4)
where $A_n = \frac{1}{(1+n)!}\,\frac{f^{(1+n)}(\alpha)}{f'(\alpha)}$ for $n \in \mathbb{N}$.
Similarly we have the Taylor’s expansion of f ( s k ) about α
$f(s_k) = f'(\alpha)\, e_{s_k} \big(1 + A_1 e_{s_k} + A_2 e_{s_k}^2 + A_3 e_{s_k}^3 + A_4 e_{s_k}^4 + \cdots\big),$  (5)
where $e_{s_k} = s_k - \alpha = e_k + \beta f'(\alpha)\, e_k \big(1 + A_1 e_k + A_2 e_k^2 + A_3 e_k^3 + A_4 e_k^4 + \cdots\big)$.
Then, the first step of Equation (3) yields
$e_{z_k} = z_k - \alpha = (1 + \beta f'(\alpha)) A_1 e_k^2 - \big[(2 + 2\beta f'(\alpha) + (\beta f'(\alpha))^2) A_1^2 - (2 + 3\beta f'(\alpha) + (\beta f'(\alpha))^2) A_2\big] e_k^3 + \big[(4 + 5\beta f'(\alpha) + 3(\beta f'(\alpha))^2 + (\beta f'(\alpha))^3) A_1^3 - (7 + 10\beta f'(\alpha) + 7(\beta f'(\alpha))^2 + 2(\beta f'(\alpha))^3) A_1 A_2 + (3 + 6\beta f'(\alpha) + 4(\beta f'(\alpha))^2 + (\beta f'(\alpha))^3) A_3\big] e_k^4 + O(e_k^5).$  (6)
Expanding f ( z k ) about α , it follows that
$f(z_k) = f'(\alpha)\, e_{z_k}\big(1 + A_1 e_{z_k} + A_2 e_{z_k}^2 + A_3 e_{z_k}^3 + \cdots\big).$  (7)
Using Equations (4), (5) and (7) in x k and y k , after some simple calculations, we have
$x_k = (1 + \beta f'(\alpha)) A_1 e_k - \big[(3 + 3\beta f'(\alpha) + (\beta f'(\alpha))^2) A_1^2 - (2 + 3\beta f'(\alpha) + (\beta f'(\alpha))^2) A_2\big] e_k^2 + \big[(8 + 10\beta f'(\alpha) + 5(\beta f'(\alpha))^2 + (\beta f'(\alpha))^3) A_1^3 - 2(5 + 7\beta f'(\alpha) + 4(\beta f'(\alpha))^2 + (\beta f'(\alpha))^3) A_1 A_2 + (3 + 6\beta f'(\alpha) + 4(\beta f'(\alpha))^2 + (\beta f'(\alpha))^3) A_3\big] e_k^3 + O(e_k^4)$  (8)
and
$y_k = A_1 e_k - \big[(3 + 2\beta f'(\alpha)) A_1^2 - (2 + \beta f'(\alpha)) A_2\big] e_k^2 + \big[(8 + 8\beta f'(\alpha) + 3(\beta f'(\alpha))^2) A_1^3 - (10 + 11\beta f'(\alpha) + 4(\beta f'(\alpha))^2) A_1 A_2 + (3 + 3\beta f'(\alpha) + (\beta f'(\alpha))^2) A_3\big] e_k^3 + O(e_k^4).$  (9)
Developing H ( x k , y k ) by Taylor series in the neighborhood of origin ( 0 , 0 ) ,
$H(x_k, y_k) \approx H_{00} + x_k H_{10} + y_k H_{01} + \tfrac{1}{2} x_k^2 H_{20} + x_k y_k H_{11} + \tfrac{1}{2} y_k^2 H_{02}.$  (10)
Inserting Equations (4)–(10) into the second step of Equation (3), and then some simple calculations yield
$e_{k+1} = -H_{00} e_k + \big[H_{00} - H_{01} + \beta f'(\alpha) H_{00} - (-1 + H_{10})(1 + \beta f'(\alpha))\big] A_1 e_k^2 - \tfrac{1}{2}\Big[\big(4 + H_{02} - 8 H_{10} + 2 H_{11} + H_{20} + 4\beta f'(\alpha) - 10\beta f'(\alpha) H_{10} + 2\beta f'(\alpha) H_{11} + 2\beta f'(\alpha) H_{20} + 2(\beta f'(\alpha))^2 - 4(\beta f'(\alpha))^2 H_{10} + (\beta f'(\alpha))^2 H_{20} - 2 H_{01}(4 + 3\beta f'(\alpha)) + 2 H_{00}(2 + 2\beta f'(\alpha) + (\beta f'(\alpha))^2)\big) A_1^2 - 2(2 + \beta f'(\alpha))\big(H_{00} - H_{01} + \beta f'(\alpha) H_{00} - (-1 + H_{10})(1 + \beta f'(\alpha))\big) A_2\Big] e_k^3 + \delta e_k^4 + O(e_k^5),$  (11)
where $\delta = \delta(\beta, A_1, A_2, A_3, H_{00}, H_{10}, H_{01}, H_{20}, H_{11}, H_{02})$; its expression is not reproduced explicitly here since it is very lengthy.
It is clear from Equation (11) that we would obtain at least fourth-order convergence if we set the coefficients of $e_k$, $e_k^2$ and $e_k^3$ simultaneously equal to zero. Then, solving the resulting equations, one gets
$H_{00} = 0, \quad H_{10} = 1, \quad H_{01} = 0, \quad H_{20} = 2, \quad H_{11} = 1, \quad H_{02} = 0.$
As a result, the error equation is given by
$e_{k+1} = (1 + \beta f'(\alpha)) A_1 \big[(5 + 5\beta f'(\alpha) + (\beta f'(\alpha))^2) A_1^2 - (1 + \beta f'(\alpha)) A_2\big] e_k^4 + O(e_k^5).$
Thus, the theorem is proved. □
Next, we show the conditions for m = 2 by the following theorem:
Theorem 2.
Using the hypotheses of Theorem 1, the order of convergence of the scheme in Equation (3) for the case $m = 2$ is at least 4, if $H_{00} = 0$, $H_{10} = 1$, $H_{01} = 1$, and $H_{20} = 8 - H_{02} - 2 H_{11}$, wherein $|H_{11}|, |H_{02}| < \infty$.
Proof. 
Assume that the error at the kth stage is $e_k = t_k - \alpha$. Using the Taylor expansion of $f(t_k)$ about $\alpha$ and keeping in mind that $f(\alpha) = 0$, $f'(\alpha) = 0$, and $f^{(2)}(\alpha) \neq 0$, we have
$f(t_k) = \frac{f^{(2)}(\alpha)}{2!}\, e_k^2 \big(1 + B_1 e_k + B_2 e_k^2 + B_3 e_k^3 + B_4 e_k^4 + \cdots\big),$  (14)
where $B_n = \frac{2!}{(2+n)!}\,\frac{f^{(2+n)}(\alpha)}{f^{(2)}(\alpha)}$ for $n \in \mathbb{N}$.
Similarly, we have the Taylor’s expansion of f ( s k ) about α
$f(s_k) = \frac{f^{(2)}(\alpha)}{2!}\, e_{s_k}^2 \big(1 + B_1 e_{s_k} + B_2 e_{s_k}^2 + B_3 e_{s_k}^3 + B_4 e_{s_k}^4 + \cdots\big),$  (15)
where $e_{s_k} = s_k - \alpha = e_k + \frac{\beta f^{(2)}(\alpha)}{2!}\, e_k^2 \big(1 + B_1 e_k + B_2 e_k^2 + B_3 e_k^3 + B_4 e_k^4 + \cdots\big)$.
Then, the first step of Equation (3) yields
$e_{z_k} = z_k - \alpha = \frac{1}{2}\Big(\frac{\beta f^{(2)}(\alpha)}{2} + B_1\Big) e_k^2 - \frac{1}{16}\big[(\beta f^{(2)}(\alpha))^2 - 8\beta f^{(2)}(\alpha) B_1 + 12 B_1^2 - 16 B_2\big] e_k^3 + \frac{1}{64}\big[(\beta f^{(2)}(\alpha))^3 - 20\beta f^{(2)}(\alpha) B_1^2 + 72 B_1^3 + 64\beta f^{(2)}(\alpha) B_2 - 10 B_1\big((\beta f^{(2)}(\alpha))^2 + 16 B_2\big) + 96 B_3\big] e_k^4 + O(e_k^5).$  (16)
Expanding f ( z k ) about α , it follows that
$f(z_k) = \frac{f^{(2)}(\alpha)}{2!}\, e_{z_k}^2 \big(1 + B_1 e_{z_k} + B_2 e_{z_k}^2 + B_3 e_{z_k}^3 + B_4 e_{z_k}^4 + \cdots\big).$  (17)
Using Equations (14), (15) and (17) in x k and y k , after some simple calculations, we have
$x_k = \frac{1}{2}\Big(\frac{\beta f^{(2)}(\alpha)}{2} + B_1\Big) e_k - \frac{1}{16}\big[(\beta f^{(2)}(\alpha))^2 - 6\beta f^{(2)}(\alpha) B_1 + 16(B_1^2 - B_2)\big] e_k^2 + \frac{1}{64}\big[(\beta f^{(2)}(\alpha))^3 - 22\beta f^{(2)}(\alpha) B_1^2 + 4\big(29 B_1^3 + 14\beta f^{(2)}(\alpha) B_2\big) - 2 B_1\big(3(\beta f^{(2)}(\alpha))^2 + 104 B_2\big) + 96 B_3\big] e_k^3 + O(e_k^4)$  (18)
and
$y_k = \frac{1}{2}\Big(\frac{\beta f^{(2)}(\alpha)}{2} + B_1\Big) e_k - \frac{1}{16}\big[3(\beta f^{(2)}(\alpha))^2 - 2\beta f^{(2)}(\alpha) B_1 + 16(B_1^2 - B_2)\big] e_k^2 + \frac{1}{64}\big[7(\beta f^{(2)}(\alpha))^3 + 24\beta f^{(2)}(\alpha) B_2 - 14\beta f^{(2)}(\alpha) B_1^2 + 116 B_1^3 - 2 B_1\big(11(\beta f^{(2)}(\alpha))^2 + 104 B_2\big) + 96 B_3\big] e_k^3 + O(e_k^4).$  (19)
Developing by Taylor series the weight function H ( x k , y k ) in the neighborhood of origin ( 0 , 0 ) ,
$H(x_k, y_k) \approx H_{00} + x_k H_{10} + y_k H_{01} + \tfrac{1}{2} x_k^2 H_{20} + x_k y_k H_{11} + \tfrac{1}{2} y_k^2 H_{02}.$  (20)
Inserting Equations (14)–(20) into the second step of Equation (3), some simple calculations yield
$e_{k+1} = -\frac{H_{00}}{2} e_k + \frac{1}{4}\big(2 + H_{00} - H_{01} - H_{10}\big)\Big(\frac{\beta f^{(2)}(\alpha)}{2} + B_1\Big) e_k^2 - \frac{1}{64}\Big[(\beta f^{(2)}(\alpha))^2\big(4 + 2 H_{00} - 8 H_{01} + H_{02} - 4 H_{10} + 2 H_{11} + H_{20}\big) + 4\beta f^{(2)}(\alpha)\big({-8} - 4 H_{00} - H_{01} + H_{02} + H_{10} + 2 H_{11} + H_{20}\big) B_1 + 4\big(12 + 6 H_{00} - 10 H_{01} + H_{02} - 10 H_{10} + 2 H_{11} + H_{20}\big) B_1^2 - 32\big(2 + H_{00} - H_{01} - H_{10}\big) B_2\Big] e_k^3 + \phi e_k^4 + O(e_k^5),$  (21)
where $\phi = \phi(\beta, B_1, B_2, B_3, H_{00}, H_{10}, H_{01}, H_{20}, H_{11}, H_{02})$; its expression is not reproduced explicitly here since it is very lengthy.
It is clear from Equation (21) that we would obtain at least fourth-order convergence if we set the coefficients of $e_k$, $e_k^2$ and $e_k^3$ simultaneously equal to zero. Then, solving the resulting equations, one gets
$H_{00} = 0, \quad H_{10} = 1, \quad H_{01} = 1, \quad H_{20} = 8 - H_{02} - 2 H_{11}.$
As a result, the error equation is given by
$e_{k+1} = \frac{1}{32}\Big(\frac{\beta f^{(2)}(\alpha)}{2} + B_1\Big)\Big[2\beta f^{(2)}(\alpha)(3 + H_{02} + H_{11}) B_1 + 22 B_1^2 + (\beta f^{(2)}(\alpha))^2(H_{02} + H_{11}) - 8 B_2\Big] e_k^4 + O(e_k^5).$
Thus, the theorem is proved. □
Below, we state the theorems (without proof) for the cases m = 3, 4, 5, as the proofs are similar to those of the theorems proved above.
Theorem 3.
Using the hypotheses of Theorem 1, the order of convergence of the scheme in Equation (3) for the case $m = 3$ is at least 4, if $H_{00} = 0$, $H_{10} = 3 - H_{01}$, and $H_{20} = 12 - H_{02} - 2 H_{11}$, where $|H_{01}|, |H_{02}|, |H_{11}| < \infty$. Moreover, the scheme satisfies the error equation
$e_{k+1} = \frac{1}{54}\Big[\beta f^{(3)}(\alpha)(3 + H_{01}) C_1 + 12 C_1^3 - 6 C_1 C_2\Big] e_k^4 + O(e_k^5),$
where $C_n = \frac{3!}{(3+n)!}\,\frac{f^{(3+n)}(\alpha)}{f^{(3)}(\alpha)}$ for $n \in \mathbb{N}$.
Theorem 4.
Using the hypotheses of Theorem 1, the order of convergence of the scheme in Equation (3) for the case $m = 4$ is at least 4, if $H_{00} = 0$, $H_{10} = 4 - H_{01}$, and $H_{20} = 16 - H_{02} - 2 H_{11}$, where $|H_{01}|, |H_{02}|, |H_{11}| < \infty$. Moreover, the scheme satisfies the error equation
$e_{k+1} = \frac{1}{128}\big(13 D_1^3 - 8 D_1 D_2\big) e_k^4 + O(e_k^5),$
where $D_n = \frac{4!}{(4+n)!}\,\frac{f^{(4+n)}(\alpha)}{f^{(4)}(\alpha)}$ for $n \in \mathbb{N}$.
Theorem 5.
Using the hypotheses of Theorem 1, the order of convergence of the scheme in Equation (3) for the case $m = 5$ is at least 4, if $H_{00} = 0$, $H_{10} = 5 - H_{01}$, and $H_{20} = 20 - H_{02} - 2 H_{11}$, where $|H_{01}|, |H_{02}|, |H_{11}| < \infty$. Moreover, the scheme satisfies the error equation
$e_{k+1} = \frac{1}{125}\big(7 E_1^3 - 5 E_1 E_2\big) e_k^4 + O(e_k^5),$
where $E_n = \frac{5!}{(5+n)!}\,\frac{f^{(5+n)}(\alpha)}{f^{(5)}(\alpha)}$ for $n \in \mathbb{N}$.
Remark 1.
We can observe from the above results that the number of conditions on $H_{ij}$ is 6, 4, 3, 3, 3 for the cases $m = 1, 2, 3, 4, 5$, respectively, to attain fourth-order convergence of the method in Equation (3). The cases $m = 3, 4, 5$ satisfy the common conditions $H_{00} = 0$, $H_{10} = m - H_{01}$, and $H_{20} = 4m - H_{02} - 2 H_{11}$. Nevertheless, their error equations differ from each other, as the parameter $\beta$ does not appear in the equations for $m = 4, 5$. It has been observed that for $m \geq 4$ the conditions on $H_{ij}$ are always three in number and the error equation in each such case does not contain a $\beta$ term. This type of symmetry in the results helps us to prove the general result, which is presented in the next section.
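The common conditions stated in Remark 1 can also be checked mechanically. The following SymPy sketch (our own illustration, not part of the paper) verifies them for the polynomial weight function used later in method NM1.

```python
# Symbolic check of the common conditions of Remark 1 (illustrative only):
#   H00 = 0,  H10 = m - H01,  H20 = 4m - H02 - 2*H11,
# for the weight H(x, y) = x + m*x^2 + (m - 1)*y + m*x*y used later in NM1.
import sympy as sp

x, y, m = sp.symbols('x y m')
H = x + m * x**2 + (m - 1) * y + m * x * y

at0 = {x: 0, y: 0}
H00 = H.subs(at0)
H10 = sp.diff(H, x).subs(at0)
H01 = sp.diff(H, y).subs(at0)
H20 = sp.diff(H, x, 2).subs(at0)
H11 = sp.diff(H, x, y).subs(at0)
H02 = sp.diff(H, y, 2).subs(at0)

print(sp.simplify(H00))                          # expected 0
print(sp.simplify(H10 - (m - H01)))              # expected 0
print(sp.simplify(H20 - (4*m - H02 - 2*H11)))    # expected 0
```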

3. Main Result

For multiplicity $m \geq 4$, we prove the order of convergence of the scheme in Equation (3) in the following theorem:
Theorem 6.
Assume that the function $f : \mathbb{C} \to \mathbb{C}$ is analytic in a domain containing a zero $\alpha$ of multiplicity $m \geq 4$. Further, suppose that the initial estimate $t_0$ is close enough to $\alpha$. Then, the convergence of the iteration scheme in Equation (3) is of order four, provided that $H_{00} = 0$, $H_{10} = m - H_{01}$, and $H_{20} = 4m - H_{02} - 2 H_{11}$, wherein $|H_{01}|, |H_{11}|, |H_{02}| < \infty$. Moreover, the error in the scheme is given by
$e_{k+1} = \frac{1}{2m^3}\big[(9 + m) K_1^3 - 2m K_1 K_2\big] e_k^4 + O(e_k^5).$
Proof. 
Taking into account that $f^{(j)}(\alpha) = 0$, $j = 0, 1, 2, \ldots, m-1$, and $f^{(m)}(\alpha) \neq 0$, and developing $f(t_k)$ about $\alpha$ in a Taylor series, we have
$f(t_k) = \frac{f^{(m)}(\alpha)}{m!}\, e_k^m \big(1 + K_1 e_k + K_2 e_k^2 + K_3 e_k^3 + K_4 e_k^4 + \cdots\big),$  (23)
where $K_n = \frac{m!}{(m+n)!}\,\frac{f^{(m+n)}(\alpha)}{f^{(m)}(\alpha)}$ for $n \in \mathbb{N}$.
In addition, from the expansion of f ( s k ) about α , it follows that
$f(s_k) = \frac{f^{(m)}(\alpha)}{m!}\, e_{s_k}^m \big(1 + K_1 e_{s_k} + K_2 e_{s_k}^2 + K_3 e_{s_k}^3 + K_4 e_{s_k}^4 + \cdots\big),$  (24)
where $e_{s_k} = s_k - \alpha = e_k + \frac{\beta f^{(m)}(\alpha)}{m!}\, e_k^m \big(1 + K_1 e_k + K_2 e_k^2 + K_3 e_k^3 + K_4 e_k^4 + \cdots\big)$. From the first step of Equation (3),
$e_{z_k} = z_k - \alpha = \frac{K_1}{m} e_k^2 + \frac{1}{m^2}\big[2m K_2 - (1 + m) K_1^2\big] e_k^3 + \frac{1}{m^3}\big[(1 + m)^2 K_1^3 - m(4 + 3m) K_1 K_2 + 3 m^2 K_3\big] e_k^4 + O(e_k^5).$  (25)
Expansion of f ( z k ) around α yields
$f(z_k) = \frac{f^{(m)}(\alpha)}{m!}\, e_{z_k}^m \big(1 + K_1 e_{z_k} + K_2 e_{z_k}^2 + K_3 e_{z_k}^3 + K_4 e_{z_k}^4 + \cdots\big).$  (26)
Using Equations (23), (24) and (26) in the expressions of x k and y k , we have that
$x_k = \frac{K_1}{m} e_k + \frac{1}{m^2}\big[2m K_2 - (2 + m) K_1^2\big] e_k^2 + \frac{1}{2m^3}\big[(7 + 7m + 2m^2) K_1^3 - 2m(7 + 3m) K_1 K_2 + 6 m^2 K_3\big] e_k^3 + O(e_k^4)$  (27)
and
$y_k = \frac{K_1}{m} e_k + \frac{1}{m^2}\big[m K_2 - (2 + m) K_1^2\big] e_k^2 + \frac{1}{2m^3}\big[(6 + 7m + 2m^2) K_1^3 - 2m(6 + 3m) K_1 K_2 + 6 m^2 K_3\big] e_k^3 + O(e_k^4).$  (28)
Developing H ( x k , y k ) in Taylor’s series in the neighborhood of origin ( 0 , 0 ) ,
$H(x_k, y_k) \approx H_{00} + x_k H_{10} + y_k H_{01} + \tfrac{1}{2} x_k^2 H_{20} + x_k y_k H_{11} + \tfrac{1}{2} y_k^2 H_{02}.$  (29)
Inserting Equations (23)–(29) into the second step of Equation (3), it follows that
$e_{k+1} = -\frac{H_{00}}{m} e_k + \frac{1}{m^2}\big(H_{00} - H_{01} - H_{10} + m\big) K_1 e_k^2 - \frac{1}{2m^3}\Big[\big(H_{02} - 6 H_{10} + 2 H_{11} + H_{20} + 2m - 2m H_{10} + 2m^2 + 2(1 + m) H_{00} - 2(3 + m) H_{01}\big) K_1^2 - 4m\big(H_{00} - H_{01} - H_{10} + m\big) K_2\Big] e_k^3 + \frac{1}{2m^4}\Big[\big(5 H_{02} - 13 H_{10} + 10 H_{11} + 5 H_{20} + 2m + 2m H_{02} - 11m H_{10} + 4m H_{11} + 2m H_{20} + 4m^2 - 2m^2 H_{10} + 2m^3 + 2(1 + m)^2 H_{00} - (13 + 11m + 2m^2) H_{01}\big) K_1^3 - 2m\big(2 H_{02} - 11 H_{10} + 4 H_{11} + 2 H_{20} + 4m - 3m H_{10} + 3m^2 + (4 + 3m) H_{00} - (11 + 3m) H_{01}\big) K_1 K_2 + 6 m^2\big(H_{00} - H_{01} - H_{10} + m\big) K_3\Big] e_k^4 + O(e_k^5).$  (30)
It is clear that we can obtain at least fourth-order convergence if the coefficients of $e_k$, $e_k^2$, and $e_k^3$ vanish. On solving the resulting equations, we get
$H_{00} = 0, \quad H_{10} = m - H_{01}, \quad H_{20} = 4m - H_{02} - 2 H_{11}.$
Then, error of Equation (30) is given by
$e_{k+1} = \frac{1}{2m^3}\big[(9 + m) K_1^3 - 2m K_1 K_2\big] e_k^4 + O(e_k^5).$
Thus, the theorem is proved. □
Remark 2.
The proposed scheme in Equation (3) reaches fourth-order convergence provided that the conditions of Theorems 1–3 and 6 are satisfied. This convergence rate is achieved by using only three function evaluations, viz. $f(t_k)$, $f(s_k)$, and $f(z_k)$, per iteration. Therefore, the scheme in Equation (3) is optimal in the sense of the Kung–Traub hypothesis [17].
Remark 3.
It is important to note that the parameter $\beta$, which is used in $s_k$, appears only in the error equations of the cases $m = 1, 2, 3$ but not for $m \geq 4$. For $m \geq 4$, we have observed that this parameter appears in the coefficients of $e_k^5$ and higher-order terms. However, we do not need such terms to show the required fourth-order convergence.

Some Special Cases

We can generate many iterative schemes as special cases of the family in Equation (3), based on forms of the function $H(x, y)$ that satisfy the conditions of Theorems 1, 2 and 6. However, we restrict ourselves to choices of low-degree polynomials or simple rational functions. These choices should be such that the resulting methods converge to the root with order four for every $m \geq 1$. Accordingly, the following simple forms are considered:
(1) Let us choose the function
$H(x_k, y_k) = x_k + m x_k^2 + (m - 1) y_k + m x_k y_k,$
which satisfies the conditions of Theorems 1, 2 and 6. Then, the corresponding fourth-order iterative scheme is given by
$t_{k+1} = z_k - \big[x_k + m x_k^2 + (m - 1) y_k + m x_k y_k\big]\frac{f(t_k)}{f[s_k, t_k]}.$  (33)
(2) Next, consider the rational function
$H(x_k, y_k) = -\frac{x_k + m x_k^2 - (m - 1) y_k (m y_k - 1)}{m y_k - 1},$
satisfying the conditions of Theorems 1, 2 and 6. Then, the corresponding fourth-order iterative scheme is given by
$t_{k+1} = z_k + \frac{x_k + m x_k^2 - (m - 1) y_k (m y_k - 1)}{m y_k - 1}\,\frac{f(t_k)}{f[s_k, t_k]}.$  (34)
(3) Consider another rational function satisfying the conditions of Theorems 1, 2 and 6, which is given by
$H(x_k, y_k) = \frac{x_k - y_k + m y_k + 2 m x_k y_k - m^2 x_k y_k}{1 - m x_k + x_k^2}.$
The corresponding fourth-order iterative scheme is given by
$t_{k+1} = z_k - \frac{x_k - y_k + m y_k + 2 m x_k y_k - m^2 x_k y_k}{1 - m x_k + x_k^2}\,\frac{f(t_k)}{f[s_k, t_k]}.$  (35)
For each of the above cases, $z_k = t_k - m\,\frac{f(t_k)}{f[s_k, t_k]}$. For future reference, the proposed methods in Equations (33)–(35) are denoted by NM1, NM2, and NM3, respectively.
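As a self-contained sketch (solver name, tolerance, iteration cap and principal-root convention are our own choices), NM1–NM3 can be coded by pairing the two-step iteration of Equation (3) with their weight functions; the NM2 and NM3 weights are written in the equivalent rational forms shown above.

```python
# Self-contained sketch of methods NM1-NM3 (Equations (33)-(35)); illustrative
# names and numerical settings.  Double precision limits the attainable
# accuracy for roots of high multiplicity.

def solve(f, t0, m, weight, beta=0.01, tol=1e-14, max_iter=50):
    t = t0
    for _ in range(max_iter):
        ft = f(t)
        s = t + beta * ft
        if s == t:                             # beta*f(t) below machine precision
            return t
        dd = (f(s) - ft) / (s - t)             # divided difference f[s_k, t_k]
        z = t - m * ft / dd                    # first (Traub-Steffensen) step
        fz = f(z)
        x = (fz / ft) ** (1.0 / m)             # principal m-th roots
        y = (fz / f(s)) ** (1.0 / m)
        t_new = z - weight(x, y, m) * ft / dd  # weighted second step
        if abs(t_new - t) + abs(ft) < tol:
            return t_new
        t = t_new
    return t

def H_nm1(x, y, m):                            # weight function of NM1, Eq. (33)
    return x + m * x * x + (m - 1) * y + m * x * y

def H_nm2(x, y, m):                            # weight function of NM2, Eq. (34)
    return -(x + m * x * x - (m - 1) * y * (m * y - 1)) / (m * y - 1)

def H_nm3(x, y, m):                            # weight function of NM3, Eq. (35)
    return (x - y + m * y + 2 * m * x * y - m * m * x * y) / (1 - m * x + x * x)
```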

4. Numerical Results

To validate the theoretical results proven in the previous sections, the special cases NM1, NM2, and NM3 of the new family were tested numerically by implementing them on some nonlinear equations. Moreover, they were also compared with some existing optimal fourth-order methods that use derivatives in their formulas: the methods by Li et al. [7,8], Sharma and Sharma [9], Zhou et al. [10], Soleymani et al. [12], and Kansal et al. [14]. These methods are expressed as follows:
Li–Liao–Cheng method (LLC):
$z_k = t_k - \frac{2m}{m+2}\,\frac{f(t_k)}{f'(t_k)}, \qquad t_{k+1} = t_k - \frac{m(m-2)\left(\frac{m}{m+2}\right)^{-m} f'(z_k) - m^2 f'(t_k)}{f'(t_k) - \left(\frac{m}{m+2}\right)^{-m} f'(z_k)}\,\frac{f(t_k)}{2 f'(t_k)}.$
Li–Cheng–Neta method (LCN):
$z_k = t_k - \frac{2m}{m+2}\,\frac{f(t_k)}{f'(t_k)}, \qquad t_{k+1} = t_k - \alpha_1\,\frac{f(t_k)}{f'(z_k)} - \frac{f(t_k)}{\alpha_2 f'(t_k) + \alpha_3 f'(z_k)},$
where
$\alpha_1 = -\frac{1}{2}\left(\frac{m}{m+2}\right)^m \frac{m(m^4 + 4m^3 - 16m - 16)}{m^3 - 4m + 8}, \quad \alpha_2 = -\frac{(m^3 - 4m + 8)^2}{m(m^4 + 4m^3 - 4m^2 - 16m + 16)(m^2 + 2m - 4)}, \quad \alpha_3 = \frac{m^2(m^3 - 4m + 8)}{\left(\frac{m}{m+2}\right)^m (m^4 + 4m^3 - 4m^2 - 16m + 16)(m^2 + 2m - 4)}.$
Sharma–Sharma method (SS):
$z_k = t_k - \frac{2m}{m+2}\,\frac{f(t_k)}{f'(t_k)}, \qquad t_{k+1} = t_k - \frac{m}{8}\left[(m^3 - 4m + 8) - (m+2)^2\left(\frac{m}{m+2}\right)^m \frac{f'(t_k)}{f'(z_k)}\left(2(m-1) - (m+2)\left(\frac{m}{m+2}\right)^m \frac{f'(t_k)}{f'(z_k)}\right)\right]\frac{f(t_k)}{f'(t_k)}.$
Zhou–Chen–Song method (ZCS):
$z_k = t_k - \frac{2m}{m+2}\,\frac{f(t_k)}{f'(t_k)}, \qquad t_{k+1} = t_k - \frac{m}{8}\left[m^3\left(\frac{m+2}{m}\right)^{2m}\left(\frac{f'(z_k)}{f'(t_k)}\right)^2 - 2m^2(m+3)\left(\frac{m+2}{m}\right)^m \frac{f'(z_k)}{f'(t_k)} + (m^3 + 6m^2 + 8m + 8)\right]\frac{f(t_k)}{f'(t_k)}.$
Soleymani–Babajee–Lotfi method (SBL):
$z_k = t_k - \frac{2m}{m+2}\,\frac{f(t_k)}{f'(t_k)}, \qquad t_{k+1} = t_k - \frac{f'(z_k)\, f(t_k)}{q_1 (f'(z_k))^2 + q_2 f'(z_k) f'(t_k) + q_3 (f'(t_k))^2},$
where
q 1 = 1 16 m 3 m ( 2 + m ) m , q 2 = 8 m ( 2 + m ) ( 2 + m 2 ) 8 m , q 3 = 1 16 ( 2 + m ) m 1 + m ( 2 + m ) 3 m .
Kansal–Kanwar–Bhatia method (KKB):
z k = t k 2 m m + 2 f ( t k ) f ( t k ) , t k + 1 = t k m 4 f ( t k ) 1 + m 4 p 2 m f ( z k ) f ( t k ) + p 1 + m 2 ( 1 + p m ) 8 ( 2 p m + m ( 1 + p m ) ) × 4 2 m + m 2 ( 1 + p m ) f ( t k ) p m ( 2 p m + m ( 1 + p m ) ) 2 f ( t k ) f ( z k ) ,
where p = m m + 2 .
Computational work was carried out in the programming package Mathematica [18] on a PC with an Intel(R) Pentium(R) CPU B960 @ 2.20 GHz, 4 GB RAM and the 32-bit Microsoft Windows 7 Professional operating system. Performance of the new methods was tested by choosing the value of the parameter $\beta = 0.01$. The tabulated results obtained by the methods for each problem include: (a) the number of iterations ($k$) required to obtain the solution using the stopping criterion $|t_{k+1} - t_k| + |f(t_k)| < 10^{-100}$; (b) the estimated errors $|t_{k+1} - t_k|$ in the first three iterations; (c) the calculated convergence order (CCO); and (d) the elapsed CPU time in seconds for the execution of a program, measured by the command “TimeUsed[ ]”. The calculated convergence order (CCO), used to confirm the theoretical convergence order, was computed by the formula (see [19])
$\mathrm{CCO} = \frac{\log\left|(t_{k+2} - \alpha)/(t_{k+1} - \alpha)\right|}{\log\left|(t_{k+1} - \alpha)/(t_k - \alpha)\right|}, \qquad \text{for each } k = 1, 2, \ldots$
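As a small helper (our own illustration, not part of the paper), the formula above can be applied to a stored list of iterates as follows.

```python
import math

def cco(iterates, alpha):
    """Calculated convergence order from successive iterates t_k and the known
    root alpha, using the formula above; assumes no iterate equals alpha
    exactly.  Illustrative helper only."""
    orders = []
    for tk, tk1, tk2 in zip(iterates, iterates[1:], iterates[2:]):
        num = math.log(abs((tk2 - alpha) / (tk1 - alpha)))
        den = math.log(abs((tk1 - alpha) / (tk - alpha)))
        orders.append(num / den)
    return orders
```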
The following numerical examples were chosen for experimentation:
Example 1.
Planck law of radiation to calculate the energy density in an isothermal black body [20] is stated as
$\varphi(\lambda) = \frac{8\pi c h\, \lambda^{-5}}{e^{ch/(\lambda k T)} - 1},$  (37)
where λ is the wavelength of the radiation, c is the speed of light, T is the absolute temperature of the black body, k is Boltzmann’s constant, and h is Planck’s constant. The problem is to determine the wavelength λ corresponding to the maximum energy density φ(λ). Differentiating Equation (37) with respect to λ leads to
$\varphi'(\lambda) = \frac{8\pi c h\, \lambda^{-6}}{e^{ch/(\lambda k T)} - 1}\left(\frac{(ch/(\lambda k T))\, e^{ch/(\lambda k T)}}{e^{ch/(\lambda k T)} - 1} - 5\right) = A \cdot B \ (\text{say}).$
Note that a maximum of φ will occur when B = 0, that is, when
$\frac{(ch/(\lambda k T))\, e^{ch/(\lambda k T)}}{e^{ch/(\lambda k T)} - 1} = 5.$
Setting $t = ch/(\lambda k T)$, the above equation assumes the form
$1 - \frac{t}{5} = e^{-t}.$  (38)
Define
$f_1(t) = e^{-t} - 1 + \frac{t}{5}.$  (39)
The root t = 0 is trivial and thus is not taken for discussion. Observe that for t = 5 the left-hand side of Equation (38) is zero, while the right-hand side is $e^{-5} \approx 6.74 \times 10^{-3}$. Thus, we expect another root to occur somewhere near t = 5. In fact, the desired root of Equation (39) is $\alpha \approx 4.96511423174427630369$, and the initial guess is taken as $t_0 = 5.5$. Then, the wavelength of radiation (λ) corresponding to the maximum energy density is
$\lambda \approx \frac{ch}{4.96511423174427630369\, k T}.$
The results so obtained are shown in Table 1.
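Assuming the hypothetical solve and H_nm1 helpers sketched after Equations (33)–(35), the computation of Example 1 can be reproduced along the following lines.

```python
import math

# Reduced Planck-law equation (39): f1(t) = exp(-t) - 1 + t/5, simple root near 5.
f1 = lambda t: math.exp(-t) - 1.0 + t / 5.0

# Reuses the illustrative solve() and H_nm1() sketched after Eqs. (33)-(35).
root = solve(f1, 5.5, m=1, weight=H_nm1)
print(root)   # ~4.96511423174...

# The wavelength of maximum energy density then follows as
# lambda = c*h / (root * k * T), with c, h, k, T as named in the text.
```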
Example 2.
Consider the van der Waals equation (see [15])
$\left(P + \frac{a_1 n^2}{V^2}\right)(V - n a_2) = n R T,$
which explains the behavior of a real gas by introducing two parameters $a_1$ and $a_2$ into the ideal gas equation. To find the volume V in terms of the remaining parameters, one must solve the equation
$P V^3 - (n a_2 P + n R T) V^2 + a_1 n^2 V - a_1 a_2 n^3 = 0.$
One can find values of n, P, and T, for a given set of values of $a_1$ and $a_2$ of a particular gas, such that the equation has three roots. Using one particular set of values, we have the function
$f_2(t) = t^3 - 5.22\, t^2 + 9.0825\, t - 5.2675,$
which has three roots, of which one is the simple zero α = 1.72 and the other is the multiple zero α = 1.75 of multiplicity two; the latter is our desired zero. The methods were tested with the initial guess $t_0 = 2.5$. Computed results are given in Table 2.
Example 3.
Next, we consider a standard nonlinear test function, which is defined as
f 3 ( t ) = tan 1 5 2 tan 1 ( t 2 1 ) + 6 tan 1 t 2 1 6 tan 1 1 2 5 6 11 63 3 .
The function $f_3$ has a multiple zero at α = 1.8411027704926161 of multiplicity three. We select the initial approximation $t_0 = 1.6$ to obtain this zero. Numerical results are exhibited in Table 3.
Example 4.
Lastly, we consider another standard test function, which is defined as
$f_4(t) = t\,(t^2 + 1)\left(2 e^{t^2 + 1} + t^2 - 1\right)\cosh^2\!\left(\frac{\pi t}{2}\right).$
The function $f_4$ has a multiple zero at $i$ of multiplicity four. We choose the initial approximation $t_0 = 1.2\, i$ for obtaining this zero. Numerical results are displayed in Table 4.
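Because the iterates of Example 4 are complex, it is worth noting that the same hypothetical solve and H_nm1 helpers sketched earlier run unchanged on complex arguments when the elementary functions are taken from cmath; a sketch:

```python
import cmath

# Example 4: f4 has a zero of multiplicity 4 at t = i; complex arithmetic throughout.
def f4(t):
    return (t * (t**2 + 1) * (2 * cmath.exp(t**2 + 1) + t**2 - 1)
            * cmath.cosh(cmath.pi * t / 2) ** 2)

# Reuses the illustrative solve() and H_nm1() sketched after Eqs. (33)-(35);
# the initial guess 1.2i is the one used in the text.
root = solve(f4, 1.2j, m=4, weight=H_nm1)
print(root)   # close to 1j; double precision limits the attainable accuracy
```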
From the computed results shown in Table 1, Table 2, Table 3 and Table 4, we can observe good convergence behavior of the proposed methods, similar to that of the existing methods. The reason for the good convergence is the increase in accuracy of the successive approximations in each iteration, as is evident from the numerical results; this also points to the stable nature of the methods. It is also clear that the approximations to the solutions computed by the new methods have accuracies greater than or equal to those computed by the existing methods. We display the value 0 of $|t_{k+1} - t_k|$ at the stage when the stopping criterion $|t_{k+1} - t_k| + |f(t_k)| < 10^{-100}$ has been satisfied. From the calculated convergence order shown in the penultimate column of each table, we verify the theoretical fourth order of convergence.
The efficient nature of the presented methods can be observed from the fact that the CPU time consumed by them is less than the time taken by the existing methods (a result confirmed by similar numerical experiments on many other problems). Applications requiring repeated evaluations of roots (such as the ones tackled in [21,22,23,24]) may also benefit greatly from the use of the proposed methods (NM1–NM3, Equations (33)–(35)).

5. Conclusions

In this paper, we have proposed a family of fourth-order derivative-free numerical methods for obtaining multiple roots of nonlinear equations. The convergence analysis establishes order four under standard assumptions on the function whose zeros are sought. In addition, the designed scheme satisfies the Kung–Traub hypothesis of optimal order of convergence. Some special cases of the family are presented and employed to solve nonlinear equations, including ones arising in practical problems. The new methods are compared with existing techniques of the same order. The numerical results show that the presented derivative-free methods are good competitors of the existing optimal fourth-order techniques that require derivative evaluations. We conclude with the remark that derivative-free methods are good alternatives to Newton-type schemes in cases where derivatives are expensive to compute or difficult to obtain.

Author Contributions

Methodology, J.R.S.; writing, review and editing, J.R.S.; investigation, S.K.; data curation, S.K.; conceptualization, L.J.; formal analysis, L.J.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Schröder, E. Über unendlich viele Algorithmen zur Auflösung der Gleichungen. Math. Ann. 1870, 2, 317–365.
2. Hansen, E.; Patrick, M. A family of root finding methods. Numer. Math. 1977, 27, 257–269.
3. Victory, H.D.; Neta, B. A higher order method for multiple zeros of nonlinear functions. Int. J. Comput. Math. 1983, 12, 329–335.
4. Dong, C. A family of multipoint iterative functions for finding multiple roots of equations. Int. J. Comput. Math. 1987, 21, 363–367.
5. Osada, N. An optimal multiple root-finding method of order three. J. Comput. Appl. Math. 1994, 51, 131–133.
6. Neta, B. New third order nonlinear solvers for multiple roots. Appl. Math. Comput. 2008, 202, 162–170.
7. Li, S.; Liao, X.; Cheng, L. A new fourth-order iterative method for finding multiple roots of nonlinear equations. Appl. Math. Comput. 2009, 215, 1288–1292.
8. Li, S.G.; Cheng, L.Z.; Neta, B. Some fourth-order nonlinear solvers with closed formulae for multiple roots. Comput. Math. Appl. 2010, 59, 126–135.
9. Sharma, J.R.; Sharma, R. Modified Jarratt method for computing multiple roots. Appl. Math. Comput. 2010, 217, 878–881.
10. Zhou, X.; Chen, X.; Song, Y. Constructing higher-order methods for obtaining the multiple roots of nonlinear equations. J. Comput. Appl. Math. 2011, 235, 4199–4206.
11. Sharifi, M.; Babajee, D.K.R.; Soleymani, F. Finding the solution of nonlinear equations by a class of optimal methods. Comput. Math. Appl. 2012, 63, 764–774.
12. Soleymani, F.; Babajee, D.K.R.; Lotfi, T. On a numerical technique for finding multiple zeros and its dynamics. J. Egypt. Math. Soc. 2013, 21, 346–353.
13. Geum, Y.H.; Kim, Y.I.; Neta, B. A class of two-point sixth-order multiple-zero finders of modified double-Newton type and their dynamics. Appl. Math. Comput. 2015, 270, 387–400.
14. Kansal, M.; Kanwar, V.; Bhatia, S. On some optimal multiple root-finding methods and their dynamics. Appl. Appl. Math. 2015, 10, 349–367.
15. Behl, R.; Zafar, F.; Alshormani, A.S.; Junjua, M.U.D.; Yasmin, N. An optimal eighth-order scheme for multiple zeros of univariate functions. Int. J. Comput. Meth. 2019, 16, 1843002.
16. Traub, J.F. Iterative Methods for the Solution of Equations; Chelsea Publishing Company: New York, NY, USA, 1982.
17. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651.
18. Wolfram, S. The Mathematica Book, 5th ed.; Wolfram Media, 2003.
19. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93.
20. Bradie, B. A Friendly Introduction to Numerical Analysis; Pearson Education Inc.: New Delhi, India, 2006.
21. Jäntschi, L.; Bolboacă, S.D. Conformational study of C24 cyclic polyyne clusters. Int. J. Quantum Chem. 2018, 118, e25614.
22. Azad, H.; Anaya, K.; Al-Dweik, A.Y.; Mustafa, M.T. Invariant solutions of the wave equation on static spherically symmetric spacetimes admitting G7 isometry algebra. Symmetry 2018, 10, 665.
23. Matko, V.; Brezovec, B. Improved data center energy efficiency and availability with multilayer node event processing. Energies 2018, 11, 2478.
24. Jäntschi, L. The eigenproblem translated for alignment of molecules. Symmetry 2019, 11, 1027.
Table 1. Numerical results for Example 1.

Method   k      |t_2 − t_1|     |t_3 − t_2|      |t_4 − t_3|      CCO     CPU time (s)
LLC      4      1.51×10^−5      1.47×10^−23      1.30×10^−95      4.000   0.4993
LCN      4      1.55×10^−5      1.73×10^−23      2.65×10^−95      4.000   0.5302
SS       4      1.52×10^−5      1.51×10^−23      1.47×10^−95      4.000   0.6390
ZCS      4      1.57×10^−5      1.87×10^−23      3.75×10^−95      4.000   0.6404
SBL      4      1.50×10^−5      1.43×10^−23      1.19×10^−95      4.000   0.8112
KKB      fails  –               –                –                –       –
NM1      3      5.59×10^−6      1.35×10^−25      0                4.000   0.3344
NM2      3      5.27×10^−6      9.80×10^−26      0                4.000   0.3726
NM3      3      5.43×10^−6      1.16×10^−25      0                4.000   0.3475
Table 2. Numerical results for Example 2.

Method   k      |t_2 − t_1|     |t_3 − t_2|      |t_4 − t_3|      CCO     CPU time (s)
LLC      6      9.09×10^−2      8.03×10^−3       2.33×10^−5       4.000   0.0780
LCN      6      9.09×10^−2      8.03×10^−3       2.33×10^−5       4.000   0.0784
SS       6      9.26×10^−2      8.58×10^−3       3.11×10^−5       4.000   0.0945
ZCS      6      9.62×10^−2      9.84×10^−3       5.64×10^−5       4.000   0.0792
SBL      6      9.09×10^−2      8.03×10^−3       2.33×10^−5       4.000   0.0797
KKB      6      8.97×10^−2      7.62×10^−3       1.68×10^−5       4.000   0.0934
NM1      6      9.91×10^−2      1.08×10^−2       8.79×10^−5       4.000   0.0752
NM2      6      8.06×10^−2      5.08×10^−3       2.81×10^−5       4.000   0.0684
NM3      6      8.78×10^−2      7.02×10^−3       1.31×10^−5       4.000   0.0788
Table 3. Numerical results for Example 3.

Method   k      |t_2 − t_1|     |t_3 − t_2|      |t_4 − t_3|      CCO     CPU time (s)
LLC      4      1.11×10^−4      9.02×10^−19      3.91×10^−75      4.000   2.5743
LCN      4      1.11×10^−4      8.93×10^−19      3.72×10^−75      4.000   2.6364
SS       4      1.11×10^−4      8.71×10^−19      3.29×10^−75      4.000   2.8718
ZCS      4      1.11×10^−4      8.16×10^−19      2.38×10^−75      4.000   2.8863
SBL      4      1.11×10^−4      8.63×10^−19      3.15×10^−75      4.000   3.2605
KKB      4      1.11×10^−4      9.80×10^−19      5.87×10^−75      4.000   2.9011
NM1      4      2.31×10^−5      4.04×10^−21      3.78×10^−84      4.000   2.2935
NM2      4      2.07×10^−5      1.32×10^−21      2.18×10^−86      4.000   2.5287
NM3      4      2.11×10^−5      1.66×10^−21      6.36×10^−86      4.000   2.4964
Table 4. Numerical results for Example 4.

Method   k      |t_2 − t_1|     |t_3 − t_2|      |t_4 − t_3|      CCO     CPU time (s)
LLC      4      2.64×10^−4      2.13×10^−15      9.11×10^−60      4.000   1.7382
LCN      4      2.64×10^−4      2.14×10^−15      9.39×10^−60      4.000   2.4035
SS       4      2.64×10^−4      2.18×10^−15      1.01×10^−59      4.000   2.5431
ZCS      4      2.65×10^−4      2.24×10^−15      1.14×10^−59      4.000   2.6213
SBL      4      2.66×10^−4      2.28×10^−15      1.23×10^−59      4.000   3.2610
KKB      4      2.61×10^−4      2.00×10^−15      6.83×10^−60      4.000   2.6524
NM1      4      1.43×10^−4      1.29×10^−16      8.61×10^−65      4.000   0.5522
NM2      4      4.86×10^−5      5.98×10^−20      1.36×10^−79      4.000   0.6996
NM3      4      6.12×10^−5      6.69×10^−19      9.54×10^−75      4.000   0.6837
