Filled Function

Article history:
Received 31 May 2008
Received in revised form 5 September 2008
Accepted 13 October 2008
Available online 1 November 2008

Keywords:
Nonsmooth optimization
Global optimization
Filled function
Local minimizer
Global minimizer

Abstract

The filled function method is an effective approach to find a global minimizer for a general class of nonsmooth programming problems with a closed bounded domain. This paper gives a new definition for the filled function, which overcomes some drawbacks of the previous definition. It proposes a two-parameter filled function and a one-parameter filled function to improve the efficiency of numerical computation. Based on these analyses, two corresponding filled function algorithms are presented. They are global optimization methods which modify the objective function as a filled function, and which find a better local minimizer gradually by optimizing the filled function constructed on the minimizer previously found. Numerical results obtained indicate the efficiency and reliability of the proposed filled function methods.

© 2008 Elsevier Inc. All rights reserved.
1. Introduction
Frequently, practitioners need to solve global optimization problems in many fields such as engineering design, molecular
biology, neural network training, and social science. So global optimization becomes a popular computational task for
researchers. However, due to the existence of multiple local minimizers that differ from the global solution, we face two difficulties: how to jump from a local minimizer to a smaller one, and how to judge whether the current minimizer is a global one. Such problems cannot be solved directly by classical nonlinear programming techniques. Generally speaking,
global optimization methods can be divided into two types: stochastic methods and deterministic methods. The stochastic methods, such as those in [1–3], are based on biology or statistical physics; they escape a local minimizer by using a probability-based approach. Although these methods have their uses, they suffer from handicaps such as slow convergence or a tendency to get trapped in local optima. How to leave the previously found local minimizer quickly is an issue to be considered. Some researchers have proposed a method which uses several initial points to conduct a multi-start search to solve this
problem. But this method is a stochastic searching method and needs to use the random number generator in determining
the next searching direction. Theoretically, the randomness ensures that the probability to get the optimal solution is non-
zero in given computing time, but in fact, it will take considerable computing time to get the optimal solution. On the other
hand, deterministic methods such as those in [4–7], converge more rapidly, and can often escape from the previously found
local minimizer to a better one (see Appendix).
The filled function method, first proposed for smooth optimization by Ge [8,9], and recently reconsidered in [10–14], is
one of the effective deterministic global optimization methods. It modifies the objective function as a filled function, and
then finds a better local minimizer gradually by optimizing the filled function constructed on the minimizer previously
* Corresponding author. Address: Department of Mathematics, Shanghai University, Shangda Road No. 99, Shanghai 200444, PR China.
E-mail address: znuzy@shu.edu.cn (Y. Zhang).
0307-904X/$ - see front matter © 2008 Elsevier Inc. All rights reserved.
doi:10.1016/j.apm.2008.10.015
Y. Zhang et al. / Applied Mathematical Modelling 33 (2009) 3114–3129 3115
found. The filled function method provides us with a good idea to use the local optimization techniques to solve global opti-
mization problems.
Nonsmooth optimization is also one of the active research areas in computational mathematics, applied mathematics,
and engineering design optimization. The methods for nonsmooth optimization include the subgradient method [15], the cutting-planes method [16,17], the analytic center cutting-planes (ACCP) method [18], the bundle method [19,20], the trust region method [21], and the bundle trust region method [22]. Most of the existing nonsmooth optimization methods actually seek only a local minimizer. Several researchers have extended the filled function method from smooth global optimization to nonsmooth
global optimization (see [23–25]).
In this paper, we consider the following nonsmooth global optimization problem (P):

$$\min f(x) \quad \text{s.t. } x \in \Omega, \tag{1.1}$$

where $f: \Omega \subset \mathbb{R}^n \to \mathbb{R}$ is Lipschitz continuous with constant $L$ on $\mathbb{R}^n$, and $\Omega$ is a closed bounded box containing all global minimizers of $f(x)$ in its interior. It is assumed that the number of minimizers of (P) can be infinite, but the number of different function values at the minimizers is finite.
In practice, it is more important to make sure that the global minimizer is in the interior of $\Omega$. When $f: \mathbb{R}^n \to \mathbb{R}$ satisfies the condition $f(x) \to +\infty$ as $\|x\| \to +\infty$, that is, when $f(x)$ is coercive, the nonsmooth global optimization problem

$$\min f(x) \quad \text{s.t. } x \in \mathbb{R}^n \tag{1.2}$$

can be reduced to an equivalent problem of the form (1.1). Obviously, if $f(x)$ is coercive, then there exists a closed bounded box $\Omega \subset \mathbb{R}^n$ such that $\Omega$ contains all minimizers of $f(x)$ and the value of $f(x)$ for $x$ on the boundary of $\Omega$ is greater than any value of $f(x)$ for $x$ inside $\Omega$. In this case, we assume that $\Omega$ is known and our method only considers points in $\Omega$.
Let $x_1$ be a current local minimizer of $f(x)$. For the definitions of basin and hill, please refer to [8]. In [23], a function $P(x, x_1)$ is called a filled function of $f(x)$ at $x_1$ if it satisfies the following properties:

(1) $x_1$ is a maximizer of $P(x, x_1)$ and the whole basin $B_1$ becomes a part of a hill of $P(x, x_1)$.
(2) $P(x, x_1)$ has no stationary points in any basin higher than $B_1$.
(3) If there is a basin lower than $B_1$, then there is a point $x'$ in that basin that minimizes $P(x, x_1)$ on the line through $x$ and $x_1$.
The filled function $Q_1(x, x_0; h, A)$ requires two adjustable parameters, $h$ and $A$, which need to be appropriately iterated and coordinated with each other, hence its algorithmic realization is fairly complicated. The values of $h$ and $A$ must satisfy the inequalities

$$\frac{A\,G(f(x) - f(x_1) + h)}{G'(f(x) - f(x_1) + h)} < \frac{F^{0}(x, d)\,\|x - x_0\|}{d^T (x - x_0)} \quad \text{and} \quad \frac{A\,G(h)}{G'(h)} > L,$$

where $d$ is a descent direction of $f(x)$ at $x$, $F^{0}(x, d) = \max\{g^T d : g \in \partial f(x)\}$, and $\partial f(x)$ is the generalized gradient of $f(x)$ at $x$. Furthermore, $Q_1(x, x_0; h, A)$ needs the assumption that $f(x)$ has only a finite number of local minimizers and that there exists a minimizer on the line joining $x_0$ and $x_1$. These features may affect the computability when applied to numerical optimization. In order to overcome these drawbacks, in Section 2 we revise the definition of the filled function and propose a new two-parameter filled function.
Owing to the complicated co-adjustment of the two parameters, attempts have been made to improve the properties of the filled function. The basic idea of the modification is to remove one parameter. Kong [24] proposed a one-parameter filled function $Q_2(x, A)$. Since both $Q_2(x, A)$ and $\partial Q_2(x, A)$ are affected by the term $\exp(A\|x - x_1\|^2)$, changes in $Q_2(x, A)$ and $\partial Q_2(x, A)$ become indistinguishable when $\|x - x_1\|$ is small. Wu [25] then proposed a new one-parameter filled function to tackle this handicap:
$$Q_3(x, A) = \frac{1}{1 + f(x) - f(x_1)} - A\|x - x_1\|^2,$$

where $A > 0$ is a parameter.
Notice that the filled function $Q_3(x, A)$ does not include exponential terms, but both $Q_2(x, A)$ and $Q_3(x, A)$ still need the assumption that $f(x)$ has only a finite number of local minimizers, and the choice of their parameters is heavily restricted by the minimal basin radius of the local minimizers. So, in Section 3, we propose a new one-parameter filled function which includes neither exponential nor logarithmic terms, to eliminate these drawbacks.
The aim of this paper is to develop the filled function with certain satisfactory properties in nonsmooth global optimiza-
tion. The remainder of the paper is organized as follows. Following this introduction, a new two-parameter filled function
and a new one-parameter filled function are proposed in Sections 2 and 3, respectively. Then, the properties of the new filled
functions are investigated and solution algorithms are suggested. In Section 4, numerical experiments on both nonsmooth and smooth global optimization problems are reported. In Section 5, the performance of the proposed filled function methods is compared to that of some well-known global optimization methods, the branch-and-bound method and the tunneling method. Finally, some suggestions and conclusions are given in Section 6.
Throughout the paper, the Euclidean inner product in $\mathbb{R}^n$ is denoted by $\langle x, y \rangle = x^T y = \sum_{j=1}^{n} x_j y_j$, and the associated norm by $\|\cdot\|$. By $L$ we denote the Lipschitz constant of $f(x)$, and by $\partial f(x)$ the generalized gradient of $f(x)$ at the point $x \in \mathbb{R}^n$. By $L(P)$ we denote the set of local minimizers of problem (P), and by $G(P)$ the set of global minimizers of problem (P).
2. A new two-parameter filled function

Note that Definition 2.1 of the filled function is different from the definition mentioned in [23]. This novel definition is stronger than that one. If $P(x, x_1)$ is a filled function in the sense of Definition 2.1 and $x_1$ is not a global minimizer, then we can find a point $\bar{x}$ satisfying $f(\bar{x}) < f(x_1)$ in the course of minimizing $P(x, x_1)$. Therefore, we can obtain a local minimizer of $f(x)$ with a lower objective value by searching $f(x)$ starting at $\bar{x}$ via local search schemes.
We propose in this section a new two-parameter filled function as follows:

$$P(x, x_1, r, a) = u(r + f(x)) - a\|x - x_1\|,$$

where $a > 0$ and $r$ are the two parameters.

In order to guarantee the theoretical properties of the proposed filled function $P(x, x_1, r, a)$, $u(t)$ needs to satisfy the following conditions:
Theorem 2.1. Assume that $x_1 \in L(P)$ and $r + f(x_1) > 0$. Then $x_1$ is a strictly local maximizer of $P(x, x_1, r, a)$.

Proof. Since $x_1 \in L(P)$, there is a neighborhood $O(x_1, \delta_1)$ of $x_1$ with $\delta_1 > 0$ such that $f(x) \ge f(x_1)$ for all $x \in O(x_1, \delta_1)$. Therefore, $r + f(x) \ge r + f(x_1) > 0$. Then, for any $x \in O(x_1, \delta_1)$, $x \ne x_1$,

$$P(x, x_1, r, a) = u(r + f(x)) - a\|x - x_1\| \le u(r + f(x_1)) - a\|x - x_1\| < u(r + f(x_1)) = P(x_1, x_1, r, a). \tag{2.1}$$

Hence, $x_1$ is a strictly local maximizer of $P(x, x_1, r, a)$. □
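As a concrete numerical check (our own illustration, not from the paper), take the choice $u(t) = 1/t$ used in Section 4 and a hypothetical 1-D Lipschitz function; the filled function then strictly decreases as $x$ moves away from the local minimizer, as Theorems 2.1 and 2.2 assert:

```python
def u(t):
    # u(t) = 1/t, the concrete choice used in the numerical experiments (Section 4)
    return 1.0 / t

def filled_P(x, x1, r, a, f):
    # Two-parameter filled function P(x, x1, r, a) = u(r + f(x)) - a*|x - x1| (1-D case)
    return u(r + f(x)) - a * abs(x - x1)

# Hypothetical 1-D nonsmooth test function with a local minimizer at x1 = 2
f = lambda x: abs(x - 2.0) + 1.0          # Lipschitz constant L = 1
x1 = 2.0
r = -f(x1) + 2.0                           # r + f(x1) = 2 > 0
a = 5.0                                    # a > -u'(r + f(x1)) * L = 0.25

P1 = filled_P(x1, x1, r, a, f)
for d in (0.1, 0.5, 1.0):
    # Theorems 2.1 and 2.2: P strictly decreases as x moves away from x1
    assert filled_P(x1 + d, x1, r, a, f) < P1
    assert filled_P(x1 - d, x1, r, a, f) < P1
```

Here $P_1 = u(2) = 0.5$, and every probed point yields a strictly smaller value of $P$.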
Theorem 2.2 below reveals that, under some conditions on the parameters $r$ and $a$, $P(x, x_1, r, a)$ has the useful property that the farther the point $x$ is from $x_1$, the smaller the value of $P(x, x_1, r, a)$ is. This property guarantees that the minimizing search for local minimizers of $P(x, x_1, r, a)$ will not return to the primary basin.
Theorem 2.2. Assume that $x_1 \in L(P)$. Suppose that $\bar{x}_1$ and $\bar{x}_2$ are two points belonging to $O(x_1, \delta_1)$ given in Theorem 2.1 such that $0 < \|\bar{x}_1 - x_1\| < \|\bar{x}_2 - x_1\|$ and $f(\bar{x}_2) \ge f(\bar{x}_1) \ge f(x_1)$. If $r + f(x_1) > 0$, then for any $a > 0$ we have

$$P(\bar{x}_2, x_1, r, a) < P(\bar{x}_1, x_1, r, a) < P(x_1, x_1, r, a).$$
Proof. By $f(\bar{x}_2) \ge f(\bar{x}_1) \ge f(x_1)$, we have $u(r + f(\bar{x}_2)) \le u(r + f(\bar{x}_1))$. We notice that

$$P(\bar{x}_2, x_1, r, a) - P(\bar{x}_1, x_1, r, a) = -a\left(\|\bar{x}_2 - x_1\| - \|\bar{x}_1 - x_1\|\right) + \left[u(r + f(\bar{x}_2)) - u(r + f(\bar{x}_1))\right] < 0,$$

due to the fact that the first term in the last expression is negative and the second one is nonpositive. From (2.1), we have $P(\bar{x}_1, x_1, r, a) < P(x_1, x_1, r, a)$. Hence

$$P(\bar{x}_2, x_1, r, a) < P(\bar{x}_1, x_1, r, a) < P(x_1, x_1, r, a). \qquad \square$$
Theorem 2.3. Assume that $x_1 \in L(P)$. For any $x \in \Omega_1 = \{x \in \Omega \mid f(x) \ge f(x_1),\ x \ne x_1\}$, if $r + f(x_1) > 0$ and $a > -u'(r + f(x_1))\,L$, then $0 \notin \partial P(x, x_1, r, a)$.

Theorem 2.3 has two direct consequences:

1. When $r + f(x_1) > 0$ and $a > -u'(r + f(x_1))\,L$, for any $x \in \Omega_1$, $x - x_1$ is a descent direction of $P(x, x_1, r, a)$ at the point $x$.
2. If $0 \in \partial f(x)$, then $0 \notin \partial P(x, x_1, r, a)$. So, in practical computation, this allows us to discard all higher minimizers.
Theorem 2.4. Assume that $x_1 \in L(P)$. If $r + f(x_1) > 0$ and $a > -u'(r + f(x_1))\,L$, then any local minimizer or saddle point of $P(x, x_1, r, a)$ must belong to the set $\Omega_2 = \{x \in \Omega \mid f(x) < f(x_1)\}$.

Proof. If the theorem is not true, then there is a local minimizer or saddle point $x_2$ of $P(x, x_1, r, a)$ such that $x_2 \notin \Omega_2$ and $f(x_2) \ge f(x_1)$. Since $x_1$ is a strictly local maximizer of $P(x, x_1, r, a)$ and $x_2$ is a local minimizer or saddle point of $P(x, x_1, r, a)$, we have $x_1 \ne x_2$. If $x_2$ is a local minimizer of $P(x, x_1, r, a)$, this contradicts Theorem 2.2 when $a > 0$ and $r + f(x_1) > 0$. Similarly, if $x_2$ is a saddle point of $P(x, x_1, r, a)$, this contradicts Theorem 2.3 when $a > -u'(r + f(x_1))\,L$ and $r + f(x_1) > 0$. Consequently, the theorem is true. □
Remark 2.2. The parameters $r$ and $a$ in the function $P(x, x_1, r, a)$ only need to satisfy $r + f(x_1) > 0$ and $a > -u'(r + f(x_1))\,L$, so they are easier to choose appropriately than those of the function $Q_1(x, x_0; h, A)$ in [23], where the parameters $h$ and $A$ must satisfy the inequalities

$$\frac{A\,G(f(x) - f(x_1) + h)}{G'(f(x) - f(x_1) + h)} < \frac{F^{0}(x, d)\,\|x - x_0\|}{d^T (x - x_0)} \quad \text{and} \quad \frac{A\,G(h)}{G'(h)} > L.$$

We can thus expect the computational efficiency of the function $P(x, x_1, r, a)$ to be better than that of the function $Q_1(x, x_0; h, A)$.
Remark 2.3. In a way, the filled function can also be used to find a constrained global minimizer. By using the nonsmooth exact penalty function method, we can transform a constrained minimization problem

$$\min f(x) \quad \text{s.t. } h_i(x) = 0,\ i = 1, \ldots, m; \quad g_j(x) \ge 0,\ j = 1, \ldots, n; \quad x \in \mathbb{R}^n$$

into the unconstrained problem

$$\min_x L(x, \lambda, \mu) = f(x) + \sum_{i=1}^{m} \lambda_i |h_i(x)| + \sum_{j=1}^{n} \mu_j \max[0, -g_j(x)],$$

where $\lambda_i > 0\ (i = 1, \ldots, m)$ and $\mu_j > 0\ (j = 1, \ldots, n)$ are sufficiently large. Obviously, $L(x, \lambda, \mu)$ is a nonsmooth function regardless of whether $f(x)$ is differentiable or not. Hence, we can use the proposed new two-parameter filled function $P(x, x_1, r, a)$ to find a global minimizer.
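A minimal sketch of this transformation (our own example with hypothetical constraints, not from the paper): one equality and one inequality constraint are folded into the nonsmooth penalty function $L(x, \lambda, \mu)$, which can then be handed to the filled function machinery.

```python
# Exact l1 penalty for: min f(x) s.t. h(x) = 0, g(x) >= 0 (hypothetical example)
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
h = lambda x: x[0] + x[1] - 2.0           # equality constraint h(x) = 0
g = lambda x: x[0]                        # inequality constraint g(x) >= 0

def penalty_L(x, lam=10.0, mu=10.0):
    # L(x) = f(x) + lam*|h(x)| + mu*max(0, -g(x)); nonsmooth even when f is smooth
    return f(x) + lam * abs(h(x)) + mu * max(0.0, -g(x))

# Feasible points incur no penalty; infeasible ones are penalized linearly
assert penalty_L([1.0, 1.0]) == f([1.0, 1.0])     # h = 0, g = 1 >= 0
assert penalty_L([-1.0, 3.0]) > f([-1.0, 3.0])    # g = -1 violates g(x) >= 0
```

The kinks introduced by the absolute value and the max operator are exactly why the penalized problem must be treated with nonsmooth techniques.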
Based on the above discussions, we obtain a filled function method, referred to as Algorithm NFFM1.
Initialization step. Choose a tolerance $\varepsilon > 0$, a small step size $\delta > 0$, search directions $e_k\ (k = 1, 2, \ldots, k_0)$, a factor $\hat{a} > 1$ for increasing $a$, and an upper bound $a_U$ for the parameter $a$. Set $k := 1$.
Main step

1. Start at an initial point $x$, minimize $f(x)$ by a nonsmooth local search procedure, and obtain the first local minimizer $x_1$ of $f(x)$.
2. Let $a := 1$.
3. Choose $r$ to satisfy $r + f(x_1) > 0$.
4. Construct the filled function

   $$P(x, x_1, r, a) = u(r + f(x)) - a\|x - x_1\|.$$

5. If $k > k_0$, go to 9; else set $x := x_1 + \delta e_k$ as an initial point, minimize $P(x, x_1, r, a)$ by a nonsmooth local search procedure, and obtain a local minimizer denoted by $x_k$.
6. If $x_k \notin \Omega$, set $k := k + 1$ and go to 5; else go to the next step.
7. If $x_k$ satisfies any one of the following conditions, set $x := x_k$ and $k := 1$, start at $x$ as a new initial point, minimize $f(x)$ by a nonsmooth local search procedure, and obtain another local minimizer $x_2$ of $f(x)$; else go to 9.
   (7a) $f(x_k) < f(x_1)$;
   (7b) $P(x_k, x_1, r, a) > P(x_{k-1}, x_1, r, a)$;
   (7c) $(x_k - x_1)^T \zeta \ge 0$ for all $\zeta \in \partial P(x_k, x_1, r, a)$;
   (7d) $\|\zeta\| < \varepsilon$ for all $\zeta \in \partial P(x_k, x_1, r, a)$.
8. If $f(x_2) \le f(x_1)$, set $x_1 := x_2$ and go to 2; else go to the next step.
9. Increase $a$ by setting $a := \hat{a}a$.
10. If $a \le a_U$, set $k := 1$ and go to 4; else the algorithm is incapable of finding a better local minimizer. The algorithm stops and $x_1$ is taken as a global minimizer.
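The two-phase loop above can be sketched as follows. This is our own simplified 1-D rendering, not the authors' Fortran implementation: $u(t) = 1/t$, the parameter $r$ is reset so that $r + f(x_1) = 1$, a crude step-halving descent stands in for the nonsmooth local search, and only condition (7a) is checked. The test function is hypothetical, with a local minimizer at $-2$ and the global minimizer at $1$.

```python
def local_min(fun, x, lo, hi, step=0.1, tol=1e-6):
    # Naive derivative-free descent standing in for the nonsmooth local search
    while step > tol:
        for y in (x - step, x + step):
            if lo <= y <= hi and fun(y) < fun(x):
                x = y
                break
        else:
            step *= 0.5
    return x

def nffm1_1d(f, x0, lo, hi, a_hat=2.0, a_up=1e4, delta=0.05):
    # Simplified 1-D rendering of Algorithm NFFM1 with u(t) = 1/t
    x1 = local_min(f, x0, lo, hi)                # Phase 1: first local minimizer
    a = 1.0
    while a <= a_up:
        r = 1.0 - f(x1)                          # Step 3: r + f(x1) = 1 > 0
        improved = False
        for e in (1.0, -1.0):                    # k0 = 2 coordinate directions in 1-D
            # Phase 2: minimize the filled function from x1 + delta*e (Step 5)
            P = lambda y: 1.0 / (r + f(y)) - a * abs(y - x1)
            xk = local_min(P, x1 + delta * e, lo, hi)
            if lo < xk < hi and f(xk) < f(x1):   # condition (7a): a lower basin reached
                x1 = local_min(f, xk, lo, hi)    # re-enter Phase 1 from xk
                improved = True
                break
        a = 1.0 if improved else a * a_hat       # Steps 2 and 9
    return x1

# Hypothetical test function: local minimizer at -2 (f = 0.5), global minimizer at 1 (f = 0)
f = lambda x: min(abs(x + 2.0) + 0.5, abs(x - 1.0))
x_star = nffm1_1d(f, -3.0, -4.0, 4.0)
```

Started at $-3$, the plain local search stops at the local minimizer $-2$, while the filled function phase carries the iterate into the lower basin around the global minimizer $1$.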
Remark 2.4. (1) Algorithm NFFM1 consists of two phases, local minimization and filling:

Phase 1: In this phase, a local minimizer $x_1$ of $f(x)$ is found.

Phase 2: In this phase, the filled function $P(x, x_1, r, a)$ is constructed. $P(x, x_1, r, a)$ is minimized, and Phase 2 ends when a point $x_k \in \Omega_2$ is found. Then Algorithm NFFM1 reenters Phase 1 with $x_k$ as the starting point to find a new minimizer $x_2$ of $f(x)$ (if one exists), and so on.

The above process is repeated until a stopping criterion is met. The last local minimizer found is taken to be the global minimizer.
(2) The motivation and mechanism behind Algorithm NFFM1 are explained below.

In Step 5 of the Initialization step, we can choose the directions $e_k$, $k = 1, 2, \ldots, k_0$, as the positive and negative unit coordinate vectors; in this case $k_0 = 2n$. For example, when $n = 2$ the directions can be chosen as $(1, 0)$, $(0, 1)$, $(-1, 0)$, $(0, -1)$.

In Steps 1, 5, and 7 of the Main step, we minimize the objective function $f(x)$ or the filled function $P(x, x_1, r, a)$ by a nonsmooth local search procedure such as the cutting-planes method, the bundle trust region method, or Powell's method. These are all effective methods for finding local minimizers.

In the Main step, we let $a = 1$ and choose $r$ to satisfy $r + f(x_1) > 0$ in Steps 2 and 3, respectively. Here we choose $r$ not too large, for example $r = \lfloor -f(x_1) \rceil + 1$, where $\lfloor -f(x_1) \rceil$ is the integer closest to $-f(x_1)$. Afterwards, they are gradually increased via the
two-phase cycle until they exceed sufficiently large positive bounds. If the parameter $a$ is already sufficiently large and we still cannot find a point $x$ with $f(x) < f(x_1)$, then we conclude that there does not exist a better local minimizer of $f(x)$, and the last local minimizer is taken to be the global minimizer. The algorithm is then terminated. This is the stopping condition of Algorithm NFFM1.
One might argue that $u(t)$ is not defined on $(-\infty, 0]$, so when $f(x) < f(x_1)$ it cannot be ensured that $r + f(x) > 0$. This feature, however, does not affect the application, because in the numerical iteration process of minimization we always check the function value of the current iteration point first. If at some $x_k$ we obtain $f(x_k) < f(x_1)$, then $x_k$ is in $\Omega_2$ already. Phase 2 can end right there; that is, $x_k$ need not be a local minimizer of $P(x, x_1, r, a)$, and Algorithm NFFM1 can reenter Phase 1 starting at $x_k$. Therefore, the fact that $u(t)$ is not defined on $(-\infty, 0]$ is never an obstacle in the numerical realization of Algorithm NFFM1. Moreover, $r$ may be chosen appropriately large if this situation really needs to be avoided.
(3) The method proposed in this section also applies to smooth optimization. The smooth local search schemes employed in the minimization phase can be the ordinary gradient method, Newton's method, the PRP conjugate gradient method, and so on.
3. A new one-parameter filled function

In this section, we attempt to improve the properties of the two-parameter filled function to make it more practicable in computation. To this end, we introduce a new one-parameter filled function as follows:
$$P(x, x_1, \mu) = -\|x - x_1\| + \mu \max[0, f(x) - f(x_1)] + \frac{1}{\mu}\left\{\min[0, f(x) - f(x_1)]\right\}^3,$$

where $\mu > 0$ is a parameter.

When $f(x) \ge f(x_1)$, one has

$$P(x, x_1, \mu) = -\|x - x_1\| + \mu[f(x) - f(x_1)].$$
The following Theorems 3.1–3.4 show that $P(x, x_1, \mu)$ is a filled function in the sense of Definition 2.1 under some assumptions on the parameter $\mu$.
Theorem 3.1. Assume that $x_1 \in L(P)$. If $0 < \mu < \frac{1}{L}$, then $x_1$ is a strictly local maximizer of $P(x, x_1, \mu)$.

Proof. Since $x_1 \in L(P)$, there is a neighborhood $O(x_1, \delta_1)$ of $x_1$ with $\delta_1 > 0$ such that $f(x) \ge f(x_1)$ for all $x \in O(x_1, \delta_1)$. Then, for any $x \in O(x_1, \delta_1)$, $x \ne x_1$, when $0 < \mu < \frac{1}{L}$,

$$P(x, x_1, \mu) = -\|x - x_1\| + \mu[f(x) - f(x_1)] \le -\|x - x_1\| + \mu L\|x - x_1\| = -(1 - \mu L)\|x - x_1\| < 0 = P(x_1, x_1, \mu). \qquad \square$$
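For concreteness (our own check, with a hypothetical 1-D nonsmooth function, not from the paper): with $\mu$ below $1/L$, the two branches of $P(x, x_1, \mu)$ behave as Theorems 3.1 and 3.3 describe, namely $P$ is negative away from $x_1$ through points with $f(x) \ge f(x_1)$, and points in a lower basin pull $P$ further down.

```python
def P(x, x1, mu, f):
    # One-parameter filled function (1-D case):
    # P = -|x - x1| + mu*max(0, f(x) - f(x1)) + (1/mu)*min(0, f(x) - f(x1))**3
    d = f(x) - f(x1)
    return -abs(x - x1) + mu * max(0.0, d) + (1.0 / mu) * min(0.0, d) ** 3

# Hypothetical f: local minimizer at x1 = -2 (f = 0.5), global minimizer at 1 (f = 0)
f = lambda x: min(abs(x + 2.0) + 0.5, abs(x - 1.0))   # Lipschitz constant L = 1
x1, mu = -2.0, 0.05                                    # mu < 1/L

assert P(x1, x1, mu, f) == 0.0
for d in (0.2, 0.5, 1.0):
    assert P(x1 + d, x1, mu, f) < 0.0       # Theorem 3.1: x1 is a strict local maximizer
assert P(1.0, x1, mu, f) < P(0.5, x1, mu, f)   # the lower basin attracts the search
```

The cubed min-term only activates where $f(x) < f(x_1)$, which is what steers the minimization of $P$ into a lower basin.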
Theorem 3.2. Assume that $x_1 \in L(P)$. For any $x \in \Omega_1 = \{x \in \Omega \mid f(x) \ge f(x_1),\ x \ne x_1\}$, if $0 < \mu < \frac{1}{L}$, then $0 \notin \partial P(x, x_1, \mu)$.
Proof. Let $x \in \Omega_1$, i.e., $f(x) \ge f(x_1)$ and $x \ne x_1$. Consider the following two cases:

1. If $f(x) = f(x_1)$, then

$$\partial P(x, x_1, \mu) \subseteq -\frac{x - x_1}{\|x - x_1\|} + \mu\lambda\,\partial f(x) + \frac{3}{\mu}\,\eta\,[f(x) - f(x_1)]^2\,\partial f(x) = -\frac{x - x_1}{\|x - x_1\|} + \mu\lambda\,\partial f(x),$$

where $0 \le \lambda, \eta \le 1$.

2. If $f(x) > f(x_1)$, then

$$\partial P(x, x_1, \mu) \subseteq -\frac{x - x_1}{\|x - x_1\|} + \mu\,\partial f(x).$$
Combining the two cases and denoting $d = \frac{x - x_1}{\|x - x_1\|}$, for any $\xi \in \partial f(x)$ we have

$$\left\langle -\frac{x - x_1}{\|x - x_1\|} + \mu\beta\xi,\ d \right\rangle = -\frac{(x - x_1)^T (x - x_1)}{\|x - x_1\|^2} + \mu\beta\,\frac{\langle \xi, x - x_1 \rangle}{\|x - x_1\|} \le -1 + \mu\beta\|\xi\| \le -1 + \mu L < 0,$$

where $\beta = \lambda$ or $1$.

Then, for any $\zeta \in \partial P(x, x_1, \mu)$, we have $\zeta^T d < 0$. This implies that $0 \notin \partial P(x, x_1, \mu)$. □
Theorem 3.3. Assume that $x_1 \in L(P)$ but $x_1 \notin G(P)$. If $0 < \mu < \min\{\frac{1}{L}, N\}$, with $N > 0$ given below, then $P(x, x_1, \mu)$ has a local minimizer in $\Omega_2 = \{x \in \Omega \mid f(x) < f(x_1)\}$.

Proof. Since $x_1 \in L(P)$ but $x_1 \notin G(P)$, there exists $x_2 \in L(P)$ such that $f(x_2) < f(x_1)$. By the continuity of $f(x)$, there exists an $\varepsilon > 0$ small enough such that $\Omega_\varepsilon^{\le} = \{x \in \Omega \mid f(x) \le f(x_2) + \varepsilon\} \subset \operatorname{int}\Omega$ and $f(x_2) + \varepsilon < f(x_1)$. We denote

$$\Omega_\varepsilon^{=} = \{x \in \Omega \mid f(x) = f(x_2) + \varepsilon\};$$

then for all $x \in \Omega_\varepsilon^{=}$, we have $f(x) - f(x_1) = f(x_2) + \varepsilon - f(x_1) < 0$.

When $f(x) < f(x_1)$, one has $P(x, x_1, \mu) = -\|x - x_1\| + \frac{1}{\mu}[f(x) - f(x_1)]^3$.

For any $x \in \Omega_\varepsilon^{=}$, we consider the following two cases:
1. If $\|x_2 - x_1\| \ge \|x - x_1\|$, then by $f(x_2) - f(x_1) < f(x) - f(x_1) = f(x_2) + \varepsilon - f(x_1) < 0$, we have

$$P(x_2, x_1, \mu) = -\|x_2 - x_1\| + \frac{1}{\mu}[f(x_2) - f(x_1)]^3 < -\|x - x_1\| + \frac{1}{\mu}[f(x) - f(x_1)]^3 = P(x, x_1, \mu).$$
2. If $\|x_2 - x_1\| < \|x - x_1\|$, we note that $P(x_2, x_1, \mu) < P(x, x_1, \mu)$ if and only if

$$-\|x_2 - x_1\| + \frac{1}{\mu}[f(x_2) - f(x_1)]^3 < -\|x - x_1\| + \frac{1}{\mu}[f(x) - f(x_1)]^3,$$

which is equivalent to

$$\|x - x_1\| - \|x_2 - x_1\| < \frac{1}{\mu}\left[(f(x) - f(x_1))^3 - (f(x_2) - f(x_1))^3\right],$$
that is,

$$\mu < \frac{(f(x) - f(x_1))^3 - (f(x_2) - f(x_1))^3}{\|x - x_1\| - \|x_2 - x_1\|}.$$

Hence, when $0 < \mu < N$, we have $P(x_2, x_1, \mu) < P(x, x_1, \mu)$ for all $x \in \Omega_\varepsilon^{=}$. Since $\Omega_\varepsilon^{\le}$ and $\Omega_\varepsilon^{=}$ are compact sets, there exists $x_3 \in \Omega_\varepsilon^{\le}$ such that

$$\min_{x \in \Omega_\varepsilon^{\le}} P(x, x_1, \mu) = P(x_3, x_1, \mu).$$

It is obvious that $P(x_3, x_1, \mu) \le P(x_2, x_1, \mu)$. Since

$$\min_{x \in \Omega_\varepsilon^{\le}} P(x, x_1, \mu) = P(x_3, x_1, \mu) = \min_{x \in \Omega_\varepsilon^{\le} \setminus \Omega_\varepsilon^{=}} P(x, x_1, \mu)$$

and $\Omega_\varepsilon^{\le} \setminus \Omega_\varepsilon^{=}$ is an open set, $x_3 \in \Omega_\varepsilon^{\le} \setminus \Omega_\varepsilon^{=} \subset \operatorname{int}\Omega$ is a local minimizer of $P(x, x_1, \mu)$.

Combining the above discussions and Theorem 3.2, we know that when $0 < \mu < \min\{\frac{1}{L}, N\}$, $P(x, x_1, \mu)$ must have a local minimizer in $\Omega_2 = \{x \in \Omega \mid f(x) < f(x_1)\}$. □
We now discuss the existence of a lower bound for $N$ in Theorem 3.3. Since $\Omega_\varepsilon^{=} = \{x \in \Omega \mid f(x) = f(x_2) + \varepsilon\}$ is compact, there is a constant $N_1 > 0$ such that

$$(f(x) - f(x_1))^3 - (f(x_2) - f(x_1))^3 \ge N_1 \quad \text{for all } x \in \Omega_\varepsilon^{=}.$$

For those $x$ with $\|x - x_1\| > \|x_2 - x_1\|$, by the continuity of $\|x - x_1\| - \|x_2 - x_1\|$ on the compact set $\Omega_\varepsilon^{=}$, there is a constant $N_2 > 0$ such that $0 < \|x - x_1\| - \|x_2 - x_1\| \le N_2$. Let $N = \frac{N_1}{N_2} > 0$; then $\mu < N$ suffices in Case 2 above.
Theorem 3.4. Assume that $x_1 \in L(P)$, and let $\bar{x}_1, \bar{x}_2 \in O(x_1, \delta_1)$ be two points such that $0 < \|\bar{x}_1 - x_1\| < \|\bar{x}_2 - x_1\|$, $f(\bar{x}_2) \ge f(\bar{x}_1) \ge f(x_1)$, $\|\bar{x}_2 - \bar{x}_1\| \le g$, and $\|\bar{x}_2 - x_1\| - \|\bar{x}_1 - x_1\| \ge \rho > 0$. If $0 < \mu < \frac{\rho}{Lg}$, then $P(\bar{x}_2, x_1, \mu) < P(\bar{x}_1, x_1, \mu)$.

Proof. Consider

$$\begin{aligned}
P(\bar{x}_2, x_1, \mu) - P(\bar{x}_1, x_1, \mu)
&= -(\|\bar{x}_2 - x_1\| - \|\bar{x}_1 - x_1\|) + \mu[f(\bar{x}_2) - f(\bar{x}_1)] \\
&\le -(\|\bar{x}_2 - x_1\| - \|\bar{x}_1 - x_1\|) + \mu L\|\bar{x}_2 - \bar{x}_1\| \\
&= -(\|\bar{x}_2 - x_1\| - \|\bar{x}_1 - x_1\|)\left\{1 - \mu L\,\frac{\|\bar{x}_2 - \bar{x}_1\|}{\|\bar{x}_2 - x_1\| - \|\bar{x}_1 - x_1\|}\right\} \\
&\le -(\|\bar{x}_2 - x_1\| - \|\bar{x}_1 - x_1\|)\left(1 - \mu L\,\frac{g}{\rho}\right) \\
&< 0.
\end{aligned}$$

When $0 < \mu < \min\{\frac{1}{L}, \frac{\rho}{Lg}, N\}$, combining Theorem 3.1, we conclude that $P(x, x_1, \mu)$ satisfies the properties required in Definition 2.1. □
Remark 3.1. The function $P(x, x_1, \mu)$ appears more suitable for computation than $Q_2(x, A)$ in [24] and $Q_3(x, A)$ in [25], because it overcomes the drawbacks of $Q_2(x, A)$ and $Q_3(x, A)$, and the choice of the parameter $\mu$ only needs to satisfy $0 < \mu < \min\{\frac{1}{L}, \frac{\rho}{Lg}, N\}$. That is, if $\mu > 0$ is sufficiently small, the function $P(x, x_1, \mu)$ satisfies the properties of Definition 2.1.
In the following algorithm, referred to as Algorithm NFFM2, the only differences from Algorithm NFFM1 are in Steps 3 and 4 of the Initialization step and in Steps 3, 4, 9, and 10 of the Main step. We therefore omit the complete description of Algorithm NFFM2 and point out only those few differences.

Initialization step

Main step

3. This step is deleted.
4. Construct the filled function

   $$P(x, x_1, \mu) = -\|x - x_1\| + \mu \max[0, f(x) - f(x_1)] + \frac{1}{\mu}\left\{\min[0, f(x) - f(x_1)]\right\}^3.$$

9. Reduce $\mu$ by setting $\mu := \hat{\mu}\mu$.
10. If $\mu \ge \mu_L$, set $k := 1$ and go to 4; else the algorithm is incapable of finding a better local minimizer. The algorithm stops and $x_1$ is taken as a global minimizer.
Remark 3.2. Remarks similar to Remark 2.4 apply here. We only point out that in the Main step we let $\mu = 1$ in Step 2. Afterwards, $\mu$ is gradually reduced via the two-phase cycle until it falls below a sufficiently small positive bound. If the parameter $\mu$ is already sufficiently small and we still cannot find a point $x$ with $f(x) < f(x_1)$, then we conclude that there does not exist a better local minimizer of $f(x)$, and the last local minimizer is taken to be the global one. The algorithm is then terminated. This is the stopping condition of Algorithm NFFM2.
4. Numerical experiments
In this section, we present some experience with both nonsmooth and smooth test examples. The algorithms have been implemented in Fortran 95, under Windows XP on a Celeron 1.8 GHz machine with 256 MB of RAM. There are two key points in the algorithm implementation: the local search algorithm and the stopping condition.
Local search algorithm

(1) In the nonsmooth case, we obtain the local minimizers by Powell's method to get the search direction and Brent's method in the line search to get the step size, with $N \ge 10^3$ or $d \le 10^{-8}$ as the terminating condition, where $N$ is the iteration number and $d$ is the step size.

(2) In the smooth case, we obtain the local minimizers by the PRP conjugate gradient method to get the search direction and the Armijo line search to get the step size, with $\|\nabla f(x)\| \le 10^{-8}$ as the terminating condition, where $\nabla f(x)$ is the gradient of $f(x)$.
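The Powell/Brent combination is not reproduced here, but its flavor can be conveyed by a simplified derivative-free stand-in (our own sketch, not the authors' Fortran 95 code): coordinate-direction sweeps with a golden-section line search, applied to Problem 4.8 with $n = 2$.

```python
def golden(fun, a, b, tol=1e-8):
    # Golden-section line search, a derivative-free stand-in for Brent's method
    g = (5 ** 0.5 - 1) / 2
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if fun(c) < fun(d):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2

def coord_search(f, x, lo, hi, sweeps=50):
    # Powell-style sweeps over the coordinate directions; no gradients needed,
    # so the kinks of a nonsmooth objective cause no trouble
    x = list(x)
    for _ in range(sweeps):
        for i in range(len(x)):
            x[i] = golden(lambda t: f(x[:i] + [t] + x[i + 1:]), lo, hi)
    return x

# Problem 4.8 with n = 2: f(x) = sum |x_i - 0.5|, global minimizer (0.5, 0.5)
f = lambda x: sum(abs(xi - 0.5) for xi in x)
x = coord_search(f, [-3.0, 4.0], -5.0, 5.0)
assert all(abs(xi - 0.5) < 1e-6 for xi in x)
```

Because Problem 4.8 is separable, a single sweep already lands on the minimizer; the sweep loop is kept for the general pattern.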
Stopping condition
The iteration process showed the expected behavior: in most cases the sequence of iteration points reached some neighborhood of the global minimizer after a small number of iterations, both in the nonsmooth (Pr. 4.1–4.9) and smooth (Pr. 4.10–4.12) cases.
We note that the $u(t)$ used in Algorithm NFFM1 is $u(t) = \frac{1}{t}$. The computational results of the algorithms are summarized in tables. The following symbols are used: $k$ is the iteration number in finding the $k$th local minimizer; $r$ and $a$ are the parameters used to find the $(k+1)$th local minimizer in Algorithm NFFM1; $\mu$ is the parameter used to find the $(k+1)$th local minimizer in Algorithm NFFM2; $x_k^0$ is the $k$th new initial point used in finding the $k$th local minimizer; $x_k^*$ is the $k$th local minimizer; $f(x_k^0)$ is the function value at the $k$th new initial point; $f(x_k^*)$ is the function value at the $k$th local minimizer.
Problem 4.1.

$$\min f(x) = \left|\frac{x - 1}{4}\right| + \left|\sin\left(\pi\left(1 + \frac{x - 1}{4}\right)\right)\right| + 1 \quad \text{s.t. } -10 \le x \le 10.$$

Algorithm NFFM1 succeeds in finding an approximate global minimizer $x = 0.999832$ with $f(x) = 1.000174$, and Algorithm NFFM2 succeeds in finding an approximate global minimizer $x = 1.000041$ with $f(x) = 1.000043$, both of which lie in the neighborhood of the global minimizer $x^* = 1$. The computational results are summarized in Table 4.1.
Problem 4.2.

$$\min f(x) = 2x^2 - 1.05x^4 + \frac{1}{6}x^6 - x|x| \quad \text{s.t. } -0.8 \le x \le 1.$$

The proposed algorithms succeed in finding an approximate global minimizer $x = 0.329111$ with $f(x) = 0.179652$, which is in the neighborhood of the global minimizer. The computational results are summarized in Table 4.2.
Problem 4.3.

$$\min f(x) = |x - 1|\left(1 + 10\,|\sin(x + 1)|\right) + 1 \quad \text{s.t. } -10 \le x \le 10.$$

The proposed algorithms succeed in finding the global minimizer $x^* = 1$ with $f(x^*) = 1$. The computational results are summarized in Table 4.3.
Problem 4.4.

$$\min f(x) = \begin{cases} x^2 \sin\frac{1}{x} & \text{if } x \ne 0, \\ 0 & \text{if } x = 0, \end{cases} \quad \text{s.t. } -0.4 \le x \le 0.4.$$

Algorithm NFFM1 succeeds in finding an approximate global minimizer $x = 0.233933$ with $f(x) = -0.049571$, and Algorithm NFFM2 succeeds in finding an approximate global minimizer $x = 0.233710$ with $f(x) = -0.049572$, both of which lie in the neighborhood of the global minimizer. The computational results are summarized in Table 4.4.
Problem 4.5.

$$\min f(x) = \sum_{i=1}^{5} i\,\big|\cos((i + 1)x + i)\big| + 5 \quad \text{s.t. } -10 \le x \le 10.$$

Algorithm NFFM1 succeeds in finding an approximate global minimizer $x = 2.027586$ with $f(x) = 6.699800$, and Algorithm NFFM2 succeeds in finding an approximate global minimizer $x = 2.027433$ with $f(x) = 6.699793$, both of which lie in the neighborhood of the global minimizer. The computational results are summarized in Table 4.5.
Problem 4.6.

$$\min f(x) = \max\left\{5x_1 + x_2,\ -5x_1 + x_2,\ x_1^2 + x_2^2 + 4x_2\right\} \quad \text{s.t. } -4 \le x_1 \le 4,\ -4 \le x_2 \le 4.$$

The proposed algorithms succeed in finding the global minimizer $x^* = (0, -3)^T$ with $f(x^*) = -3$. The computational results are summarized in Table 4.6.
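The reported minimizer of Problem 4.6 is easy to verify directly (our own check, not part of the paper):

```python
def f(x1, x2):
    # Objective of Problem 4.6: pointwise maximum of two planes and a paraboloid
    return max(5 * x1 + x2, -5 * x1 + x2, x1 ** 2 + x2 ** 2 + 4 * x2)

assert f(0.0, -3.0) == -3.0        # value at the reported global minimizer
# Nearby feasible points are no better, consistent with (0, -3) being a minimizer
for dx, dy in ((0.1, 0.0), (-0.1, 0.0), (0.0, 0.1), (0.0, -0.1)):
    assert f(0.0 + dx, -3.0 + dy) > -3.0
```

At $(0, -3)$ all three pieces of the max attain the common value $-3$, which is exactly the kind of kink a gradient-based method cannot handle.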
Problem 4.7.

$$\min f(x) = -20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} |x_i|}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n} \cos(2\pi x_i)\right) + 20 \quad \text{s.t. } -20 \le x_i \le 30,\ i = 1, 2, \ldots, n.$$

The proposed algorithms succeed in finding the global minimizer $x^* = (0, 0, \ldots, 0)^T$ with $f(x^*) = -2.718282$, for all $n$. The computational results are summarized in Table 4.7 for $n = 2$.
Problem 4.8.

$$\min f(x) = \sum_{i=1}^{n} |x_i - 0.5| \quad \text{s.t. } -5 \le x_i \le 5,\ i = 1, 2, \ldots, n.$$

The proposed algorithms succeed in finding the global minimizer $x^* = (0.5, 0.5, \ldots, 0.5)^T$ with $f(x^*) = 0$, for all $n$. The computational results are summarized in Table 4.8 for $n = 2$.
Problem 4.9.

$$\min f(x) = \max_{j=1,\ldots,m} \sum_{i=1}^{n} \frac{(i x_i - 1)^2}{i + j - 1} + \min_{j=1,\ldots,m} \sum_{i=1}^{n} \frac{(i x_i - 1)^2}{i + j - 1} \quad \text{s.t. } -10 \le x_i \le 10,\ i = 1, \ldots, n.$$

Algorithm NFFM1 succeeds in finding an approximate global minimizer $x = (1.000000, 0.500000, \ldots, 0.100000)^T$ with $f(x) = 0.000001$, and Algorithm NFFM2 succeeds in finding an approximate global minimizer $x = (1.000000, 0.500000, \ldots, 0.100000)^T$ with $f(x) = 0.000000$, both of which lie in the neighborhood of the global minimizer. The computational results are summarized in Table 4.9 for $n = 10$ and $m = 10$.
The proposed algorithms succeed in finding the global minimizer $x^* = (1, 1, \ldots, 1)^T$ with $f(x^*) = 0$, for all $n$. The computational results are summarized in Table 4.12 for $n = 7$.
5. Comparisons with other methods

We are now in a position to compare the proposed filled function methods with some other methods.
The branch-and-bound method works efficiently in nonsmooth global optimization. It aims at finding the set of global minimizers of problem (P), under the assumption that the number of minimizers is finite.
The branch-and-bound method usually applies several interval techniques, called accelerating devices, to reject regions in which the optimum can be guaranteed not to lie. To this end, the original box $\Omega$ gets subdivided; the subregions which cannot contain a global minimizer of the objective function are discarded, while the other subregions are divided again until the desired accuracy of the boxes is achieved.
In Table 5.1, the proposed filled function methods are compared to the branch-and-bound method. The criteria for comparison are the run-time and the stopping condition. The following notations are used: IM1 is the branch-and-bound method of Ratz [4]; IM2 is the branch-and-bound method of Csendes [5]; run-time is the CPU time in seconds needed to obtain the final results; $\varepsilon$ is the tolerance parameter of IM1 and IM2.
From Table 5.1, we observe the following: the branch-and-bound methods IM1 and IM2 are feasible for univariate nonsmooth functions, but in the multivariable case they must choose an appropriate tolerance parameter $\varepsilon$. In contrast, the proposed filled function methods overcome this shortcoming and obtain improvements. This is partly due to the fact that the implementation of the branch-and-bound method depends on a property of the objective function: whether upper and lower bounds can be determined easily and tightly. In the multivariable case, the higher the dimension, the larger the number of subregions. A similar phenomenon occurs in the smooth case.
If problem (P) has several global minimizers, theoretically, the branch-and-bound method can find all global minimizers
of the objective function. However, the proposed filled function methods can only find a global minimizer in each algorithm
execution. Using several initial points, the proposed filled function methods may find more global minimizers but cannot
guarantee to find all of them. Naturally, a question arises: for the proposed filled function methods, can we design a criterion that uses the mechanism of the branch-and-bound method to try to find all global minimizers?
The tunneling method proposed by Levy and Montalvo [3] is a well-known and practical method for global optimization. The tunneling algorithm is composed of two phases, a minimizing phase and a tunneling phase, which are used alternately to search for a better minimizer. In the first phase, a classical algorithm such as Newton's method or the steepest descent method can be used to find a local minimizer $x_1$ of the objective function $f(x)$. In the second phase, the idea is to search for a root of a defined auxiliary function, the tunneling function $T(x, x_1)$. The tunneling function in [3] is defined as
T(x, x1) = [f(x) − f(x1)] / [(x − x1)^T (x − x1)]^a.   (5.1)
If we can find a point x̄ such that x̄ ≠ x1 and T(x̄, x1) ≤ 0, i.e., f(x̄) ≤ f(x1), then x̄ is a new starting point for the next iteration. The denominator of (5.1) has a pole at x1 with power a, which prevents x1 from being a zero of the tunneling function.
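To make (5.1) concrete, the sketch below implements the tunneling test on a toy one-dimensional multimodal objective; the objective and all names are illustrative choices of ours, not one of the paper's test problems.

```python
import math

def tunneling(f, x, x1, a=1.0):
    # T(x, x1) = (f(x) - f(x1)) / [(x - x1)^T (x - x1)]^a, cf. (5.1).
    # The pole at x1 of strength a keeps x1 from being a zero of T.
    sq_dist = sum((xi - yi) ** 2 for xi, yi in zip(x, x1))
    return (f(x) - f(x1)) / sq_dist ** a

# Toy multimodal objective: local minimizers near the integers,
# global minimizer at 0 (a Rastrigin-type function of our choosing).
f = lambda x: x[0] ** 2 - 10.0 * math.cos(2.0 * math.pi * x[0]) + 10.0

x1 = [1.0]                              # local minimizer with f(x1) = 1
print(tunneling(f, [0.0], x1) <= 0)     # True: f(0) = 0 <= f(x1), valid restart
print(tunneling(f, [2.0], x1) <= 0)     # False: f(2) = 4 > f(x1)
```

A point where T is nonpositive lies in a basin at least as deep as that of x1, which is exactly the restart criterion described above.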
From the above, we can see that the proposed filled function methods are essentially similar to the tunneling method; the only difference between a filled function and a tunneling function is the way the new starting point is found in Phase 2. The objective functions considered in [5] have numbers of local minimizers ranging from a few to several thousand. The proposed filled function methods are also able to find the global minimizer of such functions, for example Pr.4.4, Pr.4.5, Pr.4.11 and Pr.4.12. Furthermore, this paper assumes only that the number of different function values at the minimizers of problem (P) is finite; the number of minimizers itself may be infinite. Thus, the proposed filled function methods can, theoretically, find the global minimizer of problems with infinitely many minimizers.
Table 5.1
Comparison of the proposed filled function methods and the branch-and-bound methods.
However, the tunneling method in [5] has a number of disadvantages:
1. The pole strength a is problem dependent. While searching for a zero, a must be increased incrementally until the pole in the denominator of (5.1) becomes strong enough to eliminate the last local minimizer of higher order. Every increase in a requires the algorithm to be restarted, leading to increased computational effort.
2. The tunneling algorithm may find another local minimizer x2 such that f(x1) = f(x2). In this case, an additional pole must be placed at the second local minimizer, and the tunneling process must be restarted.
3. Division by a pole causes smoothing of f(x) as x → ∞. This smoothing increases with a, yielding a tunneling function that becomes very flat.
Since then, some improvements have been made that partly overcome these disadvantages (see [26]).
Since the tunneling function and the filled function share some common properties, a tunneling function can, in some sense, be viewed as a filled function; conversely, some filled functions can be regarded as tunneling functions. In this sense, the tunneling function and the filled function can be unified: either one may be used to find a new initial point in the algorithm. Designing auxiliary functions that combine the tunneling function and the filled function therefore becomes an interesting direction.
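Both families of methods share the same two-phase skeleton, which can be sketched schematically as follows. This is an illustration only: `local_min` stands for an arbitrary local solver (here a crude one-dimensional derivative-free descent), `make_aux` builds the auxiliary function at the current best minimizer (a tunneling-type function in this example), and the objective is a toy problem of our own choosing.

```python
import math

def local_min(g, x, step=0.25, tol=1e-8):
    # Crude 1-D derivative-free descent: move to the better neighbour,
    # halving the step when neither neighbour improves on g(x).
    while step > tol:
        gl, gc, gr = g(x - step), g(x), g(x + step)
        if gl < gc and gl <= gr:
            x -= step
        elif gr < gc:
            x += step
        else:
            step /= 2.0
    return x

def two_phase_search(f, make_aux, x0, offsets=(0.1, -0.1, 0.5, -0.5), rounds=20):
    # Phase 1: local minimization of f.  Phase 2: minimize the auxiliary
    # function from several points near the pole to escape the basin.
    x_best = local_min(f, x0)
    for _ in range(rounds):
        aux = make_aux(f, x_best)
        x_trial = min((local_min(aux, x_best + d) for d in offsets), key=f)
        if f(x_trial) < f(x_best) - 1e-12:
            x_best = local_min(f, x_trial)   # refine in the original f
        else:
            break                            # no better basin found: stop
    return x_best

# Toy objective with global minimizer at 0; tunneling-type auxiliary (5.1)
# with a small guard term to avoid division by zero at the pole.
f = lambda x: x * x - 10.0 * math.cos(2.0 * math.pi * x) + 10.0
tunnel = lambda f_, x1: (lambda x: (f_(x) - f_(x1)) / ((x - x1) ** 2 + 1e-12))
x_star = two_phase_search(f, tunnel, x0=3.2)  # x_star should reach a much deeper basin
```

Replacing `tunnel` with a filled-function constructor leaves the outer loop unchanged, which is the unification noted above.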
6. Conclusions
The filled function method is an effective approach to finding a global minimizer of a nonsmooth function. The filled functions reported in [23–25] have some drawbacks, such as depending on an exponential term, requiring that f(x) have only a finite number of local minimizers, or having parameters heavily restricted by the minimal basin radius of the local minimizers. All of these characteristics are strongly undesirable in numerical applications, as they make the computation prone to ill-conditioning.
In this paper, we propose a new two-parameter filled function and a new one-parameter filled function to tackle these problems. The proposed filled functions have several advantages: (a) they do not include any exponential or logarithmic terms; (b) they are defined as a sum of terms rather than a product; (c) they depend only linearly on the norm; (d) the choice of parameters is easier to make than in [23–25]. The results of the numerical experiments imply that the new filled functions are better suited to computational tasks.
Some suggestions on further improving the algorithms are as follows:
1. Since nonsmooth global minimization over the whole region lacks a general criterion for evaluating convergence, in many cases we have to spend extra time judging whether the current local minimizer is global. The research on convergence analysis has therefore evolved into a specialized area.
Here we have an idea based on the theoretical properties of the proposed filled functions and the analysis in Subsection 5.1. Since the local minimizers of the filled function must belong to the set X2 = {x ∈ X | f(x) < f(x1)} and not to the set X1 = {x ∈ X | f(x) ≥ f(x1), x ≠ x1}, we divide the domain X into the two parts X1 and X2. Then there is no global minimizer of f(x) in X1; every global minimizer lies in X2. If we can design a procedure that enlarges X1 and shrinks X2, then once meas(X2) ≤ ε, a given tolerance, we can deterministically say that the global minimizers of f(x) lie in X2 and that every point of X2 is an approximate global minimizer of f(x). To achieve this goal, interval analysis [6,7,27,28] is requisite.
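As a toy illustration of this idea, one can recursively discard subintervals that provably lie in X1 and measure what survives of X2. The sketch uses a crude midpoint-plus-Lipschitz lower bound in place of genuine interval arithmetic; the test function, the Lipschitz constant L, and all names are our own assumptions.

```python
def cover_measure(f, a, b, f_star, L, depth):
    """Total length of a cover of X2 = {x in [a, b] : f(x) < f_star}.

    A cell is discarded when f(mid) - L*(b - a)/2 >= f_star (a valid
    lower bound if L is a Lipschitz constant of f on [a, b]), since then
    the whole cell lies in X1; undecided cells are bisected until
    `depth` is exhausted and then kept in the cover.
    """
    mid = (a + b) / 2.0
    if f(mid) - L * (b - a) / 2.0 >= f_star:
        return 0.0                       # cell lies entirely in X1
    if depth == 0:
        return b - a                     # undecided: keep in the cover of X2
    return (cover_measure(f, a, mid, f_star, L, depth - 1)
            + cover_measure(f, mid, b, f_star, L, depth - 1))

# f(x) = x^2 on [-1, 1]: X2 = {x : f(x) < 0.01} = (-0.1, 0.1), measure 0.2.
m = cover_measure(lambda x: x * x, -1.0, 1.0, 0.01, L=2.0, depth=12)
# m slightly over-estimates 0.2; once such a measure falls below the
# tolerance, every remaining point is an approximate global minimizer.
```

Real interval analysis replaces the Lipschitz bound with rigorous interval enclosures of f over each cell, but the enlarge-X1/shrink-X2 mechanism is the same.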
2. How to choose a search direction when seeking another, better local minimizer is a key to the success of a filled function algorithm. In smooth global optimization, the desirable search directions are usually related to the gradient and are deterministic. In nonsmooth global optimization, however, the generalized gradient is a set and the search directions frequently become indeterminate, so the choice of a search direction is more difficult and complicated. This is therefore a worthwhile issue for further study.
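To make the difficulty concrete: at a kink of a nonsmooth function, the steepest-descent direction is minus the minimum-norm element of the subdifferential, which itself requires solving an auxiliary problem. The sketch below handles the simplest case of two active gradients (a closed-form special case of the general quadratic program; the function names are illustrative, not from the paper).

```python
def min_norm_two(g1, g2):
    """Minimum-norm point of the segment [g1, g2] in R^n.

    At a point where two affine pieces of f = max(f1, f2) are active,
    the subdifferential is the segment between their gradients g1, g2;
    minimizing ||(1-t) g1 + t g2||^2 over t in [0, 1] has the
    closed-form solution below.
    """
    d = [a - b for a, b in zip(g1, g2)]          # d = g1 - g2
    dd = sum(x * x for x in d)
    if dd == 0.0:
        return list(g1)                           # g1 == g2: smooth point
    t = sum(a * b for a, b in zip(g1, d)) / dd    # unconstrained minimizer
    t = min(1.0, max(0.0, t))                     # clamp to the segment
    return [(1.0 - t) * a + t * b for a, b in zip(g1, g2)]

# f(x, y) = max(x + y, x - y) at a point with y = 0: active gradients
# (1, 1) and (1, -1); the minimum-norm subgradient is (1, 0), so the
# steepest-descent direction is -(1, 0).
g = min_norm_two((1.0, 1.0), (1.0, -1.0))
```

With more than two active pieces the minimum-norm element requires a quadratic program over the convex hull, which is one reason nonsmooth search directions are costly to determine.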
3. Since the filled function is a composite function containing the objective function, and the objective function itself may have a complicated formulation, we should construct the filled function to be as simple as possible so that redundant computation is avoided to some extent. An interesting open question is whether there exists a parameter-free filled function suitable for both the smooth and the nonsmooth case.
Appendix
Table 4.1
Computational results with initial point 6.

NFFM1:
k  r  a    x_k        f(x_k)     x_k*       f(x_k*)
1  –  –    6          2.957107   5.000001   2.000001
2  1  1    1.943394   1.910830   0.999832   1.000174

NFFM2:
k  μ      x_k        f(x_k)     x_k*       f(x_k*)
1  –      6          2.957107   5.000001   2.000001
2  0.1    0.967814   1.033323   1.000041   1.000043
Y. Zhang et al. / Applied Mathematical Modelling 33 (2009) 3114–3129
Table 4.2
Computational results with initial point 0.5.

NFFM1:
k  r  a     x_k        f(x_k)     x_k*       f(x_k*)
1  –  –     0.5        0.017708   0.219431   0.089000
2  1  100   0.108362   0.103082   0.329111   0.179652

NFFM2:
k  μ       x_k        f(x_k)     x_k*       f(x_k*)
1  –       0.5        0.017708   0.219431   0.089000
2  0.001   0.345090   0.179321   0.329111   0.179652
Table 4.3
Computational results with initial point 1.5.

NFFM1:
k  r  a   x_k        f(x_k)      x_k*       f(x_k*)
1  –  –   1.5        15.485624   1.000000   3.000000
2  2  1   0.918449   1.848264    1.000000   1.000000

NFFM2:
k  μ     x_k        f(x_k)      x_k*       f(x_k*)
1  –     1.5        15.485624   1.000000   3.000000
2  0.1   2.141596   2.141605    1.000000   1.000000
Table 4.4
Computational results with initial point 0.1.

NFFM1:
k  r  a    x_k        f(x_k)     x_k*       f(x_k*)
1  –  –    0.1        0.005440   0.092489   0.008417
2  1  10   0.195502   0.035163   0.233933   0.049571

NFFM2:
k  μ   x_k        f(x_k)     x_k*       f(x_k*)
1  –   0.1        0.005440   0.092489   0.008417
2  1   0.309344   0.008713   0.233710   0.049572
Table 4.5
Computational results with initial point 3.0.

NFFM1:
k  r   a    x_k        f(x_k)      x_k*       f(x_k*)
1  –   –    3          14.660809   4.140632   12.464201
2  11  1    3.625202   12.153843   3.616881   11.847343
3  11  10   1.955452   9.745973    2.027586   6.699800

NFFM2:
k  μ       x_k        f(x_k)      x_k*       f(x_k*)
1  –       3          14.660809   4.140632   12.464201
2  0.01    3.616874   11.847532   2.655621   11.647864
3  0.001   1.952301   9.864222    2.027433   6.699793
Table 4.6
Computational results with initial point (1,1).

NFFM1:
k  r  a   x_k                    f(x_k)     x_k*                   f(x_k*)
1  –  –   (1, 1)                 6.000000   (0.000000, 0.000000)   0
2  1  1   (0.000374, 0.920954)   0.919086   (0.000120, 0.923217)   0.922615
3  2  1   (0.000300, 2.423065)   2.421566   (0.000000, 3.000000)   3.000000

NFFM2:
k  μ     x_k                    f(x_k)     x_k*                   f(x_k*)
1  –     (1, 1)                 6.000000   (0.000000, 0.000000)   0
2  0.1   (0.000210, 0.972498)   0.971468   (0.000210, 0.972498)   0.971468
3  0.1   (0.003132, 2.564391)   2.548732   (0.000000, 3.000000)   3.000000
Table 4.7
Computational results with initial point (16,1).

NFFM1:
k  r  a    x_k                     f(x_k)     x_k*                    f(x_k*)
1  –  –    (16, 1)                 6.118365   (15.000000, 0.000000)   5.716410
2  5  1    (2.994102, 12.402390)   5.437874   (4.994102, 3.994200)    4.1949201
3  3  1    (0.000031, 3.245514)    2.847811   (0.000032, 3.245423)    2.847560
4  2  10   (0.000031, 3.238560)    2.847358   (0.000000, 0.000000)    2.718282

NFFM2:
k  μ      x_k                    f(x_k)     x_k*                    f(x_k*)
1  –      (16, 1)                6.118365   (15.000000, 0.000000)   5.716410
2  0.1    (1.058482, 0.516490)   2.143306   (0.000111, 0.209404)    0.369045
3  0.01   (0.000732, 0.043484)   0.747033   (0.000000, 0.000000)    2.718282
Table 4.8
Computational results with initial point (3,3) for n = 2.

NFFM1:
k  r  a   x_k      f(x_k)   x_k*                 f(x_k*)
1  –  –   (3, 3)   5        (0.50000, 0.50000)   0.00000

NFFM2:
k  μ   x_k      f(x_k)   x_k*                 f(x_k*)
1  –   (3, 3)   5        (0.50000, 0.50000)   0.00000
Table 4.9
Computational results with initial point (5,5,...,5) for n = 10, m = 10.

NFFM1:
k = 1: r = –, a = –
  x_k  = (5, 5, 5, 5, 5, 5, 5, 5, 5, 5),  f(x_k) = 1823.849
  x_k* = (1.000000, 0.500000, 0.500000, 0.500000, 0.500000, 0.500000, 0.500000, 0.500000, 0.500000, 0.500000),  f(x_k*) = 9.396803
k = 2: r = 8, a = 1
  x_k  = (1.000034, 0.500000, 0.374389, 0.443190, 0.590865, 0.163216, 0.133428, 0.220000, 0.188603, 0.100680),  f(x_k) = 1.426482
  x_k* = (1.000000, 0.500000, 0.333334, 0.249542, 0.200000, 0.166667, 0.142867, 0.124864, 0.111111, 0.100000),  f(x_k*) = 0.000001

NFFM2:
k = 1: μ = –
  x_k  = (5, 5, 5, 5, 5, 5, 5, 5, 5, 5),  f(x_k) = 1823.849
  x_k* = (1.000000, 0.500000, 0.500000, 0.500000, 0.500000, 0.500000, 0.500000, 0.500000, 0.500000, 0.500000),  f(x_k*) = 9.396803
k = 2: μ = 0.1
  x_k  = (1.000000, 0.492333, 0.374109, 0.341098, 0.490887, 0.161342, 0.235609, 0.120098, 0.198103, 0.197430),  f(x_k) = 0.958095
  x_k* = (1.000000, 0.500000, 0.333333, 0.250000, 0.200000, 0.166667, 0.142857, 0.125000, 0.111111, 0.100000),  f(x_k*) = 0.000000
Table 4.10
Computational results with initial point (2,1).

NFFM1:
k  r   a    x_k                    f(x_k)      x_k*                   f(x_k*)
1  –   –    (2, 1)                 65.000000   (1.878125, 0.907031)   53.873591
2  53  1    (1.392822, 0.535706)   22.618262   (1.757129, 2.571387)   6.794151
3  6   10   (0.782864, 0.868802)   5.501130    (0.000000, 0.000000)   0.000000

NFFM2:
k  μ      x_k                    f(x_k)      x_k*                   f(x_k*)
1  –      (2, 1)                 65.000000   (1.878125, 0.907031)   53.873591
2  0.1    (0.925833, 0.057031)   7.341024    (0.782864, 0.868802)   5.501130
3  0.01   (0.276979, 0.362917)   0.529460    (0.000000, 0.000000)   0.000000
Table 4.11
Computational results with initial point (2,2).

NFFM1:
k  r  a   x_k                    f(x_k)      x_k*                   f(x_k*)
1  –  –   (2, 2)                 47.733380   (1.703601, 0.79600)    0.215467
2  1  1   (0.064844, 0.773586)   0.864239    (0.089842, 0.712656)   1.031632

NFFM2:
k  μ   x_k                    f(x_k)      x_k*                   f(x_k*)
1  –   (2, 2)                 47.733380   (1.703601, 0.79600)    0.215467
2  1   (0.064324, 0.694378)   1.026880    (0.089842, 0.712656)   1.031632
Table 4.12
Computational results with initial point (6,6,...,6) and n = 7.

NFFM1:
k = 1: r = –, a = –
  x_k  = (6, 6, 6, 6, 6, 6, 6),  f(x_k) = 78.53983
  x_k* = (0.996121, 2.013701, 2.995810, 2.998201, 2.996091, 3.001082, 2.995304),  f(x_k*) = 39.96870
k = 2: r = 39, a = 1
  x_k  = (1.046313, 1.809435, 1.015610, 2.870645, 2.980932, 3.013270, 2.995419),  f(x_k) = 37.00684
  x_k* = (1.001709, 0.999912, 0.513012, 2.837651, 2.997342, 2.997121, 2.997806),  f(x_k*) = 28.49854
k = 3: r = 27, a = 10
  x_k  = (1.000111, 1.000000, 0.999750, 0.598012, 1.895631, 2.997439, 2.937904),  f(x_k) = 22.92612
  x_k* = (1.003601, 1.000000, 1.000000, 1.003189, 0.228431, 2.934389, 1.999745),  f(x_k*) = 11.36549
k = 4: r = 10, a = 10
  x_k  = (1.000000, 1.000000, 0.999750, 0.587691, 1.003468, 1.054600, 1.000000),  f(x_k) = 1.134004
  x_k* = (1.000000, 1.000000, 1.000000, 1.000000, 1.000000, 1.000000, 1.000000),  f(x_k*) = 0.000000

NFFM2:
k = 1: μ = –
  x_k  = (6, 6, 6, 6, 6, 6, 6),  f(x_k) = 78.53983
  x_k* = (0.996121, 2.013701, 2.995810, 2.998201, 2.996091, 3.001082, 2.995304),  f(x_k*) = 39.96870
k = 2: μ = 0.1
  x_k  = (1.024109, 0.999999, 0.989750, 0.218023, 0.224319, 2.983217, 1.999736),  f(x_k) = 14.92303
  x_k* = (1.003624, 0.999999, 1.000000, 1.003190, 0.228429, 2.934312, 1.999736),  f(x_k*) = 11.36543
k = 3: μ = 0.01
  x_k  = (0.009999, 0.999999, 1.002969, 1.002413, 1.002579, 1.000002, 1.007806),  f(x_k) = 0.444333
  x_k* = (1.000000, 1.000000, 1.000000, 1.000000, 1.000000, 1.000000, 1.000000),  f(x_k*) = 0.000000
References
[1] A. Sadegheih, Scheduling problem using genetic algorithm, simulated annealing and the effects of parameter values on GA performance, Appl. Math.
Model. 30 (2) (2008) 147–154.
[2] S.I. Birbil, S.C. Fang, An electromagnetism-like mechanism for global optimization, J. Global Optim. 25 (3) (2003) 263–282.
[3] F.J. Chen, Exponential stability of delayed cellular neural networks, J. Zhejiang Normal University (Natural Sciences) 30 (1) (2007) 34–38.
[4] Z.Y. Wu, L.S. Zhang, K.L. Teo, et al, New modified function for global optimization, J. Optim. Theory Appl. 25 (3) (2005) 181–203.
[5] A.V. Levy, A. Montalvo, The tunneling algorithm for the global minimization of functions, SIAM J. Sci. Statist. Comput. 6 (1) (1985) 15–29.
[6] D. Ratz, A nonsmooth global optimization technique using slopes: one-dimensional case, J. Global Optim. 14 (4) (1999) 365–393.
[7] T. Csendes, J. Pinter, The impact of accelerating tools on the interval subdivision algorithm for global optimization, Eur. J. Oper. Res. 65 (1993) 314–320.
[8] R.P. Ge, The theory of filled function methods for finding global minimizers of nonlinearly constrained minimization problems, J. Comput. Math. 5
(1987) 1–9.
[9] R.P. Ge, Y.F. Qin, The globally convexized filled functions for global optimization, Appl. Math. Comput. 35 (2) (1990) 131–158.
[10] Q.M. Han, J.Y. Han, Revised filled function methods for global optimization, Appl. Math. Comput. 119 (2) (2001) 217–228.
[11] L.S. Zhang, C.K. Ng, D. Li, et al, A new filled function method for global optimization, J. Global Optim. 28 (1) (2004) 17–43.
[12] L.S. Zhang, D. Li, Global search in nonlinear integer programming: filled function approach, Int. Confer. Optim. Technol. Appl., Perth, 1998 (1999) 446–452.
[13] Daqing Wang, Songbai Sheng, A modified filled function method for finding a global minimizer of a function of several variables, J. Comp. Math. Suppl.
Issue (1992) 60–65.
[14] Y. Shang, D. Pu, A. Jiang, Finding global minimizer with one-parameter filled function on unconstrained global optimization, Appl. Math. Comput. 191
(1) (2007) 176–182.
[15] N. Shor, Minimization methods for non-differentiable functions, Springer-Verlag, Berlin, 1985.
[16] A. Fuduli, M. Gaudioso, G. Giallombardo, Minimizing nonconvex nonsmooth functions via cutting planes and proximity control, SIAM J. Optim. 14 (3)
(2003) 743–756.
[17] J.E. Kelley, The cutting-plane method for solving convex programs, J. Soc. Ind. Appl. Math. 8 (4) (1960) 703–712.
[18] J.L. Goffin, A. Haurie, J.P. Vial, Decomposition and nondifferentiable optimization with the projective algorithm, Manag. Sci. 38 (2) (1992) 284–302.
[19] M.V. Solodov, A bundle method for a class of bilevel nonsmooth convex minimization problems, SIAM J. Optim. 18 (2007) 242–259.
[20] H. Schramm, J.A. Zowe, A version of the bundle idea for minimizing a nonsmooth function: conceptual idea, convergence analysis, numerical results,
SIAM J. Optim. 2 (1) (1992) 121–152.
[21] J.I. Ardenghi, M.C. Maciel, A.B. Verdiell, A trust-region-approach for solving a parameter estimation problem from the biotechnology area, Appl. Numer.
Math. 47 (3) (2003) 281–294.
[22] J. Zowe, The BT-algorithm for minimizing a nonsmooth functional subject to linear constraints, in: F.H. Clarke, V.P. Demyanov, F. Giannessi (Eds.),
Nonsmooth optimization and related topics, Plenum Press, New York, 1989, pp. 459–480.
[23] W.X. Zhu, A class of filled functions irrelevant to the number of local minimizers for global optimization, J. Syst. Sci. Math. Sci. 22 (4) (2002) 406–413.
[24] M. Kong, On the filled function method for nonsmooth program, J. Syst. Sci. Math. Sci. 20 (4) (2004) 149–154.
[25] Q. Wu, S.Y. Liu, L.Y. Zhang, et al, A modified filled function method for a global minimizer of a nonsmooth programming problem, Math. Appl. 17 (2)
(2004) 36–40.
[26] Y. Yao, Dynamic tunneling algorithm for global optimization, IEEE Trans. Syst. Man Cybern. 19 (5) (1989) 1222–1230.
[27] Z.H. Shen, Y. Zhu, An interval version of Shubert's iterative method for the localization of the global maximum, Computing 38 (1987) 275–280.
[28] C. Gorges, H. Ratschek, Global interval methods for local nonsmooth optimization, J. Global Optim. 14 (2) (1999) 157–179.