Thus, the parabolic bound y = P(x), with P''(x) = M and P agreeing with f at the endpoints of the interval, breaks down if M = sup f'', as expected from (4.23). (See the results for f with M = 2 and 2.1.)
It appears that the number of function evaluations does not depend strongly on t: comparing the columns of the table, we see that the average number of function evaluations required is only about twenty percent more for the smallest tolerance t than for the largest. Finally, the efficiency E of the algorithm remains reasonably high, even for the most difficult of these examples.
Section 7
SOME EXTENSIONS AND GENERALIZATIONS
So far we have assumed that f ∈ C^2[a, b] and f''(x) ≤ M on [a, b]. The following theorems show that the differentiability assumption can be dropped: it is enough that f is continuous and satisfies a second-difference condition.

THEOREM 7.1

Suppose that f is continuous on [a, b] and that, for all h > 0 and all u ∈ [a + h, b − h],

f(u + h) − 2f(u) + f(u − h) ≤ Mh^2.   (7.3)

Then, for all x ∈ [a, b],

f(x) ≥ [(b − x)f(a) + (x − a)f(b)]/(b − a) − ½M(x − a)(b − x).   (7.4)
Proof
There is no loss of generality in assuming that

f(a) = f(b) = 0   (7.5)

and

M = 0,   (7.6)

for we can consider f(x) − P(x), where P(x) is the right side of (7.4), instead of f(x). Thus, we have to show that

φ* ≥ 0,   (7.7)

where φ* is the least value of f on [a, b]. Suppose, by way of contradiction, that

φ* < 0,   (7.8)

and let

u = sup{x ∈ [a, b] | f(x) = φ*}.

By the continuity of f, f(u) = φ* < 0, so u ≠ a or b. Thus, for all sufficiently small h > 0, u ∈ [a + h, b − h] and, from the definition of u,

f(u − h) ≥ φ*   (7.9)

and

f(u + h) > φ*.   (7.10)

Because of the assumption (7.6), this contradicts (7.3), so (7.8) is impossible, and the result follows. (Note the close connection with the maximum principle for elliptic difference operators.)
THEOREM 7.2
Suppose that (7.3) holds, M > 0, a ≤ c1 < c2 ≤ b, and

f(c1) = f(c2).   (7.11)

Then

c2 − a > {2[f(a) − f(c1)]/M}^(1/2).   (7.12)

Proof

Apply Theorem 7.1 with x replaced by c1 and b by c2. The hypothesis that f(c1) = f(c2) gives, after some simplification,

(c1 − a)(c2 − a) ≥ 2[f(a) − f(c1)]/M,   (7.13)

and the result follows since c2 − a > c1 − a ≥ 0.
THEOREM 7.3

Suppose that (7.3) holds, M > 0, and that the remaining hypotheses of Theorem 2.3 are satisfied. Then the conclusion (7.14) of Theorem 2.3 holds.

Proof

We have to show that (7.14) holds. Apply Theorem 7.1, first with b replaced by x2, and then with a replaced by x2. Adding the two resulting inequalities and simplifying shows that the right side of (7.14) is nonnegative, so (7.14) holds.
Remarks

Theorems 7.1 to 7.3 generalize Theorems 2.1 to 2.3 respectively. Since the algorithm described in Section 3 is based entirely on Theorems 2.1 to 2.3, it follows that condition (7.3) is sufficient for procedure glomin to find a correct approximation to the global minimum. In the next section we use these results to find the global minimum of a function f of several variables. The conditions on f are much weaker than those required by Newman (1965), Sugie (1964), or Krolak and Cooper (1963). (See also Kaupe (1964) and Kiefer (1957).)
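Condition (7.3) is easy to test numerically. The following Python sketch (the helper name, grids, and test functions are ours, not from the text) checks the second-difference inequality for a smooth function with f'' ≤ M, and for a function with a convex corner, which satisfies (7.3) with M = 0 although it is not differentiable:

```python
# Numerical check of the second-difference condition (7.3):
# f(u + h) - 2 f(u) + f(u - h) <= M h^2 for all u in [a + h, b - h].
import math

def second_diff_bound_holds(f, a, b, M, steps=200, hs=(0.3, 0.1, 0.01)):
    """Test (7.3) on a grid of u and h values; False if a violation is found."""
    for h in hs:
        for k in range(steps + 1):
            u = a + h + k * (b - a - 2 * h) / steps
            if f(u + h) - 2 * f(u) + f(u - h) > M * h * h + 1e-12:
                return False
    return True

# f(x) = sin x has f'' <= 1, so (7.3) holds with M = 1.
assert second_diff_bound_holds(math.sin, 0.0, 6.0, 1.0)
# f(x) = -|x| is not differentiable at 0, yet satisfies (7.3) with M = 0,
# since |.| is convex; Theorem 7.1 therefore still applies to it.
assert second_diff_bound_holds(lambda x: -abs(x), -1.0, 1.0, 0.0)
```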
Section 8
AN ALGORITHM FOR GLOBAL MINIMIZATION
OF A FUNCTION OF SEVERAL VARIABLES
Suppose that D = [a1, b1] × [a2, b2] is a rectangle in R^2, f: D → R has continuous second derivatives on D, and constants M1 and M2 are known such that

f11(x, y) ≤ M1   (8.1)

and

f22(x, y) ≤ M2   (8.2)

on D. Define g: [a2, b2] → R by

g(y) = min over x ∈ [a1, b1] of f(x, y).   (8.3)

Clearly g(y) is continuous, and

min over D of f(x, y) = min over y ∈ [a2, b2] of g(y).   (8.4)

Thus, we have reduced the two-dimensional problem to the one-dimensional minimization of g, and procedure glomin (Sections 3 and 10) can be used to evaluate g(y) for a given y, using condition (8.1).
If we could show that

g''(y) ≤ M2,   (8.5)

then procedure glomin could be used again (recursively) to minimize g(y), and hence, from (8.4), f(x, y). Unfortunately, g'(y) need not exist everywhere in [a2, b2], so (8.5) may be meaningless. For example, consider

f(x, y) = xy   (8.6)

on D = [−1, 1] × [−1, 1]. Then

g(y) = min(y, −y) = −|y|,   (8.7)

which is not differentiable at y = 0, and we cannot expect to prove (8.5).

The same problem may arise if the minimum in (8.3) occurs at an interior point of D: one example is

f(x, y) = (x^3 − 3x) sin y   (8.8)

on D = [−√3, √3] × [−10, 10]. (Here the partial derivative f1(x, y) vanishes for x = ±1, so g(y) = −2|sin y|, which is not differentiable at 0, ±π, etc.)

Fortunately, the following theorem shows that g(y) does satisfy a condition like (7.3), so the results of Section 7 show that procedure glomin can minimize g(y) even if g(y) is not differentiable.
THEOREM 8.1
Let f(x, y) and g(y) be as above. Then, for all h > 0 and y ∈ [a2 + h, b2 − h],

g(y + h) − 2g(y) + g(y − h) ≤ M2 h^2.   (8.9)

Proof

From the definition (8.3) of g(y), there is a function p(y) from [a2, b2] into [a1, b1] (not necessarily continuous), such that

g(y) = f(p(y), y).   (8.10)

Thus

g(y + h) ≤ f(p(y), y + h)   (8.11)

and

g(y − h) ≤ f(p(y), y − h),   (8.12)

so

g(y + h) − 2g(y) + g(y − h) ≤ f(p(y), y + h) − 2f(p(y), y) + f(p(y), y − h),

and the result follows from condition (8.2).
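Theorem 8.1 can be illustrated numerically on the example (8.6); in the following sketch (grid sizes are ours) a simple grid search stands in for glomin in the inner minimization:

```python
# Check (8.9) for f(x, y) = x y on D = [-1, 1] x [-1, 1]:
# here f22 = 0, so M2 = 0 and the second differences of g must be <= 0.
def f(x, y):
    return x * y

xs = [i / 10.0 for i in range(-10, 11)]          # grid on [-1, 1]

def g(y):
    # g(y) = min over x of f(x, y); on this grid g(y) = -|y| exactly,
    # since the minimizing x is an endpoint +-1, which the grid contains.
    return min(f(x, y) for x in xs)

for k in range(-9, 10):
    y, h = k / 10.0, 0.1
    assert g(y + h) - 2 * g(y) + g(y - h) <= 1e-12   # (8.9) with M2 = 0
assert g(0.5) == -0.5 and g(-0.5) == -0.5            # g(y) = -|y|
```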
COROLLARY 8.1

For all y ∈ [a2, b2] at which g''(y) exists,

g''(y) ≤ M2.   (8.13)
Functions of n variables
Theorem 8.2 generalizes Theorem 8.1 to functions of any finite number of variables.

THEOREM 8.2

Suppose that n ≥ 1; I_i is a nonempty compact set in R for i = 1, ..., n+1; D = I_1 × I_2 × ⋯ × I_{n+1}; f: D → R is continuous; and

f(x + h e_i) − 2f(x) + f(x − h e_i) ≤ M_i h^2   (8.14)

for all sufficiently small h > 0, all x such that x and x ± h e_i lie in D, and i = 1, 2, ..., n+1. Let D' = I_2 × ⋯ × I_{n+1}, and define g: D' → R by

g(y) = min over x1 ∈ I_1 of f(x1, y).   (8.15)

Then g is continuous on D',

min over D of f(x) = min over D' of g(y),   (8.16)

and

g(y + h e_i) − 2g(y) + g(y − h e_i) ≤ M_{i+1} h^2   (8.17)

for all sufficiently small h > 0, all y such that y and y ± h e_i lie in D', and i = 1, ..., n.

Thus, we can find the global minimum of a function f of n + 1 variables, provided upper bounds are known for the partial derivatives f_{x_i x_i}(x), i = 1, ..., n + 1. It is interesting that no bounds on the cross derivatives f_{x_i x_j}(x) (i ≠ j) are necessary.
If a one-dimensional minimization using procedure glomin requires about K function evaluations, then it is feasible to use procedure glomin recursively to find the global minimum of a function of two variables, for about K^2 function evaluations are required. K is likely to be in the range 10 ≤ K ≤ 100 in practice (see Section 6), so the computation involved is acceptable for functions of two, or perhaps three, variables. For functions of more than three variables, we probably must be satisfied with methods which find local, but not necessarily global, minima (see Chapter 7). The theorems of Section 5 have not been extended to functions of more than one variable, so we do not know how far our procedure is from the best possible (given only upper bounds on f_{x_i x_i} for each i). Thus, there is a chance that a much better method for finding the global minimum of a function of several variables exists. It is also possible that slightly stronger conditions on f (e.g., both upper and lower bounds on certain derivatives) might enable us to find the global minimum much more efficiently.
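The K^2 operation count can be seen in a toy version of the reduction (8.3)–(8.4), with a K-point grid search standing in for glomin in both the inner and outer minimizations (the function and grids below are ours, chosen only for illustration):

```python
# Nested one-dimensional minimization of f(x, y) on [-1, 1]^2:
# the outer minimization evaluates g, and each evaluation of g costs
# K evaluations of f, so about K^2 evaluations of f are needed in all.
K = 21
calls = 0

def f(x, y):
    global calls
    calls += 1
    return (x - 0.3) ** 2 + (y + 0.2) ** 2

grid = [-1.0 + 2.0 * i / (K - 1) for i in range(K)]  # K points in [-1, 1]

def g(y):
    return min(f(x, y) for x in grid)

best = min(g(y) for y in grid)
assert calls == K * K          # 441 evaluations of f in all
assert abs(best) < 1e-12       # minimum located at (0.3, -0.2)
```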
Minimization of a function of two variables: procedure glomin2d
In Section 10 we give an ALGOL 60 procedure (glomin2d) for finding the global minimum of a function f(x, y) of two variables, in the manner suggested above. Note that glomin2d uses procedure glomin in a recursive manner, for glomin is required both to evaluate and to minimize g. The error bounds given in the comments follow from the error bounds (3.36) and (3.37) for procedure glomin.

Procedure glomin2d was tested on an IBM 360/91 computer (using ALGOL W), and some numerical results are summarized in Table 8.1. The tolerances were chosen as shown in the table, so that an accuracy of 1.0002 × 10^−10 in the computed minimum is guaranteed, where φ̂ is the value returned by the procedure. In the table we give the upper bounds M1 and M2 (see equations (8.1) and (8.2)), the total number of function evaluations N, and the approximate global minimum φ̂ (always very close to the true global minimum φ).
TABLE 8.1 Numerical results for procedure glomin2d: for each test function f1, ..., f5, the table gives the bounds M1 and M2, the total number of function evaluations N, and the computed minimum φ̂.
Comments on Table 8.1
The results for the simple functions f1 and f2 are hardly surprising. As expected from the behavior of procedure glomin on functions of one variable (see Sections 5 and 6), the number of function evaluations N increases with M1 and M2.

f3(x, y) = 100(y − x^2)^2 + (1 − x)^2 is the well-known Rosenbrock function (Rosenbrock (1960)), and it has a steep curved valley along the parabola y = x^2. Note that f4 is just the Rosenbrock function in disguise, and it is interesting that only 1815 function evaluations were required to minimize f4, compared to 13320 for f3. Thus, it can make a large difference whether we minimize first over x (with y fixed) and then over y, or vice versa, and it is not always obvious which should be done first. Of course, even the lower figure of 1815 function evaluations is very high by comparison with 100 or less for methods which seek local minima (see Chapter 7). This is the price which must be paid to guarantee that we do have the global minimum. (This is only a conjecture, for the results of Section 5 have not been extended to functions of several variables.)

The functions f4 and f5 are the same, but the domain of f5 is four times as large as the domain of f4. For this function the size of the domain has much more influence on N than do the bounds M1 and M2: increasing the size of the domain by a factor of four increased N by a factor of about 50, but doubling M1 and M2 only increased N by about 30 percent. With a different function we could easily reach the opposite conclusion.

To summarize: if it is possible to give upper bounds M1 and M2 on the partial derivatives f11 and f22, then procedure glomin2d can find a guaranteed correct approximation to the global minimum, but a considerable number of function evaluations may be required if the domain of f is large or if the bounds are weak. Finally, we should note that we have restricted ourselves to rectangular domains merely for the sake of simplicity: there is no essential difficulty in dealing with nonrectangular domains.
Section 9
SUMMARY AND CONCLUSIONS
In Section 1 we showed that the problem of finding the global minimum φ* = f(μ) of a function f defined on a compact set is insoluble unless f satisfies some a priori conditions, whereas it becomes soluble under suitable conditions; such conditions are discussed in Section 1. We concentrate our attention mainly on functions of one variable. An algorithm, and bounds on the number of function evaluations it requires, are given in Sections 2 to 5, and numerical results are given in Section 6. The results for functions of one variable are used to give an algorithm for finding the global minimum of a function of several variables (practically useful for two or three variables), and ALGOL procedures are given in Section 10. The ALGOL procedures are guaranteed to give correct results, provided the basic arithmetic operations are performed without error. (See the remarks in Section 3.)
For practical problems, the main difficulty in applying the results of this chapter lies in finding the necessary bounds on second derivatives. One intriguing possibility is this: if f(x) is expressed in terms of elementary functions, then the second derivatives can be computed symbolically, and upper bounds can then be obtained from the symbolic second derivatives via simple inequalities. Thus, the entire process of finding the global minimum can be automated. In some cases functions defined on unbounded domains can also be dealt with automatically by using suitable elementary transformations.
Section 10
ALGOL 60 PROCEDURES
The ALGOL procedures glomin (for global minimization of a function of one variable) and glomin2d (for global minimization of a function of two variables) are given below. The algorithms and some numerical results are described in Sections 3 to 6 and 8. A FORTRAN translation of procedure glomin has also been made.
procedure glomin (a, b, c, m, macheps, e, t, f, x);
value a, b, c, m, macheps, e, t;
real a, b, c, m, macheps, e, t, x; real procedure f;
begin comment:
glomin returns the global minimum value of the function f(x) defined on [a, b]. The procedure assumes that f ∈ C^2[a, b] and f''(x) ≤ m for all x in [a, b] (e ≥ 0, t > 0, a < b);
...
comment: With probability about 0.1 do a random search for a lower value of f. Any reasonable random number generator can be used in place of the one here (it need not be very good);
...
comment: Prepare to step as far as possible;
...
comment: It is safe to step to r, but we may try to step further;
...
comment: Inspect the parabolic lower bound on (a2, a3);
...
comment: Halve the step and try again;
...
end glomin;
Suppose that the Hessian matrix A = f''(μ) has eigenvalues λ1 ≥ λ2 ≥ ⋯ ≥ λn, with a set of corresponding normalized eigenvectors u1, ..., un. Since μ is a local minimum of f(x), certainly

λn ≥ 0,   (2.8)

and we suppose that λn > 0. (The position of the minimum is less well determined if λn = 0.) If 2ε|f(μ)|/λn is small compared to unity, and we take a step δ in the direction un, then (2.7) is possible with

δ ≈ (2ε|f(μ)|/λn)^(1/2).   (2.9)
Thus, an upper bound on ‖μ̂ − μ‖ can hardly be less than the right side of (2.9).

The condition number

With the assumptions above, and δ given by (2.9),

f(μ + δu1) ≈ f(μ) + κε|f(μ)|,   (2.10)

where

κ = λ1/λn   (2.11)

is the spectral condition number of A. We shall say that κ is the condition number of the minimization problem for the local minimum μ. The condition number determines the rate of convergence of some minimization methods (e.g., steepest descent), and it is also important because rounding errors make it difficult to solve problems with condition numbers of the order of ε^−1 or greater.
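The condition number κ and the limiting accuracy (2.9) are easily computed for a model problem with a diagonal Hessian; a small sketch (the eigenvalues below are ours, chosen only for illustration):

```python
# Condition number of a minimization problem with a diagonal Hessian:
# for f(x) = f(mu) + (1/2) sum lambda_i x_i^2, kappa = lambda_1 / lambda_n.
# By (2.9), with relative rounding errors of size eps in f, the position
# of the minimum is determined only to about sqrt(2 eps |f(mu)| / lambda_n).
import math

eigenvalues = [100.0, 1.0, 0.01]      # lambda_1 >= ... >= lambda_n > 0
kappa = max(eigenvalues) / min(eigenvalues)
assert abs(kappa - 1e4) < 1e-9

eps, f_mu = 2.0 ** -52, 1.0           # machine precision and |f(mu)|
delta = math.sqrt(2 * eps * abs(f_mu) / min(eigenvalues))
# delta is about 2e-7, far larger than eps itself: minima can be located
# only to about the square root of the machine precision.
assert 1e-7 < delta < 1e-6
```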
Scaling

The problem of choosing a scaling matrix S to minimize the condition number of the scaled Hessian SAS is difficult, even if A is known exactly. A good general rule is that SAS should be roughly row (and column) equilibrated (see Wilkinson (1965a)). In practical minimization problems A is not known in advance, but a reasonable scaling can be estimated as the computation proceeds; one way to do this is described in Section 4.
Section 3
POWELL'S ALGORITHM
In this section we briefly describe Powell's algorithm for minimization without calculating derivatives. The algorithm is described more fully in Powell (1964); an error in this paper is pointed out by Zangwill (1967a). Numerical results are given in Fletcher (1965), Box (1966), and Kowalik and Osborne (1968). A modified algorithm, which is suitable for use on a parallel computer, and which converges for strictly convex C^2 functions with bounded level sets, is described by Chazan and Miranker (1970).

Powell's method is a modification of a quadratically convergent method proposed by Smith (1962), which finds the minimum of a positive definite quadratic form in a finite number of steps, and it makes use of some properties of conjugate directions.
Conjugate directions
If A is positive definite and symmetric, then minimizing the quadratic form

f(x) = ½xᵀAx − bᵀx + c   (3.1)

is equivalent to solving the system of linear equations

Ax = b,   (3.2)

which could be done, for example, by forming the Cholesky factorization of A. For the applications of interest here, A is the Hessian matrix of a certain function, and is not known explicitly, but the equivalence of the problems (3.1) and (3.2) is still useful.

DEFINITION 3.1

Two vectors u and v are said to be conjugate with respect to the positive definite symmetric matrix A if

uᵀAv = 0.   (3.3)

When there is no risk of confusion, we shall simply say that u and v are conjugate. By a set of conjugate directions we mean a set of vectors which are pairwise conjugate.
Remark

If {u1, ..., un} is any set of nonzero conjugate directions in R^n, then u1, ..., un are linearly independent.

…

… unless the ridge is sharply curved. This is one motivation for the method suggested by Rosenbrock (1960), and improved by Davies, Swann, and Campey. (See Swann (1964), and also Andrews (1969), Baer (1962), Fletcher (1965, 1969d), Osborne (1969), Palmer (1969), Powell (1968a), and Section 7.)
Finding another point on the ridge
If linear searches from the point x0 fail to give a significant improvement, and a resolution ridge is suspected, then the following strategy may be successful: take a step of length about 10δ in a random direction from x0, reaching the point x1. Then perform one or more linear searches, reaching the point x2. As Diagram 5.1 shows, the points x0 and x2 are likely to lie close to the ridge, so a linear search in the direction x2 − x0 should give a good improvement. Although he does not use it during the regular iterations, Powell (1964) uses such a strategy as part of his stopping criterion. We propose to use this strategy during the regular iterations as well.
Incorporating a random step into Powell's basic procedure
Suppose that we are commencing iteration k of Powell's basic procedure, and 2 ≤ k ≤ n, with d1 > 0 and d2 > 0. Numerical tests indicate that a resolution ridge is often approximated fairly well by the quadratic space-curve

x(λ) = [λ(λ − d2)/(d1(d1 + d2))]x′ − [(λ + d1)(λ − d2)/(d1 d2)]x″ + [λ(λ + d1)/(d2(d1 + d2))]x‴,   (5.3)

which satisfies x(−d1) = x′, x(0) = x″, and x(d2) = x‴. Before the third, fourth, fifth, ... singular value decompositions, procedure praxis (Section 9) moves to the point x(λ0), where λ0 approximately minimizes f(x(λ)). λ0 is computed by the procedure that performs linear searches.
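The curve (5.3) is just the quadratic (Lagrange) interpolant through the three points; a sketch (function names and test points are ours):

```python
# Quadratic space-curve through three points x', x'', x''' in R^n,
# parametrized so that x(-d1) = x', x(0) = x'', x(d2) = x''' as in (5.3).
def space_curve(xp, xpp, xppp, d1, d2):
    def x(lam):
        c1 = lam * (lam - d2) / (d1 * (d1 + d2))
        c2 = -(lam + d1) * (lam - d2) / (d1 * d2)
        c3 = lam * (lam + d1) / (d2 * (d1 + d2))
        return [c1 * a + c2 * b + c3 * c for a, b, c in zip(xp, xpp, xppp)]
    return x

x = space_curve([0.0, 0.0], [1.0, 1.0], [2.0, 4.0], 0.5, 0.8)
# The three interpolation conditions of (5.3):
for lam, pt in ((-0.5, [0.0, 0.0]), (0.0, [1.0, 1.0]), (0.8, [2.0, 4.0])):
    assert all(abs(a - b) < 1e-12 for a, b in zip(x(lam), pt))
```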
Section 6
SOME FURTHER DETAILS
In this section we give some more details of the ALGOL procedure praxis of Section 9. The criterion for discarding search directions, the linear search procedure, and the stopping criterion are described briefly.
The discarding criterion
Suppose for the moment that f(x) is the quadratic function given by equation (3.7). In steps 2 and 3 of Powell's basic procedure (Section 3), we effectively discard the search direction u1 and replace it by xn − x0. The algorithm suggested by Powell (1964) does not always discard u1: instead, as mentioned in Section 3, it discards one of u1, ..., un, chosen so as to maximize

|det(y1, ..., yn)|,   (6.1)

where y_i is given by equation (3.14); after renumbering, the direction discarded from u1, ..., un is the one that maximizes the determinant (6.1).

Suppose that the new direction satisfies

xn − x0 = Σ_i β_i u_i,   (6.2)

and that the linear minimization in the direction u_i reduced f by Δ_i; then, from (3.7),

Δ_i = ½β_i^2 (u_iᵀ A u_i),   (6.3)

so β_i and Δ_i give an estimate of (u_iᵀ A u_i)^(1/2). (If β_i = 0 we use the result of a previous iteration.)

Suppose that the random step procedure described in Section 5 moves from x0 to

x0 + Σ_i λ_i u_i   (6.4)

before the linear searches in the directions u1, ..., un are performed. Then

xn − x0 = Σ_i (β_i + λ_i) u_i,   (6.5)

where the coefficients β_i come from the linear searches. From (6.2), (6.3), and (6.5), the criterion can be expressed entirely in terms of computed quantities, giving (6.7), and we discard the direction u_i which maximizes the modulus of the right side of (6.7). Since this criterion does not explicitly depend on the assumption that f is a quadratic, it may be used even if f is not quadratic. Apart from our restriction on random steps (i.e., if λ_i = 0 for i = 1, ..., n), convergence is guaranteed if we ensure that, for k = 2, ..., n,

β1 = β2 = ⋯ = β_{k−1} = 0   (6.8)

never holds at iteration k.
The linear search
Our linear search procedure is similar to that suggested by Powell (1964). We wish to find a value of λ which approximately minimizes

φ(λ) = f(x0 + λu),   (6.9)

where the initial point x0 and direction u ≠ 0 are given, and φ(0) = f(x0) is already known. If a linear search in the direction u has already been performed, or if u resulted from a singular value decomposition, then an estimate of φ''(0) is available. A parabola P(λ) is fitted to φ(λ), using φ(0), the estimate of φ''(0) if available, and the computed value of φ(λ) at another point (or at two other points if no estimate of φ''(0) is available). If P(λ) has a minimum at λ*, and φ(λ*) < φ(0), then λ* is accepted as a value of λ to minimize (6.9) approximately. Otherwise λ* is replaced by λ*/2, φ(λ*) is re-evaluated, and the test is repeated. (After a number of unsuccessful tries, the procedure returns with λ = 0.)
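One step of such a parabola-fitting search can be sketched as follows (the function and parameter names are ours; for a quadratic φ with the exact second derivative supplied, the first trial point is already the minimizer):

```python
# Fit P(l) = phi(0) + c l + q l^2 / 2 using phi(0), an estimate q of
# phi''(0), and phi(s); accept the minimizer of P only if it actually
# reduces phi, halving the step otherwise, as described in the text.
def parabola_search(phi, s, q, max_halvings=20):
    phi0 = phi(0.0)
    c = (phi(s) - phi0 - 0.5 * q * s * s) / s   # slope making P(s) = phi(s)
    if q <= 0:                                  # parabola has no minimum
        return 0.0
    lam = -c / q                                # minimizer of P
    for _ in range(max_halvings):
        if phi(lam) < phi0:
            return lam
        lam *= 0.5                              # halve the step, try again
    return 0.0                                  # give up: return lambda = 0

phi = lambda l: (l - 1.0) ** 2 + 1.0            # quadratic, phi''(0) = 2
lam = parabola_search(phi, 1.0, 2.0)
assert lam == 1.0 and phi(lam) == 1.0           # exact for a quadratic
```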
The stopping criterion
The user of procedure praxis provides two parameters: t (a positive absolute tolerance) and ε ≈ macheps (the machine precision). The procedure attempts to return x satisfying

‖x − μ‖ ≤ ε^(1/2)‖x‖ + t,   (6.10)

where μ is the position of the true local minimum. The precise form of (6.10) is not important and could easily be changed. It was chosen because of the analogy with the one-dimensional case: because of rounding errors, we cannot expect to locate μ with relative accuracy much better than ε^(1/2).

It is impossible to guarantee that (6.10) will hold for all f ∈ C^2 near μ. The stopping criterion is, however, reliable in practice: (6.10) was satisfied in all but one of our tests. The sole exception is the extremely ill-conditioned function Hilbert (Section 7),

f(x) = xᵀAx,

where A is a twelve by twelve Hilbert matrix, on which the stopping criterion failed; perhaps an improved criterion could be devised for such cases.

Let x′ be the best approximation to the position of the minimum before an iteration of the basic procedure, and let x″ be the best approximation after the iteration. We test whether

‖x′ − x″‖ ≤ ε^(1/2)‖x″‖ + t,   (6.12)

and we simply stop, and return the approximation x″, if (6.12) is satisfied for a prescribed number of consecutive iterations. The number of consecutive iterations required depends on how cautious the user wishes to be; a small number sufficed for the examples described in Section 7. If the random step strategy described in Section 5 is always used, there is no need for a more conservative criterion such as that used by Powell (1964).
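A stopping rule of this shape can be sketched as follows, assuming a tolerance of the form (6.10) and a prescribed number of consecutive small steps (the iterate sequence and constants below are ours, chosen only for illustration):

```python
# Stop only after the change per iteration has been small, relative to
# a tolerance of the form sqrt(eps) * ||x|| + t, several times in a row.
import math

EPS, T = 2.0 ** -52, 1e-9

def tol(x):
    norm = math.sqrt(sum(v * v for v in x))
    return math.sqrt(EPS) * norm + T             # right side of (6.10)

mu = [1.0, 1.0]                                   # true minimum (known here)
iterates = [[1.0 + 0.5 ** k, 1.0 - 0.5 ** k] for k in range(60)]

needed, run, stopped_at = 3, 0, None
for prev, cur in zip(iterates, iterates[1:]):
    step = math.sqrt(sum((a - b) ** 2 for a, b in zip(prev, cur)))
    run = run + 1 if step <= tol(cur) else 0      # count consecutive successes
    if run == needed:
        stopped_at = cur
        break
assert stopped_at is not None                     # the rule does stop
# ...and the returned point satisfies the target accuracy (6.10):
assert math.sqrt(sum((a - b) ** 2 for a, b in zip(stopped_at, mu))) <= tol(mu)
```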
Section 7
NUMERICAL RESULTS AND COMPARISON
WITH OTHER METHODS
The ALGOL W procedure praxis (Section 9) was tested on IBM 360/67 and 360/91 computers. In this section we summarize the results and compare them with results for other methods reported in the literature. Our procedure has also been translated into SAIL, an extension of ALGOL (see Swinehart and Sproull), and run on a PDP 10 computer.

Table 7.1 summarizes the performance of procedure praxis on the test functions described below. The table gives the number of variables, n; the initial step-size (a rough estimate of the distance to the minimum), h; and the machine precision ε. So that the results can be compared with those of methods using a different stopping criterion, we give the number n_f of function evaluations and the number n_l of linear searches required to reduce f(x) − f(μ) to a small threshold, where f(μ) is the true minimum. As f(x) was only printed occasionally (i.e., after every n linear minimizations), the number of function evaluations actually required to reach the threshold is usually less than n_f, so we also give the value of f(x) − f(μ) after n_f function evaluations. Finally, the table gives κ, the estimated condition number of the problem. Except for the few cases where κ is known analytically, κ is estimated from the computed singular values, and may be rather inaccurate.
TABLE 7.1 Results for various test functions

(For each function — Rosenbrock, Cube, Beale, Helix, Powell, Box*, Singular, Wood, and Chebyquad with n = 2, 4, 6, and 8, among others — the table gives n, the initial step-size, the machine precision, n_f, n_l, the value of f(x) − f(μ) after n_f function evaluations, and the estimated condition number κ.)
For those examples marked with an asterisk, the random step strategy was used from the start. (In the initialization phase of procedure praxis, the variable illc was set to true.) For the other examples the procedure was used as given in Section 9, except that the automatic scaling feature was switched off for the examples given in the table. (The bound scbd of equation (4.15) was set to 1.)
A cautionary note
When comparing our results with the published results for other methods (e.g., Powell's and Stewart's), it is well not to forget that the numerical results reported for the methods may have been obtained on different computers (with different word-lengths), and with different linear search procedures. Except for very ill-conditioned problems, the effect of different word-lengths should only be significant in the final stages, when rounding errors determine the limiting accuracy. This is one reason why we prefer to consider the number of function evaluations required to reduce f(x) − f(μ) to a reasonable threshold, rather than the number required for convergence.

Because apparently minor differences in the linear search procedure can be quite important, Fletcher (1965) suggests comparing the number of linear searches, n_l, instead of the number of function evaluations, n_f. This discriminates against methods such as Powell's, which use most of the search directions several times, and can thus use second derivative estimates to reduce the number of function evaluations required for the second and later searches in each direction. Note that, for the examples given in Table 7.1, n_f/n_l lies between 2.1 and 2.7, but it would be at least 3.0 for methods which do not use second derivative information, since each search then involves an extra evaluation to determine the parabola. Also, there are methods which do not use linear searches at all (see Broyden (1967), Davidon (1968, 1969), Goldstein and Price (1967), and Powell (1970e)), and these methods can be adapted to accept difference approximations to derivatives. Thus, we prefer to compare methods on the basis of the number of function evaluations required, and regard the linear search procedure, if any, as an integral part of each method.
Definitions of the test functions and comments on Table 7.1
Rosenbrock (Rosenbrock (1960)):

f(x) = 100(x2 − x1^2)^2 + (1 − x1)^2.   (7.1)

This is a well-known function with a steep curved valley along the parabola x2 = x1^2. Descent methods tend to fall into the valley, and then follow it slowly to the minimum at (1, 1)ᵀ. Details of the progress of the algorithm, for the starting point (−1.2, 1)ᵀ, are given in Table 7.2. In Diagram 7.1 we compare these results with those reported for Stewart's method (Stewart (1967)), Powell's method, and the method of Davies, Swann, and Campey (as reported by Fletcher (1965)). The graph shows that our method compares favorably with the other methods. Although the function (7.1) is rather artificial, similar curved valleys often arise when penalty function methods are used to reduce constrained problems to unconstrained problems: consider, for example, minimizing (1 − x1)^2, subject to the constraint x2 = x1^2, by a simple penalty function method.
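The definition (7.1) is easy to check directly; a small sketch (ours):

```python
# The Rosenbrock function (7.1) and its curved valley x2 = x1^2.
def rosenbrock(x1, x2):
    return 100.0 * (x2 - x1 * x1) ** 2 + (1.0 - x1) ** 2

assert rosenbrock(1.0, 1.0) == 0.0                 # minimum at (1, 1)
assert abs(rosenbrock(-1.2, 1.0) - 24.2) < 1e-12   # the standard start point
# On the valley floor only the small (1 - x1)^2 term remains:
assert rosenbrock(0.5, 0.25) == 0.25
```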
Cube (Leon (1966)):

f(x) = 100(x2 − x1^3)^2 + (1 − x1)^2.   (7.2)

This function is similar to Rosenbrock's, and much the same remarks apply. Here the valley follows the curve x2 = x1^3.
Beale (Beale (1958)):

f(x) = Σ for i = 1 to 3 of [c_i − x1(1 − x2^i)]^2,   (7.3)

where c1 = 1.5, c2 = 2.25, and c3 = 2.625; the minimum is at (3, 0.5)ᵀ, where f = 0. Kowalik and Osborne (1968) report that the Davidon-Fletcher-Powell algorithm required 20 function and gradient evaluations (equivalent to 60 function evaluations if the usual (n + 1) weighting factor is used), and Powell's method required 86 function evaluations, to reduce f to a comparably small value. Thus, our method compares favorably on this example.
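A sketch of the function defined by (7.3), with the constants c1 = 1.5, c2 = 2.25, c3 = 2.625 as above:

```python
# The Beale function (7.3): minimum 0 at (3, 0.5).
def beale(x1, x2):
    return sum((c - x1 * (1.0 - x2 ** i)) ** 2
               for i, c in enumerate((1.5, 2.25, 2.625), start=1))

assert abs(beale(3.0, 0.5)) < 1e-12                # the minimum
assert beale(0.0, 0.0) == 1.5 ** 2 + 2.25 ** 2 + 2.625 ** 2
```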
Helix (Fletcher and Powell (1963)):

f(x) = 100{[x3 − 10θ(x1, x2)]^2 + [r(x1, x2) − 1]^2} + x3^2,   (7.4)

where

r = (x1^2 + x2^2)^(1/2)   (7.5)

and

2πθ = arctan(x2/x1) if x1 > 0,  π + arctan(x2/x1) if x1 < 0.

This function of three variables has a helical valley, and a minimum at (1, 0, 0)ᵀ. The results are given in more detail in Table 7.3 and Diagram 7.2. For this example our method is faster than Powell's, but slightly slower than Stewart's.
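The helical valley function can be written down directly from (7.4) and (7.5); a sketch (ours; like the formula above, θ is left undefined at x1 = 0):

```python
# The helical valley function (7.4)-(7.5): minimum 0 at (1, 0, 0).
import math

def theta(x1, x2):
    # 2*pi*theta = arctan(x2/x1), plus pi when x1 < 0 (undefined at x1 = 0)
    t = math.atan(x2 / x1)
    if x1 < 0.0:
        t += math.pi
    return t / (2.0 * math.pi)

def helix(x1, x2, x3):
    r = math.sqrt(x1 * x1 + x2 * x2)
    return 100.0 * ((x3 - 10.0 * theta(x1, x2)) ** 2 + (r - 1.0) ** 2) + x3 * x3

assert helix(1.0, 0.0, 0.0) == 0.0        # the minimum
# Half a turn up the helix (theta = 1/2, x3 = 5) costs only the x3^2 term:
assert helix(-1.0, 0.0, 5.0) == 25.0
```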
Powell (Powell (1964)):

f(x) = 3 − 1/[1 + (x1 − x2)^2] − sin(½πx2x3) − exp{−[(x1 + x3)/x2 − 2]^2}.   (7.6)

For a description of this function, see Powell (1964). Perhaps by good luck, our procedure had no difficulty with this function: it found the true minimum quickly and did not stop prematurely.
Box (Box (1966)):

f(x) = Σ for i = 1 to 10 of {exp(−ix1/10) − exp(−ix2/10) − x3[exp(−i/10) − exp(−i)]}^2.   (7.7)

The minimum f = 0 occurs at (1, 10, 1)ᵀ, and also along the line {(λ, λ, 0)ᵀ}.   (7.8)

(Our procedure found the first minimum.) Comparison with the results of Kowalik and Osborne (1968) indicates that our method is faster here than any of the methods compared by Box (1966), which included a special method for sums of squares (Powell (1965)). See the comments in Section 1 about special methods for minimizing sums of squares.
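A sketch of the Box function as defined in (7.7), checking both families of minima (code ours):

```python
# The Box function (7.7): f = 0 at (1, 10, 1) and along the line (l, l, 0).
import math

def box(x1, x2, x3):
    s = 0.0
    for i in range(1, 11):
        t = (math.exp(-i * x1 / 10.0) - math.exp(-i * x2 / 10.0)
             - x3 * (math.exp(-i / 10.0) - math.exp(-float(i))))
        s += t * t
    return s

assert box(1.0, 10.0, 1.0) < 1e-25      # the isolated minimum (7.8)
assert box(0.7, 0.7, 0.0) == 0.0        # a point on the line of minima
assert box(0.0, 0.0, 5.0) > 1.0         # generic points are not minima
```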
Singular (Powell (1962)):

f(x) = (x1 + 10x2)^2 + 5(x3 − x4)^2 + (x2 − 2x3)^4 + 10(x1 − x4)^4.   (7.9)

This function is difficult to minimize, and provides a severe test of the stopping criterion, because the Hessian matrix at the minimum (μ = 0) is doubly singular. The function varies very slowly near 0 in the two-dimensional subspace {(10λ1, −λ1, λ2, λ2)ᵀ}. Table 7.4 and Diagram 7.3 suggest that the algorithm converges only linearly, as does Powell's algorithm. It is interesting to note that the output from our procedure would strongly suggest the singularity, if we did not know it in advance: after 219 function evaluations, with f(x) = 7.67 × 10^−…, the computed eigenvalues were 101.0, 9.999, 0.003790, and 0.001014. (The exact eigenvalues at 0 are 101, 10, 0, and 0.) After 384 function evaluations, with f(x) reduced to 1.02 × 10^−17, the two smallest eigenvalues were 1.56 × 10^−7 and 5.98 × 10^−…. Thus, our procedure should allow singularity of the Hessian matrix to be detected, in the unlikely event that it occurs in a practical problem. (For one example, see Freudenstein and Roth (1963).)
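The stated eigenvalues follow from the quadratic part of (7.9), whose matrix is block diagonal; a sketch (helper names ours):

```python
# The Singular function (7.9): the Hessian at 0 is doubly singular.
# Writing the quadratic part as x'Ax, A is block diagonal with blocks
# [[1, 10], [10, 100]] and [[5, -5], [-5, 5]]; each 2x2 block has
# eigenvalues (tr +/- sqrt(tr^2 - 4 det)) / 2.
import math

def eig2(a, b, c, d):
    tr, det = a + d, a * d - b * c
    s = math.sqrt(tr * tr - 4.0 * det)
    return (tr - s) / 2.0, (tr + s) / 2.0

assert eig2(1.0, 10.0, 10.0, 100.0) == (0.0, 101.0)
assert eig2(5.0, -5.0, -5.0, 5.0) == (0.0, 10.0)    # exact: 101, 10, 0, 0

def singular(x1, x2, x3, x4):
    return ((x1 + 10.0 * x2) ** 2 + 5.0 * (x3 - x4) ** 2
            + (x2 - 2.0 * x3) ** 4 + 10.0 * (x1 - x4) ** 4)

# Along the null space {(10a, -a, b, b)} only quartic terms remain,
# so f is extremely flat near the minimum:
a = b = 1e-3
assert singular(10 * a, -a, b, b) < 1e-7
```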
Wood (Colville (1968)):

f(x) = 100(x2 − x1^2)^2 + (1 − x1)^2 + 90(x4 − x3^2)^2 + (1 − x3)^2 + 10.1[(x2 − 1)^2 + (x4 − 1)^2] + 19.8(x2 − 1)(x4 − 1).   (7.10)

This function is rather like Rosenbrock's, but with four variables instead of two. Procedures with an inadequate stopping criterion may terminate prematurely on this function (McCormick and Pearson (1969)), but our procedure successfully found the minimum at μ = (1, 1, 1, 1)ᵀ.
Chebyquad (Fletcher (1965)):

f(x) is defined by the ALGOL procedure given by Fletcher (1965). As the minimization problem is still valid, we have not corrected a small error in this procedure, which does not compute exactly what Fletcher intended. In contrast to most of our other test functions, which are chosen to be difficult to minimize, this function is fairly easy to minimize. For 1 ≤ n ≤ 7 and n = 9 the minimum is 0; for other n it is nonzero. (For n = 8 it is approximately 0.00351687372568.) The results given below, and illustrated in Diagrams 7.4 to 7.7, show that our method is faster than those of Powell or Davies, Swann, and Campey, but a little slower than Stewart's.
Watson (Kowalik and Osborne (1968)):

f(x) = Σ over i of [p′(t_i) − p(t_i)^2 − 1]^2,   (7.11)

where the points t_i are equally spaced in [0, 1], and the polynomial

p(t) = x1 + x2 t + ⋯ + xn t^(n−1)   (7.12)

is fitted, by least squares, to approximate the solution of the differential equation

dz/dt − z^2 = 1,  z(0) = 0,   (7.13)

for t ∈ [0, 1]. (The exact solution is z = tan t.) The minimization problem is ill-conditioned, and rather difficult to solve, because of a bad choice of basis functions {1, t, t^2, ...}. For n = 6, the minimum is f(μ) = 2.28767005355 × 10^−3, at μ ≈ (−0.015725, 1.012435, −0.232992, 1.260430, −1.513729, 0.992996)ᵀ. For n = 9, f(μ) = 1.399760138 × 10^−6, and μ ≈ (−0.000015, 0.999790, 0.014764, 0.146342, 1.000821, −2.617731, 4.104403, −3.143612, 1.052627)ᵀ. (We do not claim that all the figures given are significant.)

Kowalik and Osborne (1968) report that, after 700 function evaluations, Powell's method had only reduced f to 2.434 × 10^−3 (for n = 6), so our method is at least twice as fast here. The Watson problem for n = 9 is very ill-conditioned, and seems to be a good test for a minimization procedure.
Tridiag (Gregory and Karney (1969), pp. 41 and 74):

f(x) = xᵀAx − 2x1,   (7.14)

where A is the n by n tridiagonal matrix with a11 = 1, a_ii = 2 for i > 1, a_{i,i+1} = a_{i+1,i} = −1, and all other elements zero.   (7.15)

This function is useful for testing the quadratic convergence property. The minimum f(μ) = −n occurs when μ is the first column of A^−1, i.e.,

μ = (n, n − 1, ..., 2, 1)ᵀ.   (7.16)

The results given in Table 7.1 show that, as expected, the minimum is found in about n iterations. The eigenvalues of A are just λ_j = 2 − 2cos[(2j − 1)π/(2n + 1)], j = 1, ..., n.
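The claims about the minimum are easy to verify directly; a sketch (ours, with n = 6):

```python
# The Tridiag function (7.14): f(x) = x'Ax - 2 x1 with A as in (7.15).
# The minimum is -n at mu = (n, n-1, ..., 1)', the first column of A^-1.
def make_A(n):
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 2.0
        if i + 1 < n:
            A[i][i + 1] = A[i + 1][i] = -1.0
    A[0][0] = 1.0                      # the corner element a11 = 1
    return A

def f_tridiag(A, x):
    n = len(x)
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return sum(x[i] * Ax[i] for i in range(n)) - 2.0 * x[0]

n = 6
A = make_A(n)
mu = [float(n - i) for i in range(n)]  # (n, n-1, ..., 1)
Amu = [sum(A[i][j] * mu[j] for j in range(n)) for i in range(n)]
assert Amu == [1.0] + [0.0] * (n - 1)  # A mu = e1: mu is column 1 of A^-1
assert f_tridiag(A, mu) == -float(n)   # f(mu) = -n, as in (7.16)
```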
Hilbert:

f(x) = xᵀAx,   (7.17)

where A is an n by n Hilbert matrix, i.e.,

a_ij = 1/(i + j − 1).   (7.18)

The condition number of A increases rapidly with n. Because of rounding errors, more than n^2 linear minimizations were required to reduce f to the threshold for n = 4. The procedure successfully found the minimum μ = 0, to within the prescribed tolerance, for n ≤ 10. For n = 12, the procedure stopped while some components of the computed minimum were still greater than the tolerance would suggest (see the remarks on the stopping criterion in Section 6); this illustrates how difficult extremely ill-conditioned problems can be.
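The growth of the condition number can be illustrated using the known closed form for the inverse of the Hilbert matrix; a sketch (the 1-norm estimate below is ours, used in place of the spectral condition number):

```python
# The Hilbert matrix (7.18) and its rapidly growing condition number,
# estimated here in the 1-norm using the exact integer entries of the
# inverse Hilbert matrix (a classical closed form, with 0-based indices).
from math import comb

def hilbert(n):
    return [[1.0 / (i + j + 1) for j in range(n)] for i in range(n)]

def hilbert_inv(n):
    return [[(-1) ** (i + j) * (i + j + 1)
             * comb(n + i, n - j - 1) * comb(n + j, n - i - 1)
             * comb(i + j, i) ** 2
             for j in range(n)] for i in range(n)]

def norm1(M):
    n = len(M)
    return max(sum(abs(M[i][j]) for i in range(n)) for j in range(n))

cond = lambda n: norm1(hilbert(n)) * norm1(hilbert_inv(n))
assert hilbert_inv(2) == [[4, -6], [-6, 12]]        # known inverse of H_2
assert cond(4) > 1e4 and cond(8) > 1e8              # rapid growth with n
assert cond(8) > 1e4 * cond(4)
```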
Some more detailed results

Tables 7.2 to 7.8 give more details of the progress of our procedure on the Rosenbrock, Helix, Singular, and Chebyquad functions. In Diagrams 7.1 to 7.7, we plot

Δ = log10(f(x) − f(μ))   (7.19)

against the number of function evaluations. Using the results given by Fletcher (1965) and Stewart (1967), the corresponding graphs for the methods of Davies, Swann, and Campey, of Powell, and of Stewart are also plotted for purposes of comparison, except where the results (e.g., for Chebyquad with n = 8) are not available.
(n= 8) are not available.Fag taeeral aiae tie ae pov tad vaneael chp? S007 [NUMERICAL RESULTS AND COMPARISON WITH OFHER METHODS 145
TABLE 7.2 Rosenbrock TABLE 7.4 Singular®
TABLE 7.3 Vieix148 INTINLING A FUIVCTION OF SEVERAL VARIABLES
TABLE 76
SBT = 0.026728, 06062057, 05937963,
03973272)
Chebyquad: 2 = 4*
S00)
4ay7
186-8
78-14
Lass
TABLE 7.7 Chebyquad: n ~ 6°
195
238
or = onsse7,
0711289, 0833129)
feo
409-2
[NUMERICAL RESULTS AND COMPARISON With OrHER METHODS
TABLE 7.8 CI
“HT 008
o.s00000, 0735
Thebyquad: 2 — 8°
™ feo
0,0386176982859
0.917 124813073,
0.0109131813978
o.0102s60260995
10.0093337335931
©.0071908895069
10.0049952481593
0,0044832513468
o.00s7940816125
0.0038300722159
0.0035260968747
©.0035191392605
0.0035180637576
0,0035176361629,
0.0035168737890
0.0038168737290,
0.0035168737288
19300, 0.266329, 0500000,
0.808910, 98684),
ver140 tunnnnzins A FUNCHION OF SEVERAL variAaLes ng. Se 7
Key:
oy
NUMERICAL RESULTS AND COMPARISON WITH OTHER METHODS 148
AF og, 9(0— He)) A ieereersu tar
2 30 75 vo 15 13a,
po
730
DIAGRAM 7.1. Rosentrack
DIAGRAM 7.2 Helix
method:
1¢ method of Davies, Swi
(1965);
Powell’s (1964) method, as
n by Fletcheroan. S007 AUMERICAL RESULTS AND COMPARISON WTH OTHER METHODS 161
alos
a EEE pee eeag eee ey 1
mo a a a) 7;
DIAGRAM 7.3. Singular : DIAGRAM 7.4 Chebyauad, n= 2152° AMINIUIZING A FUNCTION OF SEVERAL VARIABLES Cr ee
209,06) fie!)
-s
° 25 Ea 75 100 «125 7m
DIAGRAM 7.5. Chebyqvad, n= 4 . He
NUMERICAL RESULTS AND COMPARISON WITH OTHER METHODS
700) 200 ae a0
DIAGRAM 7.6 Chebyquai, n= 6
Es
7
152194 suOIMeING & FLINCTION OF SEVERAL VARIABLES cop. 7
° vo mC OSC ™
DIAGRAM 7.7 Chebyquad, » = 8
Section 8
CONCLUSION
Powell (1964) observes that, with his criterion for accepting new search directions (Section 3), the new directions tend to be accepted less often as the number of variables increases, and the quadratic convergence property of his basic procedure is lost. Our aim was to avoid this: to keep the quadratic convergence property, and ensure that the search directions continue to span the whole space, while using basically the same method as Powell to generate conjugate directions.

The numerical results given in Section 7 suggest that our algorithm is faster than Powell's, and comparable to Stewart's, if the criterion is the number of function evaluations required. Also, our algorithm performs well on ill-conditioned problems like Watson, where Powell's criterion breaks down because new directions are seldom accepted, and it remains competitive on problems like Rosenbrock and Singular. Since the singular value decomposition always produces a set of orthogonal directions, the algorithm can never degenerate into a coordinate search in a proper subspace, and the search directions continue to span the whole space; this is important for the convergence of the method. Convergence of our algorithm to a local minimum can be proved under suitable conditions (for example, it is easy to prove convergence if f'' is positive definite at the minimum), and further details are described in Brent (1971c).
Section 9
AN ALGOL W PROCEDURE AND TEST
PROGRAM756 MINWIMING A FUNCTION OF SEVERAL VARIABLES chp 7
The ALGOL W listing that appears here is too badly damaged in this copy to reproduce in full. It opens with comments on MACHEPS (the machine precision, which must be adjusted if the machine differs from the one assumed, and chosen so that floating-point underflow does not occur), and it contains the main procedure together with its auxiliary procedures:

MINFIT, which computes a singular value decomposition, returning the orthogonal matrix V;

SORT, which sorts the singular values and the corresponding vectors into descending order;

MIN, a linear search which minimizes F from X in a given direction, unless J = 1, when a quadratic search is done in the plane defined by Q0, Q1, and X; it uses and alters X, FX, and the evaluation counters, and uses A, N, T, M, V, and MACHEPS;

FLIN, the function of one variable L which is minimized by procedure MIN;

QUAD, which looks for the minimum along a curve, and is used for quadratic extrapolation in case the search is stuck;

RANDOM, which returns a long real pseudo-random number uniformly distributed in [0, 1); initialization must be done before the first call, and RAN1, RAN2, and RAN3 must be global. The algorithm returns X(N)/2**56, where X(N) = X(N-1) + X(N-127) (mod 2**56); since 1 + x + x**127 is primitive (mod 2), the period is very long (see Knuth, Sec. 3.2.2).

The main body of the procedure minimizes along each direction in turn, takes a random step to get off resolution valleys, sees if the step length exceeds half the tolerance, tries quadratic extrapolation in case it is stuck, scales the axes to try to reduce the condition number, and computes new (conjugate) directions from the singular value decomposition, printing the eigenvalues and eigenvectors of A when the print level PRIN is high enough.
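The comments in the (damaged) RANDOM procedure describe an additive generator of lagged-Fibonacci form: it returns X(n)/2**56, where X(n) = X(n-1) + X(n-127) (mod 2**56), the long period following from the primitivity of 1 + x + x**127 over GF(2) (see Knuth, Sec. 3.2.2). A sketch of such a generator; the linear-congruential seeding of the lag table is an illustrative assumption, not the book's initialization:

```python
# Sketch of an additive lagged-Fibonacci generator:
#     X(n) = X(n-1) + X(n-127)  (mod 2**56),
# returned scaled into [0, 1).  The LCG used to fill the lag table below
# is an assumed seeding scheme, not the initialization from the book.

MOD = 2 ** 56
LAG = 127

class LaggedFibonacci:
    def __init__(self, seed=1):
        # Fill the 127-entry lag table from a simple LCG stream (assumed).
        x = seed
        self.buf = []
        for _ in range(LAG):
            x = (x * 6364136223846793005 + 1442695040888963407) % MOD
            self.buf.append(x)
        self.i = 0                            # index of X(n-127) in the ring

    def next(self):
        prev = self.buf[(self.i - 1) % LAG]   # X(n-1), the last value produced
        old = self.buf[self.i]                # X(n-127)
        v = (prev + old) % MOD                # X(n)
        self.buf[self.i] = v                  # overwrite the oldest entry
        self.i = (self.i + 1) % LAG
        return v / MOD                        # uniform in [0, 1)

rng = LaggedFibonacci(seed=12345)
samples = [rng.next() for _ in range(10000)]
```

In the book such a generator supplies the random steps used to get off resolution valleys; any reasonable uniform generator would serve the same purpose.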
The test-function procedures, likewise too damaged in this copy to reproduce, are ROS (Rosenbrock's function), SING (the singular function; see Powell), HELIX (the helical valley; see Fletcher and Powell (1963)), CUBE, WATSON, CHEBYQUAD, POWELL, and HILBERT, which computes X'AX, where A is the N by N Hilbert matrix (see Gregory and Karney (1969)). These are followed by the printing utilities and the driver code of the test program.
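The test functions named in the damaged listing above are classical; sketches of two of them, in their standard forms from the literature (assumed, not transcribed from the listing):

```python
import math

# Standard forms (assumed) of two of the test functions named above.

def rosenbrock(x):
    """Rosenbrock (1960) banana valley; minimum 0 at (1, 1)."""
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def helix(x):
    """Fletcher & Powell (1963) helical valley; minimum 0 at (1, 0, 0)."""
    theta = math.atan2(x[1], x[0]) / (2.0 * math.pi)
    r = math.hypot(x[0], x[1])
    return 100.0 * ((x[2] - 10.0 * theta) ** 2 + (r - 1.0) ** 2) + x[2] ** 2
```

The customary starting points are (-1.2, 1) for Rosenbrock and (-1, 0, 0) for the helical valley, where the curved, nearly flat valleys expose methods that depend on locally quadratic behavior.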
BIBLIOGRAPHY

There are also some references dealing with converting constrained problems into unconstrained ones. References on linear and integer programming are not included; see Jacoby, Kowalik, and Pizzo (1971). Papers which are to appear have arbitrarily been assigned the year 1971.
‘Andrews, A. M., 1969, The calculation of orthogonal vectors, Comp. J. 12. 411
as) i
Armijo, L., 1966, Minimization of functions having Lipschitz continuous first partial derivatives, Pacific J. Math. 16, 1-3.
sof
2, 2-315.)
4.5, 193. (7.5)
Math. 15,
Baer, RM.
Baker, C. T. H.
315-319,
1962, Note on an extre
970, The error in
ard, Y., 1970, Compari
an rameter estimation problems, STAM
Bard, Y., see Greenstadt (1970),
ACM 12, 266-268, (7.1)
b, G.H., and Saunders, M. A., 1970,
ng, Report CS 162, Computer
aver HE, Bikes and Graham, $1948, ALGOL W lage des
meena C59 (eed 25 CS 110 wth Es Serta, 109, Camper
Depts Stanford Une (58,6679)
reed fo
Koyo.
"mimeton Univ 2.79)
esl, EM. 1 1865, Marea pro
eb
Becker, 8 se Bast, eskr, and Graham (1968),
Beightler, C. S., see Wilde and Beightler (1967).
Mand Pike, M,C, 1966, Remark
Comm 4c809, 686.0.)
Belinan, R. Ey 198%, Bymom
Rowe. (13)
RF and Drv, S198 Api dic por
Unis Press, Prine, Now Joey. 184.)
1M, |
Sz
xy, New York,
ne fn practice, W
'78(E4), Direct Seat
programming, Princeton Univ, Press, Princeton,
1g, Princeton.
aner. Math. 7,
Ben!
Berman, G., 1969, Lattice approximations to the minima of functions of several
variables, J. ACM 16, 286-294. (7.1)
ARijSrek, A., 1967, Solving linear least squares problems by Gram-Schmidt ortho
gonalization, BIT, I-21. (7.1)
Bjorck, A., 19676, Iterative
257-278, (7.1)
Bjorck, A., 1968, Iterative refinement of
8-30. (7.1)
Boothroyd, J., 1965a, Algorithm 7, MINIX, Conip. Bulletin 9, 104, (5.3)
Boothroyd, J.. 1965b, Certification of Algorithm 2, Fi
Bulletin 9, 105. (5,3)
Bowler, H., Martin, R. S., Reinsch, C., and Wi 1968, The QR and
QL algorithms for symmetric m |. 293-306. (7.4)
Bos, G. E. P., 1957, Evolutionary operations: a method for increasing
productivity, Appl. Star. 6, 3-23. (7.1)
Box, M, J., 1965, A new method for constrained optimizatior
with other methods, Comp. J. 8, 42-52. (7.1)
Box, M. J., 1966, A comparison of several current of
use of transformations in constrained problems, C
11,79)
Box, M. J,, Davies, D., and Swann, W. H., 1969, Non
lates, {CI Monograph No. $, Oliver
R
wement of linear least squares 5
linear least squares solutions 11, BIT 8,
ace Sear
Comp.
A note on the Dav ‘method for solving nonlinear
ws Report RC 3506, IBM T. J. Watson Research Lab, Yorktown
‘New York, to appear in JBAT.
cf Of algorithms for solving systems
3725, IBM, Yorktown He
algorithm for unconstrained.
Brown, K. M., and Dennis, JE. 1968, On Newton-like iteration functions: general
convergence and a specific algorithm, Numer. Marh. 12, 186-191.
ra)
Brown, K. M., and Dennis, J. E,
i97la, On the second order convergence
Brown’s method for solving si near equations, to appear. (7.1)
Brown, K. Mand Dennis, JE, 1971b, Derivative-ree analogues ofthe Levenberg-
Marquardt and Gauss algorithms for nonlinear least squares approximation
0 appear in Numer. Math, (7.1)rm sisuocearHy
Broyden, C. G., 1965, A class of methods for solving nonlinear simultaneous equations, Math. Comp. 19, 577-593. (7.1)
Broyden, C. G., 1967, Quasi-Newton methods and their appli
imization, Math. Comp. 21, 368-381. (7.1, 7.7)
Broyden, C. G., 1969, A new method of solving nonlinear simultaneous equations,
Comp. 5.12, 94-99. (7.1
Broyden, C. G., 1970a, The convergence of a class of doub!
algorithms, Pacts Land I,J. dast. Maths. Apps. 6, 16-90 a1
Broyden, C.G., 19708, The convergence of single-rank qui
Math, Comp. 24, 365-382. (7.1)
Buchler, R. J., sce Shah, Buehler, and Kempthorne (1964).
1d Golub, G. H., 1965, Linear least squares solutions by Hou
transformations, Numer. Math. 7, 269-276. (7.1)
Buys, J. D., see Haashoff and Buys (1970),
Cantrell, 1. W., 1969, Relation betwee
Newton methods,
memory gradient method and the
px. and Apps. 4, 67-11, (1.1)
840, Sur les fonctions
‘Oeuvres completes, Gaul
snousko, F. L., 1970, On opt
0a). A
(Cla, N.A., Cody, W. J, Hillstrom, K.E., and Thieleker, E.A.
‘Statistics of the FORTRAN IV (ID) library for the 18M System!360, Argonne
Nat. Lab, Report ANL-7321. (6.3)
Cody, W. 1, sce Clark, Cody, Hillstrom, and Thieleker (1967).
Collatz, L., 1964, Funktionalanalysis und numerische Mathematik, Springer-Verlag, Berlin (translation by H. Oser, Academic Press, New York, 1966). (3.1)
1A. R198, 4 comparative study of nonlietrprogranming codes, IBM.
New York etme Center Report 3202983, (71,77, 19)
Conte, S, 9, se Brown aed Conte (1967)
Cooper se Kelak and Cooper (1963)
Cox MG 1970, bracketing fecigue or computing 20 function, Comp.
cris iol-o2, a4
Cage. FE. and Levy, A.V. 168, Study of a upermemory ardent method fo
“the minimization of fancons, J Open Try and Ap 191-1)
aieuiocnarny 170
Crowder. H., and Wolfe, P., 1971, Linear convergence of the coniugate gradient
tthod, Report RC3330, IBM T. J. Watson Research Lab, Yorktown
Heights, New York, to appear in IBM Jour. Res. andi Dev. (74, 74)
Curry, H. B., 1944, The method of steepest descent for nonlinear minimization problems, Quart. Appl. Math. 2, 258-261. (7.1)
|. W.. 1967a, The conjugate gradient method for
‘operator equations, SIAM J. Numer, Anal 4, 10-26. (1
|. J. W,, 1967, Convergence of the conjugate gradient method with computa
Wenient modifications, Numer. Marh, 10, 125-131. (7.1)
J. W., 1970, A correction concerning the convergence rate for the conjugate
gradient method, S14 J. Numer. Anal. 7, 277-280. (7.1)
Davidon, W. C., 1959, Variable ization, Argonne Nat, Lab.
Report ANLL-990. (5.7, 7.1)
rear and nonlinear
1m for minimization, Comp, J. 10, 406-410.
Davidon, W. C., 1969, Variance algorithms for minim
4,79)
Davies, D., see Box, Davies, and Swann (1969), Matthews and Davies (1971),
‘Swann (1968),
Davis, P. J., 1965, tnterpotarion and approximation, 2nd ed., Blaisdell, New York
and London. (6.2)
ion, in Fletcher (19698).
Dejon, B., and Henrici, P. (eds), 1969, Constructive aspects of the fundamental
‘theorem of algebra, Interscience, New York
Dekker, TJ. (ed), 1963, The series 4P200 of procedures in ALGOL 60, The Mathe-
matical Centre, Amsterdarn,
ing a zoro by means of successive linear interpolation, in
169). (1.2, 44,4.2, 43,44)
324-330. (7.1)
ie local convergence of Brosden's method for nonlinea
is, Tech, Report 69-46, Dept. of Computer Science, Cornel
Dennis, J. E., 1969b, Ow the conrergence of Broyden’s method for nonlinear systems
of equations, Report 69-88, Dept. of Computer Science, Cornell Univ.
appear in Math. Comp, (7.1)
see Brown and Dennis (1968, 197la, 6).
E. W., See van Wijngaarden, Zonneveld, and Dijkstra (1963).
Dixon, L. C. W., 1971a, Variable ns: necessary ane sufficient cond
‘ions for idestcal behaviour on now-guadratic functions, Report 26, Nuse
Optimisation Centre, The Hatfield Polyte
Dixon, L. C. W., 19716,
appear. (7.1)
to1968), Springer Nerlag, Bet
nn, B., 1970b, see Spmposi
inger-Verlag, Betlin. (7.1)
Dreyfus, S. E., so Bellman and Dreyfus (1962)
Dyer, P,, see Hanson and Dyer (197).
B., see Dold and Eckman (1970s, b)
Ehslich, L, W., 1970, Eigenvalues of symmetric five-diagonal ms
aa)
Evans, J. P., and Gould, F. J, 1970, Stabi
18.7.)
Fiacco, A.V
a
Fiacco, A. V., 1969, A general regulatized sequential unconstrained minimization
technique, SI4M J. Appl, Math. 17, 1239-1245, (7
Fiacco, A. V., and Jones, A. P., 1969, Generalized penalty m
spaces, STAM J. Appl. Math. 17, 996-1000. (7.1)
Fiacco, A. V., and McCo
Flanagan, P.D.,
Oper. Res. 9, 184,
trical investigation
inear regression problems,
265-284. (6.4)
R., 1965, Fu
review, Comp. 5.8, 33-41. (12, T, 73, 75,1. 19)
Fletcher, R., 1966, C 251, Conun. ACM 9, 686. (7.1)
1g derivatives—a
of systems of non-linear equa 392-399. (7.1)
Fletcher, R., 1968b, Programming under tinea of inequa
ICI Manageme!
Fletcher, R. (ed), 1969, Opi
Fletcher, R., 1969, 4 class of
‘and convergence properties, Report TP 386, AERE, Harwell, Englan
Fletcher, R., 1969, A review of methods for unc
Fletcher (1969a). (71, 7.5)
Fletcher, R., 19694, A technigue for orthogonalization, J. Inst. Maths. Apps. 5,
162-166. (7.5)
Fletcher, R., 1970, A new approach
317-322. 7.1)
Fletcher, R., and Powell, M. J. D., 1963, A rapidly convergent descent method for minimization, Comp. J. 6, 163-168. (7.1, 7.7, 7.9)
Fletcher, R., and Reeves, C. M., 1964, Function minimization by conjugate gradients, Comp. J. 7, 149-154. (5.4, 7.1)
imiaation, in
strained 0)
o variable metric algo
Fiet
sieuioGRAPHY 175
of linear algel
"-Hall, Englewood Chifs, New Jersey. (7.2)
Fox, L., Henrie, P.
eigenvalues of el
Francis, J, 1962, The QR tr
mn, Comp. J. 4, 265-271. (7.4)
stein, F., and Roth, B., 1963, Num
‘equations, J. ACM 10, $50-556, (7.7)
‘a unitary analogue to the LR transforma-
corpon
Gill, PLE.
Tech. Report Mi
Goldfarb, D,
tion und
739-164. (
ically sable form ofthe simplex ale
Teddington, England. (7.1)
iable metric method to ma
nis, STAM J. Appl. Ata
Goldfarb, D., 1970, A family of variable-metric methods derived by
neans, Math. Comp. 24, 23-26, (7.1)
quadratic
Res. Mem. 95, Princeton Univ, (7.
Goldstein, A. A., 1962, Cauchy's method of
150. (7.1)
. On steepest desce
Price, 3. F.
84-189. (7
Foldstein, A. A., and Prive, 1. F197
25, 569-574. (6.1)
Golub, G. H., 1965, Numerical methods for solving linear least squares problems,
Numer. Math. 7, 206-216, (7.1)
Golub, G. H., see Businger and G.
5 (1970).
, W., 1965, Calculating the singular
inverse of a matrix, STAM J. Numer, Anal. 2, 205-224. (1.4)
Reinsch, C., 1970, Singular
14, 403-40. (7
inima, Math, Comp.
(1965), Bartels and Golub (1969), Bartels,
il pseudo-
n and least176 BieuIocRAPHY
approximation of ce)
C3 72, Compu
Golub, G.
sfuares sol
Gould, F.4., see Evans and
Graham, S..see Bauer, Heeker, and Gran
Greenst 1967, On
‘Comp, 21, 360-367. (1.2.7.1)
ve refinement of
iable metric methods, Math. Comp. 24,
1-22 (appendix by Y. Bard)
Gregory, R. T.. and Karney, D. L., 1969, 4 collection of 0
ices far testing
Gross, 0.
convex function, MAC (now Math, Comp.)
Haarhoft, P. C., and
imax search for a zero of &
jon of a
inear constrain . 178-184,
if, Addison-Wesley, Reading,
Massachusetts. (7.1)
Hanson, R. J., 1970, C quadratic programming problems: linear ineq
and equelity constraints, Tech, Memo. 240, JPL, Pasadena. (7.1)
Hanson, R. J, and Dyer, P., 1971, A computational algorithm for sequential
Comp. J. 14, 285-290, (7.1)
Hartley, H. O., te modified Gauss-Newton method for fitting of nonlinear
regression functions by least squares, Technomezrics 3, 269-280. (7.1)
Henrici, P., see Dejon and Henrici (1969), Fox, Henrici, and Maler (1967)
and James (1967)
trom, and Thieleker (1967).
Himsworth, P. R., see Spendley, Hext, and F tn (1962)
Hoare, C., see Wirth and Hoare (1966).
Hooke, R., and Jeeves, T. A., 1961, Direct search solution of numerical and statistical problems, J. ACM 8, 212-229. (7.1)
Householder, A. S., 1964, The theary of matrices in numerisol analysis, Blaisd
New York. (7.4)
Householder, A. S., 1970, The
McGraw-Hill, New York. (.1)
uang, H. ¥., 1910, Unified approach to quadratically convergent algorithms for
Finction minimization, J. Opizn. Tary. and Apps. 5, 405-423. (7.1)
and
eal
earment of a single nonlinear equation,
sreuocRariy 177
Isaacson, E., and Keller, H. B., 1966, Analysis of
York. (2.2, 2.4)
Jacoby, $. L.S., Kow:
‘aptimization proble
64,7.
James, F. D., see Pike, Hill, and James (1967).
Jarratt, P., 1967, An iterative method for locating turning points, Comp. J. 10,
82-84. (1.2, 3.1, 3.2, 36, 3.7, 38, 3.9, 5.1)
Jarratt, P,, 1968, A numerical m
35, (12, 3.1, 32, 3.6, 39)
Jeeves, T. A., see Hooke and Jeeves (1961),
merical methods,
ley, New
‘methods for nonlinear
few Jersey, to appear.
hod for determi
1 points of inflexion, BIT 8,
ft iterations for the solution of
bownds far the zeros, Report CS 138,
er Sci, Dept, Stanford Univ. (3.5)
Johnsen, §. E, J,, see Allran and Johnsen (1970),
Johnson, I. L., and Myers, G. E., 1967,
nd cuble fen
Spacecraft Center, Houston. (5.7)
Johnson, 8, M., 1955, Best exploration for
Report RM-1590. (5.3)
Johnson, S. M., see Gross and Johnson (1959), Bk
Drevfus (1962),
Jones, A. P., 1970, Spiral—a new algorithm for non-linear parameter estimation .
using least squares, Comp. J. 13, 301-308, (7.1)
Jones, A. P., see Fiseco and Jones (1969),
nds, Report No8-18823 (NASA), Manned
is Fibonacelan, RAND Corp.
Moscow (trans
York, 1964). (3.1)
Kaplan, J. L., see Mitchell and Kaplan (1968),
Karey, D, L,, see Gregory and Karney (1969),
Karp, R. M., and Miranker, W. L., 1968, Par
J. Comb,
inimax Search for a ma
ues, Comm. ACM 7, 38, (6.1)
. Buehler, and Kempt
inno andl Kettler (1969).
Kiefer, J., 1953, Sequential minimax search for a maximum, Proc. Amer. Math. Soc. 4, 502-506. (1.2)
1987, Optimal sequential search and approximation methods under
imptions, S/AM J. Appl. Math. 5, 105-136. (6.7)
ene (1964).178 sie.socRseRy
1969, The art of computer progran
a9)
Reading, Massac
E.G,, 1955, Sol
ik, 1. S,, and Osborne, M. R., 1968, Methods for unconstrained optimization
Elsevier, New York. (1.2,2.6,3.7, 5.3, $4,7.1, 77,79)
1d Ryan, D. M., 1969, A new method for con-
and Pizzo (1971
sions of Fibonaccian search to nonl
SIAM J. Control 6, 288-265. (5.3)
Krolak, P. D., and Cooper, L., 1963, An extension of Fi
bles, Comm. ACM 6, 639. (6.7)
onaccian search to several
thms for the solu!
of the complete
rang: Neuere Verfahren
eal methods of
‘Academic Press, New York. (7.1)
Lancaster, P., 1966, Error analysis for the Newton-Rapbson method, Num
Math, 9, 35-68. (5.2)
Lapidus, L, see Goldfarb and Lapidus (1968).
Lavi, A., and Vogl T. P. (eds), 1966, Recent advances in optimization techn
ey, New York. (7.1, 8)
Lawson, C.L., 1968, Bibliography of recent publicarions in approximation theory with
‘emphasis on computer applications, Tech, Mera, 201, IPL, Pasadena,
Leon, A., 1966, A comparison of eight known opt
Vogl (1966). (7.7, 7.9)
Levenberg, K., 1944, A method for the solution of certain non-linear problems in least squares, Quart. Appl. Math. 2, 164-168. (7.1)
Levy, A. V., see Cragg and Levy (1969).
ing procedures, in Lavi and
FA, 1968, Co
Report 23, 408. (
Lootsma, F. A.
970, Boundary properties of penalty Junetions for constrained
m by vector space methods, Wiley, New York
Hyperbolic pairs in the method of conjugate gradients,
1263-1267. (7.1)
STAM I. Appl. Mat
aibuiocnary 170
Luenberger, D. G., 1970, The conjugate residual method for constrained
tion problems, S/AMJ. Numer. Anal. 7, 390-398. (7.1)
Magee, E.J., 1960, An empirical of procedures for locating th
, Lincoln Lab, Report 22G-0046,
peak of a multiple-peak vegress
a2
18, O. L., 1969, Nonlinear programming, McGraw-Hill, New York. (7.1)
Marquardt, D. W., 1963, An algorithm for least squares estimation of nonlinear parameters, J. SIAM 11, 431-441. (7.1)
1 R.S., see Bowdler, Martin, Reinsch, and Wiki
1 R.S., Reinsch, C., and Wilkinson, J. H., 1968, Housel
mn of a symmet
Matthews, A.,.and Davies, D., 1971. A comparison of modified New
ined optimization, Comp. J. 14, 293-294, (7.1)
ick, G. P., 1969, The rate of convergence of the reset Davidon variable metric
‘method, MRC Report 1012, Univ. of Wisconsin. (1.2, 7.1, 7.8)
1ec0 and MeCormick (1968
‘MeCormick, G. P., and Pearson, J. D., 1969, Variable metric met
strained optimization, in Fletcher (19698). (1.2, 7.1, 7.7, 7.9)
Mead, R., see Nelder and Mead (1968).
Meinardus, G., 1967, Approximation of fumetions: theory and numerical methods,
Springer-Verlag, Berlin, (3.7)
‘Mendelsohn, J., see Flanagan, Vitale, and Mendelsolin (1969)
Miele, A., and Cantrell, J. W., 1969, Study of a memory gradient method for the ”
minimization of functions, J. Optan, Tiny. and Apps. 3, 459-470, (7-1)
Milne, W. E., 1949, Numerical eafeulus, Princeton Univ, Press, Princeton, New
Jersey. (2.2)
Milne-Thomson, L. M., 1933, The caleuhs offi
22
Mirenker, W.L., 1969, Pari
IBM Jour. Res, and Dev, 13, 297-301. (4.5, 5.7)
Miranker, W. L., see Chazan and Miranker (1970), Karp and
R. A., and Kaplan, J L., 1968, Nonlinear constrained optimiz
non-random complex method, J. Res. NBS (Er
Moler, C. B., see Forsythe and Moler (1967), Fox, Hetiei, and Moles (1967).
ferences, Mace
Nan, London,
hods for approximating the root of a function,
ternational syn
ling, Princeton, New Jersey, 1967. (7.1)
Murray, W., see
Murtagh, B. A., and
quadratica
13, 185-194, (7.1)
‘and Davidon methods,
‘4. Optzn. Thy. anu Apps. 2, 209-219, (7.1)180° s1eLioGRAeHY
Myers, G. E,, see Johnson and Myers (1967).
Naur, P. (ed.), 1963, Revised report on the algorithmic language ALGOL 60, Comm. ACM 6, 1-17.
Nelder, J. A., and Mead, R., 1965, A simplex method for function minimization, Comp. J. 7, 308-313. (7.1)
Newman, D. J., 1965, Location of the maximum on unimodal surfaces, J. ACM 12, 395-398.
1970),
Ortega, J. M., 1968, The Newton-Kantorovich theorem,
75, 658-660. 3
Press, New York. @.
‘on Powell's method for calculating orthogonal vectors,
as)
Osborne, M. R., see Kowalik and Osborne (1968), Kowalik, Osborne, and Ryan
(1969),
ear programming,
. Canberra. (7.1)
Osborne, M. R., and Ryan, D, M., 1971, On penalty function methods for nonlinear
programming problems, J. Math. Anal. Apps., to appear. (7-1)
Ostrowski, A. M., 1966, Solution of equations and systems of equations, Academie
Press, New York (2nd edition). (1.2, 3.1, 32, 3.6,3.7, 4.2, 5.1.7.1)
Ostrowski, A. M., 19672, Contributions to the theory of the method of steepest
descent, Arch. Rational Mech, Anal. 26,
Ostrowski, A. M., 1967b, The round-off stabil
Mech. 47, 77-82, (5.2)
lt, K. J, 1965,
methods, BIT 5, 284. (5.3)
Overholt, K. J., 1967, Note on Algorithm 2, Algorithm 16 end Algorithes 17,
Comp. 1.9, 414. 5.3)
Palmer, J. R,, 1969, An improved procedure for or
n Rosenbrock’s and Swan's direct sei
9. (7.5)
Parlett.B. N., 1971, Analysis of
13, 197-208. (7.4)
Pearson, J. D., 1969, Vs
178. 0.0)
Pearson, J. D.. soe MeCe
1970, A new m mizing a sum of s
tients, Comp. J. 13, 418-420, (7.1)
Peters, G., and Wilkinson, J. H., 1969, Eigenvalues of 4x = ZBx with band sy
metric 4 and B, Comp. J. 12, 398-404. (1.2, 4.1.4.2)
terations, Z. Angew. Math.
cr
y in the Fibonacci andthe golden section search
0% the search vectors
‘optimization methods, Comp. J
sorithms for reflections in bisectors, SIAM Review
‘ble metric methods of minimization, Comp. J. 12, 171-
s1auiocRAcny et
Pierte, D. A., 1969, Optimizat
Pietraykowski, T., 1969, An exact potent
J. Numer. Anal. 6, 29. 7-1)
Pike, M.C., H James, F. D., 1967, Note on Al
5.9, 416. (5.2)
» 1967, Algorithm 2, Fibonacci Search, Comp. Bulletin
ith applications,
ley, New York. (5.4)
tethod for constrained maxima, SLAM
Pike, M, C.
8, 147. (5.3)
Pike, M. C,, see Bell and Pike (1966),
Pinner, J., see Pike and Pixner (1967),
Pizzo, J.T, see Jacoby, Kowalik, and Pizza (1971).
Powell, M. J. D., 1962, An iterative method for
tion of several variables, Ci
Powell, M. J. D., 1964, An efficient method for finding the minimum of a function of several variables without calculating derivatives, Comp. J. 7, 155-162. (7.1, 7.3, 7.5, 7.6, 7.7, 7.8, 7.9)
965, A method of minimizing a sum of squares of not
nat calculating derivatives, Comp. J. 7, 303-307. (7.17.2)
'966, Minimization of functions of several variables, in Walsh
nis
J.5, 147-151. (1.7, 7.9)
of a fune-
n of orthogonal vectors, Comp. J
Powell, M. J. D., 1968, 4 FORTRAN subrou
eqations, Report R-$947, AERE, Han
Powell, M. J. D., 19692, 4 hy
AERE, Harve
Powell, M.J. D., 19696, On the convergence ofthe valable metric algorithn, Report
TP 382, AERE, Harwell, England. (7.1)
Powell, M. J. D., 1968¢, A theorem on rank one modifications to a matrix and its
verse, Comp, J. 12, 288-290. (7.1)
Powell, M. J. D., 1970, A survey of numerical methods for unconst
tion, SIAM Review 12, 79-97. (7.1)
M. 1. D., 1970b, A new algorithon for
TP 393, AFRE, Harwell, England. (7.1)
M. J. D., 19702, Rank one
adie (1970). (7.1)
Powell, M. J. D., 1970d, 4 FORTRAN minimization,
es of the objective function, Report R-6469, AERT,
re for solving systems of nor-lincar
land, 7
id method for nonlinear equations, Report TP 368,
England. (7.1)
red optimiza-
Pow
neonstrained optimization, Report
hods for unconstrained optimization, in
tization, Report TP
Powell, M. J. D., see Fletcher and Powell (1963).192 wwe.iocRaPHY
Price, J. F., see Goldste
Quandt, R. E., see Goldfeld, Quandt, and T
nd Price (1967, 1970).
er (1968).
Vol. 2, Wiley, New York
|, L. B., 1966, Convergence of the Newton process to multiple solu
Math. 9, 23-37. (7.1)
I, L. B., 1969, Comput
New York. (7.1)
Ralston, A., 1963, On differer
21,26)
A., 1965, A first course in numerical analysis, McGraw-Hill, New York.
(12,26)
Ralston, A..and Wi 1960, Mathematical methods for digitalcomputers,
Vol. 1, Wiley,
Ralston, A.,and Wilf, H-S. (eds), 1967, Mathematical methods for digital computers,
Vol. 2, Wiley, New York
Ramsay, J. O., 1970, A fa
413-417. 7.1,
hods for optimization, Comp, J. 13,
Some mumericel experiments on Zan
strained minimization, Working Paper ICSU 319, Univ. of London. (7.3)
Rheinboldt, W. C., see Ortega and Rheinboldt (1970).
Rice, J. R., 1970, Minimization and techniques in nonlinear approximation, SZAM
lies in Numer. Anal. 2, 80-98. (7.1)
P. Lu, 1968, e-caleulus, Report CS 105, Stanford Univ. (1.2, 5.3
1970, Bounds on a polynomial, J. Res. NBS 74B, 47-54, (1.2, 6.1)
Robbins, H., 1952, Some aspects of the sequer nis, Bull
mer. Math. Soe. $8, 827-836, (1.2)
Rosen, J. B., 1960, The gradient projection method for nonl
Pact 1: Linear constraints, J. SIAM 8, 181, (7.1)
Rosen, J. B., 1961, The gradient projection method for non!
Past 2: Nonlinear constraints, J. STAM 9, $14, (7.1)
Rosen, J. B., and Suzuki, S., 1968, Construs
problems, Comm. ACM 8, 113. (7.1)
Rosenbrock, H. H., 1960, An automatic method for finding the greatest or least value of a function, Comp. J. 3, 175-184. (6.8, 7.5, 7.7, 7.9)
Roth, B., see Freudenstein and Roth (1963).
Ryan, D. M., see Osborne and Ryan (1970, 1971), Kowalik, Osborne, and Ryan
196,
Sargent, R. W. H., see Murtagh and Sargent (1969, 1970)
Satterthwaite, E., see Bauer, Becker, and Graham (1968)
design of exper
ar programming,
1ear programming.
hear programing test
Saunders, M., see Gol
(1970).
Schriider, E., 1870, Uber unendlich vi
ungen, Math. Av, 2, 317-365. 3,
Schubert, L. K., 1970, Modification of a quasi-Newton method for
tions with a sparse Jacobian, Marh, Comp. 24, 27-30. (7.1)
RJ. and Kempthorne, ©., 1964, Some algorithms for
ion of several variables, STAM J. Appl. Math. 12, 74-92.
Saunders
169), Bartels, Golub, and Saunders
le Algori
‘aur Aufldsung der Gleich-
inear equa
izing a fur
ay
Shanno, D. F., 1970a, Parameter selection for modified Newton met
n minimization, S/4M J. Numer, Anal. 7, 366-372. (7.1)
inno, D. F.. 1970b, An accelerated gradient projection method for lined
STAM J. Appl. Math, 18, 322-334, (7.1)
inno, D. F., and Kettler, P, C., 1989, Optimal conditioning of quast-Newton
methods, Center for Math, Studies in Business and Economies Report 6937,
Univ. of Chicago. (7.1)
Smith, C. S., 1962, The automatic computation of maxi
NCB Sci. Dept, Report SC 846/MRJ40, (7.1.73)
L. B., see Golub and Smith (1967)
is for fune-
Spang, H. A., 1962, A review of
SIAM Review 4, 343-365, (7.1)
H., 1967, The damped Taylor series
‘and for solving syster
Spendley, W., Hext, G. Rv
implex desi
441. 7.1)
Sproull, R., see Swinehart and Sproull (1970),
Stewart, G. W., 1967, A modification of Davido
nee approximations of derivatives,
717,18)
Spa
imizing a sum of squares
ions, Comm. ACM 10, 726-728, (7.1)
th, F.R,, 1962, Seque:
AC-9, 105, (6.7)
Suzuki (1965),
2 on the development of a hod of
timization, ICI Ltd. Cent. last, Lab. Research Note 64/3, (1.2, 7.1, 73)
‘Swann (1960),Yet sie.ocnapHy
Swinchart, D., and Spr
Operating Note $7.1. 7.7)
Takahashi, I, 1965, A note on the conjugate gradient method, Information Pro:
cessing in Japan 5, 45-49. (7.1)
‘Thieleker, E. A., see Clark, Cody, Hillstrom, and Thieleker (1967)
Tornheim, L., 1964, Convergence of multipoint iterative methods, J. ACM 1
210-220. (3.2)
1964, Iterative methods for the solution of equ
fs, New Jersey. (2.2, 3.1,3.2, 4.5)
Traub, J. F., 1967, The solution of transcende:
(1967. B.1, 32)
‘Trotter, H. F., see Goldfeld, Quandt,
‘Teschach, H. G., see Kiinzi, Teschach, and Zehnder (1968)
and Mendelsoha (1968),
and Vogt (1966).
Voigt, R. G., 1971, Orders of convergence
‘Anal, 8, 22-243. (3.2, 7.1)
R,, 1970, SAIL, Stanford Arti
fence Project
ns, Prentice-Ha
ns, in Ralston and Wilf
iterative procedures, STAM J. Nu
Ish, J. (ed), 1966,
York.
M,, 1965, Algori
a.)
van Wijngaarden, A., Zonneveld, J. A., and Difkstra, B. W.
and AP230 de serie AP200, in Dekker (1
Wilde, D. J., 1964, Option seeking methods, Prentice-Hal
New Jersey. (1.2, 4.5, 5.3, 87, 7.1, 7.3)
Wilde, D. 4., and Beightler, C. S, 1967, Foundations of o
few Jersey. (7.1)
Wilf, H. S., see Ralston and Wilf (1960, 1967).
Jnson, J. Hy 1963, Rownding
42, 63,72)
Wilkinson, J. H., 1965a, The algebraic eigenvalue problem, Oxford Univ. Press, Oxford. (7.2, 7.4)
1965b, Error analysis of transfo
the form 1-2ww#, in Rall (1968). (7.4)
J. HL, 1967, Two algorithms based on
Report CS 60, Computer Soi, Dept., Stanford
Wilkinson, J. H., 1968, Global convergence of
TEIPS Congress (Edinburgh, 1968), (7.4)
Wilkinson (1969), Golub and Wilkinson (1966),
inson (1968), Bowdler, Martin, Reinsch, and Wilkin
1s in aleebraic processes, HMSO, London,
1, Oxford Univ, Press,
rations based on
he use of
wecessive linear interpolation,
11.2, 4.4, 4.2)
QR algor Proceedings of
Martin, Reinsch, and Wi
son (1968).
Winfield, D. H., 1967, Function
development of ALGOL,
Comm, ACM 9, 413-431. (1.1, 4.4, 5.6, 66, 7.9)
Witzgall, C., 1969, Fibonaect search with arbitrary frst evaluation, Report D1-82-
0916, Roeing Scientific Research Labs.
Woll, P,, 1959, The secant method for sim:
ACM 2, 12-13, (7.1)
Wolle, P., 1963, Methods of nonlinear programming,
programming, edited by R. L.. Graves and
a
Wolle, P., 1969, Convergence condi
226-235. (7.1)
Wolfe, P., 1971, Convergence conditions for ascent methods IT: some corrections,
Review 13, 185-188. (7.1)
see Crowder and Wolfe (1971), Winograd and Wolfe
Zadeh, L. A. (ed.), 1969, Computing methads in opt
Academic Press, New York, (7.1)
W. 1, 19672, Minimizing
Comp. 4. 10, 293-296, (7.1, 7.3)
|, W. 1., 1967b, Nonlinear programming via penalty functions, Mem. Sci
344-358. (7.1)
71),
cation problems, Vol. 2,
ition without calculating derivatives,
near progre
s, New Jersey, (7.1)
algorithms, Mgmt. Sei, 16 1. (7.1)
ler, C. A., ste Kin2i
ns, J ACM
ik, G., 1960, Methods of feasible direct
rork. (7.1)
Zoutendiik, G., 1966, Nonlinear programming: a numerical survey, SIAM J.
Control 4, 194-210. (7.1)
APPENDIX

FORTRAN translations of the ALGOL procedures ZERO, LOCALMIN, and GLOMIN are given below. The FORTRAN subroutines follow the ALGOL procedures as closely as possible, and have been tested with a FORTRAN H compiler on an IBM 360/91.

The FORTRAN listings themselves — the translation of procedure ZERO (see the comments in Section 4), of procedure LOCALMIN (Section 5), and of procedure GLOMIN (Section 6) — are too badly damaged in this copy to reproduce; see the corresponding ALGOL procedures for comments.
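The idea behind the procedure ZERO translated above can be sketched as a bracketing method that mixes guaranteed bisection steps with faster secant steps. This is a deliberate simplification, not the book's algorithm, which also uses inverse quadratic interpolation and careful machine-precision tolerance tests:

```python
# Sketch of the bracketing idea behind ZERO: keep a sign change [a, b],
# try a secant step for speed, and force a bisection every other step so
# the bracket provably shrinks.  Simplified; not the book's algorithm.

def zero_sketch(f, a, b, tol=1e-12):
    fa, fb = f(a), f(b)
    assert fa * fb <= 0.0, "root must be bracketed"
    bisect = False
    while abs(b - a) > tol:
        m = 0.5 * (a + b)                      # guaranteed bisection point
        if bisect or fb == fa:
            x = m
        else:
            s = b - fb * (b - a) / (fb - fa)   # secant step from (a, b)
            x = s if min(a, b) < s < max(a, b) else m
        bisect = not bisect                    # alternate to ensure progress
        fx = f(x)
        if fa * fx <= 0.0:                     # keep the sign change inside
            b, fb = x, fx
        else:
            a, fa = x, fx
    return 0.5 * (a + b)

root = zero_sketch(lambda x: x * x - 2.0, 0.0, 2.0)
```

Forcing the bisection every other step is the crude counterpart of the book's safeguard: without it, a sequence of secant steps can pile up against one endpoint and the bracket never shrinks below the tolerance.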
INDEX

ALGOL 60, 3, 58-60, 79-80, 112-15
ALGOL W, 3, 54, 76, 103, 110, 137, 155-67
Barrier function, 117
Base, 51, 92
Beale's function, 138
Convergence, order, 21-22, 28-46, 54, 57, 76, 79
  quadratic, 8, 119, 124, 127, 143
Cox's algorithm, 56-57
Dekker's algorithm, 6, 48-50, 55-57, 72
δ-monotonicity, 68
δ-unimodality, 7, 64, 68-71, 73
Descent property, 120
Difference, divided, 4, 5
Difference operators, 84, 105
Floating-point arithmetic, 51-53, 63-65, 68, 76, 92-97, 123
  overflow, 51
  underflow, 51
FORTRAN, 3
Guard digit, 92, 95
PDP 10 computer, 137
Penalty function, 117
Pseudo-random numbers
SAIL, 137
Scaling, 124, 131-32
Searching an ordered file
Singular function, 138, 145, 150
Errata for Algorithms for Minimization without Derivatives
(Prentice-Hall edition)
- Page 80, line 11: "p q x (a - x)". The corresponding FORTRAN code on page 189 is correct. Thanks to Jason M. Lenthe for finding this error.
- Page 163, immediately before the line reading

  COMMENT: TRANSPOSE V FOR MINFIT;

  insert the following:

  FOR J := 1 UNTIL N DO V(I,J) := SL*V(I,J) END END;