
Lecture 2: Problems and solutions. Optimality conditions for unconstrained optimization (continued)
Coralia Cartis, Mathematical Institute, University of Oxford

C6.2/B2: Continuous Optimization

Summary of first-order conditions. A look ahead

minimize f(x) subject to x ∈ Rn.   (UP)


First-order necessary optimality conditions: f ∈ C1(Rn);
x∗ a local minimizer of f =⇒ ∇f(x∗) = 0.

The same condition holds at a maximizer: x̃ = arg maxx∈Rn f(x) =⇒ ∇f(x̃) = 0.

[Figure: a one-dimensional f on [a, b] with two stationary points x1 and x2, one a minimizer and one a maximizer; the gradient vanishes at both.]

Look at higher-order derivatives to distinguish between minimizers and maximizers . . . except for convex functions.

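A minimal numerical sketch (not part of the original slides): both a local minimizer and a local maximizer satisfy the first-order condition. The example f(x) = x^3 - 3x and the Python code are illustrative assumptions added here, not material from the lecture.

# First-order condition f'(x*) = 0 holds at minimizers AND maximizers,
# so it cannot tell them apart on its own.
# Hypothetical example: f(x) = x^3 - 3x has a local maximizer at x = -1
# and a local minimizer at x = +1.
def f(x):
    return x**3 - 3*x

def fprime(x):
    return 3*x**2 - 3

def fsecond(x):
    return 6*x

for x_star in (-1.0, 1.0):
    print(f"x* = {x_star:+.0f}: f'(x*) = {fprime(x_star):.0f}, "
          f"f''(x*) = {fsecond(x_star):+.0f}, f(x*) = {f(x_star):+.0f}")
# Both points have f'(x*) = 0; the sign of f''(x*) (-6 vs +6) is what
# distinguishes the maximizer from the minimizer.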
Second-order optimality conditions (nonconvex fcts.)

Second-order necessary conditions: f ∈ C 2 (Rn );


x∗ local minimizer of f =⇒ ∇2 f (x∗ ) positive semidefinite, i.e.,
sT ∇2 f (x∗ )s ≥ 0, for all s ∈ Rn . [local convexity]
Example: f(x) := x3; x∗ = 0 is not a local minimizer, but
f'(0) = f''(0) = 0. The second-order necessary conditions are not sufficient.

Second-order sufficient conditions: f ∈ C 2 (Rn );


∇f(x∗) = 0 and ∇2 f(x∗) positive definite, namely,
sT ∇2 f(x∗)s > 0, for all s ≠ 0,
=⇒ x∗ (strict) local minimizer of f.

Example: f(x) := x4; x∗ = 0 is a (strict) local minimizer, but
f''(0) = 0. The second-order sufficient conditions are not necessary.

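A short Python sketch (an assumption added for illustration, not from the slides): classifying a stationary point from the Hessian spectrum, mirroring the necessary/sufficient distinction above. The helper name classify and the example Hessians are hypothetical.

import numpy as np

# Classify a stationary point from the eigenvalues of its (symmetric) Hessian.
def classify(hessian, tol=1e-10):
    eig = np.linalg.eigvalsh(hessian)
    if np.all(eig > tol):
        return "positive definite: strict local minimizer (sufficient conditions hold)"
    if np.all(eig >= -tol):
        return "positive semidefinite: necessary conditions hold, test inconclusive"
    return "negative curvature present: not a local minimizer"

print(classify(np.array([[2.0, 0.0], [0.0, 6.0]])))   # e.g. f(x, y) = x^2 + 3y^2 at 0
print(classify(np.array([[0.0]])))                    # e.g. f(x) = x^3 or x^4 at 0: f''(0) = 0
print(classify(np.array([[2.0, 0.0], [0.0, -2.0]])))  # e.g. f(x, y) = x^2 - y^2 at 0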
Proof of second-order conditions

Recall second-order Taylor expansions (see (4) and (5)
earlier, Lecture 1): let x and s in Rn be fixed; then for any
α > 0, we have
f(x + αs) = f(x) + α sT ∇f(x) + (α2/2) sT ∇2 f(x + α̃s)s   (5)
for some α̃ ∈ (0, α).
Proof of second-order necessary conditions. Assume there
exists s ∈ Rn with sT ∇2 f(x∗)s < 0. Then s ≠ 0 and f ∈ C2
imply there exists α̂ > 0 such that
sT ∇2 f(x∗ + αs)s < 0 for all α ∈ [0, α̂].   (6)
Let α ∈ (0, α̂]. Then (5) with x = x∗ and ∇f(x∗) = 0 imply
f(x∗ + αs) = f(x∗) + (α2/2) sT ∇2 f(x∗ + α̃s)s   (7)
for some α̃ ∈ (0, α). Since 0 < α̃ < α ≤ α̂, (6) implies that
sT ∇2 f(x∗ + α̃s)s < 0. Thus (7) implies f(x∗ + αs) < f(x∗), and
this holds for all α ∈ (0, α̂]. Contradiction, as x∗ is a local minimizer. □

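The mechanism behind the contradiction can be seen numerically. The following sketch uses the assumed saddle f(x, y) = x^2 - y^2 (an illustrative example, not one from the slides).

import numpy as np

# At x* = 0 for f(x, y) = x^2 - y^2, the direction s = (0, 1) has
# s^T Hess f(x*) s = -2 < 0; moving along s strictly decreases f for
# every small step alpha > 0, so x* cannot be a local minimizer.
def f(v):
    x, y = v
    return x**2 - y**2

x_star = np.zeros(2)
s = np.array([0.0, 1.0])
for alpha in (1e-1, 1e-2, 1e-3):
    print(f"alpha = {alpha:.0e}: f(x* + alpha*s) = {f(x_star + alpha*s):.2e} < f(x*) = 0")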
Proof of second-order conditions ...

Proof of second-order sufficient conditions. f ∈ C2 and
∇2 f(x∗) ≻ 0 imply that there exists a neighbourhood N(x∗, δ)
of x∗ such that
∇2 f(x∗ + s) ≻ 0 for all x∗ + s ∈ N(x∗, δ).   (8)
Use (5) with x = x∗, α = 1 and for any s with x∗ + s ∈ N(x∗, δ):
f(x∗ + s) = f(x∗) + sT ∇f(x∗) + (1/2) sT ∇2 f(x∗ + α̃s)s   (9)
for some α̃ ∈ (0, 1).
Note that ‖x∗ + α̃s − x∗‖ = α̃‖s‖ ≤ δ since α̃ ∈ (0, 1) and
x∗ + s ∈ N(x∗, δ) (so that ‖s‖ ≤ δ); thus x∗ + α̃s ∈ N(x∗, δ),
which ensures that ∇2 f(x∗ + α̃s) ≻ 0 due to (8).
This and (9), together with ∇f(x∗) = 0, imply that the last term in (9)
is strictly positive for s ≠ 0, so f(x∗ + s) > f(x∗) for
all s ≠ 0 with x∗ + s ∈ N(x∗, δ), q.e.d. □

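A quick numerical check of this conclusion (an illustrative assumption, not part of the proof): take f(x, y) = x^2 + xy + y^2, whose Hessian [[2, 1], [1, 2]] at the stationary point x∗ = 0 is positive definite, so small nonzero steps s should give f(x∗ + s) > f(x∗) = 0.

import numpy as np

# f(x, y) = x^2 + x*y + y^2 has x* = 0 with positive definite Hessian [[2, 1], [1, 2]].
def f(x, y):
    return x**2 + x*y + y**2

rng = np.random.default_rng(0)
steps = 1e-3 * rng.standard_normal((1000, 2))      # random small nonzero steps s
values = f(steps[:, 0], steps[:, 1])               # f(x* + s) with x* = 0
print("min f(x* + s) over samples:", values.min()) # strictly positive, as (8)-(9) predict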
Stationary points of quadratic functions

H ∈ Rn×n symmetric (wlog), g ∈ Rn; q the quadratic function


q(x) := gT x + (1/2) xT Hx.
∇q(x∗ ) = 0 ⇐⇒ Hx∗ + g = 0: linear system.
∇2 q(x) = H for all x ∈ Rn .
H nonsingular: x∗ = −H−1g is the unique stationary point (computed in the sketch below).
H positive definite =⇒ x∗ minimizer ((a), (e)).
H negative definite =⇒ x∗ maximizer ((b), (e)).
H indefinite =⇒ x∗ saddle point ((c), (f)).
H singular and g + Hx = 0 consistent:
H positive semidefinite =⇒ infinitely many global
minimizers ((d), (g)).
Similarly for H negative semidefinite or indefinite.
General f: approximately locally quadratic around a stationary point x∗.

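A minimal Python sketch of this recipe (the matrix H and vector g below are hypothetical data, not taken from the lecture): solve Hx∗ = −g for the stationary point and read off its type from the eigenvalues of H.

import numpy as np

# Stationary point of q(x) = g^T x + (1/2) x^T H x with H symmetric nonsingular:
# solve the linear system H x* = -g, then classify via the spectrum of H,
# since Hess q(x) = H for every x.
H = np.array([[3.0, 1.0],
              [1.0, 2.0]])        # symmetric, positive definite in this example
g = np.array([1.0, -1.0])

x_star = np.linalg.solve(H, -g)   # preferable to forming H^{-1} explicitly
print("stationary point x*:", x_star)
print("gradient H x* + g  :", H @ x_star + g)          # ~ 0
print("eigenvalues of H   :", np.linalg.eigvalsh(H))   # all > 0 => x* is the global minimizer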
Stationary points of quadratic functions...

[Figure: surface plots of quadratic functions with stationary point x∗ — (a) minimum, (b) maximum, (c) saddle, (d) semidefinite — and the corresponding contour plots in (x1, x2) — (e) maximum or minimum, (f) saddle, (g) semidefinite.]
