Extrema of Multivariable Functions

The document provides an introduction to the optimization of multivariable functions, covering both constrained and unconstrained optimization methods. It defines key concepts such as absolute and local extrema, critical points, and the necessary conditions for optimality using first and second-order derivatives. Additionally, it discusses the nature of critical points and provides examples to illustrate these concepts.

Mathematical analysis 3
Chapter 4: Extrema of multivariable functions
2024/2025


Course outline

1 Introduction to optimization of multivariable functions
2 Optimization without constraints
3 Optimization with constraints
    Direct method
    Lagrange multipliers


Absolute maximum and minimum values

Let D be a subset of R^m and let p ∈ D.

Definition (Absolute maximum and minimum values)
A function f defined on D has
an absolute minimum at p if ∀u ∈ D : f(p) ≤ f(u),
an absolute maximum at p if ∀u ∈ D : f(p) ≥ f(u),
an absolute extremum at p if it has an absolute minimum or an absolute maximum at p.


Local maximum and minimum values

Definition (Local maximum and minimum values)
A function f defined on an open set D has
a local minimum at p if ∃V(p) ⊂ D such that ∀u ∈ V(p) ∩ D : f(p) ≤ f(u),
a local maximum at p if ∃V(p) ⊂ D such that ∀u ∈ V(p) ∩ D : f(p) ≥ f(u),
a local extremum at p if it has a local minimum or a local maximum at p.


Some remarks

Remarks
A local extremum may not be absolute.
A function f has a global extremum at p over D if f(u) − f(p) does not change sign for all u ∈ D, and a local extremum at p over an open set D if there exists V(p) ⊂ D such that f(u) − f(p) does not change sign for all u ∈ V(p) ∩ D.




First-order necessary optimality condition

Definition (Critical point)
Let D be an open set in R^m. A point p ∈ D is called a critical point of a differentiable function f : D → R if ∇f(p) = 0.

The following result gives a necessary condition for a point to be an extremum:

Theorem (First-order necessary optimality condition)
Let D be an open set in R^m and f : D → R be a C¹ function. If f has a local extremum at a point p ∈ D, then p is necessarily a critical point.

Remark. The converse is false.


Nature of critical points

To determine the nature of critical points, we have two methods:
by the definition,
using second-order partial derivatives.
Otherwise, we use the Taylor formula around the critical point to a high enough order.
Study of extrema using the definition: Examples

Example. Let f(x, y) = x² + y² + xy. We have f ∈ C¹(R²).
The critical points of f: We have

∇f(x, y) = (∂f/∂x, ∂f/∂y)^T = (2x + y, x + 2y)^T.

Thus, (x, y) is a critical point of f if and only if

2x + y = 0,
x + 2y = 0,

which implies x = 0, y = 0.
Nature of the critical points: We have

f(x, y) − f(0, 0) = (x + y/2)² + (3/4)y² > 0   ∀(x, y) ∈ R² \ {(0, 0)}.

Therefore, f has a unique extremum at (0, 0), which is a global minimum.
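The critical-point computation above can be checked symbolically. The following is a minimal sketch using SymPy (a tool choice assumed here for illustration; it is not part of the course):

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)
f = x**2 + y**2 + x*y

# Gradient of f: (2x + y, x + 2y)
grad = [sp.diff(f, v) for v in (x, y)]

# Solve ∇f = 0: the only critical point is (0, 0)
critical = sp.solve(grad, [x, y], dict=True)
print(critical)  # [{x: 0, y: 0}]

# Confirm the completed-square identity used to classify it
assert sp.simplify(f - ((x + y/2)**2 + sp.Rational(3, 4)*y**2)) == 0
```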
Example. Let f(x, y) = x² + y² − 2x − 4y. We have f ∈ C¹(R²).
The critical points of f: We have

∇f(x, y) = (∂f/∂x, ∂f/∂y)^T = (2x − 2, 2y − 4)^T.

Thus, (x, y) is a critical point of f if and only if

2x − 2 = 0,
2y − 4 = 0,

which implies x = 1, y = 2.


Nature of the critical points: We use the change of variables (x, y) = (h + 1, k + 2). Then,

f(x, y) − f(1, 2) = f(h + 1, k + 2) − f(1, 2)
= (h + 1)² + (k + 2)² − 2(h + 1) − 4(k + 2) + 5
= h² + k² ≥ 0, with equality only at (h, k) = (0, 0).

Therefore, f has a unique extremum at (1, 2), which is a global minimum.


Example. Let f(x, y) = x³ + y². We have f ∈ C¹(R²).
The critical points of f: We have

∇f(x, y) = (∂f/∂x, ∂f/∂y)^T = (3x², 2y)^T.

Thus, (x, y) is a critical point of f if and only if

3x² = 0,
2y = 0,

which implies x = 0, y = 0.


Nature of the critical points: We observe that

f(−1, 0) = −1 < f(0, 0) = 0 < 1 = f(1, 0),

thus f does not have a global extremum at (0, 0). Furthermore,

f(x, 0) − f(0, 0) = x³ < 0 if x < 0,
f(x, 0) − f(0, 0) = x³ > 0 if x > 0.

Therefore, f(x, y) − f(0, 0) changes sign in any neighborhood of (0, 0). We conclude that f does not even have a local extremum at (0, 0).


Second-order optimality conditions

We generalize to functions of several variables what is known for extrema of real functions of one variable, namely: if f : R → R is C² in a neighborhood of a critical point p (i.e., f′(p) = 0), then:
if f″(p) > 0, f has a local minimum at p,
if f″(p) < 0, f has a local maximum at p,
if f″(p) = 0, we cannot conclude; further calculations are needed (for example, a Taylor expansion of order greater than 2).
For functions of several variables, if f is C² in a neighborhood of a critical point p, we will focus on a certain notion of positivity of the Hessian matrix.


Notions of "positive definite" and "negative definite" matrix

Definition
A symmetric matrix A ∈ Mm(R) is:
positive definite if ∀X ∈ R^m \ {0}, X^T A X > 0,
negative definite if −A is positive definite, i.e., ∀X ∈ R^m \ {0}, X^T A X < 0,
indefinite if A is neither positive definite nor negative definite.


The following rule is very practical for determining whether a matrix is positive definite:

Proposition (Sylvester's criterion)
A symmetric matrix A ∈ Mm(R) is positive definite if and only if ∀k ∈ {1, . . . , m}, det(∆k) > 0, where det(∆k) is the k-th leading principal minor of A.

Example. The matrix

A = | 2  1 |
    | 1  2 |

is positive definite, since det(∆1) = 2 > 0 and det(∆2) = det(A) = 3 > 0.

Proposition
A symmetric matrix A is positive definite if and only if sp(A) ⊂ R*₊, i.e., all eigenvalues of A are strictly positive.
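Both characterizations of positive definiteness can be checked numerically. A minimal NumPy sketch (an illustrative aside, not part of the course):

```python
import numpy as np

def is_positive_definite(A):
    """Sylvester's criterion: every leading principal minor is > 0."""
    A = np.asarray(A, dtype=float)
    return all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, A.shape[0] + 1))

A = np.array([[2.0, 1.0], [1.0, 2.0]])
print(is_positive_definite(A))            # True: minors 2 and 3 are positive
print(np.all(np.linalg.eigvalsh(A) > 0))  # True: sp(A) ⊂ R*+

# An indefinite matrix fails the criterion
B = np.array([[2.0, 0.0], [0.0, -10.0]])
print(is_positive_definite(B))            # False
```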


Example: Let f(x, y) = x² + y² + 2y³. We have

∇²f(x, y) = | 2      0      |
            | 0   2 + 12y   |

In particular,

∇²f(0, 0) = | 2  0 |
            | 0  2 |

is a positive definite matrix.

∇²f(x, −1) = | 2    0  |
             | 0  −10  |

is an invertible and indefinite matrix.

∇²f(0, −1/6) = | 2  0 |
               | 0  0 |

is a non-invertible matrix.
Fundamental theorem

Theorem
Let D be an open set in R^m and f : D → R be a function of class C² with p a critical point of f (i.e., ∇f(p) = 0). Then,
if Hess(f)(p) is positive definite, then f has a strict local minimum at p,
if Hess(f)(p) is negative definite, then f has a strict local maximum at p,
if Hess(f)(p) is invertible and indefinite, then f has neither a local maximum nor a local minimum at p (i.e., p is a saddle point),
if Hess(f)(p) is non-invertible, no conclusion can be drawn. In this case, the critical point p is said to be degenerate; another method is needed to determine its nature, i.e., to determine the sign of f(p + h) − f(p) for h very close to 0 in R^m.


Some examples

Example.
Let f be the function defined on R³ by f(x, y, z) = x² + y² + z². The only critical point of f is p = (0, 0, 0). Using the Hessian of f, we deduce that f has a strict local minimum at (0, 0, 0).

Example.
Let f be the function defined on R³ by f(x, y, z) = x³ − 3x + y² − 2y + z². The function f has 2 critical points, (1, 1, 0) and (−1, 1, 0).
Since Hess(f)(1, 1, 0) is positive definite, f has a strict local minimum at (1, 1, 0).
Also, since Hess(f)(−1, 1, 0) is invertible and indefinite, f has neither a local maximum nor a local minimum at (−1, 1, 0) (it is a saddle point).
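The second example can be reproduced symbolically: find the critical points, then test the eigenvalues of the Hessian at each one. A SymPy sketch (illustrative only; the function names are standard SymPy, not course notation):

```python
import sympy as sp

x, y, z = sp.symbols("x y z", real=True)
f = x**3 - 3*x + y**2 - 2*y + z**2
vars_ = (x, y, z)

grad = [sp.diff(f, v) for v in vars_]
H = sp.hessian(f, vars_)  # diag(6x, 2, 2)

# Critical points: x = ±1, y = 1, z = 0
points = sp.solve(grad, vars_, dict=True)

for pt in points:
    eigs = list(H.subs(pt).eigenvals())
    if all(e > 0 for e in eigs):
        print(pt, "-> strict local minimum")
    elif all(e < 0 for e in eigs):
        print(pt, "-> strict local maximum")
    elif all(e != 0 for e in eigs):
        print(pt, "-> saddle point")
```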


Special case of functions of two variables

For a function f of class C², let us define

c = ∂f/∂x(α, β),   d = ∂f/∂y(α, β),   ∇f(α, β) = (c, d)^T,
r = ∂²f/∂x²(α, β),   s = ∂²f/∂x∂y(α, β),   t = ∂²f/∂y²(α, β).

Theorem (Monge's theorem)
Let D be an open set in R², f : D ⊆ R² → R be a function of class C², and p = (α, β) ∈ D. Then (α, β) is a critical point if and only if c = d = 0. Furthermore,
If rt − s² > 0 and r > 0, then f has a strict local minimum at p.
If rt − s² > 0 and r < 0, then f has a strict local maximum at p.
If rt − s² < 0, then f does not have an extremum at p (saddle point).
If rt − s² = 0 (the critical point p is degenerate), no conclusion can be drawn.
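Monge's test is short enough to state as code. A minimal SymPy sketch (the helper name is hypothetical, chosen for illustration):

```python
import sympy as sp

def classify_critical_point(f, x, y, alpha, beta):
    """Monge's test at a critical point (alpha, beta) of f(x, y),
    using r = f_xx, s = f_xy, t = f_yy."""
    at = {x: alpha, y: beta}
    r = sp.diff(f, x, 2).subs(at)
    s = sp.diff(f, x, y).subs(at)
    t = sp.diff(f, y, 2).subs(at)
    D = r*t - s**2
    if D > 0:
        return "strict local minimum" if r > 0 else "strict local maximum"
    if D < 0:
        return "saddle point"
    return "degenerate: no conclusion"

x, y = sp.symbols("x y", real=True)
print(classify_critical_point(x**2 + y**2 + x*y, x, y, 0, 0))  # strict local minimum
print(classify_critical_point(x**2 - y**2, x, y, 0, 0))        # saddle point
print(classify_critical_point(x**3 + y**2, x, y, 0, 0))        # degenerate: no conclusion
```

Note the last call: for f(x, y) = x³ + y² the test is inconclusive at (0, 0), which is consistent with the earlier direct study showing that f has no extremum there.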


Introduction

What is the problem?
Determine the local extrema of a function f on a set A, where

A = {x ∈ D | g1(x) = 0, . . . , gk(x) = 0},

D is an open subset of R^m, and f, g1, . . . , gk : D → R (k ≤ m − 1).


Definition
We say that a function f has a maximum (resp. minimum) at p ∈ A under the constraints g1(x) = 0, . . . , gk(x) = 0 if the restriction f|A has a maximum (resp. minimum) at the point p, that is,

g1(p) = . . . = gk(p) = 0,

and

f(p) = max f(x) over {x ∈ D, g1(x) = . . . = gk(x) = 0}
or
f(p) = min f(x) over {x ∈ D, g1(x) = . . . = gk(x) = 0}.




How to proceed?
To solve this problem, two methods are proposed:

1 Direct method,
2 Lagrange multipliers.




Direct method

The case n = 2
To find the extrema of f(x, y) on U = {(x, y) | g(x, y) = 0}, the direct method consists of solving the equation g(x, y) = 0 explicitly, expressing y as a function of x (or vice versa), and substituting into f(x, y). One must then find the extrema of this new function, which is a function of one variable, and go back to f.


Example: Find the extrema (if they exist) of f:

f(x, y) = −x + y²

subject to the constraint x − 2y = −1.
Answer: Df = R², f is C¹ on R² because it is a polynomial.
We need to determine the extrema of f under the constraint h(x, y) = x − 2y + 1 = 0.
We have x − 2y = −1 ⇒ x = 2y − 1. The problem reduces to finding the extrema of the function g:

g(y) = f(2y − 1, y) = −(2y − 1) + y² = y² − 2y + 1.


Let us study the variations of g: g′(y) = 2y − 2, g″(y) = 2, and g′(y) = 0 ⇒ y = 1:

y      | −∞             1            +∞
g′(y)  |        −       0       +
g      |        ↘    minimum    ↗

So g has a minimum at y0 = 1 (it is a global minimum). Going back to x0 = 2y0 − 1 = 1, we obtain that the point (1, 1), with value f(1, 1) = 0, is a constrained minimum of f.
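The substitution step of the direct method can be sketched symbolically with SymPy (illustrative only):

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)
f = -x + y**2          # objective
# Constraint x - 2y = -1 gives x = 2y - 1; substitute into f
g = f.subs(x, 2*y - 1)  # g(y) = y**2 - 2*y + 1

crit = sp.solve(sp.diff(g, y), y)  # [1]
print(crit, sp.diff(g, y, 2))      # g''(y) = 2 > 0, so y = 1 is a minimum
```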


The case n = 3
• To find the extrema of f(x, y, z) on U = {g1(x, y, z) = 0, g2(x, y, z) = 0}, the method consists of solving the system of equations

g1(x, y, z) = 0,
g2(x, y, z) = 0,

and expressing 2 variables in terms of the third, then substituting into f(x, y, z). One must then find the extrema of this new function, which is a function of one variable, and go back to f.
• To find the extrema of f(x, y, z) on U = {g(x, y, z) = 0}, the method consists of solving the equation g(x, y, z) = 0 and expressing 1 variable in terms of the other two, then substituting into f(x, y, z). One must then find the free extrema of this new function, which is a function of two variables, and go back to f.
Example: Find the extrema (if they exist) of f:

f(x, y, z) = x² + y² + z²

subject to the constraints

A = {(x, y, z) ∈ R³ | x + y + z = 3, z = 1}.

Answer: Df = R³, f is C∞ on R³ because it is a polynomial.
We need to determine the extrema of f under the constraints ϕ1 and ϕ2:

ϕ1(x, y, z) = x + y + z − 3,   ϕ2(x, y, z) = z − 1.

We have x + y + z − 3 = 0 and z − 1 = 0.


This implies y = 2 − x and z = 1. The problem reduces to finding the extrema of the function g:

g(x) = f(x, 2 − x, 1) = x² + (2 − x)² + 1 = 2x² − 4x + 5.

Let us study the variations of g: g′(x) = 4x − 4, and g′(x) = 0 ⇒ x = 1:

x      | −∞             1            +∞
g′(x)  |        −       0       +
g      |        ↘    minimum    ↗

So g has a minimum at x0 = 1. Going back to y0 = 2 − x0 = 1 and z0 = 1, we obtain that the point (1, 1, 1), with value f(1, 1, 1) = 3, is a constrained minimum of f.
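The same elimination can be checked symbolically (a SymPy sketch, illustrative only):

```python
import sympy as sp

x = sp.symbols("x", real=True)
# After eliminating y = 2 - x and z = 1 from the two constraints:
g = x**2 + (2 - x)**2 + 1

g_expanded = sp.expand(g)            # 2*x**2 - 4*x + 5
x0 = sp.solve(sp.diff(g, x), x)[0]   # x0 = 1
print(x0, g.subs(x, x0))             # minimum value f(1, 1, 1) = 3
```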


Theorem (Lagrange multipliers)
Let f be a real-valued function on U and let a be an element of A. Suppose that f and ϕ1, ϕ2, . . . , ϕp are C¹ functions. If:
(a, f(a)) is an extremum of f under the constraints ϕ1, ϕ2, . . . , ϕp,
the vectors ∇ϕ1(a), ∇ϕ2(a), . . . , ∇ϕp(a) are linearly independent,
then there exist p real numbers λ1, λ2, . . . , λp and an auxiliary function

F(X) = f(X) + λ1ϕ1(X) + λ2ϕ2(X) + . . . + λpϕp(X),

such that a1, a2, . . . , an and λ1, λ2, . . . , λp are solutions of the system (S):

∂F/∂x1(x1, . . . , xn) = 0
. . .
∂F/∂xn(x1, . . . , xn) = 0
ϕ1(x1, . . . , xn) = 0
. . .
ϕp(x1, . . . , xn) = 0


Lagrange multipliers

Remarks
1) The points (a1, a2, . . . , an) resulting from the solutions of (S) are called critical points of f under the constraints ϕ1, ϕ2, . . . , ϕp.
2) The working method consists of:
First determining the "doubtful" points, i.e., those where the vectors ∇ϕ1(a), ∇ϕ2(a), . . . , ∇ϕp(a) are linearly dependent, then rejecting those that do not satisfy the constraints and testing the remaining ones (using the definition method, without forgetting to apply the constraints).
Then determining the critical points of f, using the Lagrange multipliers, and testing them (using the definition method, without forgetting to apply the constraints).


Example.
Find the extrema of f:

f(x, y) = −x + y²

under the constraint ϕ:

ϕ(x, y) = x − 2y + 1

using the method of Lagrange multipliers.
Answer. Df = Dϕ = R²: f and ϕ are C¹ functions on R² as they are polynomials.
Search for doubtful points: We search for the points (x, y) where the family {∇ϕ(x, y)} is linearly dependent, i.e., the points (x, y) such that ∇ϕ(x, y) = (0, 0).


We then solve the system:

∂ϕ/∂x(x, y) = 0,  i.e., 1 = 0,
∂ϕ/∂y(x, y) = 0,  i.e., −2 = 0.

In this case, the doubtful points are the critical points of ϕ. This system has no solution, so there are no doubtful points.
Search for critical points: Using Lagrange multipliers, consider the auxiliary function (the Lagrangian):

F(x, y) = f(x, y) + λϕ(x, y) = −x + y² + λ(x − 2y + 1).

If (x, y) gives an extremum of f under the constraint ϕ(x, y) = 0, then there exists λ ∈ R such that ∇F(x, y) = (0, 0).


Solve the system (S):

∂F/∂x(x, y) = 0,
∂F/∂y(x, y) = 0,
ϕ(x, y) = 0,

that is,

−1 + λ = 0      (1)
2y − 2λ = 0     (2)
x − 2y + 1 = 0  (3)

From (1) and (2), we get λ = 1 and y = λ = 1. From (3), we get x = 2y − 1, so x = 1, and thus (1, 1) is the only solution of the system (S). We obtain a single critical point: M = (1, 1).
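The Lagrange system (S) can be solved symbolically in one step. A SymPy sketch (illustrative only):

```python
import sympy as sp

x, y, lam = sp.symbols("x y lambda", real=True)
f = -x + y**2
phi = x - 2*y + 1

F = f + lam*phi  # the Lagrangian
system = [sp.diff(F, x), sp.diff(F, y), phi]
sol = sp.solve(system, [x, y, lam], dict=True)
print(sol)  # [{x: 1, y: 1, lambda: 1}]
```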


Testing M: To do this, we use the definition and examine the sign of f(x, y) − f(M):

f(1 + h1, 1 + h2) − f(1, 1) = −(1 + h1) + (1 + h2)² + 1 − 1,

with

(1 + h1) − 2(1 + h2) + 1 = 0,

i.e.,

f(1 + h1, 1 + h2) − f(1, 1) = −h1 + h2² + 2h2, with h1 − 2h2 = 0.

Then

f(1 + h1, 1 + h2) − f(1, 1) = −2h2 + h2² + 2h2 = h2² ≥ 0.

Therefore, (1, 1) is a constrained minimum for f.


Theorem
If A is a closed and bounded set and f is continuous on A, then f attains its bounds on A, i.e., f has a maximum value (attained at a constrained maximum) and a minimum value (attained at a constrained minimum).


Example: Determine the maximum value of f(x, y) = x² + 2y² on the circle with equation x² + y² = 1.
Answer: Df = R²; f is C¹ on R² as it is a polynomial. The circle C((0, 0); 1) is a closed and bounded set, so f attains its bounds on it.
We now determine the extrema of f under the constraint ϕ(x, y) = x² + y² − 1. We have Dϕ = R²; ϕ is C¹ on R² as it is a polynomial.
Search for doubtful points: We search for the points (x, y) where the family {∇ϕ(x, y)} is linearly dependent, i.e., we solve the system:

∂ϕ/∂x(x, y) = 0,  i.e., 2x = 0,
∂ϕ/∂y(x, y) = 0,  i.e., 2y = 0.


In this case, the only solution is (0, 0), but this point does not satisfy the constraint, since ϕ(0, 0) ≠ 0, so this doubtful point is rejected (i.e., it will not give a constrained extremum of f).
Search for critical points: Using Lagrange multipliers, consider the Lagrangian function:

F(x, y) = f(x, y) + λϕ(x, y) = x² + 2y² + λ(x² + y² − 1).

If (x, y) gives a constrained extremum of f under the constraint ϕ(x, y) = 0, then there exists λ ∈ R such that ∇F(x, y) = (0, 0). First, solve the system (S):

∂F/∂x(x, y) = 0,
∂F/∂y(x, y) = 0,
ϕ(x, y) = 0,


that is,

2x + 2λx = 0   (1)
4y + 2λy = 0   (2)
x² + y² = 1    (3)

There are four critical points: (0, 1), (0, −1), (1, 0), (−1, 0). Then, since f attains its bounds on the circle by the preceding theorem, it is unnecessary to perform the tests; it suffices to compute:

f(0, 1) = 2, f(0, −1) = 2, f(1, 0) = 1, and f(−1, 0) = 1.

Conclusion: The maximum value of f on the circle with equation x² + y² = 1 is 2.
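The whole computation, from the Lagrange system to the comparison of values, can be reproduced symbolically. A SymPy sketch (illustrative only):

```python
import sympy as sp

x, y, lam = sp.symbols("x y lambda", real=True)
f = x**2 + 2*y**2
phi = x**2 + y**2 - 1

F = f + lam*phi  # the Lagrangian
system = [sp.diff(F, x), sp.diff(F, y), phi]
sols = sp.solve(system, [x, y, lam], dict=True)

# Four critical points: (0, ±1) with λ = -2 and (±1, 0) with λ = -1
values = sorted({f.subs(s) for s in sols})
print(len(sols), values)  # the maximum value on the circle is max(values) = 2
```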
