Ppt-Mathematics 1 Part 1
Eigenvalue and eigenvector:
For a square matrix A, a scalar λ is an eigenvalue of A and a nonzero
vector x is a corresponding eigenvector if
    Ax = λx
Ex 1: Verifying eigenvalues and eigenvectors

    A = [ 2  0       x₁ = [ 1       x₂ = [ 0
          0  1 ]           0 ]           1 ]

    Ax₁ = [ 2  0 ] [ 1 ] = [ 2 ] = 2 [ 1 ] = 2x₁
          [ 0  1 ] [ 0 ]   [ 0 ]     [ 0 ]
    ⇒ eigenvalue 2 with eigenvector x₁

    Ax₂ = [ 2  0 ] [ 0 ] = [ 0 ] = (1) [ 0 ] = (1)x₂
          [ 0  1 ] [ 1 ]   [ 1 ]       [ 1 ]
    ⇒ eigenvalue 1 with eigenvector x₂
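The verification Ax = λx in Ex 1 is just a matrix-vector product; a quick numerical sketch (NumPy assumed available, not part of the slides' tools):

```python
import numpy as np

# Matrix and candidate eigenvectors from Ex 1
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
x1 = np.array([1.0, 0.0])
x2 = np.array([0.0, 1.0])

# A x1 = 2 x1: eigenvalue 2, eigenvector x1
print(np.allclose(A @ x1, 2 * x1))   # True
# A x2 = 1 x2: eigenvalue 1, eigenvector x2
print(np.allclose(A @ x2, 1 * x2))   # True
```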
Thm. 1: The eigenspace corresponding to λ of matrix A
If A is an n×n matrix with an eigenvalue λ, then the set of all
eigenvectors of λ together with the zero vector is a subspace of Rⁿ.
This subspace is called the eigenspace of λ.

Proof:
Let x₁ and x₂ be eigenvectors corresponding to λ
(i.e., Ax₁ = λx₁ and Ax₂ = λx₂).
(1) A(x₁ + x₂) = Ax₁ + Ax₂ = λx₁ + λx₂ = λ(x₁ + x₂)
    (i.e., x₁ + x₂ is also an eigenvector corresponding to λ)
(2) A(cx₁) = c(Ax₁) = c(λx₁) = λ(cx₁)
    (i.e., cx₁ is also an eigenvector corresponding to λ)
Since this set is closed under vector addition and scalar
multiplication, it is a subspace of Rⁿ.
Ex 3: Examples of eigenspaces on the xy-plane
For the following matrix A, the corresponding eigenvalues are
λ₁ = -1 and λ₂ = 1:
    A = [ -1  0
           0  1 ]
Sol:
For the eigenvalue λ₁ = -1, the corresponding eigenvectors are the
nonzero vectors on the x-axis; for λ₂ = 1, they are the nonzero
vectors on the y-axis:
    A [ x ] = [ -x ] = -1 [ x ]        A [ 0 ] = [ 0 ] = 1 [ 0 ]
      [ 0 ]   [  0 ]      [ 0 ]          [ y ]   [ y ]     [ y ]
Thus the eigenspace of λ₁ = -1 is the x-axis, and the eigenspace of
λ₂ = 1 is the y-axis.
Thm. 2: Finding eigenvalues and eigenvectors of a matrix A ∈ Mₙₓₙ
Let A be an n×n matrix.
(1) An eigenvalue of A is a scalar λ such that det(λI - A) = 0.
(2) The eigenvectors of A corresponding to λ are the nonzero
    solutions of (λI - A)x = 0.

Note: following the definition of the eigenvalue problem,
    Ax = λx  ⇔  Ax = λIx  ⇔  (λI - A)x = 0  (homogeneous system)
(λI - A)x = 0 has nonzero solutions for x iff det(λI - A) = 0.
(The above iff result comes from the equivalent conditions on Slide 4.101.)

Characteristic equation of A:
    det(λI - A) = 0
Characteristic polynomial of A ∈ Mₙₓₙ:
    det(λI - A) = λⁿ + c₍ₙ₋₁₎λⁿ⁻¹ + ⋯ + c₁λ + c₀
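Thm. 2 reduces the eigenvalue problem to solving det(λI - A) = 0; a small symbolic sketch with SymPy (assumed available), reusing the matrix of Ex 1:

```python
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[2, 0],
               [0, 1]])
# Characteristic polynomial det(lambda*I - A) from Thm. 2
p = (lam * sp.eye(2) - A).det()
print(sp.factor(p))              # factored characteristic polynomial
print(sorted(sp.solve(p, lam)))  # [1, 2]
```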
Ex 4: Finding eigenvalues and eigenvectors

    A = [ 2  -12
          1   -5 ]

Eigenvalues: λ₁ = -1, λ₂ = -2

(1) λ₁ = -1:
    (λ₁I - A)x = [ -3  12 ] [ x₁ ] = [ 0 ]
                 [ -1   4 ] [ x₂ ]   [ 0 ]
    [ -3  12    G.-J. E.   [ 1  -4
      -1   4 ]    →          0   0 ]
    ⇒ [ x₁ ] = [ 4t ] = t [ 4 ] ,  t ≠ 0
      [ x₂ ]   [  t ]     [ 1 ]
(2) λ₂ = -2:
    (λ₂I - A)x = [ -4  12 ] [ x₁ ] = [ 0 ]
                 [ -1   3 ] [ x₂ ]   [ 0 ]
    [ -4  12    G.-J. E.   [ 1  -3
      -1   3 ]    →          0   0 ]
    ⇒ [ x₁ ] = [ 3s ] = s [ 3 ] ,  s ≠ 0
      [ x₂ ]   [  s ]     [ 1 ]
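The hand computation in Ex 4 can be cross-checked with numpy.linalg.eig; the eigenvectors (4, 1) and (3, 1) are recovered up to nonzero scaling:

```python
import numpy as np

A = np.array([[2.0, -12.0],
              [1.0,  -5.0]])
vals, vecs = np.linalg.eig(A)
print(np.sort(vals))   # [-2. -1.]

# The eigenvectors found by hand, up to scaling:
p1 = np.array([4.0, 1.0])   # for lambda_1 = -1
p2 = np.array([3.0, 1.0])   # for lambda_2 = -2
print(np.allclose(A @ p1, -1 * p1), np.allclose(A @ p2, -2 * p2))
```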
Ex 5: Finding eigenvalues and eigenvectors
Find the eigenvalues and corresponding eigenvectors for the matrix A.
What is the dimension of the eigenspace of each eigenvalue?

    A = [ 2  1  0
          0  2  0
          0  0  2 ]

Sol: Characteristic equation:
    |λI - A| = | λ-2   -1    0
                  0   λ-2    0
                  0    0   λ-2 | = (λ - 2)³ = 0
Eigenvalue: λ = 2

The eigenspace of λ = 2:
    (λI - A)x = [ 0  -1  0 ] [ x₁ ]   [ 0 ]
                [ 0   0  0 ] [ x₂ ] = [ 0 ]
                [ 0   0  0 ] [ x₃ ]   [ 0 ]
    ⇒ [ x₁ ]   [ s ]     [ 1 ]     [ 0 ]
      [ x₂ ] = [ 0 ] = s [ 0 ] + t [ 0 ] ,  (s, t) ≠ (0, 0)
      [ x₃ ]   [ t ]     [ 0 ]     [ 1 ]
Thus { s(1, 0, 0)ᵀ + t(0, 0, 1)ᵀ : s, t ∈ R } is the eigenspace of A
corresponding to λ = 2, and its dimension is 2.
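The dimension of an eigenspace is the dimension of the null space of λI - A; a SymPy sketch for Ex 5:

```python
import sympy as sp

A = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 2]])
lam = 2
# The eigenspace is the null space of (lam*I - A); its dimension is
# the number of basis vectors returned
basis = (lam * sp.eye(3) - A).nullspace()
print(len(basis))                        # 2
print([v.T.tolist()[0] for v in basis])  # [[1, 0, 0], [0, 0, 1]]
```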
Ex 6: Find the eigenvalues of the matrix A and find a basis for each
of the corresponding eigenspaces

    A = [ 1  0  0    0
          0  1  5  -10
          1  0  2    0
          1  0  0    3 ]

Sol: Expanding det(λI - A) along the second column gives the
characteristic equation (λ - 1)²(λ - 2)(λ - 3) = 0, so the eigenvalues
are λ₁ = 1, λ₂ = 2, and λ₃ = 3 (the work for λ₁ = 1 is omitted in this
excerpt).

For λ₂ = 2, Gauss-Jordan elimination of (2I - A)x = 0 gives
    [ x₁ ]   [ 0  ]     [ 0 ]
    [ x₂ ] = [ 5t ] = t [ 5 ] ,  t ≠ 0
    [ x₃ ]   [ t  ]     [ 1 ]
    [ x₄ ]   [ 0  ]     [ 0 ]
⇒ {(0, 5, 1, 0)ᵀ} is a basis for the eigenspace corresponding to λ₂ = 2.

For λ₃ = 3, solving (3I - A)x = 0 similarly gives
⇒ {(0, -5, 0, 1)ᵀ} is a basis for the eigenspace corresponding to λ₃ = 3.
※ The dimension of the eigenspace of λ₃ = 3 is 1
Thm. 3: Eigenvalues for triangular matrices
If A is an n×n triangular matrix, then its eigenvalues are the
entries on its main diagonal.

Ex 7: Finding eigenvalues for triangular and diagonal matrices

    (a) A = [  2  0   0          (b) A = [ -1  0  0   0  0
              -1  1   0                     0  2  0   0  0
               5  3  -3 ]                   0  0  0   0  0
                                            0  0  0  -4  0
                                            0  0  0   0  3 ]
Sol:
(a) |λI - A| = | λ-2   0    0
                 1   λ-1    0
                -5   -3   λ+3 | = (λ - 2)(λ - 1)(λ + 3) = 0
    ⇒ λ₁ = 2, λ₂ = 1, λ₃ = -3
(b) λ₁ = -1, λ₂ = 2, λ₃ = 0, λ₄ = -4, λ₅ = 3
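Thm. 3 can be sanity-checked numerically on the matrix of part (a) (the entries below the diagonal do not affect the eigenvalues, so their signs are immaterial here):

```python
import numpy as np

# Thm. 3: the eigenvalues of a triangular matrix are its diagonal entries
A = np.array([[ 2.0, 0.0,  0.0],
              [-1.0, 1.0,  0.0],
              [ 5.0, 3.0, -3.0]])
vals = np.linalg.eigvals(A)
print(np.sort(vals))   # [-3.  1.  2.]
assert np.allclose(np.sort(vals), np.sort(np.diag(A)))
```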
Ex 8: Finding eigenvalues and eigenvectors for standard matrices
Find the eigenvalues and corresponding eigenvectors for

    A = [ 1  3   0
          3  1   0
          0  0  -2 ]

※ A is the standard matrix for T(x₁, x₂, x₃) = (x₁ + 3x₂, 3x₁ + x₂, -2x₃)
(see Slides 7.19 and 7.20)

Sol:
    |λI - A| = (λ + 2)²(λ - 4) = 0
    ⇒ eigenvalues λ₁ = 4, λ₂ = -2

[T(x)]_B′ = A′[x]_B′, where A′ = [ [T(v₁)]_B′  [T(v₂)]_B′  ⋯  [T(vₙ)]_B′ ]
is the transformation matrix for T relative to the basis B′.
2 Diagonalization

Diagonalization problem:
For a square matrix A, does there exist an invertible matrix P
such that P⁻¹AP is diagonal?

Diagonalizable matrix:
Definition 1: A square matrix A is called diagonalizable if there
exists an invertible matrix P such that P⁻¹AP is a diagonal matrix
(we say that P diagonalizes A).
Definition 2: A square matrix A is called diagonalizable if A is
similar to a diagonal matrix.
※ In Sec. 6.4, two square matrices A and B are similar if there exists
an invertible matrix P such that B = P⁻¹AP.

Notes:
This section shows that the eigenvalue and eigenvector problem is
closely related to the diagonalization problem.
Thm. 4: Similar matrices have the same eigenvalues
If A and B are similar n×n matrices, then they have the same
eigenvalues.
Pf:
A and B are similar ⇒ B = P⁻¹AP for some invertible P. Then
    det(λI - B) = det(λI - P⁻¹AP) = det(P⁻¹λIP - P⁻¹AP)
                = det(P⁻¹(λI - A)P) = det(P⁻¹) det(λI - A) det(P)
                = det(λI - A)
so A and B have the same characteristic polynomial and hence the same
eigenvalues.
(Note: for any diagonal matrix of the form D = λI, P⁻¹DP = D.)
Thm. 5: Condition for diagonalization
An n×n matrix A is diagonalizable if and only if it has n linearly
independent eigenvectors.
※ Having n linearly independent eigenvectors does not imply that
there are n distinct eigenvalues. In an extreme case, it is possible
to have only one eigenvalue, with multiplicity n, and n linearly
independent eigenvectors for this eigenvalue.
※ On the other hand, if there are n distinct eigenvalues, then there
are n linearly independent eigenvectors, and thus A must be
diagonalizable.
Ex 4: A matrix that is not diagonalizable
Show that the following matrix is not diagonalizable:

    A = [ 1  2
          0  1 ]

Sol: Characteristic equation:
    |λI - A| = | λ-1  -2
                  0  λ-1 | = (λ - 1)² = 0
The only eigenvalue is λ₁ = 1; solving (λ₁I - A)x = 0 for eigenvectors:
    λ₁I - A = [ 0  -2 ]  ⇒ eigenvector p₁ = [ 1 ]
              [ 0   0 ]                     [ 0 ]
Since A does not have two linearly independent eigenvectors, A is not
diagonalizable by Thm. 5.

Step 3 (of the diagonalization process):
    P⁻¹AP = D = [ λ₁  0  ⋯  0
                  0  λ₂  ⋯  0
                  ⋮   ⋮      ⋮
                  0   0  ⋯  λₙ ] , where Apᵢ = λᵢpᵢ, i = 1, 2, …, n
Ex 5: Diagonalizing a matrix

    A = [  1  -1  -1
           1   3   1
          -3   1  -1 ]
Find a matrix P such that P⁻¹AP is diagonal.

Sol: The characteristic equation gives the eigenvalues λ₁ = 2,
λ₂ = -2, and λ₃ = 3.

(1) λ₁ = 2:
    λ₁I - A = [  1   1   1    G.-J. E.   [ 1  0  1
                -1  -1  -1      →          0  1  0
                 3  -1   3 ]               0  0  0 ]
    ⇒ x₁ = -t, x₂ = 0, x₃ = t  ⇒ eigenvector p₁ = (-1, 0, 1)ᵀ
(2) λ₂ = -2:
    λ₂I - A = [ -3   1   1    G.-J. E.   [ 1  0  -1/4
                -1  -5  -1      →          0  1   1/4
                 3  -1  -1 ]               0  0    0  ]
    ⇒ x₁ = (1/4)t, x₂ = -(1/4)t, x₃ = t  ⇒ eigenvector p₂ = (1, -1, 4)ᵀ
(3) λ₃ = 3:
    λ₃I - A = [  2   1   1    G.-J. E.   [ 1  0   1
                -1   0  -1      →          0  1  -1
                 3  -1   4 ]               0  0   0 ]
    ⇒ x₁ = -t, x₂ = t, x₃ = t  ⇒ eigenvector p₃ = (-1, 1, 1)ᵀ

    P = [p₁ p₂ p₃] = [ -1   1  -1
                        0  -1   1
                        1   4   1 ]
and it follows that
    P⁻¹AP = [ 2   0  0
              0  -2  0
              0   0  3 ]
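The result of Ex 5 can be verified by forming P⁻¹AP directly with NumPy:

```python
import numpy as np

A = np.array([[ 1.0, -1.0, -1.0],
              [ 1.0,  3.0,  1.0],
              [-3.0,  1.0, -1.0]])
# Columns are the eigenvectors p1, p2, p3 found in Ex 5
P = np.array([[-1.0,  1.0, -1.0],
              [ 0.0, -1.0,  1.0],
              [ 1.0,  4.0,  1.0]])
D = np.linalg.inv(P) @ A @ P
print(D.round(6))
assert np.allclose(D, np.diag([2.0, -2.0, 3.0]))
```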
Note: a quick way to calculate Aᵏ based on the diagonalization
technique

(1) If D = [ λ₁  0  ⋯  0               [ λ₁ᵏ  0   ⋯  0
             0  λ₂  ⋯  0                  0  λ₂ᵏ  ⋯  0
             ⋮   ⋮      ⋮   , then Dᵏ =   ⋮   ⋮       ⋮
             0   0  ⋯  λₙ ]               0   0   ⋯  λₙᵏ ]
(2) If D = P⁻¹AP, then
    Dᵏ = (P⁻¹AP)(P⁻¹AP)⋯(P⁻¹AP)   (repeated k times)
       = P⁻¹AᵏP
⇒ Aᵏ = PDᵏP⁻¹, where Dᵏ = diag(λ₁ᵏ, λ₂ᵏ, …, λₙᵏ)
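A sketch of the Aᵏ shortcut, reusing the matrices of Ex 5 (k = 5 is an arbitrary choice):

```python
import numpy as np

A = np.array([[ 1.0, -1.0, -1.0],
              [ 1.0,  3.0,  1.0],
              [-3.0,  1.0, -1.0]])
P = np.array([[-1.0,  1.0, -1.0],
              [ 0.0, -1.0,  1.0],
              [ 1.0,  4.0,  1.0]])
d = np.array([2.0, -2.0, 3.0])   # eigenvalues, in the order of the columns of P
k = 5
# A^k = P D^k P^(-1): only the diagonal entries get raised to the k-th power
Ak = P @ np.diag(d ** k) @ np.linalg.inv(P)
assert np.allclose(Ak, np.linalg.matrix_power(A, k))
print(Ak.round(0))
```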
Thm. 6: Sufficient conditions for diagonalization
If an n×n matrix A has n distinct eigenvalues, then the corresponding
eigenvectors are linearly independent and thus A is diagonalizable.
Ex 7: Determining whether a matrix is diagonalizable

    A = [ 1  -2   1
          0   0   1
          0   0  -3 ]

Sol: Since A is (upper) triangular, its eigenvalues are the diagonal
entries: λ₁ = 1, λ₂ = 0, λ₃ = -3 (Thm. 3). These three eigenvalues
are distinct, so by Thm. 6, A is diagonalizable.
Ex 8: Finding a diagonalized matrix for a linear transformation
Let T: R³ → R³ be the linear transformation given by
    T(x₁, x₂, x₃) = (x₁ - x₂ - x₃, x₁ + 3x₂ + x₃, -3x₁ + x₂ - x₃)
Find a basis B′ for R³ such that the matrix for T relative to B′ is
diagonal.

Sol:
The standard matrix for T is

    A = [  1  -1  -1
           1   3   1
          -3   1  -1 ]

From Ex. 5 you know that λ₁ = 2, λ₂ = -2, λ₃ = 3 and thus A is
diagonalizable, with eigenvectors p₁, p₂, p₃ as found there. So
    B′ = {v₁, v₂, v₃} = {(-1, 0, 1), (1, -1, 4), (-1, 1, 1)}
and the matrix for T relative to B′ is D = diag(2, -2, 3).
3 Symmetric Matrices and Orthogonal Diagonalization

Symmetric matrix:
A square matrix A is symmetric if it is equal to its transpose:
    A = Aᵀ

Consider a 2×2 symmetric matrix A = [ a  c
                                      c  b ]
(1) If a = b and c = 0:
    A = [ a  0 ]  ⇒ A itself is a diagonal matrix
        [ 0  a ]
    ※ Note that in this case, A has one eigenvalue, a, whose
    multiplicity is 2, and two linearly independent eigenvectors can
    still be found.
(2) Otherwise, (a - b)² + 4c² > 0:
    The characteristic polynomial of A has two distinct real roots,
    which implies that A has two distinct real eigenvalues.
    According to Thm. 6, A is diagonalizable.
Orthogonal matrix:
A square matrix P is called orthogonal if it is invertible and
    P⁻¹ = Pᵀ  (or PPᵀ = PᵀP = I)

Thm. 8: Properties of orthogonal matrices
An n×n matrix P is orthogonal if and only if its column vectors form
an orthonormal set.

Pf: Write P = [p₁ p₂ ⋯ pₙ]. The (i, j) entry of PᵀP is pᵢᵀpⱼ = pᵢ·pⱼ, so
    PᵀP = Iₙ  ⟺  pᵢ·pⱼ = 0 for i ≠ j and pᵢ·pᵢ = 1
i.e., P is orthogonal iff its columns form an orthonormal set.

Ex: Show that the following matrix P is orthogonal:

    P = [ 1/3   -2/√5   -2/(3√5)
          2/3    1/√5   -4/(3√5)
          2/3     0      5/(3√5) ]

Sol: Direct computation gives PPᵀ = PᵀP = I, so P⁻¹ = Pᵀ and P is
orthogonal.
Moreover, letting p₁, p₂, and p₃ denote the columns of P, we can
produce p₁·p₂ = p₁·p₃ = p₂·p₃ = 0 and p₁·p₁ = p₂·p₂ = p₃·p₃ = 1,
so the columns of P form an orthonormal set, as Thm. 8 requires.
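The orthogonality of the matrix P above can be confirmed numerically:

```python
import numpy as np

s5 = np.sqrt(5.0)
P = np.array([[1/3, -2/s5, -2/(3*s5)],
              [2/3,  1/s5, -4/(3*s5)],
              [2/3,  0.0,   5/(3*s5)]])
# Thm. 8: P orthogonal <=> columns orthonormal <=> P^T P = I
print(np.allclose(P.T @ P, np.eye(3)))     # True
print(np.allclose(np.linalg.inv(P), P.T))  # True
```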
Thm. 9: Properties of symmetric matrices
Let A be an n×n symmetric matrix. If λ₁ and λ₂ are distinct
eigenvalues of A, then their corresponding eigenvectors x₁ and x₂ are
orthogonal.
Pf:
    λ₁(x₁·x₂) = (λ₁x₁)·x₂ = (Ax₁)·x₂ = (Ax₁)ᵀx₂ = (x₁ᵀAᵀ)x₂
              = (x₁ᵀA)x₂          (because A is symmetric)
              = x₁ᵀ(Ax₂) = x₁ᵀ(λ₂x₂) = x₁·(λ₂x₂) = λ₂(x₁·x₂)
The above equation implies (λ₁ - λ₂)(x₁·x₂) = 0, and because λ₁ ≠ λ₂,
it follows that x₁·x₂ = 0. So, x₁ and x₂ are orthogonal.
※ For distinct eigenvalues of a symmetric matrix, the corresponding
eigenvectors are orthogonal and thus linearly independent of each other.
※ Note that there may be multiple eigenvectors x₁ and x₂
corresponding to λ₁ and λ₂.
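A quick numerical illustration of Thm. 9 (the 2×2 symmetric matrix here is an arbitrary example, not taken from the slides):

```python
import numpy as np

# An arbitrary symmetric matrix with two distinct eigenvalues
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
vals, vecs = np.linalg.eig(A)
x1, x2 = vecs[:, 0], vecs[:, 1]
print(np.sort(vals))          # [1. 3.]
print(abs(x1 @ x2) < 1e-10)   # True: the eigenvectors are orthogonal
```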
Orthogonal diagonalization:
A matrix A is orthogonally diagonalizable if there exists an
orthogonal matrix P such that P⁻¹AP = D is diagonal.

Orthogonal diagonalization of a symmetric matrix:
Let A be an n×n symmetric matrix.
(1) Find all eigenvalues of A and determine the multiplicity of each.
    ※ According to Thm. 9, eigenvectors corresponding to distinct
    eigenvalues are orthogonal.
(2) For each eigenvalue of multiplicity 1, choose the unit eigenvector.
(3) For each eigenvalue of multiplicity k ≥ 2, find a set of k
    linearly independent eigenvectors. If this set {v₁, v₂, …, vₖ} is
    not orthonormal, apply the Gram-Schmidt orthonormalization process.
    i. The G.-S. process is a kind of linear transformation, i.e.,
       each produced vector can be expressed as c₁v₁ + c₂v₂ + ⋯ + cₖvₖ
       (see Slide 5.55). Since Av₁ = λv₁, Av₂ = λv₂, …, Avₖ = λvₖ,
           A(c₁v₁ + c₂v₂ + ⋯ + cₖvₖ) = λ(c₁v₁ + c₂v₂ + ⋯ + cₖvₖ)
       so the vectors produced by the G.-S. process are still
       eigenvectors for λ.
    ii. Since v₁, v₂, …, vₖ are orthogonal to the eigenvectors
       corresponding to other eigenvalues (according to Thm. 9),
       c₁v₁ + c₂v₂ + ⋯ + cₖvₖ is also orthogonal to the eigenvectors
       corresponding to other eigenvalues.
(4) The composite of steps (2) and (3) produces an orthonormal set of
    n eigenvectors. Use these orthonormal (and thus linearly
    independent) eigenvectors as the column vectors of the matrix P.
    i. According to Thm. 8, the matrix P is orthogonal.
    ii. Following the diagonalization process, D = P⁻¹AP is diagonal.
Therefore, the matrix A is orthogonally diagonalizable.
Ex 7: Determining whether a matrix is orthogonally diagonalizable
(Check whether each matrix is symmetric.)

    A₁ = [ 1  1  1
           1  0  1     symmetric ⇒ orthogonally diagonalizable
           1  1  1 ]

    A₂ = [ 5  2  1
           2  1  8     symmetric ⇒ orthogonally diagonalizable
           1  8  0 ]

    A₃ = [ 3  2  0     not square, hence not symmetric
           2  0  1 ]   ⇒ not orthogonally diagonalizable

    A₄ = [ 0  0        symmetric (diagonal)
           0  2 ]      ⇒ orthogonally diagonalizable
Ex 9: Orthogonal diagonalization
Find an orthogonal matrix P that diagonalizes A.

    A = [  2   2  -2
           2  -1   4
          -2   4  -1 ]

Sol:
(1) |λI - A| = (λ - 3)²(λ + 6) = 0, so the eigenvalues are λ = 3
    (multiplicity 2) and λ = -6.
(2) Normalize the eigenvector for λ = -6; for λ = 3, apply the
    Gram-Schmidt process to two linearly independent eigenvectors.
    The resulting orthonormal eigenvectors form the columns of the
    orthogonal matrix P, and PᵀAP = P⁻¹AP is diagonal with entries
    -6, 3, 3.
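For symmetric matrices, numpy.linalg.eigh returns an orthogonal eigenvector matrix directly, which orthogonally diagonalizes the A of Ex 9:

```python
import numpy as np

A = np.array([[ 2.0,  2.0, -2.0],
              [ 2.0, -1.0,  4.0],
              [-2.0,  4.0, -1.0]])
# eigh is NumPy's routine for symmetric matrices: it returns the
# eigenvalues in ascending order and an orthogonal eigenvector matrix
vals, P = np.linalg.eigh(A)
print(vals.round(6))                            # ascending: -6, 3, 3
assert np.allclose(P.T @ P, np.eye(3))          # P is orthogonal
assert np.allclose(P.T @ A @ P, np.diag(vals))  # P^T A P is diagonal
```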
Rolle’s Theorem and the Mean Value Theorem

Rolle’s Theorem
If you connect f(a) to f(b) with a smooth curve and f(a) = f(b),
there will be at least one place c between a and b where f'(c) = 0.

Rolle's theorem is an important basic result about differentiable
functions. Like many basic results in the calculus it seems very
obvious. It just says that between any two points where the graph of
the differentiable function f(x) cuts a horizontal line, there must
be a point where f'(x) = 0.
Rolle’s Theorem
If two points at the same height are connected by a continuous,
differentiable function, then there has to be at least one place
between those two points where the derivative, or slope, is zero.
Rolle’s Theorem
If 1) f(x) is continuous on [a, b],
   2) f(x) is differentiable on (a, b), and
   3) f(a) = f(b),
then there is at least one number c in (a, b) such that f'(c) = 0.

Example
Example 1: f(x) = x⁴ - 2x² on [-2, 2]
(f is continuous and differentiable)
f(-2) = 8 = f(2)
so Rolle’s theorem applies: there is at least one c in (-2, 2) with
f'(c) = 0.

A non-example: f(x) = |x| on [-1, 1]:
  continuous on [-1, 1]
  f(-1) = 1 = f(1)
  but not differentiable at 0, hence not differentiable on (-1, 1),
  so Rolle’s theorem does not apply.
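For Example 1, the c guaranteed by Rolle's theorem can be found explicitly, since f'(x) = 4x³ - 4x; a SymPy sketch:

```python
import sympy as sp

x = sp.symbols('x')
f = x**4 - 2*x**2
# Hypotheses of Rolle's theorem on [-2, 2]: f(-2) = f(2) = 8
assert f.subs(x, -2) == f.subs(x, 2) == 8
# f'(x) = 4x^3 - 4x vanishes at three points inside (-2, 2)
roots = sorted(sp.solve(sp.diff(f, x), x))
print(roots)   # [-1, 0, 1]
```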
Mean Value Theorem (MVT)
If f is continuous on [a, b] and differentiable on (a, b), then there
is at least one number c in (a, b) such that
    f'(c) = (f(b) - f(a)) / (b - a)
Example
Example 6: f(x) = x³ + x² - 2x on [-1, 1]
(f is continuous and differentiable ⇒ the MVT applies)
    f'(x) = 3x² + 2x - 2
    f'(c) = (f(1) - f(-1)) / (1 - (-1)) = (0 - 2) / 2 = -1
    3c² + 2c - 2 = -1
    (3c - 1)(c + 1) = 0
    c = 1/3 or c = -1
Only c = 1/3 lies in the open interval (-1, 1).
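The algebra of Example 6 can be reproduced symbolically:

```python
import sympy as sp

x, c = sp.symbols('x c')
f = x**3 + x**2 - 2*x
a, b = -1, 1
slope = (f.subs(x, b) - f.subs(x, a)) / (b - a)
print(slope)   # -1
sols = sorted(sp.solve(sp.Eq(sp.diff(f, x).subs(x, c), slope), c))
print(sols)    # [-1, 1/3]; only 1/3 lies in the open interval (-1, 1)
```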
Mean Value Theorem - MVT: consequences
Note: If f'(x) > 0 on (a, b), then f is increasing on (a, b).
Note: If f'(x) < 0 on (a, b), then f is decreasing on (a, b).
Note: If f'(x) = 0 on (a, b), then f is constant on (a, b).
Improper Integrals

In defining a definite integral ∫_a^b f(x) dx, we dealt with a
function f defined on a finite interval [a, b], and we assumed that f
does not have an infinite discontinuity.

INFINITE INTERVALS
For f(x) = 1/x², the area under the curve from 1 to t is
    A(t) = ∫_1^t (1/x²) dx = 1 - 1/t
so lim_{t→∞} A(t) = lim_{t→∞} (1 - 1/t) = 1, and we write
    ∫_1^∞ (1/x²) dx = lim_{t→∞} ∫_1^t (1/x²) dx = 1
INFINITE INTERVALS (Definition 1)
(a) If ∫_a^t f(x) dx exists for every number t ≥ a, then
    ∫_a^∞ f(x) dx = lim_{t→∞} ∫_a^t f(x) dx
    provided this limit exists (as a finite number).
(b) If ∫_t^b f(x) dx exists for every number t ≤ b, then
    ∫_{-∞}^b f(x) dx = lim_{t→-∞} ∫_t^b f(x) dx
    provided this limit exists (as a finite number).
The improper integrals ∫_a^∞ f(x) dx and ∫_{-∞}^b f(x) dx are called
convergent if the corresponding limit exists and divergent if it
does not.
(c) If both ∫_a^∞ f(x) dx and ∫_{-∞}^a f(x) dx are convergent, then
we define:
    ∫_{-∞}^∞ f(x) dx = ∫_{-∞}^a f(x) dx + ∫_a^∞ f(x) dx
IMPROPER INTEGRALS OF TYPE 1 Example 1
Determine whether ∫_1^∞ (1/x) dx is convergent or divergent.
According to Definition 1(a), we have:
    ∫_1^∞ (1/x) dx = lim_{t→∞} ∫_1^t (1/x) dx
                   = lim_{t→∞} [ln|x|]_1^t
                   = lim_{t→∞} (ln t - ln 1)
                   = lim_{t→∞} ln t = ∞
The limit is not a finite number, so the integral is divergent.
Compare: ∫_1^∞ (1/x²) dx converges, whereas ∫_1^∞ (1/x) dx diverges.
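Both limits can be evaluated symbolically (SymPy assumed available):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
print(sp.integrate(1/x**2, (x, 1, sp.oo)))   # 1   (convergent)
print(sp.integrate(1/x, (x, 1, sp.oo)))      # oo  (divergent)
```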
IMPROPER INTEGRALS OF TYPE 1 Example 2
Evaluate ∫_{-∞}^0 x e^x dx.
Using Definition 1(b), we have:
    ∫_{-∞}^0 x e^x dx = lim_{t→-∞} ∫_t^0 x e^x dx
Integrating by parts with u = x and dv = e^x dx:
    ∫_t^0 x e^x dx = [x e^x]_t^0 - ∫_t^0 e^x dx = -t e^t - 1 + e^t
We know that e^t → 0 as t → -∞, and by l'Hospital's rule,
    lim_{t→-∞} t e^t = lim_{t→-∞} t / e^{-t}
                     = lim_{t→-∞} 1 / (-e^{-t}) = 0
Therefore,
    ∫_{-∞}^0 x e^x dx = lim_{t→-∞} (-t e^t - 1 + e^t)
                      = -0 - 1 + 0 = -1
IMPROPER INTEGRALS OF TYPE 1 Example 3
Evaluate ∫_{-∞}^∞ 1/(1 + x²) dx.
It is convenient to choose a = 0 in Definition 1(c):
    ∫_{-∞}^∞ 1/(1 + x²) dx = ∫_{-∞}^0 1/(1 + x²) dx + ∫_0^∞ 1/(1 + x²) dx
We evaluate the two integrals on the right side separately:
    ∫_0^∞ 1/(1 + x²) dx = lim_{t→∞} ∫_0^t 1/(1 + x²) dx
                        = lim_{t→∞} [tan⁻¹x]_0^t
                        = lim_{t→∞} (tan⁻¹t - tan⁻¹0) = π/2
    ∫_{-∞}^0 1/(1 + x²) dx = lim_{t→-∞} ∫_t^0 1/(1 + x²) dx
                           = lim_{t→-∞} [tan⁻¹x]_t^0
                           = lim_{t→-∞} (tan⁻¹0 - tan⁻¹t)
                           = 0 - (-π/2) = π/2
Then,
    ∫_{-∞}^∞ 1/(1 + x²) dx = π/2 + π/2 = π
IMPROPER INTEGRALS OF TYPE 1 Example 4
For what values of p is ∫_1^∞ (1/x^p) dx convergent?
From Example 1, we know that the integral diverges when p = 1.
So, assume p ≠ 1. Then,
    ∫_1^∞ (1/x^p) dx = lim_{t→∞} ∫_1^t x^{-p} dx
                     = lim_{t→∞} [ x^{-p+1} / (-p + 1) ]_{x=1}^{x=t}
                     = lim_{t→∞} (1 / (1 - p)) (1/t^{p-1} - 1)
If p > 1, then p - 1 > 0, so 1/t^{p-1} → 0 as t → ∞; therefore,
    ∫_1^∞ (1/x^p) dx = 1/(p - 1)   if p > 1
If p < 1, then p - 1 < 0, so 1/t^{p-1} = t^{1-p} → ∞ as t → ∞, and
the integral diverges.
Summary: ∫_1^∞ (1/x^p) dx is:
    Convergent if p > 1
    Divergent if p ≤ 1
TYPE 2—DISCONTINUOUS INTEGRANDS (Definition 3)
(a) If f is continuous on [a, b) and discontinuous at b, then
    ∫_a^b f(x) dx = lim_{t→b⁻} ∫_a^t f(x) dx
    if this limit exists (as a finite number).
(b) If f is continuous on (a, b] and discontinuous at a, then
    ∫_a^b f(x) dx = lim_{t→a⁺} ∫_t^b f(x) dx
    if this limit exists (as a finite number).
The improper integral ∫_a^b f(x) dx is called convergent if the
corresponding limit exists and divergent if it does not.
(c) If f has a discontinuity at c, where a < c < b, and both
∫_a^c f(x) dx and ∫_c^b f(x) dx are convergent, then we define:
    ∫_a^b f(x) dx = ∫_a^c f(x) dx + ∫_c^b f(x) dx
IMPROPER INTEGRALS OF TYPE 2 Example 5
Evaluate ∫_2^5 1/√(x - 2) dx.
The integrand has a vertical asymptote at x = 2, so by Definition 3(b):
    ∫_2^5 dx/√(x - 2) = lim_{t→2⁺} ∫_t^5 dx/√(x - 2)
                      = lim_{t→2⁺} [2√(x - 2)]_t^5
                      = lim_{t→2⁺} 2(√3 - √(t - 2)) = 2√3
Thus, the given improper integral is convergent.
IMPROPER INTEGRALS OF TYPE 2
Determine whether ∫_0^{π/2} sec x dx converges or diverges.
The integrand is discontinuous at x = π/2, so by Definition 3(a):
    ∫_0^{π/2} sec x dx = lim_{t→(π/2)⁻} ∫_0^t sec x dx
                       = lim_{t→(π/2)⁻} [ln|sec x + tan x|]_0^t
                       = lim_{t→(π/2)⁻} (ln(sec t + tan t) - ln 1) = ∞
This is because sec t → ∞ and tan t → ∞ as t → (π/2)⁻.
Thus, the given improper integral is divergent.
IMPROPER INTEGRALS OF TYPE 2
Evaluate ∫_0^3 dx/(x - 1) if possible.
The integrand is discontinuous at x = 1, which lies between 0 and 3,
so by Definition 3(c) with c = 1:
    ∫_0^3 dx/(x - 1) = ∫_0^1 dx/(x - 1) + ∫_1^3 dx/(x - 1)
where
    ∫_0^1 dx/(x - 1) = lim_{t→1⁻} ∫_0^t dx/(x - 1)
                     = lim_{t→1⁻} [ln|x - 1|]_0^t
                     = lim_{t→1⁻} (ln|t - 1| - ln 1)
                     = lim_{t→1⁻} ln(1 - t) = -∞
Thus, ∫_0^1 dx/(x - 1) is divergent.
This implies that ∫_0^3 dx/(x - 1) is divergent; we do not need to
evaluate ∫_1^3 dx/(x - 1).

WARNING: If we had not noticed the asymptote at x = 1 and had instead
treated the integral as an ordinary definite integral, we would have
obtained the erroneous value
    [ln|x - 1|]_0^3 = ln 2 - ln 1 = ln 2
From now on, whenever you meet the symbol ∫_a^b f(x) dx, you must
decide, by looking at the function f on [a, b], whether it is an
ordinary definite integral or an improper integral.
IMPROPER INTEGRALS OF TYPE 2 Example 8
Evaluate ∫_0^1 ln x dx.
The integrand has a vertical asymptote at x = 0, since ln x → -∞ as
x → 0⁺, so:
    ∫_0^1 ln x dx = lim_{t→0⁺} ∫_t^1 ln x dx
Integrating by parts with u = ln x and dv = dx:
    ∫_t^1 ln x dx = [x ln x]_t^1 - ∫_t^1 dx
                  = 1·ln 1 - t ln t - (1 - t) = -t ln t - 1 + t
By l'Hospital's rule,
    lim_{t→0⁺} t ln t = lim_{t→0⁺} ln t / (1/t)
                      = lim_{t→0⁺} (1/t) / (-1/t²)
                      = lim_{t→0⁺} (-t) = 0
Therefore,
    ∫_0^1 ln x dx = lim_{t→0⁺} (-t ln t - 1 + t) = -0 - 1 + 0 = -1
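SymPy confirms both the value and the limit computation of Example 8:

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)
print(sp.integrate(sp.log(x), (x, 0, 1)))   # -1
# The limit definition gives the same value:
F = sp.integrate(sp.log(x), (x, t, 1))
print(sp.limit(F, t, 0, '+'))               # -1
```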
COMPARISON THEOREM
Suppose that f and g are continuous functions with f(x) ≥ g(x) ≥ 0
for x ≥ a.
a. If ∫_a^∞ f(x) dx is convergent, then ∫_a^∞ g(x) dx is convergent.
b. If ∫_a^∞ g(x) dx is divergent, then ∫_a^∞ f(x) dx is divergent.
Note that the converse statements are not true:
If ∫_a^∞ g(x) dx is convergent, ∫_a^∞ f(x) dx may or may not be
convergent.
If ∫_a^∞ f(x) dx is divergent, ∫_a^∞ g(x) dx may or may not be
divergent.
COMPARISON THEOREM Example 9
Show that ∫_0^∞ e^(-x²) dx is convergent.
We write:
    ∫_0^∞ e^(-x²) dx = ∫_0^1 e^(-x²) dx + ∫_1^∞ e^(-x²) dx
The first integral is an ordinary definite integral. For the second,
note that for x ≥ 1 we have x² ≥ x, so -x² ≤ -x and, therefore,
e^(-x²) ≤ e^(-x). Now,
    ∫_1^∞ e^(-x) dx = lim_{t→∞} ∫_1^t e^(-x) dx
                    = lim_{t→∞} (e^(-1) - e^(-t)) = e^(-1)
Thus, taking f(x) = e^(-x) and g(x) = e^(-x²) in the Comparison
Theorem, we see that ∫_1^∞ e^(-x²) dx is convergent.
It follows that ∫_0^∞ e^(-x²) dx is convergent.

In Example 9, we showed that ∫_0^∞ e^(-x²) dx is convergent without
computing its value.
Recall also that ∫_1^∞ (1/x) dx is divergent, by Example 1 or by the
p-test of Example 4 with p = 1.
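The comparison bound e^(-x²) ≤ e^(-x) for x ≥ 1 and the convergence of the Gaussian integral can both be checked symbolically:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
# The comparison bound used in Example 9
print(sp.integrate(sp.exp(-x), (x, 1, sp.oo)))      # exp(-1)
# SymPy can also evaluate the Gaussian integral exactly:
print(sp.integrate(sp.exp(-x**2), (x, 0, sp.oo)))   # sqrt(pi)/2
```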