
Chapter 1

Linear Algebra

1.1 Motivation

Object with mass m subject to a force F.

F = m·γ, where m is a number (a scalar) and γ is the acceleration vector. Here we are multiplying a scalar and a vector.

Object with mass m subject to the forces F1 and F2.

F = F1 + F2. Here we are adding two vectors.

We need a mathematical framework in which adding two vectors or multiplying a vector and a scalar has a

meaning. This framework is the structure of vector space.

1.2 Structure of vector space over R

A vector space over R is a set V together with two binary operations (vector addition and scalar multipli-

cation) that satisfy eight axioms.

+ : V × V −→ V (Vector addition)

· : R × V −→ V (Scalar multiplication)

+ is an internal operation (everything is inside V )

· is an external operation (R is outside V )

The Eight Axioms

1. Associativity of addition

∀ a, b, c ∈ V we have a+(b+c) = (a+b)+c

2. Identity element of addition

∃ 0V ∈ V / ∀ a ∈ V we have 0V +a = a (0V is called the zero vector of V .)

3. Inverse elements of addition

∀ a ∈ V, ∃ a′ ∈ V / a + a′ = 0V (The usual notation for a′ is −a.)

4. Commutativity of addition

∀ a, b ∈ V we have a+b = b+a

5. Distributivity of scalar multiplication with respect to vector addition

∀ a, b ∈ V, ∀ α ∈ R we have α · (a+b) = α · a+α · b

6. Distributivity of scalar multiplication with respect to addition in R

∀ a ∈ V, ∀ α, β ∈ R we have (α + β)·a = α·a+β·a

7. Compatibility of scalar multiplication with multiplication in R

∀ a ∈ V, ∀ α, β ∈ R we have α · (β · a) = (α · β) · a

8. Identity element of scalar multiplication

∀ a ∈ V we have 1 · a = a

Example 1

(R2 , +, ·)

R2 = {(x, y) ∈ R × R}

• Definition 1 of + (Vector addition)

∀ x, x′, y, y′ ∈ R we have (x, y) + (x′, y′) = (x + x′, y + y′)

• Definition 2 of · (Scalar multiplication)

∀ α ∈ R, ∀ x, y ∈ R we have α·(x, y) = (α · x, α · y)

Prove that (R2 , +, ·) is a vector space over R. (Checking Axiom 4)

Axiom 4 says: ∀ a, b ∈ V we have a+b = b+a.

We want to prove that ∀ (x, y), (x′, y′) ∈ R2 we have (x, y) + (x′, y′) = (x′, y′) + (x, y).

Now we start the proof:

By definition 1 we have (x, y) + (x′, y′) = (x + x′, y + y′).

By definition 1 we have (x′, y′) + (x, y) = (x′ + x, y′ + y).

Since addition in R is commutative, x + x′ = x′ + x and y + y′ = y′ + y, so the two results are the same and we have proved that Axiom 4 holds.

QED (an initialism of the Latin phrase quod erat demonstrandum; it is put at the end of a proof to say, roughly, "what had to be proved is now proved").

1.3 Subspaces

Let (V , + , ·) be a vector space. Let W ⊂ V (W is a subset of V ). If (W, +, ·) is a vector space, then we say

that W is a subspace of V .

Property

W is a subspace of V iff W is nonempty and:

1. ∀ a, b ∈ W we have a + b ∈ W

2. ∀ a ∈ W, ∀ λ ∈ R we have λ · a ∈ W

or equivalently:

∀ a, b ∈ W, ∀ α, β ∈ R we have α · a + β · b ∈ W

Example 1

V = (R2 , +, ·)

W = {(x, 0) where x ∈ R}.

Show that W is a subspace of R2

We want to show that ∀ (x, 0) ∈ W, ∀ (y, 0) ∈ W, ∀ α, β ∈ R we have α · (x, 0) + β · (y, 0) ∈ W

We have α · (x, 0) + β · (y, 0) = (α · x , 0) + (β · y , 0) = (α · x + β · y , 0) ∈ W

Thus we have proved that W is a subspace of R2 .

Example 2

V = R2

W = {(x, y) ∈ R2 / x² + y² ≤ 1}

Show that W is NOT a subspace of V

We show that the second property is not satisfied, therefore, we want to show that:

∃ a ∈ W, ∃ α ∈ R / α · a ∉ W

If we take a = (0, 1) ∈ W (since 0² + 1² ≤ 1) and α = 2, we have:

2 · (0, 1) = (0, 2) ∉ W (since 0² + 2² > 1)

Thus, we have proved that W is not a subspace of V

Properties

• {0V } is a subspace of V .

• If W1 and W2 are subspaces of V then W1 ∩ W2 is a subspace of V .

W1 ∩ W2 = { x ∈ V / x ∈ W1 and x ∈ W2 }

• We can find an example of subspaces W1 and W2 such that:

W1 ∪ W2 is not a subspace of V .

W1 ∪ W2 = { x ∈ V / x ∈ W1 or x ∈ W2 }

1.4 Linear combinations

Let u1 , . . . , uk ∈ V (vector space)

Let λ1 , . . . , λk ∈ R

The vector u = λ1 · u1 + . . . + λk · uk ∈ V is called a linear combination of the vectors u1 , . . . , uk

Example

V = R2

u1 = (1 , 2); u2 = (−3 , 4)

λ1 = 0; λ2 = √2

u = 0 · (1, 2) + √2 · (−3, 4) = (−3√2, 4√2) ∈ R2

Definition

⟨u1, . . . , uk⟩ = {λ1 · u1 + . . . + λk · uk : λ1, . . . , λk ∈ R}

that is the set of all linear combinations of u1 , u2 , . . . , uk .

Property

⟨u1, . . . , uk⟩ is a subspace of V.

1.5 Generators of V

We say that the vectors u1 , . . . , uk ∈ V are generators of the vector space V if

⟨u1, . . . , uk⟩ = V, or equivalently:

∀ u ∈ V , ∃ λ1 , . . . , λk ∈ R / u = λ1 · u1 + . . . + λk · uk

k > 0 is an integer.

Example

V = R2 u1 = (1 , 2) u2 = (−3 , 4)

⟨u1, u2⟩ = {λ1 · u1 + λ2 · u2 : λ1, λ2 ∈ R}

Claim

Consider some arbitrary u ∈ R2 . I can find λ1 and λ2 such that u = λ1 (1 , 2) + λ2 (−3 , 4)

Proof

u = (a , b) ∈ R2

(a , b) = λ1 (1 , 2) + λ2 (−3 , 4)

= (λ1 − 3λ2 , 2λ1 + 4λ2 )

a = λ1 − 3λ2
b = 2λ1 + 4λ2

If we solve this system of equations, we get:

λ1 = 2a/5 + 3b/10
λ2 = −a/5 + b/10
As we have found λ1 and λ2 such that u = λ1 · u1 + λ2 · u2 we can say that u1 and u2 are generators of R2
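As a quick numerical cross-check (not part of the original argument), here is a minimal Python sketch using NumPy that solves the same 2 × 2 system for λ1 and λ2, for an arbitrary example vector u = (a, b) = (5, 10):

import numpy as np

# Columns are the generators u1 = (1, 2) and u2 = (-3, 4).
U = np.array([[1.0, -3.0],
              [2.0,  4.0]])

a, b = 5.0, 10.0                              # an arbitrary vector u = (a, b) in R^2
lam = np.linalg.solve(U, np.array([a, b]))    # solves U @ lam = (a, b)
print(lam)                                    # [5. 0.]
print(2*a/5 + 3*b/10, -a/5 + b/10)            # 5.0 0.0, matching the closed-form solution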

1.6 Linearly independent vectors

Let u1 , . . . , uk ∈ V (vector space). We say that u1 , . . . , uk are linearly independent if:

∀λ1 . . . λk ∈ R we have:

λ1 u1 + . . . + λk uk = 0V =⇒ λ1 = . . . = λk = 0

Example

V = R2

Take the following vectors: u1 = (1 , 2) and u2 = (−3 , 4)

We want to show that ∀λ1 , λ2 ∈ R we have:

λ1 (1 , 2) + λ2 (−3 , 4) = (0 , 0) ⇒ λ1 = λ2 = 0

To this end, let λ1 , λ2 ∈ R be such that λ1 (1 , 2) + λ2 (−3 , 4) = (0 , 0). Then (λ1 − 3λ2 , 2λ1 + 4λ2 ) = (0 , 0)

Therefore:

λ1 − 3λ2 = 0

2λ1 + 4λ2 = 0

If we solve this system of equations we obtain that λ1 = 0 and λ2 = 0.

Therefore, u1 and u2 are linearly independent.

1.7 Basis

Let u1 , . . . , uk ∈ V (vector space)

If u1 , . . . , uk are linearly independent and are generators of V , then we say that (u1 , . . . , uk ) is a basis

of V .

Example

u1 = (1 , 2) and u2 = (−3 , 4) form a basis of R2 .

Property

Let u1 , . . . , un be a basis of the vector space V . Then ∀ u ∈ V ,

∃! λ1 , . . . , λn ∈ R/u = λ1 u1 + . . . + λn un .

λ1 , . . . , λn are called the coordinates of vector u in the basis (u1 , . . . , un )

Example

u1 = (1 , 0) and u2 = (0 , 1)

(u1 , u2 ) is a basis of R2 means that for any u ∈ R2

there exist unique λ1 and λ2 / u = λ1 (1 , 0) + λ2 (0 , 1)

Theorem

Every basis of the vector space V has the same number of elements called dimension of V .

Example

Dim (R2 ) = 2 because u1 = (1 , 2) and u2 = (−3 , 4) form a basis of R2 .

Property

Dim (Rn ) = n.

Properties

Let V be a vector space with Dim (V ) = n ∈ N; Let u1 , . . . , uk ∈ V .

1. If u1 , . . . , uk are generators of V then k ≥ n.

2. If u1 , . . . , uk are linearly independent then k ≤ n.

3. If k = n and u1 , . . . , un are generators of V then they are linearly independent. (=Basis).

4. If k = n and u1 , . . . , un are linearly independent, then they are generators of V . (=Basis).

5. If k = n and the determinant |u1 u2 . . . un| ≠ 0, then (u1, . . . , un) is a basis of V.

For example, in R2 with u1 = (1, 2) and u2 = (−3, 4):

| 1  −3 |
| 2   4 |  = 1·4 − (−3)·2 = 10 ≠ 0

⇒ (u1, u2) is a basis of R2.

6. If W is a subspace of V then Dim (W ) ≤ Dim(V )

Example

Find the subspaces of R3 .

Answer

The dimension of R3 is 3 so that the dimension of any subspace of R3 can only be 0, 1, 2, or 3.

• The subspace of dimension 0 is {(0, 0, 0)}.

• A subspace of dimension 1 has the form ⟨u⟩ where u is a nonzero vector.

• A subspace of dimension 2 has the form ⟨u, v⟩ where u and v are linearly independent.

• The subspace of dimension 3 is R3 .

Remark: For the vector space Rn (of Dimension n), define the vectors:

e1 = (1 , 0 , . . . , 0)

e2 = (0 , 1 , 0 , . . . , 0)
..
.

en = (0 , . . . , 0 , 1)

The vectors e1 , . . . , en form a basis of Rn called the standard basis of Rn (Spanish: base canónica)

7. The dimension of hu1 , . . . , uk i is the rank of the matrix whose columns are precisely u1 , . . . , uk . If this

rank is k then the vectors u1 , . . . , uk are linearly independent.

For example:

In R4 find whether the vectors u1 = (1, −1, 0, 2), u2 = (2, 1, 0, 0), u3 = (3, 0, 0, 2) are linearly dependent

or independent. To this end, we compute the rank of the matrix whose columns are u1 , u2 , u3 and find

that the rank is 2 so that u1 , u2 , u3 are linearly dependent.
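The rank criterion of property 7 is easy to reproduce numerically; a minimal Python/NumPy sketch for the vectors of the example above (the variable names are ours):

import numpy as np

# Matrix whose columns are u1, u2, u3.
M = np.array([[1, -1, 0, 2],
              [2,  1, 0, 0],
              [3,  0, 0, 2]]).T

r = np.linalg.matrix_rank(M)
print(r)            # 2, which is < 3, so u1, u2, u3 are linearly dependent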

Example

We consider the vector space R3 and the subspace

W = ⟨(1, −1, 1), (−2, 2, −2), (1, 1, 0)⟩.

(i) Find a basis and the dimension of W .

(ii) Find the implicit equations of W .

Answer

 
 1 −2 1 
 
  
(i) The rank of the matrix  −1 2 1  is 2 so that the dimension of W is 2. A basis of W is (1, −1, 1), (1, 1, 0)
 
 
 
1 −2 0
since they are linearly independent.

(ii)

| 1   1   x |
| −1  1   y |  = 0  ⇔  x·|−1 1; 1 0| − y·|1 1; 1 0| + z·|1 1; −1 1| = 0  ⇔  −x + y + 2z = 0
| 1   0   z |

(the 2 × 2 blocks are determinants, rows separated by ";"). Thus

W = {(x, y, z) ∈ R3 : −x + y + 2z = 0}.
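The same computation can be checked with a short Python sketch (SymPy); the symbols and layout are ours, not part of the notes:

from sympy import Matrix, symbols

x, y, z = symbols('x y z')

# Columns are the generators of W.
M = Matrix([[ 1, -2, 1],
            [-1,  2, 1],
            [ 1, -2, 0]])
print(M.rank())                      # 2, so dim(W) = 2

# (x, y, z) lies in W iff this determinant vanishes.
D = Matrix([[ 1, 1, x],
            [-1, 1, y],
            [ 1, 0, z]]).det()
print(D.expand())                    # -x + y + 2*z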


Example

In R6 we consider the vectors u1 = (1, 0, −2, 3, 0, 0), u2 = (2, 1, 0, −1, 1, 1), u3 = (4, 3, 4, −9, 3, 3), u4 =

(0, 1, 1, −5, −2, 1), u5 = (0, 0, −3, 2, −3, 0). Let W be the subspace of R6 generated by u1, u2, u3, u4 and u5.

(a) Find a basis of W and the dimension of W .

(b) Find the implicit equations of W .

Answer

(a) Consider the matrix M whose rows are precisely the vectors u1 , u2 , u3 , u4 , u5 , that is
 
 1 0 −2 3 0
0 
 
 

 2 1 0 −1 1 1 
 
 
M =
 4 3 4 −9 3 3 .
 
 

 0 1 1 −5 −2 1 

 
 
0 0 −3 2 −3 0
     
 1 0 −2 3 0 0  1 0 −2 3 0 0   1 0 −2 30  0
     
     

 0 1 4 −7 1 1 
 0 1 4 −7 1 1



 0
 1 4 −7 1 1 
     
     
We get M ∼ 
 0 3  ∼  0 0 0
12 −21 3 3   0 0 0  ∼  0
   ∼
0 −3 2 −3 0 
     
     

 0 1 1 −5 −2 1 

 0 0 −3 2 −3 0



 0
 0 −3 2 −3 0 

     
     
0 0 −3 2 −3 0 0 0 −3 2 −3 0 0 0 0 0 0 0
 
 1 0 −2 3
0  0
 
 

 0 1 4 −7 1 1 
 
 
.
0 0 −3 2 −3 0 


 
 

 0 0 0 0 0 0 

 
 
0 0 0 0 0 0

Thus, a basis of W is (u1 , v1 , v2 ) where

v1 = (0, 1, 4, −7, 1, 1),

v2 = (0, 0, −3, 2, −3, 0).

Conclusion: A basis of W is (u1 , v1 , v2 ) and the dimension of W (that is the number of vectors of the basis)

is 3.

(b) Let x = (x1 , x2 , x3 , x4 , x5 , x6 ) ∈ W . Then the vectors u1 , v1 , v2 , x are linearly dependent since x is a linear

combination of u1 , v1 , v2 . This means that the matrix whose rows are u1 , v1 , v2 , x must have rank 3.
   
 1 0 −2 3 0 0   1 0 −2 3 0 0 
   
   
 0
 1 4 −7 1 1   0 1
  4 −7 1 1 
 ∼ 
   
 0
 0 −3 2 −3 0   0 0
  −3 2 −3 0  
   
   
x1 x2 x3 x4 x5 x6 0 x2 2x1 + x3 −3x1 + x4 x5 x 6
 
 1 0 −2 3 0 0 
 
 
 0 1
 4 −7 1 1 

∼



 0 0
 −3 2 −3 0 

 
 
0 0 −4x2 + 2x1 + x3 7x2 − 3x1 + x4 −x2 + x5 −x2 + x6
 
 1 0 −2 3 0 0 
 
 
 0 1
 4 −7 1 1 

∼



 0 0
 1 − 23 1 0 

 
 
0 0 −4x2 + 2x1 + x3 7x2 − 3x1 + x4 −x2 + x5 −x2 + x6
 
 1 0 −2 3 0 0 
 
 
 0 1 4
 −7 1 1 

∼

.

 0 0 1
 − 32 1 0 

 
 
0 0 0 − 35 x1 + 13
3 2
x + 23 x3 + x4 −2x1 + 3x2 − x3 + x5 −x2 + x6

20
For the last matrix to have rank 3 we must have

5 13 2
− x1 + x2 + x3 + x4 = 0,
3 3 3
−2x1 + 3x2 − x3 + x5 = 0,

−x2 + x6 = 0.

These three equations are the implicit equations of W . That is

5 13 2
W = (x1 , x2 , x3 , x4 , x5 , x6 ) ∈ R6 : − x1 + x2 + x3 + x4 = 0,

3 3 3
−2x1 + 3x2 − x3 + x5 = 0, −x2 + x6 = 0 .
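A short Python sketch (SymPy) reproduces both parts of this example; note that the null-space basis below gives a set of implicit equations equivalent to (though not necessarily written exactly like) the ones above:

from sympy import Matrix

# Rows are the generators u1, ..., u5 of W in R^6.
M = Matrix([[1, 0, -2,  3,  0, 0],
            [2, 1,  0, -1,  1, 1],
            [4, 3,  4, -9,  3, 3],
            [0, 1,  1, -5, -2, 1],
            [0, 0, -3,  2, -3, 0]])

print(M.rank())        # 3 = dim(W)
print(M.rref()[0])     # the nonzero rows of the reduced form are a basis of W

# Every c with M*c = 0 gives a linear form c . x vanishing on all generators,
# hence on W; a basis of the null space therefore yields implicit equations of W.
for c in M.nullspace():
    print(c.T)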

1.8 Other vector spaces

1.8.1 The vector space of polynomials

Consider the function p : R → R such that p(x) = 3 − 2x + 5x² for any x ∈ R. The function p is called a polynomial of degree 2.

Consider the set R2[X] of all polynomials of degree at most two with real coefficients. Then any polynomial p ∈ R2[X] has the form p(x) = a0 + a1x + a2x² where a0, a1, a2 ∈ R. It has degree 2 if a2 ≠ 0, degree 1 if a2 = 0 and a1 ≠ 0, and degree 0 if a1 = a2 = 0.

Define the addition of polynomials as follows. Take p, q ∈ R2[X] with p(x) = a0 + a1x + a2x² and q(x) = b0 + b1x + b2x² for all x ∈ R. Then

(p + q)(x) = a0 + b0 + (a1 + b1)x + (a2 + b2)x².

Also, take α ∈ R; then

(α · p)(x) = αa0 + αa1x + αa2x².

With these operations, R2[X] is a vector space of dimension 3. A basis of R2[X] is (e1, e2, e3) where e1, e2, e3 : R → R with e1(x) = 1, e2(x) = x, and e3(x) = x² for all x ∈ R (= standard basis). That is

R2[X] = ⟨e1, e2, e3⟩.

For example, the coordinates of p(x) = 3 − 2x + 5x² in that basis are (3, −2, 5).

1.8.2 The vector space of matrices

Consider the set MR(2, 3) of matrices with two rows and three columns and real coefficients. Take M, N ∈ MR(2, 3) where

M = [ m11  m12  m13 ]      N = [ n11  n12  n13 ]
    [ m21  m22  m23 ]          [ n21  n22  n23 ]

Define

M + N = [ m11+n11  m12+n12  m13+n13 ]
        [ m21+n21  m22+n22  m23+n23 ]

For any α ∈ R define

α · M = [ αm11  αm12  αm13 ]
        [ αm21  αm22  αm23 ]

With these operations, MR(2, 3) is a vector space of dimension 2 × 3 = 6. The standard basis of MR(2, 3) is (e1, e2, e3, e4, e5, e6) where

e1 = [ 1 0 0 ],  e2 = [ 0 1 0 ],  e3 = [ 0 0 1 ],  e4 = [ 0 0 0 ],  e5 = [ 0 0 0 ],  e6 = [ 0 0 0 ],
     [ 0 0 0 ]        [ 0 0 0 ]        [ 0 0 0 ]        [ 1 0 0 ]        [ 0 1 0 ]        [ 0 0 1 ]

so that M = m11·e1 + m12·e2 + m13·e3 + m21·e4 + m22·e5 + m23·e6. Thus

MR(2, 3) = ⟨e1, e2, e3, e4, e5, e6⟩.

1.9 Change of basis

Example

B = standard basis of the vector space R2.

B = (e1, e2) where [e1]B = (1, 0) and [e2]B = (0, 1).

B′ = (e′1, e′2) where [e′1]B = (−1, 1) and [e′2]B = (2, −1).

In other words: e′1 = (−1)·e1 + 1·e2 and e′2 = 2·e1 − e2.

B′ is a basis because

| −1   2 |
|  1  −1 |  = (−1)·(−1) − 2·1 = −1 ≠ 0.

[u]B = (2, 1), that is u = 2·e1 + 1·e2.

Find [u]B′.

Let V be a vector space of dimension n. Consider two bases B and B′ of V:

B = (e1, . . . , en) and B′ = (e′1, . . . , e′n).

The problem of change of basis consists in finding the coordinates of a vector u relative to the basis B′ ([u]B′) knowing the coordinates of u relative to the basis B ([u]B), or vice-versa.

Theorem

[u]B′ = B′⁻¹ · [u]B

B′⁻¹ is called the B − B′ change-of-basis matrix. Observe that [u]B = B′ · [u]B′, so that B′ is the B′ − B change-of-basis matrix.

   
−1 2  1 2 
B 0 = (e0 1 , e0 2 ) = 


 thus B 0 −1 = 

.

1 −1 1 1

     
1 2 2 4
[u]B 0 =   ·   =  .
     
1 1 1 3

That is [u]B′ = (4, 3). In other words u = 4e′1 + 3e′2.
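A minimal Python/NumPy sketch of this change of coordinates (our variable names):

import numpy as np

# Columns of Bp are the coordinates of e'1 and e'2 in the basis B.
Bp = np.array([[-1.0,  2.0],
               [ 1.0, -1.0]])
u_B = np.array([2.0, 1.0])          # [u]_B

u_Bp = np.linalg.inv(Bp) @ u_B      # [u]_B' = B'^(-1) [u]_B
print(u_Bp)                         # [4. 3.]
print(Bp @ u_Bp)                    # back to [u]_B = [2. 1.]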

1.10 Linear maps (=Mappings)

1.10.1 Definition

Let E and F be two vector spaces over R. Let f : E → F be a map.

We say that f is a linear map if:

1. ∀ x , y ∈ E , f (x + y) = f (x) + f (y).

2. ∀ x ∈ E , ∀ α ∈ R , f (α · x) = α · f (x).

Or equivalently

∀ x , y ∈ E , ∀ α , β ∈ R , f (α · x + β · y) = α · f (x) + β · f (y).

Example

f : R2 → R3 f (x , y) = (x − y , x + 2y , x)

I want to prove that:



∀(x1 , x2 ) , (y1 , y2 ) ∈ R2 , ∀ α , β ∈ R we have f α (x1 , x2 ) + β(y1 , y2 ) = αf (x1 , x2 ) + βf (y1 , y2 ).

We compute the left part of the equality:


 
f α (x1 , x2 ) + β(y1 , y2 ) = f (αx1 , αx2 ) + (βy1 , βy2 )

= f (αx1 + βy1 , αx2 + βy2 )



= αx1 + βy1 − αx2 − βy2 , αx1 + βy1 + 2(αx2 + βy2 ) , αx1 + βy1

We compute the right of the equality:

αf (x1 , x2 ) + βf (y1 , y2 ) = α(x1 − x2 , x1 + 2x2 , x1 ) + β(y1 − y2 , y1 + 2y2 , y1 ) =



= α(x1 − x2 ) + β(y1 − y2 ) , α(x1 + 2x2 ) + β(y1 + 2y2 ) , αx1 + βy1

As the two terms are equal we can conclude that f is linear.

Example

g : R2 → R

g(x, y) = x² + y.

To prove that g is linear

1. ∀x , y ∈ E , g(x + y) = g(x) + g(y).

AND

2. ∀x ∈ E , ∀ α ∈ R , g(αx) = αg(x).

To prove that g is NOT linear

1. ∃ x, y ∈ E | g(x + y) ≠ g(x) + g(y).

OR

2. ∃ x ∈ E, ∃ α ∈ R | g(αx) ≠ αg(x).

We show that: ∃ α ∈ R, ∃ (x, y) ∈ R2 | g(α(x, y)) ≠ αg(x, y).

Choose α = 2 and (x, y) = (2, 0).

g(α(x, y)) = g(2(2, 0)) = g(4, 0) = 4² + 0 = 16.

αg(x, y) = 2g(2, 0) = 2(2² + 0) = 8.

g(2(2, 0)) ≠ 2g(2, 0) ⇒ g is not linear.

1.10.2 Properties

1. Let f : Rn → Rm be a mapping. Then, f is linear ⇔ There exists a matrix P of m rows and n columns

such that ∀ x ∈ Rn , f (x) = P x.

Example

f : R2 → R3

f (x , y) = (x − y , x + 2y , x).

   
f(x, y) = [ 1  −1 ]   [ x ]   [ x − y  ]
          [ 1   2 ] · [ y ] = [ x + 2y ]
          [ 1   0 ]           [ x      ]

where the 3 × 2 matrix on the left is P.
Let (e1, e2) be the standard basis of R2. Then the first column of the matrix P is given by f(e1) and the second column of the matrix P is given by f(e2), that is P = ( f(e1)  f(e2) ). This is a general result.
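This general result is easy to check numerically; a short Python/NumPy sketch (the function and names are ours):

import numpy as np

def f(x, y):
    # the linear map f(x, y) = (x - y, x + 2y, x)
    return np.array([x - y, x + 2*y, x])

# Columns of P are the images of the standard basis vectors.
P = np.column_stack([f(1, 0), f(0, 1)])
print(P)                              # [[1 -1], [1 2], [1 0]]

x, y = 3.0, -2.0
print(f(x, y))                        # [ 5. -1.  3.]
print(P @ np.array([x, y]))           # the same vector: f(x, y) = P (x, y)^T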

2. Let E and F be vector spaces with finite dimensions dim(E) = n and dim(F ) = m, and bases BE =

(u1 , . . . , un ) and BF = (v1 , . . . , vm ) respectively. Then f : E → F is linear if and only if there exists a

matrix P ∈ MR (m, n) such that [f (x)]BF = P [x]BE for all x ∈ E. The matrix P is denoted [f ]BE BF and

is determined as follows. Write

f (u1 ) = a11 v1 + a21 v2 + . . . + am1 vm ,

f (u2 ) = a12 v1 + a22 v2 + . . . + am2 vm ,


..
.

f (un ) = a1n v1 + a2n v2 + . . . + amn vm .

Then we have

[f]BE BF = [ a11  a12  . . .  a1n ]
           [ a21  a22  . . .  a2n ]
           [ ...                  ]
           [ am1  am2  . . .  amn ]

3. If f : E → F is a linear map, then f (0E ) = 0F .

Proof

     
∗ ∗ . . . ∗ 0 0
     
. .. .. ..  . .
 .. . . .   ..  =  .. 
·    

     
     
∗ ∗ ... ∗ 0 0

Example 2

Consider the following electrical circuit where E = 0, v(t) is the capacitor voltage at instant t, and i(t) is

the electrical current of the loop.

Then we have

[ dv/dt (t) ]   [  0      1/C  ]   [ v(t) ]
[ di/dt (t) ] = [ −1/L   −R/L  ] · [ i(t) ]

1.10.3 Change of basis

Let V be a vector space over R with dimension n, and BV a basis of V .

Let W be a vector space over R with dimension m, and BW a basis of W .

Let f : V → W be a linear map, x ∈ V , and y = f (x). Then

[y]BW = A · [x]BV

where A is the matrix that represents f in the bases BV of V and BW of W .

A = [f ]BV BW .

Now, we use B′V instead of BV and B′W instead of BW. Then

[y]B′W = A′ · [x]B′V

where A′ is the matrix that represents f in the bases B′V of V and B′W of W:

A′ = [f]B′V B′W.

Theorem

A′ = B′W⁻¹ · A · B′V.

Example

V = R2 , n = 2;

W = R3 , m = 3.

BV = Standard basis of V = (e1 , e2 ) that is [e1 ]BV = (1 , 0) and [e2 ]BV = (0 , 1).

BW = Standard basis of W = (f1 , f2 , f3 ) that is [f1 ]BW = (1 , 0 , 0) ; [f2 ]BW = (0 , 1 , 0) and [f3 ]BW = (0 , 0 , 1).

B′V = (e′1, e′2) where [e′1]BV = (−1, 1) and [e′2]BV = (2, −1).

B′W = (f′1, f′2, f′3) where [f′1]BW = (1, 0, −1), [f′2]BW = (1, 0, 2) and [f′3]BW = (0, 2, 1).

f : V → W (that is f : R2 → R3), f(x) = Ax where

A = [f]BV BW = [ 3  −1 ]
               [ 0   2 ]
               [ 1   0 ]

Find A′ = [f]B′V B′W.

A′ = B′W⁻¹ · A · B′V.

B′V = (e′1, e′2) = [ −1   2 ]
                   [  1  −1 ]

B′W = (f′1, f′2, f′3) = [  1  1  0 ]
                        [  0  0  2 ]
                        [ −1  2  1 ]

B′W⁻¹ = [ 2/3   1/6  −1/3 ]
        [ 1/3  −1/6   1/3 ]
        [  0    1/2    0  ]

A′ = B′W⁻¹ · A · B′V = [ −2  11/3 ]
                       [ −2  10/3 ]
                       [  1   −1  ]

Particular case: V = W that is f : V → V

In this case we take BV = BW = B and B′V = B′W = B′, so that

A′ = B′⁻¹ · A · B′

with A = [f]B and A′ = [f]B′.

Example: In the vector space R2 we consider the standard basis B = (e1, e2) along with the basis B′ = (u1, u2) with [u1]B = (1, 2) and [u2]B = (1, 1). We consider also the vector v such that [v]B = (−2, −1). We consider the linear mapping f : R2 → R2 whose matrix in the basis B is given by

[f]B = [  2  −1 ]
       [ −1   1 ]
1. On millimeter (graph) paper plot the basis B in blue and the basis B′ in green. Plot v in blue in the basis B; find [v]B′ and plot the corresponding vector in green in the basis B′. Observe in your plot that [v]B and [v]B′ correspond to exactly the same vector.

2. Define w = f(v). Find the matrix [f]B′ that represents f in the basis B′. Compute the coordinates [w]B = [f]B[v]B and plot the corresponding vector in blue in the basis B. Compute the coordinates [w]B′ = [f]B′[v]B′ and plot the corresponding vector in green in the basis B′. Observe in your plot that [w]B and [w]B′ correspond to exactly the same vector w.

Answer.

1. We have [v]B′ = B′⁻¹[v]B where

B′ = (u1, u2) = [ 1  1 ]      and   B′⁻¹ = [ −1   1 ]
                [ 2  1 ]                   [  2  −1 ]

so that

[v]B′ = [ −1   1 ] · [ −2 ] = [  1 ]   that is [v]B′ = (1, −3).
        [  2  −1 ]   [ −1 ]   [ −3 ]

2.

[f]B′ = B′⁻¹ · [f]B · B′ = [ −1   1 ] [  2  −1 ] [ 1  1 ] = [  1  −1 ]
                           [  2  −1 ] [ −1   1 ] [ 2  1 ]   [ −1   2 ]

Figure 1.1: Left: 1. Right: 2.

    
 2 −1   −2   −3 
[w]B = [f ]B [v]B = 



=
 
.

−1 1 −1 1
    
 1 −1   1   4 
[w]B0 = [f ]B0 [v]B0 = 



=
 
.

−1 2 −3 −7

That is [w]B = [f]B[v]B = (−3, 1) and [w]B′ = [f]B′[v]B′ = (4, −7).
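A minimal Python/NumPy sketch reproducing the numbers of this example:

import numpy as np

fB = np.array([[ 2.0, -1.0],      # [f]_B
               [-1.0,  1.0]])
Bp = np.array([[1.0, 1.0],        # columns: [u1]_B and [u2]_B
               [2.0, 1.0]])
v_B = np.array([-2.0, -1.0])      # [v]_B

Bp_inv = np.linalg.inv(Bp)
v_Bp = Bp_inv @ v_B               # (1, -3)
fBp  = Bp_inv @ fB @ Bp           # [[1, -1], [-1, 2]]

w_B  = fB  @ v_B                  # (-3, 1)
w_Bp = fBp @ v_Bp                 # (4, -7)
print(np.allclose(Bp @ w_Bp, w_B))   # True: both coordinate vectors describe the same w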

1.10.4 On the identity function

Let E be a vector space with finite dimension dim(E) = n. The identity function IE : E → E is defined by

the relation IE (x) = x for all x ∈ E. Then IE is a linear map since

IE (αx + βy) = αx + βy = αIE (x) + βIE (y), ∀α, β ∈ R, ∀x, y ∈ E.

Let B = (u1, . . . , un) and B′ = (v1, . . . , vn) be two bases of E. Our aim is to find the matrices [IE]BB, [IE]B′B′, [IE]BB′, and [IE]B′B.

To find [IE ]BB write

IE (u1 ) = 1 · u1 + 0 · u2 + . . . + 0 · un ,

IE (u2 ) = 0 · u1 + 1 · u2 + . . . + 0 · un ,

..
.

IE (un ) = 0 · u1 + 0 · u2 + . . . + 1 · un .

Then we have

[IE]BB = [ 1  0  . . .  0 ]
         [ 0  1  . . .  0 ]
         [ ...            ]
         [ 0  0  . . .  1 ]
To find [IE]B′B′ write

IE(v1) = 1·v1 + 0·v2 + . . . + 0·vn,
IE(v2) = 0·v1 + 1·v2 + . . . + 0·vn,
. . .
IE(vn) = 0·v1 + 0·v2 + . . . + 1·vn.

Then we have

[IE]B′B′ = [ 1  0  . . .  0 ]
           [ 0  1  . . .  0 ]
           [ ...            ]
           [ 0  0  . . .  1 ]
To find [IE]B′B write

IE(v1) = a11·u1 + a21·u2 + . . . + an1·un,
IE(v2) = a12·u1 + a22·u2 + . . . + an2·un,
. . .
IE(vn) = a1n·u1 + a2n·u2 + . . . + ann·un.

Then we have

[IE]B′B = [ a11  a12  . . .  a1n ]
          [ a21  a22  . . .  a2n ]
          [ ...                  ]
          [ an1  an2  . . .  ann ]  = ( [v1]B, [v2]B, . . . , [vn]B ) = B′.

To find [IE]BB′ write

IE(u1) = b11·v1 + b21·v2 + . . . + bn1·vn,
IE(u2) = b12·v1 + b22·v2 + . . . + bn2·vn,
. . .
IE(un) = b1n·v1 + b2n·v2 + . . . + bnn·vn.

Then we have

[IE]BB′ = [ b11  b12  . . .  b1n ]
          [ b21  b22  . . .  b2n ]
          [ ...                  ]
          [ bn1  bn2  . . .  bnn ]

Property. [IE]BB′ = ([IE]B′B)⁻¹.

[IE]BB′ : B − B′ change-of-basis matrix.

[IE]B′B : B′ − B change-of-basis matrix.

1.10.5 More properties of linear maps

Let E be a vector space with dimension n ∈ N and basis BE, and let F be a vector space with dimension m ∈ N and basis BF. Let f : E → F be a linear map. Then rank([f]BE BF) is independent of the bases BE and BF. For this reason we define

rank(f) = rank([f]BE BF).

Let g : E → F be a linear map and α, β ∈ R, then

[αf + βg]BE BF = α[f ]BE BF + β[g]BE BF .

Let G be a vector space with dimension p ∈ N and basis BG . Let h : F → G be a linear map. Then

[h ◦ f]BE BG = [h]BF BG · [f]BE BF
   (p × n)       (p × m)     (m × n)

1.10.6 Range space (Spanish: imagen) and null space (Spanish: núcleo)

Let E and F be two vector spaces over R. Let f : E → F be linear.

Ker(f ) = {x ∈ E : f (x) = 0F } Null space

Im(f ) = f (E) = {f (x) , x ∈ E} Range space

Properties

1. Ker(f ) and Im(f ) are vector spaces

 
2. Dim(E) = Dim Ker(f ) + Dim Im(f )

Example

f : R3 → R4 defined by f (x , y, z) = (x − y , −2x + 2y, 3x − 5y − 2z, y + z).

Find Ker(f ).

First we have to check that f is linear:

   
 1 −1 0   x−y
 



 x
   
 

 −2 2
    −2x + 2y
0     
 
 · y =  ⇒ f is linear.
     
 3
 −5 −2 
    3x − 5y − 2z
   




 z 



0 1 1 y+z

Ker(f ) = {(x, y, z) ∈ R3 | f (x, y, z) = (0, 0, 0, 0)}.

We have:

x − y = 0
−2x + 2y = 0
3x − 5y − 2z = 0
y + z = 0

⇔ x = y = −z.

Ker(f ) = {(x, y, z) ∈ R3 / x = y = −z} = {(x, x, −x) where x ∈ R} = {x(1, 1, −1) where x ∈ R}. Then

Dim Ker(f ) = 1.

Property

Let f : Rn → Rm be a linear map defined by f (x) = Ax. Then Im(f ) is the subspace of Rm generated by the

columns of matrix A. Also Dim Im(f ) = rank(A). More generally, let E be a vector space with dimension n

and basis BE = (u1 , . . . , un ), and let F be a vector space with dimension m and basis BF = (v1 , . . . , vm ). Let

f : E → F be a linear map, then

Im(f) = ⟨f(u1), . . . , f(un)⟩,

and

 
Dim Im(f ) = rank [f ]BE BF = rank(f ).

Example

f : R3 → R4 defined by f (x , y, z) = (x − y , −2x + 2y, 3x − 5y − 2z, y + z).

Find Im(f).

[  1  −1   0 ]           [ x − y        ]
[ −2   2   0 ]   [ x ]   [ −2x + 2y     ]
[  3  −5  −2 ] · [ y ] = [ 3x − 5y − 2z ]
[  0   1   1 ]   [ z ]   [ y + z        ]

where the 4 × 3 matrix on the left is A.

rank(A) = 2 = dim Im(f), thus Im(f) = ⟨w1, w2⟩ where w1 = (1, −2, 3, 0) and w2 = (−1, 2, −5, 1).
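Both Ker(f) and Im(f) can be recomputed with a short Python sketch (SymPy):

from sympy import Matrix

# Matrix of f : R^3 -> R^4 in the standard bases.
A = Matrix([[ 1, -1,  0],
            [-2,  2,  0],
            [ 3, -5, -2],
            [ 0,  1,  1]])

print(A.nullspace())     # one basis vector, proportional to (1, 1, -1): dim Ker(f) = 1
print(A.rank())          # 2 = dim Im(f)
print(A.columnspace())   # a basis of Im(f): here the first two columns of A
print(len(A.nullspace()) + A.rank())   # 3 = Dim(E), the rank-nullity property above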

1.10.7 Injective, bijective, surjective

Let A and B be nonempty sets, and let f : A → B be a map from A to B. We say that

1. f is injective (Spanish: inyectiva) if ∀x, y ∈ A we have

x ≠ y ⇒ f(x) ≠ f(y).

2. f is surjective (Spanish: sobreyectiva, exhaustiva) if f (A) = B.

3. f is bijective (Spanish: biyectiva) if it is both injective and surjective

Example

1. The map f : R → R defined by f(x) = x³ is bijective.

2. The map f : R → R+ defined by f(x) = x² is surjective but not bijective.

3. The map f : R+ → R defined by f(x) = x² is injective but not bijective.

Property: let E be a vector space with dimension n and basis BE , and let F be a vector space with

dimension m and basis BF . Let f : E → F be a linear mapping. Then the following properties hold:


(i) f is surjective if and only if dim Im(f ) = m.


(ii) f is injective if and only if dim Im(f ) = n.


(iii) f is bijective if and only if dim Im(f ) = n = m.

(iv) If f is bijective then [f⁻¹]BF BE = ([f]BE BF)⁻¹. Also f(BE) is a basis of F.

(v) Let x1 , . . . , xk ∈ E. Then the following holds:

• If the vectors x1, . . . , xk are linearly independent and f is injective, then f(x1), . . . , f(xk) are linearly independent.

• If the vectors x1, . . . , xk are generators of E and f is surjective, then f(x1), . . . , f(xk) are generators of F.

Example

Let f : R3 → R3 be the linear map given by the matrix


 
A = [ 1  a  0 ]
    [ 0  1  1 ]        a ∈ R
    [ 1  1  1 ]
 
1. Find Ker(f ), dim Ker(f ) , Im(f ), dim Im(f ) .

2. Is f injective? surjective? bijective?

Answer. (1a) Ker(f ) = {x ∈ R3 /f (x) = 0}. Since f (x) = Ax we have to solve the equation Ax = 0.
      
 
 1 a 0  x   0  x + ay = 0 x=0

 


 

     
 

    
 0 1 1  y  =  0  ⇔ y+z =0 ⇔ ay = 0
      
     
 

     
 

1 1 1 z 0  x+y+z =0
  z = −y

If a = 0 then the second equation holds for any y. In this case Ker(f ) = {(x, y, z) ∈ R3 /x = 0, z = −y} =

{(0, y, −y), y ∈ R} = {y(0, 1, −1), y ∈ R}. Thus dim Ker(f ) = 1. For a = 0, Im(f ) is the subspace generated

by the first two column vectors of A, that is (1, 0, 1) and (0, 1, 1) because the third column vector of A is equal

to the second. Thus dim Im(f ) = 2.

If a ≠ 0 the equation ay = 0 gives y = 0. In this case Ker(f) = {(x, y, z) ∈ R3 / x = 0, y = 0, z = −y = 0} = {(0, 0, 0)}. Thus dim Ker(f) = 0. For a ≠ 0, Im(f) is the subspace generated by the column vectors of A, that is (1, 0, 1), (a, 1, 1) and (0, 1, 1). We have

| 1  a  0 |
| 0  1  1 |  = a ≠ 0
| 1  1  1 |


so that these vectors form a basis of R3 and dim Im(f ) = 3, that is Im(f ) = R3 .

(1b) If a = 0 we have dim Im(f) = 2 ≠ dim(R3) so that f is not injective (so it cannot be bijective). For the same reason, that is dim Im(f) = 2 ≠ dim(R3), it is not surjective.

If a ≠ 0 we have dim Im(f) = dim(R3) so that f is bijective, which means that f is injective and surjective.

1.10.8 Eigenvalues (Spanish: valores propios) and eigenvectors (Spanish: vectores propios)

Let V be a vector space with dimension n. Let f : V → V be a linear map (= endomorphism of V ).

If there exists λ ∈ R and x ∈ V with x 6= 0 such that f (x) = λx then we say that:

• λ is an eigenvalue of f .

• x is an eigenvector of f associated with the eigenvalue λ.

The set of all eigenvalues of f is called spectrum of f and is denoted σ(f ).

Example

f : R2 → R2 f (x , y) = (x , −x + y)

Find the eigenvalues and the eigenvectors of f

First we have to check that the function is linear:

     
 1 0   x  x 
 · = 
     
−1 1 y −x + y

We want to find some vector different from (0, 0) and some λ ∈ R such that f(x, y) = λ(x, y).

f (x , y) = λ(x , y) ⇔ (x , −x + y) = (λx , λy)

x = λx

−x + y = λy

• If x ≠ 0 then, dividing x = λx by x, we get λ = 1.

  Then −x + y = λy ⇒ −x + y = y ⇒ x = 0. This is not possible.

• If x = 0 then −x + y = λy ⇒ y = λy.

  Since (x, y) ≠ (0, 0) and x = 0 we have y ≠ 0. Thus, dividing y = λy by y, we get λ = 1.

Conclusion

The map f has one eigenvalue λ = 1, so that σ(f) = {1}. The eigenvectors associated with λ = 1 are the vectors (0, y) with y ≠ 0.
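A minimal Python sketch (SymPy) confirming this: the matrix of f has the single eigenvalue 1, with algebraic multiplicity 2 but only one independent eigenvector.

from sympy import Matrix

# Matrix of f(x, y) = (x, -x + y) in the standard basis.
A = Matrix([[ 1, 0],
            [-1, 1]])

# Each entry is (eigenvalue, algebraic multiplicity, basis of eigenvectors).
print(A.eigenvects())
# [(1, 2, [Matrix([[0], [1]])])]  -> sigma(f) = {1}, eigenvectors (0, y) with y != 0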

Comment:

Let V be a vector space of dimension n and f : V → V a linear map.

Let x ≠ 0 be an eigenvector associated with the eigenvalue λ. Then we have f(x) = λx.

Recall that the identity function I : V → V is defined by the relation I(x) = x. We have

f(x) = λx ⇔ f(x) = λI(x) ⇔ f(x) − λI(x) = 0 ⇔ (f − λI)(x) = 0,

where we write g = f − λI.

Recall that Ker(g) = {x ∈ V : g(x) = 0} so that

(f − λ I)(x) = 0 ⇔ x ∈ Ker(f − λ I).

Conclusion: To find the eigenvectors associated with the eigenvalue λ we have to determine Ker(f − λ I).

Then all nonzero vectors of Ker(f − λ I) are eigenvectors associated with the eigenvalue λ.

Now, how to find λ?

1.10.9 Characteristic polynomial

Let V be a vector space of dimension n and f : V → V a linear map.

Let x ≠ 0 be an eigenvector associated with the eigenvalue λ. Then we have f(x) = λx, that is

(f − λ I)(x) = 0.
 
       
 
In matrix form, with A = (aij) the matrix of f in the chosen basis:

( [ a11  . . .  a1n ]       [ 1  . . .  0 ] )   [ x1 ]   [ 0 ]
( [ ...             ]  − λ  [ ...         ] ) · [ ...] = [ ...]
( [ an1  . . .  ann ]       [ 0  . . .  1 ] )   [ xn ]   [ 0 ]

Example:

         
n = 2: then A = [ a11  a12 ]  so that we get
                [ a21  a22 ]

( [ a11  a12 ]       [ 1  0 ] )   [ x1 ]   [ 0 ]
( [ a21  a22 ]  − λ  [ 0  1 ] ) · [ x2 ] = [ 0 ]

that is

[ a11 − λ     a12    ]   [ x1 ]   [ 0 ]
[   a21     a22 − λ  ] · [ x2 ] = [ 0 ]
In general:

[ a11 − λ   . . .     a1n    ]   [ x1 ]   [ 0 ]
[   ...                      ] · [ ...] = [ ...]
[   an1     . . .   ann − λ  ]   [ xn ]   [ 0 ]

Call B the n × n matrix on the left and x = (x1, . . . , xn) the vector.
We show now that the determinant of the matrix B has to be zero. To this end, suppose that det(B) ≠ 0. Then B is invertible with inverse B⁻¹. From

B · x = 0

we deduce

B⁻¹B · x = B⁻¹ · 0

so that

x = 0.

This cannot be, since x ≠ 0 as it is an eigenvector.

Thus Det(A − λI) = 0.

Example:

n = 2: then

Det(A − λI) = | a11 − λ     a12    |
              |   a21     a22 − λ  |

which gives

Det(A − λI) = (a11 − λ)(a22 − λ) − a21a12 = λ² − (a11 + a22)λ + a11a22 − a21a12.

In general pf (λ) = Det(A − λ I) ∈ Rn [X] is the characteristic polynomial of A. It is independent of the basis

(even though A depends on the basis) which makes it the characteristic polynomial of f . The roots of pf (λ) are

independent of the basis and are the eigenvalues of f (or A). These roots are the solution of

Det(A − λ I) = 0

which is called the characteristic equation.

Conclusion:

To find the eigenvalues of f we have to compute the characteristic polynomial Det(A − λ I). Then we have

to solve the characteristic equation Det(A − λ I) = 0. The solutions of this characteristic equation are the

eigenvalues of f (or A).

To find the eigenvectors associated with λ:

(i) Compute: Ker(f − λ I).

(ii) The eigenvectors associated with λ are all vectors of Ker(f − λI) that are ≠ 0.

Example

Consider the vector space R3 along with its standard basis. Consider the linear map f : R3 → R3 defined by

f (x) = Ax, for all x ∈ R3 with:

 
A = [  2/3  −2/3   0 ]
    [ −1/3   1/3   0 ]
    [  1/3   2/3   1 ]

Find the eigenvalues and eigenvectors of f .

To find the eigenvalues we determine the characteristic polynomial which is Det(A − λ I):

| 2/3 − λ   −2/3      0     |
| −1/3      1/3 − λ   0     |  = −λ(λ − 1)².
| 1/3       2/3       1 − λ |

The characteristic equation is −λ(λ − 1)² = 0, whose solutions are λ1 = 1 and λ2 = 0. Therefore, the eigenvalues

of f are 0 and 1.

The eigenvectors associated with λ1 = 1 are all non-zero vectors of Ker(A − 1 · I).

     
[ 2/3 − 1   −2/3      0     ]   [ x ]   [ 0 ]
[ −1/3      1/3 − 1   0     ] · [ y ] = [ 0 ]
[ 1/3       2/3       1 − 1 ]   [ z ]   [ 0 ]

We get the following system of equations:

−(1/3)x − (2/3)y = 0
−(1/3)x − (2/3)y = 0
 (1/3)x + (2/3)y = 0
3 3

which gives x = −2y. Thus, Ker(f − 1 · I) = {(x , y , z) ∈ R3 | x = −2y} = {(−2y , y , z) , y , z ∈ R} =

{y(−2 , 1 , 0) + z(0 , 0 , 1) , y , z ∈ R}.

This means that the vectors v1 = (−2 , 1 , 0) and v2 = (0 , 0 , 1) form a basis of Ker(f − 1 · I),

thus Dim Ker(f − 1 · I) = 2.

Now, we find the eigenvectors associated with λ2 = 0. We have to find Ker(f − 0 · I) = Ker(f ).

     
[  2/3  −2/3   0 ]   [ x ]   [ 0 ]
[ −1/3   1/3   0 ] · [ y ] = [ 0 ]
[  1/3   2/3   1 ]   [ z ]   [ 0 ]

Thus, we get the following system of equations:

 (2/3)x − (2/3)y = 0
−(1/3)x + (1/3)y = 0
 (1/3)x + (2/3)y + z = 0

which leads to x = y and z = −x.

Therefore

Ker(f ) = {(x , y , z) ∈ R3 : y = x , z = −x}

= {(x , x , −x) , x ∈ R}

= {x(1 , 1 , −1) , x ∈ R.}


Thus v3 = (1 , 1 , −1) is a basis of Ker(f ), and Dim Ker(f ) = 1.
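The same eigenvalues and eigenvectors can be obtained with a short Python sketch (SymPy, with exact rationals; variable names are ours):

from sympy import Matrix, Rational, symbols

t = symbols('t')
A = Matrix([[Rational( 2, 3), Rational(-2, 3), 0],
            [Rational(-1, 3), Rational( 1, 3), 0],
            [Rational( 1, 3), Rational( 2, 3), 1]])

print(A.charpoly(t).as_expr().factor())   # t*(t - 1)**2, i.e. det(t*I - A) = -det(A - t*I)
print(A.eigenvects())
# eigenvalue 0 with one eigenvector proportional to (1, 1, -1),
# eigenvalue 1 (multiplicity 2) with eigenvectors spanned by (-2, 1, 0) and (0, 0, 1)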

1.10.10 Diagonalizable matrix

Let V be a vector space over R with Dim(V ) = n, and let f : V → V be a linear map with an associated

matrix A in some basis.

We say that A is diagonal if:

 
A = [ λ1   0   . . .   0  ]
    [ 0   λ2   . . .   0  ]
    [ ...                 ]
    [ 0    0   . . .   λn ]

Therefore, its characteristic polynomial Det(A − λ · I) is:

| λ1 − λ    0      . . .    0      |
|   0     λ2 − λ   . . .    0      |  = (λ1 − λ) . . . (λn − λ)
|  ...                             |
|   0       0      . . .  λn − λ   |

and the eigenvalues are λ1 , . . . , λn

Problem

Let V be a vector space over R and let f : V → V be a linear map with an associated matrix A = [f]B in some basis B. Can we find a basis B′ of V such that A′ = [f]B′ is diagonal?

This problem is called diagonalization of the linear map f (or the matrix A).

Theorem

Let V be a vector space over R with dimension n ∈ N \ {0} and basis B. Let f : V → V be a linear map with

an associated matrix A = [f ]B . Then, A (or f ) is diagonalizable if and only if:

(i) Det(A − λ·I) = (−1)^n · (λ − λ1)^m1 · . . . · (λ − λs)^ms, where λ1, . . . , λs ∈ R, s is the number of different eigenvalues, and m1 + . . . + ms = n (the quantity mi is called the algebraic multiplicity of λi).


(ii) ∀ 1 ≤ i ≤ s we have mi = Dim Ker(A − λi · I) . In other words


• m1 = Dim Ker(A − λ1·I).
• m2 = Dim Ker(A − λ2·I).
• . . .
• ms = Dim Ker(A − λs·I).

If conditions (i) and (ii) hold, then there exists a basis B′ of V composed of eigenvectors of A such that A′ = [f]B′ is the diagonal matrix

A′ = diag( λ1, . . . , λ1, λ2, . . . , λ2, . . . , λs, . . . , λs )

where λ1 is repeated m1 times, . . . , and λs is repeated ms times (all off-diagonal entries are 0).

The basis B′ is given by

B′ = ( e1^(1), . . . , e_m1^(1), . . . , e1^(s), . . . , e_ms^(s) )

where e1^(1), . . . , e_m1^(1) is a basis of Ker(A − λ1·I), . . . , and e1^(s), . . . , e_ms^(s) is a basis of Ker(A − λs·I). In general, ( e1^(i), . . . , e_mi^(i) ) is a basis of Ker(A − λi·I). We have A′ = B′⁻¹AB′, or equivalently A = B′A′B′⁻¹.

Example

f : R3 → R3 defined by f (x) = Ax where A is the matrix that represents f in the standard basis B.

 
A = [f]B = [  2/3  −2/3   0 ]
           [ −1/3   1/3   0 ]
           [  1/3   2/3   1 ]

Prove that f is diagonalizable. Find a basis B′ in which A′ = [f]B′ is diagonal, and find A′.

Recall that we have studied this matrix in the previous example.

Check condition (i): Det(A − λ·I) = (−1)^n · (λ − λ1)^m1 · . . . · (λ − λs)^ms.

Det(A − λ·I) = −(λ − 1)²λ = −(λ − 1)²(λ − 0)¹. Thus λ1 = 1 has multiplicity m1 = 2 and λ2 = 0 has multiplicity m2 = 1. There are two different eigenvalues, thus s = 2.

The dimension of R3 is n = 3, thus (−1)^n = −1 and Det(A − λ·I) = (−1)^n · (λ − λ1)^m1 · (λ − λ2)^m2 with m1 + m2 = 2 + 1 = 3 = n. Thus condition (i) holds.

Check condition (ii).


m1 =? Dim Ker(A − λ1·I). We computed Dim Ker(A − λ1·I) last time: it is 2.
m2 =? Dim Ker(A − λ2·I). We computed Dim Ker(A − λ2·I) last time: it is 1.

Condition (ii) holds.

⇒ A is diagonalizable and

B′ = ( e1^(1), e2^(1), e1^(2) ).

Last time we found the following basis for Ker(A − 1·I): e1^(1) = v1 = (−2, 1, 0) and e2^(1) = v2 = (0, 0, 1). We also found the following basis for Ker(A − 0·I): e1^(2) = v3 = (1, 1, −1).

A′ = [ 1  0  0 ]
     [ 0  1  0 ]
     [ 0  0  0 ]

Exercise: Find B′⁻¹ and check that A = B′A′B′⁻¹.
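One possible way to do this check numerically, as a small Python/NumPy sketch (not the only way):

import numpy as np

A  = np.array([[ 2/3, -2/3, 0.0],
               [-1/3,  1/3, 0.0],
               [ 1/3,  2/3, 1.0]])
Bp = np.array([[-2.0, 0.0,  1.0],     # columns: v1, v2, v3 (the eigenvectors found above)
               [ 1.0, 0.0,  1.0],
               [ 0.0, 1.0, -1.0]])
Ap = np.diag([1.0, 1.0, 0.0])

print(np.linalg.inv(Bp))                              # B'^(-1)
print(np.allclose(A, Bp @ Ap @ np.linalg.inv(Bp)))    # True
print(np.allclose(np.linalg.inv(Bp) @ A @ Bp, Ap))    # True: A' = B'^(-1) A B'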



Remark. Dim Ker(A − λi · I) is called the geometric multiplicity of λi . We always have


1 ≤ Dim Ker(A − λi · I) ≤ mi .

In particular, if mi = 1 then we must have Dim Ker(A − λi · I) = 1.

1.10.11 Application

In genetics, the study of inheritance is based on the concepts of dominant, recessive, and hybrid genes. The probabilities of producing such genes are determined by the matrix

A = [ 0.5   0.25   0   ]
    [ 0.5   0.5    0.5 ]
    [ 0     0.25   0.5 ]

Geneticists are interested in computing lim_{n→∞} A^n, as this quantity gives information on how genes are distributed among

a population after a long time (n is related to time). The objective of this exercise is to apply the concepts we

have seen to solve this problem.

(a) Let B′ and A′ be p × p matrices for some positive integer p with B′ invertible. Let A = B′A′B′⁻¹. Show that ∀ m ∈ N \ {0} we have A^m = B′A′^m B′⁻¹ (use an argument by induction).

 
(b) Let A′ be a p × p diagonal matrix. That is, ∃ λ1, . . . , λp ∈ R such that

A′ = [ λ1   . . .   0  ]
     [ ...             ]
     [ 0    . . .   λp ]

Show that ∀ m ∈ N \ {0} we have

A′^m = [ λ1^m   . . .   0    ]
       [ ...                 ]
       [ 0      . . .   λp^m ]

(use an argument by induction).

(c) Find matrices A′ and B′, with A′ diagonal, such that A = B′A′B′⁻¹.

(d) Compute A^n and find lim_{n→∞} A^n.

Answer

a. For m = 1, the relation A^m = B′A′^m B′⁻¹ holds by definition of the matrix A. Now, if A^m = B′A′^m B′⁻¹ holds for some m ∈ N \ {0}, then A^(m+1) = A^m · A = B′A′^m B′⁻¹ · B′A′B′⁻¹ = B′A′^(m+1) B′⁻¹. QED.

 
b. For m = 1, the relation A′^m = diag(λ1^m, . . . , λp^m) (the diagonal matrix with diagonal entries λ1^m, . . . , λp^m) holds trivially. Now, if it holds for some m ∈ N \ {0}, then

A′^(m+1) = A′^m · A′ = diag(λ1^m, . . . , λp^m) · diag(λ1, . . . , λp) = diag(λ1^(m+1), . . . , λp^(m+1)).

QED.

c. The characteristic polynomial of A is given by

det(A − λI) = | 0.5 − λ    0.25       0        |
              | 0.5        0.5 − λ    0.5      |  = −λ(λ − 1)(λ − 0.5).
              | 0          0.25       0.5 − λ  |

We have to determine the eigenvalues of A. The characteristic equation is det(A − λI) = 0, that is −λ(λ − 1)(λ − 0.5) = 0, which implies that λ = 1 or λ = 0 or λ = 0.5. The first condition for A to be diagonalizable is that the characteristic polynomial should have the form (−1)^3 (λ − 1)^m1 (λ − 0)^m2 (λ − 0.5)^m3 with m1 = m2 = m3 = 1 and m1 + m2 + m3 = dim(R3) = 3, which is indeed the case. It remains to determine the dimension of Ker(A − 1·I), that of Ker(A − 0·I), and that of Ker(A − 0.5·I).

Start with the former.

A − 1·I = [ 0.5 − 1   0.25      0       ]   [ −0.5    0.25    0   ]
          [ 0.5       0.5 − 1   0.5     ] = [  0.5   −0.5    0.5  ]
          [ 0         0.25      0.5 − 1 ]   [  0      0.25  −0.5  ]

We have

[ −0.5    0.25    0   ]   [ x ]   [ 0 ]        −0.5x + 0.25y = 0
[  0.5   −0.5    0.5  ] · [ y ] = [ 0 ]   ⇔     0.5x − 0.5y + 0.5z = 0
[  0      0.25  −0.5  ]   [ z ]   [ 0 ]         0.25y − 0.5z = 0

This is a linear system of 3 equations and 3 unknowns from which we get x = 0.5y and z = 0.5y. Thus Ker(A − I) = {(x, y, z) ∈ R3 / x = 0.5y, z = 0.5y} = {(0.5y, y, 0.5y), y ∈ R} = {y(0.5, 1, 0.5), y ∈ R}. Thus the vector space Ker(A − I) is generated by the vector v1 = (0.5, 1, 0.5), which is a basis of Ker(A − I). Thus dim(Ker(A − I)) = 1 = m1.

 
 0.5 0.25 0 
 
 
A−0·I = 0.5 0.5 0.5 

 
 
0 0.25 0.5
     

 0.5 0.25 0  x   0  0.5x + 0.25y = 0




     

    
We have  = 0 ⇔ 0.5x + 0.5y + 0.5z = 0 This is a linear system of 3 equa-
 0.5 0.5 0.5   y

   
     

     

0 0.25 0.5 z 0  0.25y + 0.5z = 0

tions and 3 unknowns from which we get x = −0.5y and z = −0.5y. Thus Ker(A) = {(x, y, z) ∈ R3 | x =

−0.5y, z = −0.5y} = {(−0.5y, y, −0.5y), y ∈ R} = {y(−0.5, 1, −0.5), y ∈ R}. Thus the vector space Ker(A) is

generated by the vector v2 = (−0.5, 1, −0.5) which is a basis of Ker(A). Thus dim (Ker(A)) = 1 = m2 .

   
 0.5 − 0.5 0.25 0   0 0.25 0 
   
   
A − 0.5 · I =  0.5
 0.5 − 0.5 0.5  =  0.5 0 0.5 
 
.
   
   
0 0.25 0.5 − 0.5 0 0.25 0
     

 0 0.25 0  x   0  0.25y = 0




     

    
We have   0.5 0 0.5   y 
   =  0 
  ⇔ 0.5x + 0.5z = 0 This is a linear system of 3 equations
     

     


0 0.25 0 z 0  0.25y = 0

and 3 unknowns from which we get y = 0 and z = −x. Thus Ker(A − 0.5 · I) = {(x, y, z) ∈ R3 : y = 0, z =

−x} = {(x, 0, −x), x ∈ R} = {x(1, 0, −1), x ∈ R}. Thus the vector space Ker(A − 0.5 · I) is generated by the

vector v3 = (1, 0, −1) which is a basis of Ker(A − 0.5 · I). Thus dim (Ker(A − 0.5 · I)) = 1 = m3 .

The last condition is thus satisfied, which implies that A is diagonalizable. The vectors (v1, v2, v3) form a basis of R3 in which the matrix A is represented by

A′ = [ 1   0   0   ]
     [ 0   0   0   ]
     [ 0   0   0.5 ]

   
 0.5 −0.5 1   0.5 0.5 0.5 
   
That is we have A = B 0 A0 B 0 −1 where B 0 = (v1 , v2 , v3 ) =  −1
   
 so that B 0 =  −0.5 0.5 −0.5 
 1 1 0   
   
   
0.5 −0.5 −1 0.5 0 −0.5
   
 1n 0 0   1 0 0 
   
d. ∀n ∈ N \ {0}, An = B 0 A0 n B 0 −1 where A0 n = 
   
= 0 0  Thus
 0 0n 0   0 
   
   
0 0 0.5n 0 0 0.5n

     
 0.5 −0.5 1   1 0 0   0.5 0.5 0.5 
     
     
An = 
 1 1 0 · 0 0
  0  ·  −0.5 0.5 −0.5 
  
     
     
0.5 −0.5 −1 0 0 0.5n 0.5 0 −0.5
 
 0.25 + 0.5n+1 0.25 0.25 − 0.5n+1 
 
 
= 
 0.5 0.5 0.5 

 
 
0.25 − 0.5n+1 0.25 0.25 + 0.5n+1

 
 0.25 0.25 0.25 
 
 
Given that limn→∞ 0.5n+1 n
= 0 it follows that limn→∞ A = 
 0.5 0.5 0.5 .

 
 
0.25 0.25 0.25
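A quick numerical check of this limit with a Python/NumPy sketch:

import numpy as np

A = np.array([[0.5, 0.25, 0.0],
              [0.5, 0.5,  0.5],
              [0.0, 0.25, 0.5]])

print(np.linalg.matrix_power(A, 50))
# already indistinguishable from the limit:
# [[0.25 0.25 0.25]
#  [0.5  0.5  0.5 ]
#  [0.25 0.25 0.25]]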

1.11 Lines and planes

In this section we review some concepts from geometry.

1.11.1 Equation of a line in the space

(Figure: the line ∆1 passing through the point A with direction vector u; M is a generic point of ∆1.)
In the space consider the line ∆1 that goes through the point A = (a1, a2, a3) along the direction of the vector u = (u1, u2, u3) ≠ (0, 0, 0). Let M = (x, y, z) be a point of ∆1. Then the vector AM satisfies AM = λu for some λ ∈ R. That is

(x − a1, y − a2, z − a3) = λ(u1, u2, u3),   λ ∈ R.     (1.1)

Equation (1.1) defines the line ∆1 using the parameter λ. If we want to eliminate this parameter, observe that

from (1.1)

x − a1 = λu1 ⇒ λ = (x − a1)/u1
y − a2 = λu2 ⇒ λ = (y − a2)/u2
z − a3 = λu3 ⇒ λ = (z − a3)/u3

so that

λ = (x − a1)/u1 = (y − a2)/u2 = (z − a3)/u3.

Then the equation


(x − a1)/u1 = (y − a2)/u2 = (z − a3)/u3     (1.2)

is equivalent to Equation (1.1) and also defines the line ∆1. If u1 = 0 then the equation of ∆1 is

x = a1,   (y − a2)/u2 = (z − a3)/u3.

If u1 = 0 and u2 = 0 then the equation of ∆1 is

x = a1 , y = a2 .


1.11.2 Equation of a plane in the space

(Figure: the plane Π passing through the point A and parallel to the vectors u and v; M is a generic point of Π.)
In the space consider a point A = (a1 , a2 , a3 ) and two linearly independent vectors u = (u1 , u2 , u3 ) and

v = (v1 , v2 , v3 ). Consider the plane Π that goes through A and is parallel to both u and v.
Then a point M = (x, y, z) belongs to Π if and only if we can find λ1, λ2 ∈ R such that the vector AM satisfies AM = λ1u + λ2v,

that is

(x − a1 , y − a2 , z − a3 ) = λ1 (u1 , u2 , u3 ) + λ2 (v1 , v2 , v3 ), λ1 , λ2 ∈ R. (1.3)

Equation (1.3) defines the plane Π using the parameters λ1 and λ2 . To find the implicit equation of the plane

we proceed as follows:

| u1   v1   x − a1 |
| u2   v2   y − a2 |  = 0
| u3   v3   z − a3 |
because the vectors u, v, and AM are linearly dependent.

We find that

α(x − a1 ) + β(y − a2 ) + γ(z − a3 ) = 0, (1.4)

where α, β, and γ are the following.

α = |u2 v2; u3 v3|,   β = −|u1 v1; u3 v3|,   γ = |u1 v1; u2 v2|

(2 × 2 determinants, rows separated by ";").

Equations (1.3) and (1.4) are equivalent descriptions of the plane Π.

Property:

Suppose that the plane Π is given by the equation:

α(x − a1 ) + β(y − a2 ) + γ(z − a3 ) = 0, (1.5)

Then the vector (α, β, γ) is orthogonal to Π.

Moreover, the equation of the line ∆ orthogonal to the plane Π and that goes through the point A =

(a1 , a2 , a3 ) is

(x − a1 , y − a2 , z − a3 ) = λ(α, β, γ), λ ∈ R,

or equivalently
(x − a1)/α = (y − a2)/β = (z − a3)/γ.

If α = 0 the equation of ∆ is

x = a1 ;   (y − a2)/β = (z − a3)/γ.

If α = 0 and β = 0 the equation of ∆ is

x = a1 ; y = a2 .
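The coefficients (α, β, γ) above are exactly the components of the cross product u × v, so the implicit equation of Π can be computed directly; a minimal Python/NumPy sketch with arbitrary example values for A, u, v:

import numpy as np

A = np.array([1.0, 2.0, 3.0])      # a point of the plane (example values)
u = np.array([1.0, 0.0, 2.0])      # two linearly independent direction vectors
v = np.array([0.0, 1.0, 1.0])

n = np.cross(u, v)                 # n = (alpha, beta, gamma), orthogonal to the plane
print(n)                           # [-2. -1.  1.]

# Implicit equation: alpha*(x - a1) + beta*(y - a2) + gamma*(z - a3) = 0.
# Points of the plane, such as A + u, A + v, A + 2.5u - 3v, satisfy it:
for M in (A + u, A + v, A + 2.5*u - 3*v):
    print(np.isclose(n @ (M - A), 0.0))    # True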
