Inv Matrices

This document teaches some of the fundamentals of linear algebra; in it is a lesson about inverse matrices.

Uploaded by

Gianna Williams

Matrix Inverses

1 Properties of Matrix Inverses


Definition 1.1. Invertible Matrix
An n × n matrix A is invertible if there exists an n × n matrix C such that
CA = In and AC = In . C is called the inverse of A and we write C = A−1 .

Theorem 1.1. The inverse of a matrix is unique.

Proof. Let B and C be inverses of A. Then B = BIn = B(AC) = (BA)C = In C = C. ∎

Since the inverse of a matrix is unique, it is appropriate to speak of *the* inverse
of A, as in there is only one. From the definition of the inverse we can also see that
A is the inverse of A−1 , since AA−1 = I and A−1 A = I.
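As a quick numerical check (using NumPy, which is not part of these notes), we can compute an inverse and confirm that both products give the identity, so A is also the inverse of A−1:

```python
import numpy as np

# An illustrative invertible matrix (not from the notes).
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

A_inv = np.linalg.inv(A)

# AA^-1 = I and A^-1 A = I, so A is also the inverse of A^-1.
print(np.allclose(A @ A_inv, np.eye(2)))   # True
print(np.allclose(A_inv @ A, np.eye(2)))   # True
```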

Example 1.1. Let A be a nonsingular n × n matrix and let’s say we want to


solve n systems of equations that all have A as their coefficient matrix. Writing
the systems as matrix equations we would get

Ax = b1
Ax = b2
⋮
Ax = bn

Note that since A is nonsingular, all the solutions to these equations will be
unique. For notational purposes, let’s call them c’s. So we can rewrite the
equations replacing the x’s with c’s.

Ac1 = b1
Ac2 = b2
⋮
Acn = bn

To solve the first of these equations, we row reduce the augmented matrix, and
since A is nonsingular, we know we’ll get the unique solution c1 (note, we don’t
know what c1 is until we compute it, we’ve just given it a name in advance
because we know it’s unique).
[A | b1] ∼ [In | c1]

And of course we do the same thing to solve all the other equations.

[A | b2] ∼ [In | c2]
[A | b3] ∼ [In | c3]
⋮
[A | bn] ∼ [In | cn]

Notice that in all these row reductions, the first n columns of the augmented
matrix are A, and they always row-reduce to In . In the first system, b1 row-
reduces to c1 , in the second one, b2 row-reduces to c2 , etc. We can eliminate a
lot of steps by doing all these row reductions at once, as follows
[A | b1 b2 · · · bn] ∼ [In | c1 c2 · · · cn]

We can simplify this even further if we write the b’s and c’s as matrices
B = [b1 b2 · · · bn],  C = [c1 c2 · · · cn]

Then we have
[A | B] ∼ [In | C]

To summarize, if we have a bunch of systems with the same nonsingular coeffi-
cient matrix, we can solve them all at once by forming a matrix B with all the
constant vectors, and the matrix C above will contain all the solution vectors.

Another interesting fact that will be useful later is that the equations

Ac1 = b1
Ac2 = b2
⋮
Acn = bn

tell us that
AC = [Ac1 Ac2 · · · Acn] = [b1 b2 · · · bn] = B

So if A and B are n × n matrices and


[A | B] ∼ [In | C]

then AC = B.
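The "all row reductions at once" idea can be sketched in NumPy (not part of the original notes): np.linalg.solve accepts a matrix right-hand side and solves for every column of B simultaneously, producing the C with AC = B.

```python
import numpy as np

# The coefficient matrix and right-hand sides of Example 1.2 below.
A = np.array([[1.0, -1.0,  0.0],
              [1.0,  0.0,  1.0],
              [2.0,  3.0, -1.0]])
B = np.array([[0.0, 1.0, -1.0],
              [1.0, 2.0,  2.0],
              [5.0, 1.0,  7.0]])   # columns are b1, b2, b3

# Solves all three systems at once; column j of C solves Ax = bj.
C = np.linalg.solve(A, B)

print(np.allclose(A @ C, B))   # True
```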

Example 1.2. Let


    ⎡1 −1  0⎤        ⎡0⎤        ⎡1⎤        ⎡−1⎤
A = ⎢1  0  1⎥,  b1 = ⎢1⎥,  b2 = ⎢2⎥,  b3 = ⎢ 2⎥
    ⎣2  3 −1⎦        ⎣5⎦        ⎣1⎦        ⎣ 7⎦

Solve the following systems all at once.

Ax = b1
Ax = b2
Ax = b3

We simply form a matrix from the constant vectors, augment A with it, and
row reduce:

                 ⎡1 −1  0 | 0 1 −1⎤   ⎡1 0 0 | 1 1 1⎤
[A | b1 b2 b3] = ⎢1  0  1 | 1 2  2⎥ ∼ ⎢0 1 0 | 1 0 2⎥
                 ⎣2  3 −1 | 5 1  7⎦   ⎣0 0 1 | 0 1 1⎦

Then we simply read the solution vectors from the second partition:

The solution set to Ax = b1 is {(1, 1, 0)}.
The solution set to Ax = b2 is {(1, 0, 1)}.
The solution set to Ax = b3 is {(1, 2, 1)}.
Theorem 1.2. Computing a Matrix Inverse


If A is a nonsingular n × n matrix and D is the matrix formed by augmenting
A with In , then the reduced row-echelon form of D will be In augmented with
the n × n matrix C, where AC = In . We can write this as:
If D = [A | In] ∼ [In | C], then AC = In , and C = A−1

Proof. As we saw in example 1.1, if A and B are n × n matrices and


[A | B] ∼ [In | C]

then AC = B. In this case, B = In , which is n × n, so we have the desired
result AC = In . ∎
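Theorem 1.2 translates directly into an algorithm. Below is a sketch in NumPy (not part of the notes; the function name and the partial-pivoting detail are our own additions):

```python
import numpy as np

def inverse_by_row_reduction(A):
    """Row-reduce D = [A | I] to [I | C]; then C = A^-1 (Theorem 1.2)."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])    # the augmented matrix D
    for col in range(n):
        # Swap in the row with the largest pivot candidate (numerical stability).
        pivot = col + np.argmax(np.abs(M[col:, col]))
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("matrix is singular")
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]                      # scale the pivot row to a leading 1
        for row in range(n):                       # clear the rest of the pivot column
            if row != col:
                M[row] -= M[row, col] * M[col]
    return M[:, n:]                                # the second partition is A^-1

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(np.allclose(inverse_by_row_reduction(A), np.linalg.inv(A)))   # True
```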


Example 1.3. Let A be the matrix given below. Is A invertible? If it is, find
the inverse of A.
    ⎡2 1 0⎤
A = ⎢0 3 2⎥
    ⎣1 1 2⎦

We can answer both parts at once by augmenting A with I3 and row-reducing.


If A row-reduces to I3 , then A is invertible, and the second partition is A−1 .

  2 1 1

1 0 0 5 −5

2 1 0 1 0 0 5
  1 2 2

[A | I] = 0 3 2 0 1 0 ∼ 
0 1 0 5 −5

5
1 1 2 0 0 1 3
0 0 1 − 10 1
− 10 3
5

So A is invertible, and its inverse is:


 2

5 − 51 1
5
A−1 =
 
 1 2 2
 5 5 −5
3 1 3
− 10 − 10 5
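We can confirm the hand computation numerically (a NumPy check, not part of the notes); entering the entries as tenths avoids fraction typos:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 2.0],
              [1.0, 1.0, 2.0]])

# The inverse found by row reduction, written with denominator 10.
A_inv = np.array([[ 4.0, -2.0,  2.0],
                  [ 2.0,  4.0, -4.0],
                  [-3.0, -1.0,  6.0]]) / 10.0

print(np.allclose(A @ A_inv, np.eye(3)))   # True
print(np.allclose(A_inv @ A, np.eye(3)))   # True
```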

Theorem 1.3. If A is an invertible n × n matrix, then the equation Ax = b


has the unique solution x = A−1 b for all b in Cn .

Proof. A−1 b is a solution to Ax = b since

AA−1 b = Ib = b

A−1 b is the unique solution, since if y is another solution, then

Ay = b
A−1 Ay = A−1 b
y = A−1 b
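Theorem 1.3 in NumPy (an illustrative check, not from the notes). In floating-point practice one prefers np.linalg.solve over forming A−1 explicitly, but x = A−1 b is exactly what the theorem states:

```python
import numpy as np

A = np.array([[1.0, -1.0,  0.0],
              [1.0,  0.0,  1.0],
              [2.0,  3.0, -1.0]])
b = np.array([0.0, 1.0, 5.0])    # b1 from Example 1.2

x = np.linalg.inv(A) @ b         # the unique solution x = A^-1 b

print(np.allclose(A @ x, b))     # True
```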

Theorem 1.4. Properties of Inverse Matrices

1. If A is invertible, then A−1 is invertible and (A−1 )−1 = A

2. If A and B are n×n and invertible, then so is AB, and (AB)−1 = B −1 A−1

3. If A is invertible, then so is AT , and (AT )−1 = (A−1 )T = A−T

4. If A is invertible and α is a non-zero scalar, then αA is invertible, and
(αA)−1 = (1/α)A−1
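Properties 2 and 3 are easy to spot-check numerically (a NumPy sketch, not from the notes; random matrices are invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
B = rng.normal(size=(4, 4))

# Property 2: the inverse of a product reverses the order of the factors.
print(np.allclose(np.linalg.inv(A @ B),
                  np.linalg.inv(B) @ np.linalg.inv(A)))      # True

# Property 3: transposing and inverting commute.
print(np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T))   # True
```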

Theorem 1.5. If A and B are n × n matrices, then AB is nonsingular iff A
and B are both nonsingular.

Theorem 1.6. If A and B are n × n matrices and AB = In , then BA = In .

Theorem 1.7. Equivalence of Nonsingular and Invertible


If A is an n × n matrix, then A is nonsingular iff A is invertible.

2 Invertible Matrix Theorem

Theorem 2.1. Invertible Matrix Theorem (IMT)


Let A be an n × n matrix. The following statements are logically equivalent.

1. A is an invertible matrix.

2. A is nonsingular.

3. A is row equivalent to the n × n identity matrix.

4. A has n pivot positions.

5. The equation Ax = 0 has only the trivial solution.

6. N (A) = {0}

7. The columns of A form a linearly independent set.

8. The linear transformation x 7→ Ax is one-to-one.

9. The equation Ax = b has a unique solution for every b in Cn .

10. The system LS(A, b) has a unique solution for every b in Cn .

11. The columns of A span Cn .

12. The linear transformation x 7→ Ax is onto.

13. There is an n × n matrix C such that CA = I.

14. There is an n × n matrix D such that AD = I.

15. AT is an invertible matrix.

16. det A ≠ 0.
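A few of the equivalent conditions can be checked at once in NumPy (an illustrative sketch, not from the notes, using the invertible matrix of Example 1.3):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 2.0],
              [1.0, 1.0, 2.0]])

print(np.linalg.matrix_rank(A) == 3)          # n pivot positions (condition 4)
print(not np.isclose(np.linalg.det(A), 0.0))  # det A != 0 (condition 16)
# Ax = 0 has only the trivial solution (condition 5).
print(np.allclose(np.linalg.solve(A, np.zeros(3)), np.zeros(3)))
```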

Definition 2.1. Unitary Matrix
A unitary matrix U is an n × n matrix that satisfies the equation U ∗ U = In .

Theorem 2.2. If U is a unitary matrix, then U is invertible and U −1 = U ∗

Proof. If U is unitary, then U ∗ U = In . Since In is nonsingular, both U and U ∗
are nonsingular, and therefore invertible. Since U and U ∗ are square and
U ∗ U = In , we also have U U ∗ = In (Theorem 1.6). By the definition of an
invertible matrix, U −1 = U ∗ . ∎
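A concrete check (NumPy, not from the notes): the normalized DFT matrix is a standard example of a unitary matrix, and its inverse is indeed its conjugate transpose:

```python
import numpy as np

n = 4
jk = np.outer(np.arange(n), np.arange(n))
F = np.exp(-2j * np.pi * jk / n) / np.sqrt(n)   # normalized DFT matrix

F_star = F.conj().T                             # U* = conjugate transpose

print(np.allclose(F_star @ F, np.eye(n)))       # U*U = I, so F is unitary
print(np.allclose(np.linalg.inv(F), F_star))    # U^-1 = U*
```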

Theorem 2.3. The columns of an n × n matrix U form an orthonormal set iff


U is a unitary matrix.

Proof. Let U = [u1 u2 · · · un ], so U ∗ is the matrix whose rows are
u∗1 , u∗2 , . . . , u∗n . Then

        ⎡u∗1⎤
        ⎢u∗2⎥
U ∗U =  ⎢ ⋮ ⎥ [u1 u2 · · · un ]
        ⎣u∗n⎦

        ⎡u∗1 u1   u∗1 u2   · · ·   u∗1 un⎤
        ⎢u∗2 u1   u∗2 u2   · · ·   u∗2 un⎥
     =  ⎢   ⋮        ⋮                ⋮ ⎥
        ⎣u∗n u1   u∗n u2   · · ·   u∗n un⎦

and this product equals In iff u∗i ui = ⟨ui , ui ⟩ = 1 for all i and
u∗i uj = ⟨ui , uj ⟩ = 0 for i ≠ j.

So {u1 , u2 , . . . , un } is an orthonormal set iff U ∗ U = In . ∎




Theorem 2.4. If U is an n × n unitary matrix and x and y are in Cn , then

⟨U x, U y⟩ = ⟨x, y⟩ and ‖U x‖ = ‖x‖

Proof.

⟨U x, U y⟩ = (U x)∗ U y
           = x∗ U ∗ U y
           = x∗ In y
           = x∗ y
           = ⟨x, y⟩

‖U x‖ = √⟨U x, U x⟩
      = √⟨x, x⟩
      = ‖x‖                                ∎
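Theorem 2.4 is easy to verify numerically (a NumPy sketch, not from the notes): the QR decomposition of a random complex matrix supplies a unitary Q, and np.vdot conjugates its first argument, matching ⟨x, y⟩ = x∗ y.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
Q, _ = np.linalg.qr(M)                # Q is unitary: Q*Q = I
x = rng.normal(size=5) + 1j * rng.normal(size=5)
y = rng.normal(size=5) + 1j * rng.normal(size=5)

# Inner products and norms are preserved by a unitary matrix.
print(np.isclose(np.vdot(Q @ x, Q @ y), np.vdot(x, y)))       # True
print(np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))   # True
```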
