
LINEAR ALGEBRA

FOR
ECONOMISTS

MODULE

TADELE BAYU
Aksum University
Ethiopia

2014

DEPARTMENT OF ECONOMICS
Preface

Contents

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i

1 Matrix Algebra 1
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Matrix Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.1 Matrix Addition and Subtraction . . . . . . . . . . . . . . . . 3
1.2.2 Matrix Multiplication . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.3 Inner Product and Outer Product . . . . . . . . . . . . . . . . 8
1.2.4 Transpose of a matrix . . . . . . . . . . . . . . . . . . . . . . 8
1.2.5 Special Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.3 Determinants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.3.1 Minors and Cofactors . . . . . . . . . . . . . . . . . . . . . . . 14
1.3.2 Higher order determinants . . . . . . . . . . . . . . . . . . . . 16
1.3.3 Adjoint (Adjugate) of Matrices . . . . . . . . . . . . . . . . . 17
1.4 Matrix Inversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.4.1 Derivation of a second order matrix inverse . . . . . . . . . . 20
1.4.2 Gauss Jordan Elimination Through Pivoting . . . . . . . . . . 22
1.5 Partitioned Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.6 Rank of a Matrix and Linear Independence . . . . . . . . . . . . . . . 25
1.6.1 Rank of a Matrix . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.7 Vectors and Vector Spaces . . . . . . . . . . . . . . . . . . . . . . . . 27
1.7.1 Vector space . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.7.2 Length of a column vector . . . . . . . . . . . . . . . . . . . . 28
1.7.3 Linear dependence . . . . . . . . . . . . . . . . . . . . . . . . 28
1.8 Powers and Trace of a Square Matrix . . . . . . . . . . . . . . . . . . 29
1.8.1 Trace of a Square Matrix . . . . . . . . . . . . . . . . . . . . . 30
1.9 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
1.10 SOLUTION FOR EXERCISES . . . . . . . . . . . . . . . . . . . . . 33
1.10.1 Exercise 1.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
1.10.2 Exercise 1.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.10.3 Exercise 1.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

1.10.4 Exercise 1.4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.10.5 Exercise 1.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
1.10.6 Exercise 1.6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
1.10.7 Exercise 1.7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
1.10.8 Exercise 1.8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
1.10.9 Exercise 1.9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
1.10.10 Exercise 1.10 . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
1.10.11 Exercise 1.11 . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
1.10.12 Exercise 1.12 . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
1.11 Summary Question . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

2 Systems of Linear Equations 52


2.1 Matrix Representation of Linear Equations . . . . . . . . . . . . . . . 52
2.2 Solving Systems of Linear Equations . . . . . . . . . . . . . . . . . . 53
2.2.1 Simultaneous Equation . . . . . . . . . . . . . . . . . . . . . . 55
2.2.2 Gauss-Jordan Method . . . . . . . . . . . . . . . . . . . . . . 57
2.2.3 Cramer's Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
2.2.4 Inverse Method . . . . . . . . . . . . . . . . . . . . . . . . . . 61
2.3 Homogeneous Systems of Linear Equations . . . . . . . . . . . . . . . 62
2.4 Economic Applications . . . . . . . . . . . . . . . . . . . . . . . . . 66
2.5 SOLUTION FOR EXERCISES . . . . . . . . . . . . . . . . . . . . . 70
2.5.1 Exercise 2.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
2.5.2 Exercise 2.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
2.5.3 Exercise 2.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
2.5.4 Exercise 2.4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81

3 Special Determinants and Matrices in Economics 83


3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
3.2 The Jacobian Determinant . . . . . . . . . . . . . . . . . . . . . . . . 83
3.3 The Hessian Determinant . . . . . . . . . . . . . . . . . . . . . . . . 85
3.3.1 Unconstrained optimization and |H| . . . . . . . . . . . . . . 85
3.4 Eigenvectors and Eigenvalues . . . . . . . . . . . . . . . . . . . . . . 87
3.5 Quadratic Forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
3.5.1 Positive and negative definiteness . . . . . . . . . . . . . . . . 93
3.5.2 Exercise 3.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
3.5.3 Exercise 3.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95

4 Input-Output and Linear Programming 96


4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
4.2 Input-Output Model (Leontief Model) . . . . . . . . . . . . . . . . . 96
4.3 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
4.3.1 Assumptions of Input-Output Models . . . . . . . . . . . . . . 97
4.3.2 The Closed Economy Model . . . . . . . . . . . . . . . . . . . 97

4.3.3 The Open Economy Model . . . . . . . . . . . . . . . . . . . . 97
4.3.4 Exercise 4.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
4.4 Linear Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
4.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
4.4.2 Formulating Linear Programming Problems . . . . . . . . . . 102
4.4.3 Solving Linear Programming Problems . . . . . . . . . . . . . 102
4.4.4 The Graphic Method . . . . . . . . . . . . . . . . . . . . . . . 102
4.4.5 The Simplex Method . . . . . . . . . . . . . . . . . . . . . . . 104
4.4.6 The Duality Theorem . . . . . . . . . . . . . . . . . . . . . . . 106
4.4.7 Limitations of Linear Programming . . . . . . . . . . . . . . . 106
4.4.8 Summary Questions . . . . . . . . . . . . . . . . . . . . . . . 108

Chapter 1
Matrix Algebra

Objectives

This chapter will help students to:


• Understand Matrix concepts

• Develop mathematical skills about matrix operation

• Know facts about matrix inversion

• Learn the terms in matrix algebra

• Develop ability in economic application of matrix

1.1 Introduction
A matrix is an array of numbers or parameters arranged in rows and columns:

$$A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{pmatrix}$$
Matrix A is m × n matrix (i.e., m rows and n columns) where the entry in the ith row
and j-th column is aij. A matrix that contains m rows and n columns can thus be
expressed as

$$A = (a_{ij})_{m \times n}$$
The number of rows m and the number of columns n give the dimension of matrix A; the row number always precedes the column number.

Two matrices are said to be of the same size if they have the same number of rows and the same number of columns. Hence, matrix equality is defined only for two matrices of the same size: given two m × n matrices A and B, A = B if aij = bij for every i, j.

Example
Suppose $A = \begin{pmatrix} 1 & -9 & 7 \\ 0 & 1 & -5 \end{pmatrix}$ and $B = \begin{pmatrix} 1 & -9 \\ 0 & 1 \\ 7 & -5 \end{pmatrix}$.
Since the size of matrix A is 2 × 3 and that of B is 3 × 2, A ≠ B.

Example
Find all values of x and y so that
$$\begin{pmatrix} x^2 & y-x \\ 2 & y^2 \end{pmatrix} = \begin{pmatrix} 1 & x-y \\ x+1 & 1 \end{pmatrix}$$
We see that the size of each matrix is 2 × 2, so we set the corresponding entries equal:
$$x^2 = 1 \qquad y-x = x-y \qquad 2 = x+1 \qquad y^2 = 1$$
We see that x = ±1 and y = ±1. From 2 = x + 1, we get that x must be 1. From y − x = x − y, we get that 2y = 2x and so x = y. Thus y is also 1.

Exercise 1.1 State the dimension of the following matrices

1. $A = \begin{pmatrix} 2 & 4 & 7 \end{pmatrix}$

2. $B = \begin{pmatrix} 2 \\ 6 \\ 4 \end{pmatrix}$

3. $C = \begin{pmatrix} 2 & 4 \\ 6 & 5 \end{pmatrix}$

4. $D = \begin{pmatrix} 2 & 4 & 7 \\ 6 & 5 & 4 \end{pmatrix}$

5. $E = \begin{pmatrix} 2 & 4 \\ 6 & 5 \\ 4 & 7 \end{pmatrix}$

6. $F = \begin{pmatrix} 2 & 4 & 7 \\ 6 & 5 & 4 \\ 4 & 7 & 0 \end{pmatrix}$

1.2 Matrix Operations


1.2.1 Matrix Addition and Subtraction
Definition
If A = (aij)m×n and B = (bij)m×n, we define the sum and difference of A and B as the m × n matrix
$$A \pm B = (a_{ij} \pm b_{ij})$$
Matrix addition and subtraction are carried out by adding (subtracting) the corresponding elements of the two matrices. But one thing you need to know is that both matrix addition and subtraction are possible if and only if the two matrices under consideration have the same order.

Definition Let A = (aij) and B = (bij) be m × n matrices. We define their sum, denoted by A + B, and their difference, denoted by A − B, to be the respective matrices (aij + bij) and (aij − bij). We define scalar multiplication by: for any r ∈ R, rA is the matrix (raij). These definitions should appear quite natural: when two matrices have the same size, we just add or subtract their corresponding entries, and for scalar multiplication, we just multiply each entry by the scalar. Here are some examples.

Compute A + B where A and B are the matrices defined as:
$$A = \begin{pmatrix} 1 & -3 & -5 & 1 \\ 9 & 6 & -4 & -9 \\ -3 & 7 & -9 & -4 \\ 0 & 4 & 3 & 9 \end{pmatrix} \qquad B = \begin{pmatrix} 2 & 9 & 6 & -7 \\ 6 & -9 & 2 & 4 \\ -3 & -8 & 1 & -3 \\ -4 & 9 & -1 & 6 \end{pmatrix}$$
The sum of two matrices is defined when both matrices have equal size, and the result is a new matrix of equal size, where each entry is obtained by adding the entries at the same position in both matrices. Matrices of different sizes cannot be added.

Step 1: Add the corresponding entries of the matrices A and B.
$$A + B = \begin{pmatrix} 1+2 & -3+9 & -5+6 & 1+(-7) \\ 9+6 & 6+(-9) & -4+2 & -9+4 \\ -3+(-3) & 7+(-8) & -9+1 & -4+(-3) \\ 0+(-4) & 4+9 & 3+(-1) & 9+6 \end{pmatrix}$$

Step 2: Final solution.
$$A + B = \begin{pmatrix} 3 & 6 & 1 & -6 \\ 15 & -3 & -2 & -5 \\ -6 & -1 & -8 & -7 \\ -4 & 13 & 2 & 15 \end{pmatrix}$$
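The entry-wise rule just applied is easy to check numerically. The module itself contains no programs, so the following is only an illustrative Python sketch (the function name `mat_add` is ours), with matrices stored as lists of rows:

```python
def mat_add(A, B):
    """Entry-wise sum; defined only for matrices of the same order."""
    if len(A) != len(B) or len(A[0]) != len(B[0]):
        raise ValueError("matrices must have the same order")
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# the matrices from the example above
A = [[1, -3, -5, 1], [9, 6, -4, -9], [-3, 7, -9, -4], [0, 4, 3, 9]]
B = [[2, 9, 6, -7], [6, -9, 2, 4], [-3, -8, 1, -3], [-4, 9, -1, 6]]
print(mat_add(A, B))  # [[3, 6, 1, -6], [15, -3, -2, -5], [-6, -1, -8, -7], [-4, 13, 2, 15]]
```

Calling `mat_add` on matrices of different orders raises an error, mirroring the conformability requirement stated above.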

Exercise 1.2
Let $A = \begin{pmatrix} 2 & 3 \\ -1 & 2 \end{pmatrix}$, $B = \begin{pmatrix} -1 & 2 \\ 6 & -2 \end{pmatrix}$, and $C = \begin{pmatrix} 1 & 2 & 3 \\ -1 & -2 & -3 \end{pmatrix}$.
Compute each of the following, if possible.
1. A + B

2. B − A

3. B + C

4. 4C

5. 2A − 3B

Basic rules of matrix addition and multiplication by a scalar

Given conformable matrices A, B, and C, the following rules of matrix addition hold true:

♥ (A + B) + C = A + (B + C)

♥ A+B =B+A

♥ A+0=A

♥ A + (−A) = A − A = 0

For any constant α and β

♥ (α + β)A = αA + βA

♥ α(A + B) = αA + αB

Exercise 1.3: Given the following matrices, answer questions 1, 2, and 3.
$$A = \begin{pmatrix} 2 & 4 & 7 \\ 6 & 5 & 4 \\ 4 & 7 & 0 \end{pmatrix} \qquad B = \begin{pmatrix} 0 & 8 & 7 \\ 0 & 5 & 3 \\ 1 & 5 & 0 \end{pmatrix} \qquad C = \begin{pmatrix} 1 & 0 & 3 \\ 1 & 0 & 2 \\ 1 & 0 & 0 \end{pmatrix}$$

1. 2A + 3B

2. 3A − 3B

3. 3A − A (try it!)

4. Compute A + B, A − B, B + A, and A + B + C for the same matrices.

1.2.2 Matrix Multiplication


Definition
Suppose that A = (aij)m×n and B = (bij)n×p are matrices. Then the product of the two matrices A and B, call it C, is a matrix with dimension m × p, that is C = (cij)m×p, whose element in the i-th row and j-th column is the scalar product
$$c_{ij} = a_{i1}b_{1j} + a_{i2}b_{2j} + \dots + a_{in}b_{nj} = \sum_{k=1}^{n} a_{ik}b_{kj}$$
of the i-th row of A with the j-th column of B.


$$\begin{pmatrix} 1^{st}\text{ row of }A\text{ times }1^{st}\text{ column of }B & 1^{st}\text{ row of }A\text{ times }2^{nd}\text{ column of }B \\ 2^{nd}\text{ row of }A\text{ times }1^{st}\text{ column of }B & 2^{nd}\text{ row of }A\text{ times }2^{nd}\text{ column of }B \end{pmatrix}$$
Note:

1. In general, AB does not equal BA.

2. If AB = AC, it is not true that B = C.

3. If the product AB is the zero matrix, it cannot be concluded that either A or B is a zero matrix.

Example Suppose $A = \begin{pmatrix} a & b & c \end{pmatrix}$ and $B = \begin{pmatrix} x \\ y \\ z \end{pmatrix}$. Find
1. AB
2. BA
$$AB = \begin{pmatrix} a & b & c \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = ax + by + cz$$
$$BA = \begin{pmatrix} x \\ y \\ z \end{pmatrix}\begin{pmatrix} a & b & c \end{pmatrix} = \begin{pmatrix} xa & xb & xc \\ ya & yb & yc \\ za & zb & zc \end{pmatrix}$$
In general,
$$(AB)_{ij} = \sum_{k=1}^{n} A_{ik}B_{kj}$$
Thus the product AB is defined only if the number of columns in A is equal to the number of rows in B (in this example, 3). Each entry may be computed one at a time.
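The definition of cij as a sum over the shared index translates directly into nested loops. A minimal Python sketch (the name `mat_mul` is ours), again with matrices stored as lists of rows:

```python
def mat_mul(A, B):
    """C = AB with c_ij = sum_k a_ik * b_kj; defined only when
    the number of columns of A equals the number of rows of B."""
    n = len(A[0])
    if len(B) != n:
        raise ValueError("columns of A must equal rows of B")
    p = len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(len(A))]

# a 1x3 row vector times a 3x1 column vector gives a 1x1 scalar product,
# while the reverse order gives a full 3x3 matrix, as in the example above
print(mat_mul([[1, 2, 3]], [[4], [5], [6]]))  # [[32]]
print(mat_mul([[4], [5], [6]], [[1, 2, 3]]))  # [[4, 8, 12], [5, 10, 15], [6, 12, 18]]
```

Note that the two orderings produce results of different shapes, illustrating that AB and BA need not even be comparable.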

Exercise 1.4

Multiply, if possible, the following matrices:

1. Let $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$ and $B = \begin{pmatrix} 4 & -3 \\ -2 & 1 \end{pmatrix}$

2. Is it possible to multiply $\begin{pmatrix} 2 & 2 & 9 \\ -1 & 0 & 8 \end{pmatrix}\begin{pmatrix} 1 & 2 & 3 \\ 5 & 2 & 3 \end{pmatrix}$?

3. Can we multiply $\begin{pmatrix} 2 & 2 & 9 \\ -1 & 0 & 8 \end{pmatrix}\begin{pmatrix} 1 & 2 \\ 5 & 2 \\ -1 & 3 \end{pmatrix}$?

4. If $A = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix}$ and $B = \begin{pmatrix} x \\ y \\ z \end{pmatrix}$, find AB and BA.

5. If $A = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix}$ and $B = \begin{pmatrix} \alpha & \beta & \gamma \\ \lambda & \mu & \nu \\ \rho & \sigma & \tau \end{pmatrix}$, find AB if possible, and try BA by yourself!

Dear learner, you need to know that matrix multiplication is possible only if the number of columns in the lead matrix is equal to the number of rows in the lag matrix; in this case we say that A and B are conformable for the product AB. Moreover, if A and B are two matrices, then AB might be defined even if BA is not.

Properties of Matrix multiplication

Given conformable matrices A, B, and C, the following rules of matrix multiplication hold true:

1. AB ≠ BA — matrix multiplication is not commutative. Special cases:

(a) Identity element: if A is a square matrix, then AI = IA = A, where I is the identity matrix of the same order.

(b) Inverse matrix: AB = BA when B is the inverse of A.

2. (AB)C = A(BC) Associative Law

3. A(B + C) = AB + AC Left Distributive Law

4. (A + B)C = AC + BC Right Distributive Law

5. (rA)B = r(AB) = A(rB) (Scalar multiplication commutes with matrix


multiplication.)

1.2.3 Inner Product and Outer Product


The inner product of two vectors in matrix form is equivalent to a column vector multiplied on the left by a row vector:
$$a \cdot b = a^T b = \begin{pmatrix} a_1 & a_2 & \cdots & a_n \end{pmatrix}\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix} = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n = \sum_{i=1}^{n} a_i b_i$$
where $a^T$ denotes the transpose of a. The outer product (also known as the dyadic product or tensor product) of two vectors in matrix form is equivalent to a row vector multiplied on the left by a column vector:
$$a \otimes b = ab^T = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix}\begin{pmatrix} b_1 & b_2 & \cdots & b_n \end{pmatrix} = \begin{pmatrix} a_1 b_1 & a_1 b_2 & \cdots & a_1 b_n \\ a_2 b_1 & a_2 b_2 & \cdots & a_2 b_n \\ \vdots & \vdots & \ddots & \vdots \\ a_n b_1 & a_n b_2 & \cdots & a_n b_n \end{pmatrix}$$
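Both products reduce to one-line expressions over the components. An illustrative Python sketch (the function names `inner` and `outer` are ours):

```python
def inner(a, b):
    """a . b = a^T b: a single scalar, the sum of pairwise products."""
    if len(a) != len(b):
        raise ValueError("vectors must have the same length")
    return sum(x * y for x, y in zip(a, b))

def outer(a, b):
    """a (x) b = a b^T: a matrix whose (i, j) entry is a_i * b_j."""
    return [[x * y for y in b] for x in a]

print(inner([1, 2, 3], [4, 5, 6]))   # 32
print(outer([1, 2, 3], [4, 5]))      # [[4, 5], [8, 10], [12, 15]]
```

The inner product requires equal lengths; the outer product is defined for vectors of any lengths n and m and returns an n × m matrix.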

1.2.4 Transpose of a matrix


Definition
A matrix formed by interchanging the rows and columns of an m × n matrix, so that the first row of the old matrix becomes the first column of the new matrix, is called the transpose of the matrix. The new matrix has order n × m and is denoted by AT or A′. In other words, if matrix A has dimension m × n, then its transpose AT must have dimension n × m.
$$A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{pmatrix} \qquad A^T = \begin{pmatrix} a_{11} & a_{21} & \dots & a_{m1} \\ a_{12} & a_{22} & \dots & a_{m2} \\ \vdots & \vdots & \ddots & \vdots \\ a_{1n} & a_{2n} & \dots & a_{mn} \end{pmatrix}$$
Example Find the transpose of the following matrices

1. Given $A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}_{3\times 3}$

Answer: $A^T = \begin{pmatrix} 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & 9 \end{pmatrix}_{3\times 3}$

2. Given $A = \begin{pmatrix} 3 & 4 \\ 1 & 7 \end{pmatrix}_{2\times 2}$

Answer: $A^T = \begin{pmatrix} 3 & 1 \\ 4 & 7 \end{pmatrix}_{2\times 2}$

Properties of Transpose of a matrix

Given conformable matrices A, B, and C, the following rules for transposes hold true:

♥ (AT)T = A [transpose of a transpose is the original matrix]

♥ (B ± C)T = BT ± CT [transpose of a sum or difference]

♥ (kA)T = kAT [scalar multiple]

♥ (AB)T = BT AT [transpose of a product]
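These properties are easy to verify numerically. A short Python sketch (the helper names are ours); `zip(*A)` iterates over the columns of A, which is exactly the row-column interchange:

```python
def transpose(A):
    """Interchange rows and columns: row i of A becomes column i of A^T."""
    return [list(col) for col in zip(*A)]

def mat_mul(A, B):
    # small helper used only to check the product rule (AB)^T = B^T A^T
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(transpose(A))  # [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
```

Checking, for example, that `transpose(mat_mul(P, Q)) == mat_mul(transpose(Q), transpose(P))` for a pair of conformable matrices P and Q confirms the reversed order in the product rule.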

Exercise 1.5

1. Suppose
$$A = \begin{pmatrix} 7 & -4 & -2 \\ -7 & -6 & -3 \\ 5 & 8 & -4 \\ 4 & 2 & 3 \\ -8 & 0 & 8 \end{pmatrix}$$
(a) Compute AT
(b) State the dimension of AT

2. Given two matrices A and B as
$$A = \begin{pmatrix} -1 & -5 & 9 \\ -4 & 3 & 4 \\ 5 & 8 & 7 \\ -5 & -9 & 3 \\ -1 & 1 & -5 \end{pmatrix} \qquad B = \begin{pmatrix} 6 & -6 & 5 \\ 7 & 4 & 4 \\ 3 & -9 & -9 \end{pmatrix}$$
Find C = (AB)T

3. $A = \begin{pmatrix} -5 & 6 & 9 \\ -7 & 0 & -4 \\ 4 & 5 & 6 \end{pmatrix}$, $B = \begin{pmatrix} 0 & -6 & -4 \\ -6 & 8 & 1 \\ -5 & -5 & -3 \end{pmatrix}$. Find BT AT.

4. If r is a scalar element and A and B represent two different 2 × 2 matrices,
(a) show that the transpose of a transpose matrix is the original matrix
(b) show that the transpose of two added matrices is the same as the addition of the two transpose matrices
(c) show that when a scalar element is multiplied to a matrix, the order of transposition is irrelevant
(d) show that the transpose of a product of matrices equals the product of their transposes in reverse order

1.2.5 Special Matrices


1. Square matrix
A matrix is square if its number of rows equals its number of columns. For example, the following matrices are square:
$$B = \begin{pmatrix} 0 & 8 & 7 \\ 0 & 5 & 3 \\ 1 & 5 & 0 \end{pmatrix} \qquad M = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 3 \\ 7 & 5 & 0 \end{pmatrix} \qquad C = \begin{pmatrix} 0 & 8 \\ 0 & 5 \end{pmatrix}$$

2. Row vector or column vector
If a matrix is composed of a single column, so that its dimension is m × 1, it is a column vector:
$$A = \begin{pmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{pmatrix} \qquad \text{Example: } A = \begin{pmatrix} 10 \\ 5 \\ \vdots \\ 2 \end{pmatrix}$$
Similarly, if a matrix is composed of a single row, so that its dimension is 1 × n, it is a row vector:
$$A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \end{pmatrix} \qquad \text{Example: } A = \begin{pmatrix} 10 & 5 & \dots & 2 \end{pmatrix}$$
3. Diagonal matrix
A matrix is said to be diagonal if its off-diagonal elements (i.e., aij with i ≠ j) are all zero and at least one of its diagonal elements is non-zero, i.e., aii ≠ 0 for some i = 1, ..., n.
Example
$$A = \begin{pmatrix} 4 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix} \qquad B = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 8 \end{pmatrix}$$
4. Identity matrix
An identity matrix of order n, denoted by In (most of the time simply by I), is the n × n matrix having ones along the principal diagonal and zeros off the principal diagonal.
$$I_n = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{pmatrix}$$
If A is any m × n matrix, then AIn = A. This is because an identity matrix plays the role of 1 in the real number system.
5. Triangular matrix
A matrix A is said to be lower (upper) triangular if aij = 0 for i < j (for i > j).

Example
$$A = \begin{pmatrix} 1 & 2 & 3 \\ 0 & 5 & 3 \\ 0 & 0 & 2 \end{pmatrix} \qquad B = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 5 & 0 \\ 3 & 4 & 8 \end{pmatrix}$$
Here A is upper triangular and B is lower triangular.

6. Symmetric matrix
A square matrix with the property A = AT is called a symmetric matrix. In other words, a square matrix A = (aij)n×n is symmetric if and only if aij = aji for all i, j.
Example
$$\begin{pmatrix} -3 & 2 \\ 2 & 0 \end{pmatrix} \qquad \begin{pmatrix} a & b & c \\ b & d & e \\ c & e & f \end{pmatrix} \qquad \begin{pmatrix} 2 & -1 & 5 \\ -1 & -3 & 2 \\ 5 & 2 & 8 \end{pmatrix}$$

1.3 Determinants
If a matrix is square (that is, if it has the same number of rows as columns), then we may associate with it a unique number called its determinant. So determinants are defined only for square matrices, and the determinant of A is denoted by |A|. Using determinants we can solve matrix equations, and they are also useful in determining whether a matrix has an inverse without actually going through the process of trying to find one.

Note:

• The determinant of a product AB is the product of the determinants of square


matrices A and B
det(AB) = det(A) det(B)

• Since det(A) and det(B) are just numbers and so commute,

det(AB) = det(BA), even when AB ≠ BA

Given a 2 × 2 matrix A as $A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$, its determinant is given by
$$|A| = (a_{11})(a_{22}) - (a_{21})(a_{12})$$

Example Find the determinant of the following matrices

1. $A = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}$, so |A| = (2)(1) − (1)(1) = 1

2. $A = \begin{pmatrix} 4 & 5 \\ 4 & 5 \end{pmatrix}$, so |A| = (4)(5) − (5)(4) = 0

Given a 3 × 3 matrix A as
$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$$
its determinant
$$|A| = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}$$
is given by
$$|A| = a_{11}\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - a_{12}\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} + a_{13}\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}$$
$$|A| = a_{11}a_{22}a_{33} - a_{11}a_{32}a_{23} - a_{12}a_{21}a_{33} + a_{12}a_{31}a_{23} + a_{13}a_{21}a_{32} - a_{13}a_{31}a_{22}$$

Example Find the determinant of the matrix
$$A = \begin{pmatrix} 2 & 1 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}$$
$$|A| = 2\begin{vmatrix} 5 & 6 \\ 8 & 9 \end{vmatrix} - 1\begin{vmatrix} 4 & 6 \\ 7 & 9 \end{vmatrix} + 3\begin{vmatrix} 4 & 5 \\ 7 & 8 \end{vmatrix} = -9$$
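The expansion along the first row works for determinants of any order, which suggests a short recursive routine. An illustrative Python sketch (the name `det` is ours):

```python
def det(A):
    """Determinant of a square matrix by cofactor (Laplace) expansion
    along the first row; works for any order n >= 1."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j; the sign alternates with j
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[2, 1, 3], [4, 5, 6], [7, 8, 9]]))  # -9, as in the example above
```

Recursive expansion is fine for the small matrices in this module; for large matrices, row reduction (covered below) is far faster.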
Exercise 1.6 Find the determinant of each of the following matrices

1. $A = \begin{pmatrix} 5 & 2 & 8 \\ 8 & 0 & 6 \\ 7 & 9 & 0 \end{pmatrix}$

2. $A = \begin{pmatrix} 0 & 2 & 8 \\ 0 & 0 & 6 \\ 0 & 9 & 0 \end{pmatrix}$

3. $A = \begin{pmatrix} 4 & 8 & 8 \\ 2 & 4 & 6 \\ 1 & 1 & 0 \end{pmatrix}$

4. $D = \begin{pmatrix} -1 & -5 & -8 & 5 \\ 0 & 0 & -5 & -1 \\ 6 & -4 & -5 & -2 \\ -4 & 9 & -8 & -7 \end{pmatrix}$

Self Test Exercise
Suppose matrices A and B are
$$A = \begin{pmatrix} 2 & 4 & 7 \\ 6 & 5 & 4 \\ 4 & 7 & 0 \end{pmatrix} \qquad B = \begin{pmatrix} 0 & 8 & 7 \\ 0 & 5 & 3 \\ 1 & 5 & 0 \end{pmatrix}$$

1. Compute |AB|

2. Find |BA|

3. Find |DT| if matrix $D = \begin{pmatrix} 1 & 0 & 4 \\ 0 & 3 & 7 \\ 4 & 7 & 2 \end{pmatrix}$

1.3.1 Minors and Cofactors


The elements of a matrix remaining after the deletion of the i-th row and the j-th column form a submatrix, and the determinant of that submatrix is called a minor. Thus, the minor |Mij| is the determinant of the submatrix formed by deleting the i-th row and j-th column of the matrix.
If $A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$, then the associated minors for the first row are
$$|M_{11}| = \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} \qquad |M_{12}| = \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} \qquad |M_{13}| = \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}$$
Here |M11| is the minor of a11, |M12| is the minor of a12, and |M13| is the minor of a13, so that
$$|A| = a_{11}|M_{11}| - a_{12}|M_{12}| + a_{13}|M_{13}|$$
A minor with its associated sign is called a cofactor. The rule for the cofactor is given by
$$|C_{ij}| = (-1)^{i+j}|M_{ij}|$$
This implies that if the sum of the subscripts is an even number, |Cij| = |Mij|, since −1 raised to an even power is positive; but if the sum of the subscripts is an odd number, |Cij| = −|Mij|. It is irrelevant which row or column we choose to expand the determinant of a square matrix: we always obtain the same result. The sign pattern is given by
$$\begin{pmatrix} + & - & + & \cdots \\ - & + & - & \cdots \\ + & - & + & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$

Example Compute the matrix of cofactors of the square matrix A defined by
$$A = \begin{pmatrix} -6 & 0 & -8 \\ 1 & 2 & 3 \\ -8 & -7 & -1 \end{pmatrix}$$
The cofactor of the entry located in the i-th row and j-th column is defined to be the determinant of the submatrix that remains after the i-th row and j-th column are deleted from the matrix, with the sign changed if i + j is odd.

Step 1: Compute the cofactor for each entry of the matrix A.
$$C_{11} = \begin{vmatrix} 2 & 3 \\ -7 & -1 \end{vmatrix} = 2(-1) - 3(-7) = 19$$
$$C_{12} = -\begin{vmatrix} 1 & 3 \\ -8 & -1 \end{vmatrix} = -(1(-1) - 3(-8)) = -23$$
$$C_{13} = \begin{vmatrix} 1 & 2 \\ -8 & -7 \end{vmatrix} = 1(-7) - 2(-8) = 9$$
$$C_{21} = -\begin{vmatrix} 0 & -8 \\ -7 & -1 \end{vmatrix} = -(0(-1) - (-8)(-7)) = 56$$
$$C_{22} = \begin{vmatrix} -6 & -8 \\ -8 & -1 \end{vmatrix} = (-6)(-1) - (-8)(-8) = -58$$
$$C_{23} = -\begin{vmatrix} -6 & 0 \\ -8 & -7 \end{vmatrix} = -((-6)(-7) - 0(-8)) = -42$$
$$C_{31} = \begin{vmatrix} 0 & -8 \\ 2 & 3 \end{vmatrix} = 0(3) - (-8)(2) = 16$$
$$C_{32} = -\begin{vmatrix} -6 & -8 \\ 1 & 3 \end{vmatrix} = -((-6)(3) - (-8)(1)) = 10$$
$$C_{33} = \begin{vmatrix} -6 & 0 \\ 1 & 2 \end{vmatrix} = (-6)(2) - 0(1) = -12$$

Step 2: Compose the matrix using the cofactors previously computed.
$$\operatorname{Cof}(A) = \begin{pmatrix} 19 & -23 & 9 \\ 56 & -58 & -42 \\ 16 & 10 & -12 \end{pmatrix}$$

Example Calculate the determinant of the matrix A defined as
$$A = \begin{pmatrix} -2 & -7 & 7 \\ 4 & 8 & -5 \\ 5 & 1 & -4 \end{pmatrix}$$
The determinant can be computed by multiplying the entries in any row (or column) by their cofactors and adding the resulting products, where the cofactor of the entry located in the i-th row and j-th column is the determinant of the submatrix that remains after the i-th row and j-th column are deleted, with the sign changed if i + j is odd. From this definition it can be seen that cofactors involve determinants of lower order. Using this technique recursively, together with the formulas for determinants of order 2 and 3, we have a method for calculating any determinant. In practice, when expanding cofactors along a row or column to calculate the determinant, use the row or column with the greatest number of zeros, since the cofactors of zero entries need not be computed. When a matrix is triangular, its determinant is the product of the entries on the main diagonal of the matrix.

Step 1: Use the formula for determinants of order 3:
$$|A| = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31} - a_{12}a_{21}a_{33} - a_{11}a_{23}a_{32}$$
$$|A| = (-2)(8)(-4) + (-7)(-5)(5) + (7)(4)(1) - (7)(8)(5) - (-7)(4)(-4) - (-2)(-5)(1) = -135$$
Exercise 1.7 Find the cofactor matrix for each of the following matrices

1. $A = \begin{pmatrix} -7 & 0 & -5 \\ -9 & 0 & 5 \\ 5 & 8 & -3 \end{pmatrix}$

2. $B = \begin{pmatrix} -5 & -7 & 5 \\ 2 & -9 & 9 \\ -8 & 8 & 5 \end{pmatrix}$

3. $C = \begin{pmatrix} -3 & 9 & -6 & -3 \\ 8 & -8 & -1 & -1 \\ -7 & -3 & -4 & 5 \\ -5 & 2 & 1 & 2 \end{pmatrix}$

1.3.2 Higher order determinants


Laplace expansion is a method of finding determinants in terms of cofactors. The Laplace expansion of a third order determinant can be expressed as
$$|A| = a_{11}|C_{11}| + a_{12}|C_{12}| + a_{13}|C_{13}|$$
where |Cij| is a cofactor based on a second order determinant. In this way Laplace expansion allows evaluation of a determinant along any row or column. For the n-th order case,
$$|A| = a_{11}|C_{11}| + a_{12}|C_{12}| + \dots + a_{1n}|C_{1n}|$$

PROPERTIES OF DETERMINANT

The following are some of the properties of a determinant

1. Adding or subtracting any multiple of one row (column) to or from another row (column) will have no effect on the determinant.

2. Interchanging any two rows or columns of a matrix will change the sign ,
but not the absolute value of the determinant.

3. Multiplying the elements of any row or column by a constant will cause


the determinant to be multiplied by the constant.

4. The determinant of a triangular matrix i.e., a matrix with zero elements


everywhere above or below the principal diagonal ,is equal to the product
of the elements on the principal diagonal.

5. The determinant of a matrix equals the determinant of its transpose |A| =


|AT |

6. If all the elements of any row or column are zero , then the determinant
is zero.

7. If two rows or columns are identical or proportional, then the determinant will be zero.

1.3.3 Adjoint (Adjugate) of Matrices


A cofactor matrix is a matrix in which every element aij is replaced with its cofactor |Cij|. An adjoint matrix is the transpose of a cofactor matrix. So, given a cofactor matrix
$$C = \begin{pmatrix} |C_{11}| & |C_{12}| & |C_{13}| \\ |C_{21}| & |C_{22}| & |C_{23}| \\ |C_{31}| & |C_{32}| & |C_{33}| \end{pmatrix}$$
the adjoint matrix is
$$\operatorname{Adj} A = C^T = \begin{pmatrix} |C_{11}| & |C_{21}| & |C_{31}| \\ |C_{12}| & |C_{22}| & |C_{32}| \\ |C_{13}| & |C_{23}| & |C_{33}| \end{pmatrix}$$

Example Given a matrix
$$A = \begin{pmatrix} 2 & 3 & 1 \\ 4 & 1 & 2 \\ 5 & 3 & 4 \end{pmatrix}$$
replacing the elements aij with their cofactors |Cij| gives
$$C_{11} = \begin{vmatrix} 1 & 2 \\ 3 & 4 \end{vmatrix} = -2 \qquad C_{12} = -\begin{vmatrix} 4 & 2 \\ 5 & 4 \end{vmatrix} = -6 \qquad C_{13} = \begin{vmatrix} 4 & 1 \\ 5 & 3 \end{vmatrix} = 7$$
$$C_{21} = -\begin{vmatrix} 3 & 1 \\ 3 & 4 \end{vmatrix} = -9 \qquad C_{22} = \begin{vmatrix} 2 & 1 \\ 5 & 4 \end{vmatrix} = 3 \qquad C_{23} = -\begin{vmatrix} 2 & 3 \\ 5 & 3 \end{vmatrix} = 9$$
$$C_{31} = \begin{vmatrix} 3 & 1 \\ 1 & 2 \end{vmatrix} = 5 \qquad C_{32} = -\begin{vmatrix} 2 & 1 \\ 4 & 2 \end{vmatrix} = 0 \qquad C_{33} = \begin{vmatrix} 2 & 3 \\ 4 & 1 \end{vmatrix} = -10$$
$$C = \begin{pmatrix} -2 & -6 & 7 \\ -9 & 3 & 9 \\ 5 & 0 & -10 \end{pmatrix}$$
The transpose of the cofactor matrix, known as the adjoint of matrix A, is given by
$$\operatorname{Adj} A = C^T = \begin{pmatrix} -2 & -9 & 5 \\ -6 & 3 & 0 \\ 7 & 9 & -10 \end{pmatrix}$$
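The chain entry → cofactor → cofactor matrix → transpose can be traced in code. An illustrative Python sketch (the function names are ours), re-deriving the adjoint of the example matrix:

```python
def det(A):
    """Determinant by Laplace expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def minor(A, i, j):
    """Submatrix of A with row i and column j deleted."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def cofactor_matrix(A):
    """Replace every entry a_ij by its cofactor (-1)^(i+j) |M_ij|."""
    n = len(A)
    return [[(-1) ** (i + j) * det(minor(A, i, j)) for j in range(n)] for i in range(n)]

def adjoint(A):
    """Adjoint (adjugate): the transpose of the cofactor matrix."""
    return [list(col) for col in zip(*cofactor_matrix(A))]

A = [[2, 3, 1], [4, 1, 2], [5, 3, 4]]
print(adjoint(A))  # [[-2, -9, 5], [-6, 3, 0], [7, 9, -10]], as in the example
```

Multiplying A by its adjoint yields |A| times the identity, which is the basis of the inversion formula in the next section.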
Exercise 1.8 Find the adjoint of the following matrices

1. $A = \begin{pmatrix} 3 & 3 & -2 \\ -5 & 0 & -3 \\ -6 & -7 & 0 \end{pmatrix}$

2. $B = \begin{pmatrix} 2 & 2 & 4 \\ 3 & -7 & 9 \\ 9 & 4 & 0 \end{pmatrix}$

3. $C = \begin{pmatrix} 3 & -2 & 2 \\ -8 & -9 & -3 \\ 0 & -5 & 6 \end{pmatrix}$

4. $D = \begin{pmatrix} -3 & 2 & -5 \\ 6 & 2 & 7 \\ 8 & -5 & 7 \end{pmatrix}$

1.4 Matrix Inversion
Given a square matrix A = (aij)n×n with determinant |A| ≠ 0, A has a unique inverse A−1 satisfying AA−1 = A−1A = I, which is given by
$$A^{-1} = \frac{1}{|A|}\operatorname{adj}(A)$$
But if |A| = 0, then there is no matrix X such that AX = XA = I. Therefore, a square matrix A is invertible (nonsingular) if there exists a matrix A−1 such that A−1A = I = AA−1.

Example Suppose that A and B are the two matrices
$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \qquad B = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$$
Multiplying these matrices gives
$$AB = \begin{pmatrix} ad-bc & 0 \\ 0 & ad-bc \end{pmatrix} = (ad-bc)I$$
so that
$$A^{-1} = \frac{1}{ad-bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$$
so long as ad − bc ≠ 0, where 1/(ad − bc) is the reciprocal of the determinant of the matrix in question.

PROPERTIES OF MATRIX INVERSION

The following are some of the properties of matrix inversion

1. If A is a square matrix and B is the inverse of A, then A is the inverse of


B, since AB = I = BA. Then we have the identity:

(A−1 )−1 = A

2. AB is invertible, and (AB)−1 = B −1 A−1

3. The transpose AT is invertible, and (AT )−1 = (A−1 )T

4. (λA)−1 = λ−1 A−1, whenever λ is a number ≠ 0

5. If we have another invertible matrix C , (ABC)−1 = ((AB)C)−1 =


C −1 B −1 A−1

6. Notice that B −1 A−1 AB = B −1 IB = I = ABB −1 A−1 . Then:

(AB)−1 = B −1 A−1

Then much like the transpose, taking the inverse of a product reverses
the order of the product.

7. All invertible matrices are square. In other words, if A is an invertible matrix, then A must be square.

8. Finally, recall that (AB)T = B T AT . Since I T = I, then (A−1 A)T =


AT (A−1 )T = I. Similarly, (AA−1 )T = (A−1 )T AT = I. Then (A−1 )T =
(AT )−1

1.4.1 Derivation of a second order matrix inverse

Given a 2 × 2 matrix of the form
$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$
to find A−1 such that AA−1 = A−1A = I, we look for numbers x, y, z, and w such that
$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} x & y \\ z & w \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$
so that, applying matrix multiplication,
$$ax + bz = 1 \qquad ay + bw = 0$$
$$cx + dz = 0 \qquad cy + dw = 1$$
Solving these using Cramer's rule yields
$$x = \frac{d}{ad-bc} \qquad y = \frac{-b}{ad-bc} \qquad z = \frac{-c}{ad-bc} \qquad w = \frac{a}{ad-bc}$$
so that the inverse becomes
$$A^{-1} = \frac{1}{ad-bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$$
Similarly, we can use a second method, the adjoint method, as follows. Replacing the elements aij with their cofactors |Cij|:
$$C_{11} = d \qquad C_{12} = -c \qquad C_{21} = -b \qquad C_{22} = a$$
$$C = \begin{pmatrix} d & -c \\ -b & a \end{pmatrix} \qquad C^T = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$$
Applying the formula
$$A^{-1} = \frac{1}{|A|}\operatorname{adj}(A) = \frac{1}{ad-bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$$
Example
Suppose $A = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}$ and $B = \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}$.
Show that B is the inverse of A.
$$AB = \begin{pmatrix} 2(1)+1(-1) & 2(-1)+1(2) \\ 1(1)+1(-1) & 1(-1)+1(2) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$
$$BA = \begin{pmatrix} 1(2)-1(1) & 1(1)-1(1) \\ -1(2)+2(1) & -1(1)+2(1) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$
So A is invertible. Note that B is also invertible.

1.4.2 Gauss Jordan Elimination Through Pivoting
There are three types of elementary matrices, which correspond to three types of row
operations (respectively, column operations):
1. Row switching
A row within the matrix can be switched with another row.

Ri ↔ Rj

2. Row multiplication
Each element in a row can be multiplied by a non-zero constant.

kRi → Ri , where k 6= 0

3. Row addition
A row can be replaced by the sum of that row and a multiple of another row.

Ri + kRj → Ri , where i 6= j

For each row in a matrix, if the row does not consist of only zeros, then the left-most
non-zero entry is called the leading coefficient (or pivot) of that row. A matrix is
said to be in row echelon form if the lower left part of the matrix contains only zeros,
and all of the zero rows are below the non-zero rows. The word echelon is used here
because one can roughly think of the rows being ranked by their size, with the largest
being at the top and the smallest being at the bottom.

A matrix is said to be in reduced row echelon form if furthermore all of the lead-
ing coefficients are equal to 1 and in every column containing a leading coefficient all
of the other entries in that column are zero.

Example Given the 3 × 3 matrix A below, find its inverse using the above method.
$$A = \begin{pmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{pmatrix}$$
To find the inverse of this matrix, one takes the matrix augmented by the identity and row reduces it as a 3 by 6 matrix:
$$[A|I] = \left(\begin{array}{ccc|ccc} 2 & -1 & 0 & 1 & 0 & 0 \\ -1 & 2 & -1 & 0 & 1 & 0 \\ 0 & -1 & 2 & 0 & 0 & 1 \end{array}\right)$$
By performing row operations, one can check that the reduced row echelon form of this augmented matrix is
$$[I|A^{-1}] = \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & 3/4 & 1/2 & 1/4 \\ 0 & 1 & 0 & 1/2 & 1 & 1/2 \\ 0 & 0 & 1 & 1/4 & 1/2 & 3/4 \end{array}\right)$$
The matrix on the left is an identity matrix, which shows A is invertible. The 3 by 3 matrix on the right, A−1, is the inverse of A. This procedure for finding the inverse works for square matrices of any size¹.
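The pivoting procedure can be carried out mechanically; using exact rational arithmetic (Python's `fractions` module) reproduces the fractions above without rounding. An illustrative sketch (the name `inverse` is ours) that applies the three elementary row operations:

```python
from fractions import Fraction

def inverse(A):
    """Invert a square matrix by Gauss-Jordan pivoting on the augmented
    matrix [A | I], using exact rational arithmetic."""
    n = len(A)
    # augment A with the identity matrix
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # find a row with a non-zero pivot in this column
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]      # row switching
        p = M[col][col]
        M[col] = [x / p for x in M[col]]         # row multiplication: scale pivot to 1
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]                    # row addition: clear the column
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

print(inverse([[2, -1, 0], [-1, 2, -1], [0, -1, 2]]))
# fractions 3/4, 1/2, 1/4 / 1/2, 1, 1/2 / 1/4, 1/2, 3/4, matching the example
```

A singular matrix is detected when no non-zero pivot can be found in some column, which corresponds to the |A| = 0 case above.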

Example Determine whether or not each matrix is invertible and, if so, find its inverse.

1. $A = \begin{pmatrix} 2 & 1 \\ 1 & -1 \end{pmatrix}$

2. $B = \begin{pmatrix} 1 & 0 & 2 \\ -1 & 1 & -2 \\ 2 & 2 & 1 \end{pmatrix}$

3. $C = \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}$

4. Check that B is the inverse of A:
$$A = \begin{pmatrix} -1 & 2 & -3 \\ 2 & 1 & 0 \\ 4 & -2 & 5 \end{pmatrix} \qquad B = \begin{pmatrix} -5 & 4 & -3 \\ 10 & -7 & 6 \\ 8 & -6 & 5 \end{pmatrix}$$

Solution

1. $$\left(\begin{array}{cc|cc} 2 & 1 & 1 & 0 \\ 1 & -1 & 0 & 1 \end{array}\right) \to \left(\begin{array}{cc|cc} 1 & -1 & 0 & 1 \\ 2 & 1 & 1 & 0 \end{array}\right) \to \left(\begin{array}{cc|cc} 1 & -1 & 0 & 1 \\ 0 & 3 & 1 & -2 \end{array}\right) \to \left(\begin{array}{cc|cc} 1 & -1 & 0 & 1 \\ 0 & 1 & 1/3 & -2/3 \end{array}\right) \to \left(\begin{array}{cc|cc} 1 & 0 & 1/3 & 1/3 \\ 0 & 1 & 1/3 & -2/3 \end{array}\right)$$
So we see that the reduced echelon form of A is the identity. Thus A is invertible and
$$A^{-1} = \begin{pmatrix} 1/3 & 1/3 \\ 1/3 & -2/3 \end{pmatrix} = \frac{1}{3}\begin{pmatrix} 1 & 1 \\ 1 & -2 \end{pmatrix}$$
(we can rewrite the inverse a bit more nicely by factoring out the 1/3).

2. $$\left(\begin{array}{ccc|ccc} 1 & 0 & 2 & 1 & 0 & 0 \\ -1 & 1 & -2 & 0 & 1 & 0 \\ 2 & 2 & 1 & 0 & 0 & 1 \end{array}\right) \to \left(\begin{array}{ccc|ccc} 1 & 0 & 2 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 1 & 0 \\ 0 & 2 & -3 & -2 & 0 & 1 \end{array}\right) \to \left(\begin{array}{ccc|ccc} 1 & 0 & 2 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 1 & 0 \\ 0 & 0 & -3 & -4 & -2 & 1 \end{array}\right)$$
$$\to \left(\begin{array}{ccc|ccc} 1 & 0 & 2 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 4/3 & 2/3 & -1/3 \end{array}\right) \to \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & -5/3 & -4/3 & 2/3 \\ 0 & 1 & 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 4/3 & 2/3 & -1/3 \end{array}\right)$$
So we see that B is invertible and
$$B^{-1} = \begin{pmatrix} -5/3 & -4/3 & 2/3 \\ 1 & 1 & 0 \\ 4/3 & 2/3 & -1/3 \end{pmatrix}$$
Notice all of those thirds in the inverse? Factoring out 1/3, we get
$$B^{-1} = \frac{1}{3}\begin{pmatrix} -5 & -4 & 2 \\ 3 & 3 & 0 \\ 4 & 2 & -1 \end{pmatrix}$$

3. $$\left(\begin{array}{cc|cc} 1 & -1 & 1 & 0 \\ -1 & 1 & 0 & 1 \end{array}\right) \to \left(\begin{array}{cc|cc} 1 & -1 & 1 & 0 \\ 0 & 0 & 1 & 1 \end{array}\right)$$
Since the reduced echelon form of C is not I, C is not invertible.

4. AA−1 = I:
$$AB = \begin{pmatrix} -1 & 2 & -3 \\ 2 & 1 & 0 \\ 4 & -2 & 5 \end{pmatrix}\begin{pmatrix} -5 & 4 & -3 \\ 10 & -7 & 6 \\ 8 & -6 & 5 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
The product of the two matrices is indeed an identity matrix!

¹The Cayley–Hamilton method for a 2 × 2 matrix: $A^{-1} = \frac{1}{\det(A)}[(\operatorname{tr}A)I - A]$.

Exercise 1.9 Find the inverse of the following matrices using the row reduction method

1. $A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 3 \\ 7 & 8 & 9 \end{pmatrix}$

2. $B = \begin{pmatrix} 2 & 8 & 3 \\ 1 & 7 & 3 \\ 9 & 8 & 9 \end{pmatrix}$

3. $A = \begin{pmatrix} 0 & 0 & 3 \\ 1 & 0 & 1 \\ 0 & 8 & 9 \end{pmatrix}$

4. $A = \begin{pmatrix} 1 & 0 & 4 \\ 1 & 0 & 1 \\ 1 & 5 & 10 \end{pmatrix}$

5. $A = \begin{pmatrix} 2 & 2 & 0 \\ 1.5 & 1 & 1 \\ 4 & 0 & 0 \end{pmatrix}$

6. $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$

7. $A = \begin{pmatrix} 1/4 & 2/9 \\ 1/5 & 4/3 \end{pmatrix}$

1.5 Partitioned Matrices


In mathematics, a block matrix or a partitioned matrix is a matrix which is interpreted as having been broken into sections called blocks or submatrices. In other words, a matrix interpreted as a block matrix can be visualized as the original matrix with a collection of horizontal and vertical lines which break it up, or partition it, into a collection of smaller matrices. Any matrix may be interpreted as a block matrix in one or more ways, with each interpretation defined by how its rows and columns are partitioned.

Example

The matrix
    1 1 2 2
P = 1 1 2 2
    3 3 4 4
    3 3 4 4
can be partitioned into four 2 × 2 blocks
P11 = [1 1; 1 1]   P12 = [2 2; 2 2]   P21 = [3 3; 3 3]   P22 = [4 4; 4 4]
The partitioned matrix can then be written as
 
P11 P12
P=
P21 P22
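In NumPy (an assumption; any matrix library with block support works similarly), `np.block` reassembles the four blocks into P:

```python
import numpy as np

P11 = np.full((2, 2), 1)   # 2x2 block of ones
P12 = np.full((2, 2), 2)
P21 = np.full((2, 2), 3)
P22 = np.full((2, 2), 4)

# Stitch the four 2 x 2 blocks back into the 4 x 4 matrix P.
P = np.block([[P11, P12],
              [P21, P22]])
print(P)
```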

1.6 Rank of a Matrix and Linear Independence


1.6.1 Rank of a Matrix
Suppose A is an m × n matrix with n column vectors, each having m components. The
largest number of column vectors in A that form a linearly independent set is called
the rank of A, usually written r(A). In other words, the rank of a matrix is the
maximum number of linearly independent column vectors in A; equivalently, it equals
the order of the largest minor of A that is different from 0. 2

Example Find the rank of matrix B


 
1 4 7
B = 2 5 8
3 6 9

R2 − 2R1 → R2
R3 − 3R1 → R3
 
1 4 7
0 −3 −6 
0 −6 −12
Divide the 2nd row by -3
 
1 4 7
0 1 2 
0 −6 −12

R1 − 4R2 → R1
R3 + 6R2 → R3
 
1 0 −1
0 1 2 
0 0 0
Since there are 2 non-zero rows, Rank(B) = 2
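The count of non-zero rows after reduction agrees with a direct numerical rank computation (sketch assumes NumPy):

```python
import numpy as np

B = np.array([[1, 4, 7],
              [2, 5, 8],
              [3, 6, 9]])
# The third column is 2*(second column) - (first column),
# so only two columns are linearly independent.
print(np.linalg.matrix_rank(B))   # 2
```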

Exercise 1.10

Find the rank of the following matrices


 
1 2 3
1. A = 4 5 6
7 8 9
 
1 4 7
2. B = 2
 5 8
3 6 9
2
Note that the rank of a matrix is equal to the rank of its transpose: r(A) = r(AT )

 
2 8 4
3. C = 4 1 1
6 0 8
 
9 7/9 4/3
4. Given D = 0.5 1.2 1.6  find the rank of DT
0.6 1.5 0

1.7 Vectors and Vector Spaces


A quantity that can be completely described by its magnitude in some particular unit
is called a scalar quantity. Similarly, if a quantity can be described by its magnitude,
expressed in some particular unit, together with its direction, then we call it a vector
quantity. Suppose v is a vector in a plane whose initial point is the origin and whose
terminal point is (x, y); then the coordinate form of v is given by v = (x, y), or as
the column vector [x; y]. The numbers x and y are called the components or coordinates
of v. If the initial and terminal points both lie at the origin, then v is the zero
vector, given by v = (0, 0) or [0; 0]. Two vectors are equal if and only if their
corresponding components are equal.

Definition: The sum of vector u and vector v is the vector of the sums of each
corrsponding entry of the two vectors.
     
u1 v1 u1 + v1
u + v =  ...  +  ...  =  ... 
     
un vn un + vn
Note that for the addition to be defined the vectors must have the same number of
entries. This entry-by-entry addition works for any pair of matrices, not just vectors,
provided that they have the same number of rows and columns.

The scalar multiplication of the real number λ and the vector v is given by

   
v1 λv1
λv = λ  ...  =  ... 
   
vn λvn
If P = (x1 , y1 ) and Q = (x2 , y2 ) are two points on the plane then the coordinate form
of the vector v represented by P Q is given by v = (x2 − x1 , y2 − y1 ) and the length
of v denoted by |v| is given by
p
|v| = (x2 − x1 )2 + (y2 − y1 )2

Example 1.19 Find the coordinate form and the length of the vector v that has
initial points (3, −7) and terminal point (−2, 5).

Letting P = (3, −7) and Q = (−2, 5).

v = (−2 − 3, 5 − (−7)) = (−5, 12)


The length of v is p √
|v| = (−5)2 + (12)2 = 169 = 13
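The same computation in NumPy (assumed available), where `np.linalg.norm` gives the Euclidean length:

```python
import numpy as np

P = np.array([3, -7])    # initial point
Q = np.array([-2, 5])    # terminal point

v = Q - P                # coordinate form of the vector PQ
print(v)                 # [-5 12]
print(np.linalg.norm(v)) # 13.0
```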

1.7.1 Vector space


A vector space is a set of vectors, which we denote by R. The totality of vectors
generated by the various linear combinations of two independent vectors u and v
constitutes a two-dimensional space.

1.7.2 Length of a column vector


For an n × 1 column vector V , its length, represented by ||V ||, is given by
q
||V || = V12 + V22 + · · · + Vn2
 
1
 2 
Example Find the length of the vector V = 
 3 

4
q
||V || = V12 + V22 + · · · + Vn2
√ √
||V || = 12 + 22 + 32 + 42 = 30

1.7.3 Linear dependence


A set of vectors v1 , . . . , vn is said to be linearly dependent if and only if one
of them can be expressed as a linear combination of the remaining vectors. Otherwise,
they are linearly independent.

Example Check whether the following vectors are linearly independent or dependent

V1 = [2; 7]   V2 = [1; 8]   V3 = [4; 5]

Solution

Since V3 is a linear combination of V1 and V2 (3V1 − 2V2 = V3 ), the vectors are linearly dependent:

     
6 2 4
− =
21 16 5
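A quick numerical confirmation (assuming NumPy): the combination 3V1 − 2V2 reproduces V3, and the matrix whose columns are the three vectors has rank 2, fewer than the number of vectors:

```python
import numpy as np

V1 = np.array([2, 7])
V2 = np.array([1, 8])
V3 = np.array([4, 5])

print(3 * V1 - 2 * V2)            # [4 5] -- equals V3, so the set is dependent
M = np.column_stack([V1, V2, V3])
print(np.linalg.matrix_rank(M))   # 2
```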
Exercise 1.11 Check whether the vectors V1 = [1 0] and V2 = [0 1] are linearly
dependent or independent.

1.8 Powers and Trace of a Square Matrix


Square matrices can be multiplied by themselves repeatedly in the same way as ordi-
nary numbers, because they always have the same number of rows and columns. This
repeated multiplication can be described as a power of the matrix. Rectangular
matrices do not have the same number of rows and columns, so they can never be
raised to a power. The power Ak of an n × n matrix A, where k is a nonnegative
integer, is defined as the matrix product of k copies of A:

Ak = A · A · · · A   (k times)

Note the following:


• Zero power: A0 = I where I is the identity matrix. This is parallel to the zeroth
power of any number which equals unity.

• Scalar multiplication:
(λA)k = λk Ak

• Determinant
det(Ak ) = det(A)k
A special case is the power of a diagonal matrix. Since the product of diagonal
matrices amounts to simply multiplying corresponding diagonal elements together,
the kth power of a diagonal matrix A will have its diagonal entries raised to the power k.
 k  
A11 0 · · · 0 Ak11 0 · · · 0
 0 A22 · · · 0   0 Ak · · · 0 
22
Ak =  .. =
   
.. .. ..   .. .. .. .. 
 . . . .   . . . . 
0 0 · · · Ann 0 0 · · · Aknn

This implies that it is easy to raise a diagonal matrix to a power. When raising an
arbitrary matrix (not necessarily a diagonal matrix) to a power, it is often helpful to

exploit this property by diagonalizing the matrix first.

Example Compute the square A · A of the matrix A defined as:


 
−7 −7
−8 2

How to solve this problem?

The matrix C obtained by multiplying the square matrix A by itself is always defined
because the number of columns and rows of A are equal. The matrix C will have the
same size as A. To find the entry associated to row i and column j: Cij , multiply the
entries of the i-th row by the corresponding entries in the j-th column of A and then
add up the resulting products.

Step 1: Multiply each row by each column of the matrix A. The first index in C
indicates the row index and the second one indicates the column index in A.

C11 = (−7)(−7) + (−7)(−8) = 105


C12 = (−7)(−7) + (−7)(2) = 35
C21 = (−8)(−7) + (2)(−8) = 40
C22 = (−8)(−7) + (2)(2) = 60
Step 2: Final solution.

 
105 35
A.A =
40 60
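NumPy (assumed available) provides `np.linalg.matrix_power` for repeated multiplication; it reproduces the result above, and also illustrates the diagonal-matrix shortcut:

```python
import numpy as np

A = np.array([[-7, -7],
              [-8,  2]])
print(np.linalg.matrix_power(A, 2))   # [[105, 35], [40, 60]]

# For a diagonal matrix, the k-th power just raises each
# diagonal entry to the k-th power.
D = np.diag([2, 3, 4])
print(np.linalg.matrix_power(D, 3))   # diag(8, 27, 64)
```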
Exercise 1.11 Find the square A · A of the following matrices
 
8 −2
1.
−7 −5
 
−3 7 7
2. A= 5 −4 8 
−5 −1 −5

1.8.1 Trace of a Square Matrix


The trace of a square matrix is the sum of its diagonal elements; i.e.
trace(A) = a11 + a22 + · · · + ann

Example Find the trace of
 
1 −7 8
A = −4 −5 2
−4 4 5
trace(A) = a11 + a22 + a33 = 1 − 5 + 5 = 1

PROPERTIES OF TRACE OF A MATRIX

The following are some of the properties of the trace of a matrix

1. trace(A)=trace(AT )

2. trace (cA + dB)= c trace(A) + d trace (B), where c and d are scalars.

3. trace (AB)=trace (BA) ,provided that both AB and BA are defined. The
trace of a product AB is independent of the order of A and B
4. trace(A ⊗ B) = trace(A) · trace(B), where ⊗ denotes the Kronecker product
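All four properties can be spot-checked on integer matrices, where the arithmetic is exact (sketch assumes NumPy; `np.kron` is the Kronecker product):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 6, (3, 3))
B = rng.integers(-5, 6, (3, 3))

assert np.trace(A) == np.trace(A.T)                                   # property 1
assert np.trace(2 * A + 3 * B) == 2 * np.trace(A) + 3 * np.trace(B)   # property 2
assert np.trace(A @ B) == np.trace(B @ A)                             # property 3
assert np.trace(np.kron(A, B)) == np.trace(A) * np.trace(B)           # property 4
print("all four properties hold")
```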

Exercise 1.12 Find the trace of the following matrices


 
9 −8
1. A =
−4 −5
 
−7 −1 1
2. B= 0 −4 −2
6 −1 3
 
−3 −2 0
3. C=−6 2 2
2 6 0
 
0 9 −3 3
−7 −5 −2 9
4. D=−9 0

2 0
0 −8 −5 7

1.9 Summary
An m × n matrix is a rectangular array of numbers with m rows and n columns. Each
number in the matrix is an entry. Matrix addition is also defined for two matrices of

the same size. Given two m × n matrices A and B, their sum, C = A + B, is the m × n
matrix with (i, j)th element cij = aij + bij . Note that matrix addition, if defined,
is commutative: A + B = B + A, and associative: A + (B + C) = (A + B) + C. Moreover,
A + 0 = A.
If all entries of A below the main diagonal are zero, A is called an upper triangular
matrix. Similarly if all entries of A above the main diagonal are zero, A is called a
lower triangular matrix. If all entries outside the main diagonal are zero, A is called
a diagonal matrix.

A vector (or column vector ) is a matrix with a single column. A matrix with a
single row is a row vector . The entries of a vector are its components. A column or
row vector whose components are all zeros is a zero vector.

The identity matrix In of size n is the n-by-n matrix in which all the elements on the
main diagonal are equal to 1 and all other elements are equal to 0.

A matrix in which the number of rows are identical with the number of columns
is called a square matrix. A square matrix A that is equal to its transpose( i.e.,
A = AT ), is called a symmetric matrix. If instead, A was equal to the negative of its
transpose (i.e., A = −AT ), then A is called skew-symmetric matrix.

The scalar multiplication cA of a matrix A and a number c (also called a scalar


in the parlance of abstract algebra) is given by multiplying every entry of A by
c:(cA)i,j = cAi,j .

Multiplication of two matrices is defined if and only if the number of columns of


the left matrix is the same as the number of rows of the right matrix. If A is an m-
by-n matrix and B is an n-by-p matrix, then their matrix product AB is the m-by-p
matrix whose entries are given by dot product of the corresponding row of A and the
corresponding column of B:

Matrix multiplication satisfies the rules (AB)C = A(BC) (associativity), and (A +


B)C = AC + BC as well as C(A + B) = CA + CB (left and right distributivity),
whenever the size of the matrices is such that the various products are defined.The
product AB may be defined without BA being defined. Even if both products are
defined, they need not be equal: generally AB ≠ BA, i.e., matrix multiplication is
not commutative.

The determinant of the submatrix remaining after the deletion of the ith row and the
jth column is called a minor. A minor with an associated sign is called a cofactor.
The rule for the cofactor is given by |Cij | = (−1)i+j |Mij |. This implies that if
the sum of the subscripts is an even number, |Cij | = |Mij |, since −1 raised to an
even power is positive. But if the sum of the subscripts is an odd number,
|Cij | = −|Mij |. A square matrix A is invertible (nonsingular) if there exists a
matrix A−1 such that A−1 A = I = AA−1 .
A set of vectors v1 , . . . , vn is said to be linearly dependent if and only if one
of them can be expressed as a linear combination of the remaining vectors. Otherwise,
they are linearly
independent.

NO. Objectives Yes (X) No (×)


1 Can you define the term matrix
2 Can you add, subtract and multiply matrices
3 Can you calculate the transpose of a matrix
4 Can you calculate the inverse of a matrix
5 Can you calculate the trace of a matrix
6 Can you calculate the power of a matrix
Reading Materials

1. Alpha C. Chiang, (1984). Fundamental Methods of mathematical economics.


3rd edition, Singapore.

2. Alpha C. Chiang, & Wainwright K., (2005). Fundamental Methods of mathe-


matical economics. 4th edition, Singapore.

3. Sydsaeter K., (2011). Mathematics essentials for Economic Analysis. Mekelle


University, Mekelle-Ethiopia.

4. Sydasaeter K. & Hammond Peter J., (2010). Mathematics for Economic Anal-
ysis. 5th edition, New Delhi.

1.10 SOLUTION FOR EXERCISES


1.10.1 Exercise 1.1
1. A = 1 × 3

2. B = 3 × 1

3. C = 2 × 2

4. D = 2 × 3

5. E = 3 × 2

6. F = 3 × 3

1.10.2 Exercise 1.2
1. Since A and B are both 2 × 2 matrices, we can add them as follow
       
2 3 −1 2 2 + (−1) 3+2 1 5
A+B = + = = .
−1 2 6 −2 −1 + 6 2 + (−2) 5 0
2. Since A and B are both 2 × 2 matrices, we can subtract them
       
−1 2 2 3 −1 − 2 2−3 −3 −1
B−A= − = = .
6 −2 −1 2 6 − (−1) −2 − 2 7 −4
3. Impossible!! B and C have different sizes: B is 2 × 2 and C is 2 × 3.

4. Just multiply each entry of C by 4


   
4·1 4·2 4·3 4 8 24
4C = = .
4 · (−1) 4 · (−2) 4 · (−3) −4 −8 −24
5. These matrices have the same size, so we’ll do the scalar multiplication first and
then the subtraction
     
4 6 −3 6 7 0
2A − 3B = − =
−2 4 18 −6 −20 10

1.10.3 Exercise 1.3


 
4 32 35
1. Answer 2A+3B= 12
 25 17
11 29 0
 
6 12 0
2. Answer 3A-3B= 18 0 3
9 6 0

1.10.4 Exercise 1.4


1. Since both matrices are 2 × 2, we can find AB and BA. Note that the size of
both products is 2 × 2
  
1 2 4 −3
AB =
 3 4 −2 1   
1(4) + 2(−2) 1(−3) + 2(1) 0 −1
= and
3(4) + 4(−2) 3(−3) + 4(1) 4 −5
  
4 −3 1 2
BA =
−2 1 3 4

   
4(1) + (−3)(3) 4(2) + (−3)(4) −5 −4
= .
−2(1) + 1(3) −2(2) + 1(4) 1 0
Did you notice what just happened? We have that AB ≠ BA! Yes, it’s true:
Matrix multiplication is not commutative.

2. No. The first matrix is 2 × 3 and then second is also 2 × 3. The number of
columns of the first is not the same as the number of rows of the second.

 is 2 ×
3. Yes, the first  3 and the second is 3 × 2, so their product is 2 × 2.
  1 2
2 2 9 
5 2
−1 0 8
 −1 3   
2(1) + 2(5) + 9(−1) 2(2) + 2(2) + 9(3) 3 35
=
−1(1) + 0(5) + 8(−1) −1(2) + 0(2) + 8(3) −9 22
    
a b c x ax + by + cz
4. AB = d e f  y  =dx + ey + f z 
g h i z gx + hy + iz
BA is not defined
    
a b c α β γ aα + bλ + cρ aβ + bµ + cσ aγ + bν + cτ
5. AB = d e f  λ µ ν  =dα + eλ + f ρ dβ + eµ + f σ dγ + eν + f τ 
g h i ρ σ τ gα + hλ + iρ gβ + hµ + iσ gγ + hν + iτ

1.10.5 Exercise 1.5


 
7 −7 5 4 −8
1. AT = −4 −6 8 2 0 
−2 −3 −4 3 8

2. Step 1: Multiply each row of the matrix A by each column of the matrix B (To
multiply a row by a column just multiply the corresponding entries and then
add up the resulting products). The first index in C indicates the row index in
A and the second one indicates the column index in B.

C11 = (−1)(6) + (−5)(7) + (9)(3) = −14
C12 = (−1)(−6) + (−5)(4) + (9)(−9) = −95
C13 = (−1)(5) + (−5)(4) + (9)(−9) = −106

C21 = (−4)(6) + (3)(7) + (4)(3) = 9
C22 = (−4)(−6) + (3)(4) + (4)(−9) = 0
C23 = (−4)(5) + (3)(4) + (4)(−9) = −44

C31 = (5)(6) + (8)(7) + (7)(3) = 107
C32 = (5)(−6) + (8)(4) + (7)(−9) = −61
C33 = (5)(5) + (8)(4) + (7)(−9) = −6

C41 = (−5)(6) + (−9)(7) + (3)(3) = −84
C42 = (−5)(−6) + (−9)(4) + (3)(−9) = −33
C43 = (−5)(5) + (−9)(4) + (3)(−9) = −88

C51 = (−1)(6) + (1)(7) + (−5)(3) = −14
C52 = (−1)(−6) + (1)(4) + (−5)(−9) = 55
C53 = (−1)(5) + (1)(4) + (−5)(−9) = 44
Step 2: Final solution.
 
−14 −95 −106
 9
 0 −44 

AB =  107 −61 −6 

−84 −33 −88 
−14 55 44
The transpose of a matrix A is obtained from interchanging the rows and
columns of A.
Step 1: Interchange the rows and
 columns of matrix A. The resulting matrix is
−14 9 107 −84 −14
T
the transpose of A. (AB) = −95
 0 61 −33 55 
−106 −44 −6 −88 44

3. Step 1: Multiply each row of the matrix A by each column of the matrix B (To
multiply a row by a column just multiply the corresponding entries and then
add up the resulting products). The first index in C indicates the row index in
A and the second one indicates the column index in B.

C11 = (-5)0 + 6(-6) + 9(-5) = -81


C12 = (-5)(-6) + 68 + 9(-5) = 33
C13 = (-5)(-4) + 61 + 9(-3) = -1
C21 = (-7)0 + 0(-6) + (-4)(-5) = 20
C22 = (-7)(-6) + 08 + (-4)(-5) = 62
C23 = (-7)(-4) + 01 + (-4)(-3) = 40
C31 = 40 + 5(-6) + 6(-5) = -60
C32 = 4(-6) + 58 + 6(-5) = -14
C33 = 4(-4) + 51 + 6(-3) = -29
Step 2: Final solution.

 
−81 33 −1
AB = 20 62 40 
−60 −14 −29
 
−81 20 −60
(AB)T =  33 62 −14 = B T AT
−1 40 −29

1.10.6 Exercise 1.6


0 6 8 6 8 0
1. |A| = 5 -2 +8 = 390
9 0 7 0 7 9
0 6 0 6 0 0
2. |A| = 0 −2 +8 =0
9 0 0 0 0 9
4 6 2 6 2 4
3. |A| = 4 −8 +8 =8
1 0 1 0 1 1
−1 −5 −8 5
0 0 −5 −1
4.
6 −4 −5 −2
−4 9 −8 −7
Expand the cofactors along the row or column with the greatest number of zeros,
because this reduces the amount of calculation: the cofactors associated with
the zero entries need not be calculated. Determinants of order
3 are calculated using the formula: |A| = a11 a22 a33 + a12 a23 a31 + a13 a21 a32 −
a13 a22 a31 − a12 a21 a33 − a11 a23 a32

−1 −5 5 −1 −5 −8
- 5(-1) 6 −4 −2 - 1 6 −4 −5
−4 9 −7 −4 9 −8

−1 −5 5
6 −4 −2 = −106
−4 9 −7
−1 −5 −8
6 −4 −5 = −721
−4 9 −8
|A| = −5(106) − 1(−721) = 191

1.10.7 Exercise 1.7


1. Step 1: Compute
 the cofactor for each entry of the matrix A.
C11 = +|0 5; 8 −3| = (0)(−3) − (5)(8) = −40
C12 = −|−9 5; 5 −3| = −((−9)(−3) − (5)(5)) = −2
C13 = +|−9 0; 5 8| = (−9)(8) − (0)(5) = −72
C21 = −|0 −5; 8 −3| = −((0)(−3) − (−5)(8)) = −40
C22 = +|−7 −5; 5 −3| = (−7)(−3) − (−5)(5) = 46
C23 = −|−7 0; 5 8| = −((−7)(8) − (0)(5)) = 56
C31 = +|0 −5; 0 5| = (0)(5) − (−5)(0) = 0
C32 = −|−7 −5; −9 5| = −((−7)(5) − (−5)(−9)) = 80
C33 = +|−7 0; −9 0| = (−7)(0) − (0)(−9) = 0
Step 2: Compose the matrix using the cofactors previously computed.
 
−40 −2 −72
Cof(A)=−40 46 56 
0 80 0
 
−117 −82 −56
2. Cof(A)= 75 15 96 
−18 55 59
 
83 66 −11 147
 228 186 −36 402 
3. Cof(A)=−75 −60 15 −135

426 342 −72 744

1.10.8 Exercise 1.8


1. The adjugate matrix is the transpose of the matrix of cofactors, therefore, firstly
compute the matrix of cofactors and then transpose it. Step 1: Compute the
cofactor for each entry of the matrix A.
 
0 −3
C11 = = 00 − (−3)(−7) = −21
−7 0
 
C12 = −|−5 −3; −6 0| = (−1)((−5)(0) − (−3)(−6)) = 18
 
−5 0
C13 = = (−5)(−7) − 0(−6) = 35
−6 −7

 
3 −2
C21 = = (−1)(30 − (−2)(−7)) = 14
−7 0
 
3 −2
C22 = = 30 − (−2)(−6) = −12
−6 0
 
3 3
C23 = = (−1)(3(−7) − 3(−6)) = 3
−6 −7
 
3 −2
C31 = = 3(−3) − (−2)0 = −9
0 −3
 
3 −2
C32 = = (−1)(3(−3) − (−2)(−5)) = 19
−5 −3
 
3 3
C33 = = 30 − 3(−5) = 15
−5 0
Step 2: Compose the matrix using the cofactors previously computed.
 
−21 18 35
Cof(A) = 14 −12 3 
−9 19 15
Step 3: Transpose the matrix of cofactors.
 
−21 14 −9
Adj(A) = Cof (A)T =  18 −12 19 
35 3 15
 
−36 16 46
2. Adj(B) = Cof (B)T =  81 −36 −6 
75 10 −20
 
−69 2 24
3. Adj(C) = Cof (C)T =  48 18 −7 
40 15 −43
 
49 11 24
4. Adj(D) = Cof (D)T =  14 19 −9 
−46 1 −18
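The cofactor-and-transpose procedure used in these solutions can be written as a short routine. This is a sketch assuming NumPy is available; the matrix A below is the one from part 1 of Exercise 1.8, reconstructed from its cofactors:

```python
import numpy as np

def adjugate(a):
    """Adjugate = transpose of the cofactor matrix,
    with cofactor C_ij = (-1)^(i+j) * |M_ij|."""
    n = a.shape[0]
    cof = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # Minor M_ij: delete row i and column j, then take the determinant.
            minor = np.delete(np.delete(a, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return cof.T

A = np.array([[ 3,  3, -2],
              [-5,  0, -3],
              [-6, -7,  0]])
print(np.round(adjugate(A)))   # matches Adj(A) computed above
```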

1.10.9 Exercise 1.9


1. To find the inverse, write the matrix and append the identity matrix to its right:
         1 2 3 1 0 0
[A|I] =  4 5 3 0 1 0
         7 8 9 0 0 1

R2 − 4R1 → R2

R3 − 7R1 → R3
 
1 2 3 1 0 0
[A|I] =  0 −3 −9 −4 1 0 
0 −6 −12 −7 0 1
Divide the 2nd row by -3
 
1 2 3 1 0 0
[A|I] =  0 1 3 4/3 −1/3 0 
0 −6 −12 −7 0 1

R1 − 2R2 → R1

R3 + 6R2 → R3
 
1 0 −3 −5/3 2/3 0
[A|I] =  0 1 3 4/3 −1/3 0 
0 0 6 1 −2 1
Divide the 3rd row by 6
 
1 0 −3 −5/3 2/3 0
[A|I] =  0 1 3 4/3 −1/3 0 
0 0 1 1/6 −1/3 1/6

R1 + 2R3 → R1

R2 − 3R3 → R2
 
1 0 0 −7/6 −1/3 0.5
[A|I] =  0 1 0 5/6 2/3 −0.5 
0 0 1 1/6 −1/3 1/6

 
−7/6 −1/3 0.5
A−1 =  5/6 2/3 −0.5
1/6 −1/3 1/6

2. Augmented matrix:
         2 8 3 1 0 0
[A|I] =  1 7 3 0 1 0
         9 8 9 0 0 1

Divide the 1st row by 2

 
1 4 1.5 0.5 0 0
[A|I] =  1 7 3 0 1 0 
9 8 9 0 0 1
R2 − 1R1 → R2

R3 − 9R1 → R3
 
1 4 1.5 0.5 0 0
[A|I] =  0 3 1.5 −0.5 1 0 
0 −28 −4.5 −4.5 0 1
Divide the 2nd row by 3
 
1 4 1.5 0.5 0 0
[A|I] =  0 1 0.5 −1/6 1/3 0 
0 −28 −4.5 −4.5 0 1
R1 − 4R2 → R1

R3 + 28R2 → R3
 
1 0 −0.5 7/6 −4/3 0
[A|I] =  0 1 0.5 −1/6 1/3 0 
0 0 9.5 −55/6 28/3 1
Divide the 3rd row by 9.5
 
1 0 −0.5 7/6 −4/3 0
[A|I] =  0 1 0.5 −1/6 1/3 0 
0 0 1 −55/57 56/57 2/19
R1 + 0.5R3 → R1

R2 − 0.5R3 → R2
 
1 0 0 13/19 −16/19 1/19
[A|I] =  0 1 0 6/19 −3/19 −1/19 
0 0 1 −55/57 56/57 2/19

 
13/19 −16/19 1/19
A−1 =  6/19 −3/19 −1/19
−55/57 56/57 2/19
 
0 0 3 1 0 0
3. [A|I] =  1 0 1 0 1 0 
0 8 9 0 0 1

Change places the 1st and the 2nd rows
 
1 0 1 0 1 0
[A|I] = 0 0 3
 1 0 0 
0 8 9 0 0 1
Change places the 2nd and the 3rd rows
 
1 0 1 0 1 0
[A|I] = 0 8 9
 0 0 1 
0 0 3 1 0 0
Divide the 2nd row by 8
 
1 0 1 0 1 0
[A|I] =  0 1 1.125 0 0 0.125 
0 0 3 1 0 0
Divide the 3rd row by 3
 
1 0 1 0 1 0
[A|I] =  0 1 1.125 0 0 0.125 
0 0 1 1/3 0 0

R1 − R3 → R1

R2 − 1.125R3 → R2
 
1 0 0 −1/3 1 0
[A|I] =  0 1 0 −0.375 0 0.125 
0 0 1 1/3 0 0

 
−1/3 1 0
A−1 = −0.375 0 0.125
1/3 0 0
 
1 0 4 1 0 0
4. [A|I] =  1 0 1 0 1 0 
1 5 10 0 0 1

R2 − R1 → R2

R3 − R1 → R3
 
1 0 4 1 0 0
[A|I] = 0 0 −3 −1 1 0 

0 5 6 −1 0 1

Change places the 2nd and the 3rd rows
 
1 0 4 1 0 0
[A|I] =  0 5 6 −1 0 1 
0 0 −3 −1 1 0
Divide the 2nd row by 5
 
1 0 4 1 0 0
[A|I] =  0 1 1.2 −0.2 0 0.2 
0 0 −3 −1 1 0
Divide the 3rd row by -3
 
1 0 4 1 0 0
[A|I] =  0 1 1.2 −0.2 0 0.2 
0 0 1 1/3 −1/3 0

R1 − 4R3 → R1

R2 − 1.2R3 → R2
 
1 0 0 −1/3 4/3 0
[A|I] =  0 1 0 −0.6 0.4 0.2 
0 0 1 1/3 −1/3 0

 
−1/3 4/3 0
A−1 =  −0.6 0.4 0.2
1/3 −1/3 0
 
2 2 0 1 0 0
5. [A|I] =  1.5 1 1 0 1 0 
4 0 0 0 0 1
Divide the 1st row by 2
 
1 1 0 0.5 0 0
[A|I] = 1.5
 1 1 0 1 0 
4 0 0 0 0 1

R2 − 1.5R1 → R2

R3 − 4R1 → R3
 
1 1 0 0.5 0 0
[A|I] =  0 −0.5 1 −0.75 1 0 
0 −4 0 −2 0 1

Divide the 2nd row by -0.5
 
1 1 0 0.5 0 0
[A|I] =  0 1 −2 1.5 −2 0 
0 −4 0 −2 0 1

R1 − R2 → R1

R3 + 4R2 → R3
 
1 0 2 −1 2 0
[A|I] =  0 1 −2 1.5 −2 0 
0 0 −8 4 −8 1
Divide the 3rd row by -8
 
1 0 2 −1 2 0
[A|I] =  0 1 −2 1.5 −2 0 
0 0 1 −0.5 1 −0.125

R1 − 2R3 → R1

R2 + 2R3 → R2
 
1 0 0 0 0 0.25
[A|I] =  0 1 0 0.5 0 −0.25 
0 0 1 −0.5 1 −0.125

 
0 0 0.25
A−1 =  0.5 0 −0.25 
−0.5 1 −0.125
 
1 2 1 0
6. [A|I] =
3 4 0 1
R2 − 3R1 → R2
 
1 2 1 0
[A|I] =
0 −2 −3 1
Divide the 2nd row by -2
 
1 2 1 0
[A|I] =
0 1 1.5 −0.5

R1 − 2R2 → R1

 
1 0 −2 1
[A|I] =
0 1 1.5 −0.5
 
−1 −2 1
A =
1.5 −0.5
 
1/4 2/9 1 0
7. [A|I] =
1/5 4/3 0 1
Divide the 1st row by 1/4
 
1 8/9 4 0
[A|I] =
1/5 4/3 0 1

R2 − 1/5R1 → R2
 
1 8/9 4 0
[A|I] =
0 52/45 −4/5 1
Divide the 2nd row by 52/45
 
1 8/9 4 0
[A|I] =
0 1 −9/13 45/52

R1 − 8/9R2 → R1
 
1 0 60/13 −10/13
[A|I] =
0 1 −9/13 45/52
 
−1 60/13 −10/13
A =
−9/13 45/52

 
1 2 3
8. A = 4 5 6
7 8 9
R2 − 4R1 → R2
R3 − 7R1 → R3
 
1 2 3
0 −3 −6 
0 −6 −12
Divide the 2nd row by -3

 
1 2 3
0 1 2 
0 −6 −12
R1 − 2R2 → R1
R3 + 6R2 → R3
 
1 0 −1
0 1 2 
0 0 0
Since there are 2 non-zero rows, Rank(A) = 2.
 
1 4 7
9. B = 2 5 8
3 6 9
R2 − 2R1 → R2
R3 − 3R1 → R3
 
1 4 7
0 −3 −6 
0 −6 −12
Divide the 2nd row by -3
 
1 4 7
0 1 2 
0 −6 −12
R1 − 4R2 → R1
R3 + 6R2 → R3
 
1 0 −1
0 1 2 
0 0 0
Since there are 2 non-zero rows, Rank(B) = 2.
 
2 8 4
10. C = 4 1 1
6 0 8
Divide the 1st row by 2
1 4 2
4 1 1
6 0 8
R2 − 4R1 → R2
R3 − 6R1 → R3

 
1 4 2
0 −15 −7
0 −24 −4
Divide the 2nd row by -15
 
1 4 2
0 1 7/15
0 −24 −4
R1 − 4R2 → R1
R3 + 24R2 → R3
 
1 0 2/15
0 1 7/15
0 0 7.2
Divide the 3rd row by 7.2
 
1 0 2/15
0 1 7/15
0 0 1
R1 − 2/15R3 → R1
R2 − (7/15)R3 → R2
 
1 0 0
0 1 0
0 0 1
Since there are 3 non-zero rows, Rank(C) = 3.
 
9 7/9 4/3
11. D = 0.5 1.2 1.6 
0.6 1.5 0
Since rank(D) = rank(DT ), we can check the answer by reducing D itself.

Divide the 1st row by 9


 
1 7/9 4/27
0.5 1.2 1/6 
0.6 1.5 0
R2 − 0.5R1 → R2
R3 − 0.6R1 → R3
 
1 7/9 4/27
0 73/90 5/54 
0 31/30 −4/45

Divide the 2nd row by 73/90
 
1 7/9 4/27
0 1 25/219
0 31/30 −4/45
R1 − 7/9R2 → R1
R3 − (31/30)R2 → R3
 
1 0 13/219
0 1 25/219 
0 0 −151/730
Divide the 3rd row by -151/730
 
1 0 13/219
0 1 25/219
0 0 1
R1 − 13/219R3 → R1
R2 − (25/219)R3 → R2
 
1 0 0
0 1 0
0 0 1
Since there are 3 non-zero rows, Rank(D) = Rank(DT ) = 3.

1.10.10 Exercise 1.10


λ1 [V1 ] + λ2 [V2 ] = 0
   
λ1 1 0 + λ2 0 1

[λ1 0] + [0 λ2 ] = 0
λ1 = 0 and λ2 = 0

Since the only solution is the trivial one (λ1 = λ2 = 0), the two vectors are linearly independent.

1.10.11 Exercise 1.11


1. Step 1: Multiply each row by each column of the matrix A. The first index in
C indicates the row index and the second one indicates the column index in A.

C11 = (8)(8) + (−2)(−7) = 78
C12 = (8)(−2) + (−2)(−5) = −6
C21 = (−7)(8) + (−5)(−7) = −21
C22 = (−7)(−2) + (−5)(−5) = 39
Step 2: Final solution
 
78 −6
A.A =
−21 39
2. Multiply each row by each column of the matrix A. The first index in C indicates
the row index and the second one indicates the column index in A.

C11 = (−3)(−3) + (7)(5) + 7(−5) = 9


C12 = (−3)(7) + (7)(−4) + 7(−1) = −56
C13 = (−3)(7) + (7)(8) + 7(−5) = 0

C21 = (5)(−3) + (−4)(5) + 8(−5) = −75


C22 = (5)(7) + (−4)(−4) + 8(−1) = 43
C23 = (5)(7) + (−4)(8) + 8(−5) = −37

C31 = (−5)(−3) + (−1)(5) + (−5)(−5) = 35


C32 = (−5)(7) + (−1)(−4) + (−5)(−1) = −26
C33 = (−5)(7) + (−1)(8) + (−5)(−5) = −18
Step 2: Final solution
 
9 −56 0
A.A = −75 43 −37
35 −26 −18

1.10.12 Exercise 1.12


1. trace (A)=4
2. trace (B)=-8
3. trace (C)=-1
4. trace (D)=4

1.11 Summary Questions
For each of the following matrices, find
 
2 3 1
1. A = 2 3 4
1 2 3
 
1 2 3
2. B = 4 5 6
7 8 9
 
2 4 8
3. C = 1 2 4
1 2 3
 
1 3 1
4. D = 4 3 4
5 2 5
 
1 3 1
5. E = 4 3 4
5 2 5
 
1 3 1
6. F = 4 3 4
5 2 5

(a) Adjugate (adjoint) of the given matrix


(b) Determinant of the matrix
(c) The inverse of the matrix
(d) Transpose of the matrix
(e) Adjugate of the transpose of the original matrix
(f) Determinant of the adjugate matrix. What do you observe?
(g) Transpose of the adjugate matrix. What do you observe?

7. Find the product of A and B


 
  6 5 6 7
3 1/3 1 1/2 7/2 1/4 6 6 
A = 6 7/2 9 3  B=
 4

1 7 7/3
3 7 1/3 9
4 4 2 7

   
2 3/2 9 1/3 6 5
8. A = 1
 2 4 B= 2
 6 7
6 3/4 2 1 2 4
   
2 6 1/3 1/5 12 3
9. A = 1 1 5  B= 2 3 4
2 2 1/2 11 7 2
 
2 1 7
10. Find the inverse of the following matrices through cofactors A=5 6 5
7 8 2

Chapter 2
Systems of Linear Equations

Objectives

This chapter will help students to:

• Understand how to represent systems of equations in com-


pact matrix form

• Develop mathematical skills about how to solve systems of


linear equations via matrix algebra

• Know facts about homogeneous systems of linear equations

2.1 Matrix Representation of Linear Equations


A general system of m linear equations with n unknowns can be written as

a11 x1 + a12 x2 + · · · + a1n xn = b1


a21 x1 + a22 x2 + · · · + a2n xn = b2
⋮
am1 x1 + am2 x2 + · · · + amn xn = bm
Here x1 , x2 ,. . .,xn are the unknowns, a11 ,a12 ,. . .,amn are the coefficients of the system,
and b1 ,b2 ,. . .,bm are the constant terms. Often the coefficients and unknowns are real

or complex numbers, but integers and rational numbers are also seen, as are polyno-
mials and elements of an abstract algebraic structure.

In the equation system there are three essential ingredients. These are
1. The set of coefficients

2. The set of variables

3. The set of constant terms


Example 2.1 Represent the following system of equations in matrix form

6x1 + 3x2 + x3 = 22

x1 + 4x2 − 2x3 = 12
4x1 − x2 + 5x3 = 10
6 3 1 x1 22
A= 1 4 −2 X= x2 D= 12
4 −1 5 x3 10

1. A- The set of coefficients

2. X- The set of variables

3. D-The set of constant terms
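With the three ingredients in place, the whole system collapses to the single matrix equation AX = D; in NumPy (assumed available) it can be set up and solved in a few lines:

```python
import numpy as np

A = np.array([[6,  3,  1],    # coefficient matrix
              [1,  4, -2],
              [4, -1,  5]])
D = np.array([22, 12, 10])    # constant terms

X = np.linalg.solve(A, D)     # solves A X = D
print(X)                      # x1 = 2, x2 = 3, x3 = 1
```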

Exercise 2.1 Represent the following systems of linear equations in matrix form
1. x1 + x2 + x3 + x4 = 3
x1 − x2 + x 3 + x4 = 5
x2 − x3 − x4 = −4
x1 + x2 − x3 − x4 = −3

2. −x1 + 3x2 − 2x3 + 4x4 = 2


2x1 − 6x2 + x3 − 2x4 = −1
x1 − 3x2 − 4x3 − 8x4 = −4

2.2 Solving Systems of Linear Equations


As we have seen in chapter one a square matrix is nonsingular if it is the matrix of
coefficients of a homogeneous system with a unique solution. It is singular otherwise,
that is, if it is the matrix of coefficients of a homogeneous system with infinitely many
solutions.

We have made the distinction in the definition because a system with the same num-
ber of equations as variables behaves in one of two ways, depending on whether its
matrix of coefficients is nonsingular or singular. Where the matrix of coefficients
is nonsingular the system has a unique solution for any constant on the right side.
Example
x + 2y = a
3x + 4y = b
has the unique solution x = b − 2a and y = (3a − b)/2. On the other hand, where the

Figure 2.1: Unique solution

matrix of coefficients is singular the system never has a unique solution; it has either
no solutions or else has infinitely many, as with these.

Example

x + 3y = 3
4x + 12y = 10
has no solution.

Figure 2.2: No solution

Example
2x + 3y = 5

Figure 2.3: Infinitely many solutions

4x + 6y = 10
has infinitely many solutions.
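The three behaviours can be predicted from the coefficient matrix alone: a nonzero determinant signals a unique solution, while a zero determinant signals either no solution or infinitely many. A small check on the three examples above (assuming NumPy):

```python
import numpy as np

coefficients = {
    "unique":   np.array([[1, 2], [3,  4]]),   # x + 2y = a,  3x + 4y = b
    "none":     np.array([[1, 3], [4, 12]]),   # x + 3y = 3,  4x + 12y = 10
    "infinite": np.array([[2, 3], [4,  6]]),   # 2x + 3y = 5, 4x + 6y = 10
}
for case, M in coefficients.items():
    det = np.linalg.det(M)
    kind = "nonsingular" if abs(det) > 1e-12 else "singular"
    print(case, "->", kind)
```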

To solve a given system of linear equations we can find the unknowns through one of
the following methods.
• Simultaneous equations
• Gauss-Jordan Row Reduction
• Cramer’s Rule
• Inverse Method

2.2.1 Simultaneous Equation


Given a system of linear equations with two unknown variables as follows, we can
solve it simultaneously by eliminating one variable first and then the other.

Example 2.2 Solve the value of x and y

2x + 3y = 4

x + 2y = 1
Multiply the second equation by -2 and add it to the first equation; then you will get
−y = 2 ⇒ y = −2
If you substitute y = −2 either in the first equation or in the second equation you
will have x = 5.

Example 2.3 Find the value of x, y and z?

x + 3y − 2z = 5

3x + 5y + 6z = 7
2x + 4y + 3z = 8
Solving the first equation for x gives x = 5 + 2z − 3y, and plugging this into the
second and third equation yields

−4y + 12z = −8

−2y + 7z = −2
Solving the first of these equations for y yields y = 2 + 3z, and plugging this into the
second equation yields z = 2. We now have:

x = 5 + 2z − 3y

y = 2 + 3z
z=2
Substituting z = 2 into the second equation gives y = 8, and substituting z = 2 and
y = 8 into the first equation yields x = −15. Therefore, the solution set is the single
point (x, y, z) = (−15, 8, 2).

Exercise 2.2 Solve the following systems using the substitution method

1. 2x1 + 3x2 = 5
x1 + 2x2 = 10

2. 0.6x1 + 0.2x2 = 1
0.5x1 + 0.3x2 = 4

3. 1.5x1 + 2x2 = 1.5


0.5x1 + 3x2 = 4.6

4. 4x1 + 5x2 = 30
2x1 + x2 = 13

5. 11x1 + 25x2 + 15x3 = 9


2x1 + 5x2 + 12x3 = 8
1.6x1 + 3x2 + 7x3 = 17

6. 2x1 + 9x2 + 6x3 = 19


x1 + 4.3x2 + 1.2x3 = 27
4x1 + 2x2 + 4x3 = 60

2.2.2 Gauss-Jordan Method
If a linear system is changed to another by one of these operations
1. an equation is swapped with another
2. an equation has both sides multiplied by a nonzero constant
3. an equation is replaced by the sum of itself and a multiple of another
then the two systems have the same set of solutions.
Each of the three Gauss's Method operations has a restriction. Multiplying a row by
0 is not allowed because obviously that can change the solution set. Similarly, adding
a multiple of a row to itself is not allowed because adding -1 times the row to itself
has the effect of multiplying the row by 0. We disallow swapping a row with itself
because it is pointless.

In each row of a system, the first variable with a nonzero coefficient is the row’s
leading variable. A system is in echelon form if each leading variable is to the right
of the leading variable in the row above it, except for the leading variable in the first
row, and any all-zero rows are at the bottom.

The Gauss-Jordan row reduction method can be facilitated by the following easy
steps:
1. Express the system of equations as an augmented matrix.
 
2 3 4
Reconsider Example 2.2
1 2 1
2. Use elementary row operations to find a row equivalent matrix in reduced row
echelon form. There are three types of elementary row operations:
• Type 1: Swap the positions of two rows.
• Type 2: Multiply a row by a nonzero scalar.
• Type 3: Add to one row a scalar multiple of another.
 
Reconsider Example 2.2:
1 0 | 5
0 1 | −2
3. Solve the variables in the columns with leading entries in terms of free variables.
Example 2.4 Reconsider Example 2.3

In row reduction, the linear system is represented as an augmented matrix:

 
1 3 −2 5
 3 5 6 7 
2 4 3 8
This matrix is then modified using elementary row operations until it reaches reduced
row echelon form.
Because these operations are reversible, the augmented matrix produced always rep-
resents a linear system that is equivalent to the original.

There are several specific algorithms to row-reduce an augmented matrix, the sim-
plest of which are Gaussian elimination and Gauss-Jordan elimination. The following
computation shows Gauss-Jordan elimination applied to the matrix above:
       
1 3 −2 5
3 5 6 7
2 4 3 8
R2 − 3R1 → R2
1 3 −2 5
0 −4 12 −8
2 4 3 8
R3 − 2R1 → R3
1 3 −2 5
0 −4 12 −8
0 −2 7 −2
−(1/4)R2 → R2
1 3 −2 5
0 1 −3 2
0 −2 7 −2
R3 + 2R2 → R3
1 3 −2 5
0 1 −3 2
0 0 1 2
R2 + 3R3 → R2
1 3 −2 5
0 1 0 8
0 0 1 2
R1 + 2R3 → R1
1 3 0 9
0 1 0 8
0 0 1 2
R1 − 3R2 → R1
1 0 0 −15
0 1 0 8
0 0 1 2
The last matrix is in reduced row echelon form, and represents the system x = -15, y
= 8, z = 2. A comparison with the example in the previous section on the algebraic
elimination of variables shows that these two methods are in fact the same; the dif-
ference lies in how the computations are written down.

Example 2.5 Use the Gauss-Jordan elimination method to solve the following
problem
6 6 3/2 9
 1 6 6 3 
6 5 3 2
Divide row1 by 6  
1 1 1/4 3/2
 1 6 6 3 
6 5 3 2
Add (-1 * row1) to row2  
1 1 1/4 3/2
 0 5 23/4 3/2 
6 5 3 2
Add (-6 * row1) to row3  
1 1 1/4 3/2
 0 5 23/4 3/2 
0 −1 3/2 −7

Divide row2 by 5  
1 1 1/4 3/2
 0 1 23/20 3/10 
0 −1 3/2 −7
Add (1 * row2) to row3  
1 1 1/4 3/2
 0 1 23/20 3/10 
0 0 53/20 −67/10
Divide row3 by 53/20  
1 1 1/4 3/2
 0 1 23/20 3/10 
0 0 1 −134/53
Add (-23/20 * row3) to row2
 
1 1 1/4 3/2
 0 1 0 170/53 
0 0 1 −134/53

Add (-1/4 * row3) to row1


 
1 1 0 113/53
 0 1 0 170/53 
0 0 1 −134/53

Add (-1 * row2) to row1  


1 0 0 −57/53
 0 1 0 170/53 
0 0 1 −134/53
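The hand reduction above can be checked mechanically. Below is a small Python sketch (illustrative, not part of the module) that performs Gauss-Jordan reduction using the three elementary row operations, with exact arithmetic from the standard `fractions` module; applied to the augmented matrix of Example 2.5 it reproduces x1 = −57/53, x2 = 170/53, x3 = −134/53.

```python
from fractions import Fraction

def gauss_jordan(aug):
    """Reduce an augmented matrix [A | b] to reduced row echelon form and
    return the last column (assumes the system has a unique solution)."""
    m = [[Fraction(x) for x in row] for row in aug]
    n = len(m)
    for i in range(n):
        # Type 1: swap in a row with a nonzero pivot if necessary
        if m[i][i] == 0:
            for r in range(i + 1, n):
                if m[r][i] != 0:
                    m[i], m[r] = m[r], m[i]
                    break
        # Type 2: scale the pivot row so its leading entry is 1
        pivot = m[i][i]
        m[i] = [x / pivot for x in m[i]]
        # Type 3: subtract a multiple of the pivot row from every other row
        for r in range(n):
            if r != i and m[r][i] != 0:
                factor = m[r][i]
                m[r] = [a - factor * b for a, b in zip(m[r], m[i])]
    return [row[-1] for row in m]

# Example 2.5: 6x1 + 6x2 + (3/2)x3 = 9, x1 + 6x2 + 6x3 = 3, 6x1 + 5x2 + 3x3 = 2
aug = [[6, 6, Fraction(3, 2), 9],
       [1, 6, 6, 3],
       [6, 5, 3, 2]]
print(gauss_jordan(aug))  # [Fraction(-57, 53), Fraction(170, 53), Fraction(-134, 53)]
```

Using `Fraction` avoids the rounding error that floating-point division would introduce in entries such as 23/20 and 53/20.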
Exercise 2.3 Use the Gauss-Jordan elimination method to solve the following problems

1. 2x1 + 3x2 + 4x3 = 10


2x1 + 7x2 + x3 = 9
x1 + 4x2 + 5x3 = 6

2. x1 + 3x2 + 2x3 = 4
2x1 + x2 + 2x3 = 5
x1 + 2x2 + x3 = 4

3. 4x1 + 2x2 = 5
x1 + 2x2 = 2

4. x1 + 0.5x2 = 4
2x1 + 3x2 = 1

5. x1 + 2x2 + 4x3 = 12
x1 + x2 + 5x3 = 10
3x2 + 6x3 = 8

6. 3x1 + 2x2 + 1x3 = 1


4x1 + x2 + 2x3 = 10
5x1 + 3x2 + 3x3 = 5

7. 10x1 + 1x2 + 2x3 = 11


1.5x1 + x2 + 1x3 = 1
3x1 + 1x3 = 5

2.2.3 Cramer’s Rule


Cramer’s rule is an explicit formula for the solution of a system of linear equations,
with each variable given by a quotient of two determinants. For each variable, the
denominator is the determinant of the matrix of coefficients, while the numerator is
the determinant of a matrix in which one column has been replaced by the vector of
constant terms.

Example 2.6 Reconsider the system solved above by row reduction; find the values of x, y and z.

x + 3y − 2z = 5

3x + 5y + 6z = 7
2x + 4y + 3z = 8
The determinant of the coefficient matrix is

      1 3 −2
|A| = 3 5 6   = −4
      2 4 3

Replacing one column of A at a time by the constant vector (5, 7, 8):

       5 3 −2                1 5 −2                1 3 5
|A1| = 7 5 6  = 60,  |A2| =  3 7 6  = −32,  |A3| = 3 5 7  = −8
       8 4 3                 2 8 3                 2 4 8

x = |A1|/|A| = 60/(−4) = −15,  y = |A2|/|A| = −32/(−4) = 8,  z = |A3|/|A| = −8/(−4) = 2
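These quotients are easy to verify in code. The sketch below (Python, illustrative; not part of the module) expands the 3 × 3 determinants by cofactors along the first row and applies Cramer's rule to the same system.

```python
def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def cramer3(A, b):
    """Solve a 3x3 system Ax = b with det(A) != 0 via Cramer's rule."""
    D = det3(A)
    solution = []
    for j in range(3):
        # Replace column j of A by the constant vector b
        Aj = [row[:] for row in A]
        for i in range(3):
            Aj[i][j] = b[i]
        solution.append(det3(Aj) / D)
    return solution

A = [[1, 3, -2], [3, 5, 6], [2, 4, 3]]
b = [5, 7, 8]
print(det3(A))        # -4
print(cramer3(A, b))  # [-15.0, 8.0, 2.0]
```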
Exercise 2.4 Use Cramer’s Rule to solve the following equations

1. x + 2y + 5z = 3
0.01x + 0.12y + 3z = 4
3x + 4y + 2z = 2.5

2. 4.25x + 13y + 10z = 15.4


2.25x + 3y + 1.25z = 12
3.5x + z = 20

3. x + 2y + 3z = −10
4x + 5y + 6z = 12/13
7x + 8y + 9z = 7.5

2.2.4 Inverse Method


If the equation system is expressed in the matrix form Ax = b, the entire solution
set can also be expressed in matrix form. If the matrix A is square (has m rows and
n = m columns) and has full rank (all m rows are independent), then the system has
a unique solution given by
x = A−1 b
where A−1 is the inverse of A.

When computing the inverse by pivoting, the following rules of thumb are helpful:

• Pick the column with the most zeros in it

• Use a row or column only once

• Pivot on a one if possible

• Pivot on the main diagonal

• Never pivot on a zero

• Never pivot on the right hand side

But if you get a row of all zeros except for the right hand side, then there is no
solution to the system. Moreover, if you get a row of all zeros, and the number of
non-zero rows is less than the number of variables, then the system is dependent:
there are infinitely many solutions, and the answer must be written in parametric form.
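For a 2 × 2 system, x = A−1 b can be computed directly from the closed-form inverse. A minimal Python sketch (illustrative; it uses the 2 × 2 adjugate formula rather than pivoting, and the system is item 3 of Exercise 2.3):

```python
def solve2(A, b):
    """Solve a 2x2 system Ax = b via x = A^-1 b, with
    A^-1 = (1/det A) * adj(A), assuming det A != 0."""
    a11, a12 = A[0]
    a21, a22 = A[1]
    det = a11 * a22 - a12 * a21
    # adj(A) applied to b, divided entrywise by det
    x1 = (a22 * b[0] - a12 * b[1]) / det
    x2 = (-a21 * b[0] + a11 * b[1]) / det
    return [x1, x2]

# 4x1 + 2x2 = 5, x1 + 2x2 = 2
print(solve2([[4, 2], [1, 2]], [5, 2]))  # [1.0, 0.5]
```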

2.3 Homogeneous Systems of Linear Equations
Definition: A linear equation is homogeneous if its constant term is zero, so that it
can be written as a11 x1 + a12 x2 + ... + a1n xn = 0

A homogeneous equation always has the trivial solution x1 = x2 = ... = xn = 0. A
homogeneous linear system of n equations in n unknowns of the form

a11 x1 + a12 x2 + ... + a1n xn = 0
a21 x1 + a22 x2 + ... + a2n xn = 0
.............................
an1 x1 + an2 x2 + ... + ann xn = 0

has a non-trivial solution if and only if the coefficient matrix A = (aij )n is singular
(i.e., |A| = 0).

Linear dependence

Definition: vectors a1 , a2 , ..., an in Rm are linearly dependent if there exist numbers
K1 , K2 , ..., Kn , not all zero, such that

K1 a1 + K2 a2 + ... + Kn an = 0

If this equation holds only when K1 = K2 = ... = Kn = 0, then the vectors are said
to be linearly independent.
     
1 4 7
Given the set S = { 2 , 5 , 8}
    
3 6 9
of vectors in the vector space R3 , determine whether S is linearly independent or
linearly dependent.

Step 1: Set up a homogeneous system of equations

The set S = {v1 , v2 , v3 } of vectors in R3 is linearly independent if the only solu-


tion of

c1 v1 + c2 v2 + c3 v3 = 0 (*)

is c1 = c2 = c3 = 0. Otherwise (i.e., if a solution with at least some nonzero values
exists), S is linearly dependent. With our vectors v1 , v2 , v3 , (*) becomes:

       
1 4 7 0
C1 2 + C2 5 + C3 8 = 0
      
3 6 9 0
The matrix equation above is equivalent to the following homogeneous system of
equations (**)

c1 + 4c2 + 7c3 = 0
2c1 + 5c2 + 8c3 = 0
3c1 + 6c2 + 9c3 = 0
Step 2: Transform the coefficient matrix of the system to the reduced row
echelon form

We now transform the coefficient matrix of the homogeneous system above to the
reduced row echelon form to determine whether the system has the trivial solution
only (meaning that S is linearly independent), or the trivial solution as well as non-
trivial ones (S is linearly dependent).
 
1 4 7
2 5 8
3 6 9

R2 − 2R1 → R2  
1 4 7
0 −3 −6
3 6 9
R3 − 3R1 → R3  
1 4 7
0 −3 −6 
0 −6 −12
−1/3R2 → R2  
1 4 7
0 1 2 
0 −6 −12
R3 + 6R2 → R3  
1 4 7
0 1 2
0 0 0

R1 − 4R2 → R1  
1 0 −1
0 1 2 
0 0 0
Step 3: Interpret the reduced row echelon form

The reduced row echelon form of the coefficient matrix of the homogeneous system
(**) is  
1 0 −1
0 1 2 
0 0 0
which corresponds to the system

c1 − c3 = 0

c2 + 2c3 = 0
0=0
Since some columns do not contain leading entries, then the system has nontrivial
solutions, so that some of the values c1 , c2 , c3 solving (*) may be nonzero. Therefore
the set S = {v1 , v2 , v3 } is linearly dependent.
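The same conclusion follows from the singularity test for homogeneous systems: n vectors in Rn are linearly dependent exactly when the matrix with those vectors as columns has determinant zero. A short Python check (illustrative, not part of the module):

```python
def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Columns are v1 = (1, 2, 3), v2 = (4, 5, 6), v3 = (7, 8, 9)
M = [[1, 4, 7],
     [2, 5, 8],
     [3, 6, 9]]
print(det3(M))  # 0, so the columns are linearly dependent
```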
     
1/4 1/3 1
Example Given the set S = { 4  ,  1  , 6} of vectors in the vector space
3 0 2
R3 , determine whether S is linearly independent or linearly dependent?

Step 1: Set up a homogeneous system of equations The set S = {v1 , v2 , v3 } of


vectors in R3 is linearly independent if the only solution of

c1 v1 + c2 v2 + c3 v3 = 0 (*)

is c1 = c2 = c3 = 0. Otherwise (i.e., if a solution with at least some nonzero values
exists), S is linearly dependent. With our vectors v1 , v2 , v3 , (*) becomes:
       
1/4 1/3 1 0
C1 4 + C2 1 + C3 6 = 0
      
3 0 2 0

The matrix equation above is equivalent to the following homogeneous system of


equations (**)
(1/4)c1 + (1/3)c2 + 1c3 = 0
4c1 + 1c2 + 6c3 = 0

3c1 + 0c2 + 2c3 = 0
Step 2: Transform the coefficient matrix of the system to the reduced row
echelon form

We now transform the coefficient matrix of the homogeneous system above to the
reduced row echelon form to determine whether the system has the trivial solution
only (meaning that S is linearly independent), or the trivial solution as well as non-
trivial ones (S is linearly dependent).
 
1/4 1/3 1
 4 1 6
3 0 2

Multiply the 1st row by 4


1 4/3 4
4 1 6
3 0 2
Add -4 times the 1st row to the 2nd row
 
1 4/3 4
0 −13/3 −10
3 0 2

Add -3 times the 1st row to the 3rd row


 
1 4/3 4
0 −13/3 −10
0 −4 −10

Multiply the 2nd row by -3/13


 
1 4/3 4
0 1 30/13
0 −4 −10

Add 4 times the 2nd row to the 3rd row


 
1 4/3 4
0 1 30/13 
0 0 −10/13

Multiply the 3rd row by -13/10


 
1 4/3 4
0 1 30/13
0 0 1

Add -30/13 times the 3rd row to the 2nd row
 
1 4/3 4
0 1 0
0 0 1
Add -4 times the 3rd row to the 1st row
 
1 4/3 0
0 1 0
0 0 1
Add -4/3 times the 2nd row to the 1st row
 
1 0 0
0 1 0
0 0 1
Step 3:Interpret the reduced row echelon form

The reduced row echelon form of the coefficient matrix of the homogeneous system
(**) is  
1 0 0
0 1 0
0 0 1
which corresponds to the system

1c1 =0
1c2 = 0
1c3 = 0

Since each column contains a leading entry, the system has only the trivial solution,
so that the only solution of (*) is c1 = c2 = c3 = 0. Therefore the set S = {v1 , v2 , v3 }
is linearly independent.

Exercise 2.5 Check whether the following vectors are linearly dependent or independent.
     
1 2 3
Given the set S = { 4 , 5 , 6} of vectors in the vector space R3 , determine
    
7 8 9
whether S is linearly independent or linearly dependent?

2.4 Economic Applications


Suppose that Almeda Textile factory discounts all its T-shirts, trousers and skirts by
10 percent at the end of the year. If v1 is the value of stock in the three branches

prior to the discount, find the value v2 after the 10 percent discount, if

T-shirt Trouser Skirt


 
outlet A 100 200 20
Total stock = outlet B  10 20 20 
outlet C 30 40 10
 
100 200 20
v1 = 10 20 20

30 40 10
 
100 200 20
v2 = 0.9  10 20 20
30 40 10
If the price of a T-shirt is Birr 250, the price of a trouser is Birr 550 and the price of
a skirt is Birr 450, use vector multiplication to determine the value of stock in outlet A.

The value of stock = Quantity × Price

The physical volume of stock in outlet A in vector form is


 
Q = 100 200 20

putting the price in vector form  


250
P = 550

450
 
  250
v = 100 200 20 550
450
(100)(250) + (200)(550) + (20)(450) = 25000 + 110000 + 9000 = 144000

Example

T-shirt Trouser Skirt Jacket


 
outlet A 200 0 540 600
outlet B 900 500 200 200 
Total stock = 
outlet C  200 100 400 600 
outlet D 300 800 100 500
Suppose every quantity in the stock matrix is scaled by k = 1.2. How do we solve this
problem? The product of a matrix A by a scalar k is always defined, and the result is
a matrix of the same size as A obtained by multiplying each entry of A by the scalar
k. The matrix kA is said to be a scalar multiple of A.

Step 1: Multiply each entry of the matrix A by the scalar k.
   
     1.2 × 200 1.2 × 0   1.2 × 540 1.2 × 600     240  0   648 720
kA = 1.2 × 900 1.2 × 500 1.2 × 200 1.2 × 200  =  1080 600 240 240
     1.2 × 200 1.2 × 100 1.2 × 400 1.2 × 600     240  120 480 720
     1.2 × 300 1.2 × 800 1.2 × 100 1.2 × 500     360  960 120 600
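The scalar multiple kA is a one-line computation in code. A Python sketch (illustrative; the stock matrix and k = 1.2 follow the example):

```python
def scalar_multiply(k, A):
    """Return kA: every entry of A multiplied by the scalar k."""
    return [[k * x for x in row] for row in A]

stock = [[200, 0, 540, 600],
         [900, 500, 200, 200],
         [200, 100, 400, 600],
         [300, 800, 100, 500]]
print(scalar_multiply(1.2, stock)[0])  # outlet A's row scaled by 1.2
```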
Example

The quantity of goods sold Q, the selling price of the goods P, and the unit cost of
the goods C are given for a hypothetical ABC company
     
100 10.50 1.25
Q = 200 P = 20.25 C = 2.25
300 30 3.50
Calculate
1. Total revenue
2. Total Cost
3. Per Unit profit
4. Total profit
Solution
1. Total revenue

P^T Q = (10.50 20.25 30)(100, 200, 300)^T = 1050 + 4050 + 9000 = 14100
2. Total Cost

C^T Q = (1.25 2.25 3.50)(100, 200, 300)^T = 125 + 450 + 1050 = 1625
3. Per Unit profit
     
10.50 1.25 9.25
Per unit profit = AP = 20.25 − 2.25 =  18 
30 3.50 26.50

4. Total profit

AP^T Q = (9.25 18 26.50)(100, 200, 300)^T = 925 + 3600 + 7950 = 12475
Note
Total profit = 14100 − 1625 = 12475
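All four quantities are inner products, so they are easy to check in code. A small Python sketch (illustrative; Q, P and C are the vectors from the example):

```python
Q = [100, 200, 300]       # quantities sold
P = [10.50, 20.25, 30.0]  # selling prices
C = [1.25, 2.25, 3.50]    # unit costs

def dot(u, v):
    """Inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

revenue = dot(P, Q)                          # P^T Q
cost = dot(C, Q)                             # C^T Q
unit_profit = [p - c for p, c in zip(P, C)]  # P - C
profit = dot(unit_profit, Q)                 # (P - C)^T Q

print(revenue, cost, profit)  # 14100.0 1625.0 12475.0
```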

CHAPTER SUMMARY
The main ideas of chapter two are summarized below.

In the equation system there are three essential ingredients. These are the set
of coefficients,the set of variables and the set of constant terms.

A general system of m linear equations with n unknowns can be written


as

a11 x1 + a12 x2 + · · · + a1n xn = b1


a21 x1 + a22 x2 + · · · + a2n xn = b2
.............................
am1 x1 + am2 x2 + · · · + amn xn = bm
Here x1 , x2 ,. . .,xn are the unknowns, a11 ,a12 ,. . .,amn are the coefficients of the
system, and b1 ,b2 ,. . .,bm are the constant terms. Often the coefficients and
unknowns are real or complex numbers, but integers and rational numbers
are also seen, as are polynomials and elements of an abstract algebraic structure.

To solve a certain given system of linear equations we can solve the un-
known through one of the following method. Elimination ,Gauss-Jordan
Row Reduction or Cramer’s Rule. There are three types of elementary row
operations:

• Type 1: Swap the positions of two rows.

• Type 2: Multiply a row by a nonzero scalar.

• Type 3: Add to one row a scalar multiple of another.

When pivoting, the following rules of thumb apply:

1. Pick the column with the most zeros in it


2. Use a row or column only once
3. Pivot on a one if possible
4. Pivot on the main diagonal
5. Never pivot on a zero
6. Never pivot on the right hand side

If the equation system is expressed in the matrix form Ax = b, the entire


solution set can also be expressed in matrix form. If the matrix A is square
(has m rows and n = m columns) and has full rank (all m rows are independent),
then the system has a unique solution given by

x = A−1 b
where A−1 is the inverse of A.
Reading Materials

1. Alpha C. Chiang, (1984). Fundamental Methods of mathematical economics.


3rd edition, Singapore.

2. Alpha C. Chiang, & Wainwright K., (2005). Fundamental Methods of mathe-


matical economics. 4th edition, Singapore.

3. Sydsaeter K., (2011). Mathematics essentials for Economic Analysis. Mekelle


University, Mekelle-Ethiopia.

4. Sydasaeter K. & Hammond Peter J., (2010). Mathematics for Economic Anal-
ysis. 5th edition, New Delhi.

2.5 SOLUTIONS FOR EXERCISES


2.5.1 Exercise 2.1
    
1 1 1 1 x1 3
1 −1 1 1 = −5 
x2 
  
1.  
0 1 −1 −1 x3   −4 
1 1 −1 −1 x4 −3
 
  x1  
−1 3 −2 4  2
x2  

2.  2 −6 1 −2 
 x3 = −1

1 −3 −4 −8 −4
x4

2.5.2 Exercise 2.2


1. Divide the 1st equation by 2 and express x1 by other variables

x1 = −1.5x2 + 2.5

x1 + 2x2 = 10
In the 2nd equation we substitute x1

x1 = −1.5x2 + 2.5
1(−1.5x2 + 2.5) + 2x2 = 10
after simplification we get:

x1 = −1.5x2 + 2.5

0.5x2 = 7.5

Divide the 2nd equation by 0.5 and express x2 by other variables

x1 = −1.5x2 + 2.5

x2 = +15
Now, moving from the last to the first equation, we can find the values of the other
variables.

Answer:

• x1 = −20
• x2 = 15

2. Simplify the system:


3x1 + x2 = 5
5x1 + 3x2 = 40
Divide the 1st equation by 3 and express x1 by other variables

x1 = −(1/3)x2 + (5/3)

5x1 + 3x2 = 40
In the 2nd equation we substitute x1

x1 = −(1/3)x2 + (5/3)

5(−(1/3)x2 + (5/3)) + 3x2 = 40


after simplification we get:

x1 = −(1/3)x2 + (5/3)

(4/3)x2 = 95/3
Divide the 2nd equation by 4/3 and express x2 by other variables

x1 = −(1/3)x2 + (5/3)

x2 = +23.75
Now, moving from the last to the first equation, we can find the values of the other
variables.
Answer:
x1 = −6.25
x2 = 23.75

3. Simplify the system:
3x1 + 4x2 = 3
5x1 + 30x2 = 46
Divide the 1st equation by 3 and express x1 by other variables

x1 = −(4/3)x2 + 1

5x1 + 30x2 = 46
In the 2nd equation we substitute x1

x1 = −(4/3)x2 + 1

5(−(4/3)x2 + 1) + 30x2 = 46
after simplification we get:

x1 = −(4/3)x2 + 1

(70/3)x2 = 41
Divide the 2nd equation by 70/3 and express x2 by other variables

x1 = −(4/3)x2 + 1

x2 = +(123/70)
Now, moving from the last to the first equation, we can find the values of the other
variables.

Answer:
x1 = −47/35
x2 = 123/70

4. Divide the 1st equation by 4 and express x1 by other variables

x1 = −1.25x2 + 7.5

2x1 + x2 = 13
In the 2nd equation we substitute x1

x1 = −1.25x2 + 7.5
2(−1.25x2 + 7.5) + x2 = 13
after simplification we get:

x1 = −1.25x2 + 7.5

−1.5x2 = −2
Divide the 2nd equation by -1.5 and express x2 by other variables

x1 = −1.25x2 + 7.5

x2 = +(4/3)
Now, moving from the last to the first equation, we can find the values of the other
variables.

Answer:
x1 = 35/6
x2 = 4/3

5. Simplify the system:


11x1 + 25x2 + 15x3 = 9
2x1 + 5x2 + 12x3 = 8
8x1 + 15x2 + 35x3 = 85
Divide the 1st equation by 11 and express x1 by other variables

x1 = −(25/11)x2 − (15/11)x3 + (9/11)

2x1 + 5x2 + 12x3 = 8


8x1 + 15x2 + 35x3 = 85
In the 2nd and 3rd equations we substitute x1

x1 = −(25/11)x2 − (15/11)x3 + (9/11)

2(−(25/11)x2 − (15/11)x3 + (9/11)) + 5x2 + 12x3 = 8


8(−(25/11)x2 − (15/11)x3 + (9/11)) + 15x2 + 35x3 = 85
after simplification we get:

x1 = −(25/11)x2 − (15/11)x3 + (9/11)

(5/11)x2 + (102/11)x3 = 70/11


−(35/11)x2 + (265/11)x3 = 863/11
Divide the 2nd equation by 5/11 and express x2 by other variables

x1 = −(25/11)x2 − (15/11)x3 + (9/11)

x2 = −20.4x3 + 14

−(35/11)x2 + (265/11)x3 = 863/11
In the 3rd equation we substitute x2

x1 = −(25/11)x2 − (15/11)x3 + (9/11)


x2 = −20.4x3 + 14
−(35/11)(−20.4x3 + 14) + (265/11)x3 = 863/11
after simplification we get:

x1 = −(25/11)x2 − (15/11)x3 + (9/11)

x2 = −20.4x3 + 14
89x3 = 123
Divide the 3rd equation by 89 and express x3 by other variables

x1 = −(25/11)x2 − (15/11)x3 + (9/11)

x2 = −20.4x3 + 14
x3 = +(123/89)
Now, moving from the last to the first equation, we can find the values of the other
variables.

Answer:
x1 = 2776/89
x2 = −6316/445
x3 = 123/89

6. Simplify the system:

2x1 + 9x2 + 6x3 = 19


10x1 + 43x2 + 12x3 = 270
2x1 + x2 + 2x3 = 30
Divide the 1st equation by 2 and express x1 by other variables x1 = −4.5x2 −
3x3 + 9.5
10x1 + 43x2 + 12x3 = 270
2x1 + x2 + 2x3 = 30
In the 2nd and 3rd equations we substitute x1

x1 = −4.5x2 − 3x3 + 9.5

10(−4.5x2 − 3x3 + 9.5) + 43x2 + 12x3 = 270
2(−4.5x2 − 3x3 + 9.5) + x2 + 2x3 = 30
after simplification we get:

x1 = −4.5x2 − 3x3 + 9.5

−2x2 − 18x3 = 175


−8x2 − 4x3 = 11
Divide the 2nd equation by -2 and express x2 by other variables

x1 = −4.5x2 − 3x3 + 9.5


x2 = −9x3 − 87.5
−8x2 − 4x3 = 11
In the 3rd equation we substitute x2

x1 = −4.5x2 − 3x3 + 9.5

x2 = −9x3 − 87.5
−8(−9x3 − 87.5) − 4x3 = 11
after simplification we get:

x1 = −4.5x2 − 3x3 + 9.5


x2 = −9x3 − 87.5
68x3 = −689
Divide the 3rd equation by 68 and express x3 by other variables

x1 = −4.5x2 − 3x3 + 9.5

x2 = −9x3 − 87.5
x3 = −(689/68)
Now, moving from the last to the first equation, we can find the values of the other
variables.

Answer:
x1 = 3167/136
x2 = 251/68
x3 = −689/68

2.5.3 Exercise 2.3
 
2 3 4 10
1.  2 7 1 9 
1 4 5 6
Divide the 1st row by 2
 
1 1.5 2 5
 2 7 1 9 
1 4 5 6
R2 − 2R1 → R2
R3 − R1 → R3
 
1 1.5 2 5
 0 4 −3 −1 
0 2.5 3 1
Divide the 2nd row by 4
 
1 1.5 2 5
 0 1 −0.75 −0.25 
0 2.5 3 1
R1 − 1.5R2 → R1

R3 − 2.5R2 → R3
 
1 0 3.125 5.375
 0 1 −0.75 −0.25 
0 0 4.875 1.625
Divide the 3rd row by 4.875
 
1 0 3.125 5.375
 0 1 −0.75 −0.25 
0 0 1 1/3
R1 − 3.125R3 → R1

R2 + 0.75R3 → R2
 
1 0 0 13/3
 0 1 0 0 
0 0 1 1/3
Answer:

• x1 = 13/3
• x2 = 0

• x3 = 1/3
 
1 3 2 4
2.  2 1 2 5 
1 2 1 4
R2 − 2R1 → R2
R3 − R1 → R3
 
1 3 2 4
 0 −5 −2 −3 
0 −1 −1 0
Divide the 2nd row by -5
 
1 3 2 4
 0 1 0.4 0.6 
0 −1 −1 0
R1 − 3R2 → R1
R3 + R2 → R3
 
1 0 0.8 2.2
 0 1 0.4 0.6 
0 0 −0.6 0.6
Divide the 3rd row by -0.6
 
1 0 0.8 2.2
 0 1 0.4 0.6 
0 0 1 −1
R1 − 0.8R3 → R1
R2 − 0.4R3 → R2
 
1 0 0 3
 0 1 0 1 
0 0 1 −1
Answer:

• x1 = 3
• x2 = 1
• x3 = −1
 
4 2 5
3.
1 2 2
Divide the 1st row by 4

 
1 0.5 1.25
1 2 2
R2 − R1 → R2
 
1 0.5 1.25
0 1.5 0.75
Divide the 2nd row by 1.5
 
1 0.5 1.25
0 1 0.5
R1 − 0.5R2 → R1
 
1 0 1
0 1 0.5
Answer:

• x1 = 1
• x2 = 0.5
 
1 0.5 4
4.
2 3 1
R2 − 2R1 → R2
 
1 0.5 4
0 2 −7
Divide the 2nd row by 2
 
1 0.5 4
0 1 −3.5
R1 − 0.5R2 → R1
 
1 0 5.75
0 1 −3.5
Answer:

• x1 = 5.75
• x2 = −3.5
 
1 2 4 12
5.  1 1 5 10 
0 3 6 8
R2 − R1 → R2

 
1 2 4 12
 0 −1 1 −2 
0 3 6 8
Divide the 2nd row by -1
 
1 2 4 12
 0 1 −1 2 
0 3 6 8
R1 − 2R2 → R1
R3 − 3R2 → R3
 
1 0 6 8
 0 1 −1 2 
0 0 9 2
Divide the 3rd row by 9
 
1 0 6 8
 0 1 −1 2 
0 0 1 2/9
R1 − 6R3 → R1
R2 + R3 → R2
 
1 0 0 20/3
 0 1 0 20/9 
0 0 1 2/9
Answer:

• x1 = 20/3
• x2 = 20/9
• x3 = 2/9
 
3 2 1 1
6.  4 1 2 10 
5 3 3 5
Divide the 1st row by 3
 
1 2/3 1/3 1/3
 4 1 2 10 
5 3 3 5
R2 − 4R1 → R2
R3 − 5R1 → R3

 
1 2/3 1/3 1/3
 0 −5/3 2/3 26/3 
0 −1/3 4/3 10/3
Divide the 2nd row by -5/3
 
1 2/3 1/3 1/3
 0 1 −0.4 −5.2 
0 −1/3 4/3 10/3

R1 − 2/3R2 → R1

R3 + 1/3R2 → R3
 
1 0 0.6 3.8
 0 1 −0.4 −5.2 
0 0 1.2 1.6
Divide the 3rd row by 1.2
 
1 0 0.6 3.8
 0 1 −0.4 −5.2 
0 0 1 4/3
R1 − 0.6R3 → R1
R2 + 0.4R3 → R2
 
1 0 0 3
 0 1 0 −14/3 
0 0 1 4/3
Answer:

• x1 = 3
• x2 = −14/3
• x3 = 4/3
 
10 1 2 11
7.  1.5 1 1 1 
3 0 1 5
Divide the 1st row by 10
 
1 0.1 0.2 1.1
 1.5 1 1 1 
3 0 1 5
R2 − 1.5R1 → R2
R3 − 3R1 → R3

 
1 0.1 0.2 1.1
 0 0.85 0.7 −0.65 
0 −0.3 0.4 1.7
Divide the 2nd row by 0.85
 
1 0.1 0.2 1.1
 0 1 14/17 −13/17 
0 −0.3 0.4 1.7
R1 − 0.1R2 → R1
R3 + 0.3R2 → R3
 
1 0 2/17 20/17
 0 1 14/17 −13/17 
0 0 11/17 25/17
Divide the 3rd row by 11/17
 
1 0 2/17 20/17
 0 1 14/17 −13/17 
0 0 1 25/11
R1 − 2/17R3 → R1
R2 − 14/17R3 → R2
 
1 0 0 10/11
 0 1 0 −29/11 
0 0 1 25/11
Answer:
• x1 = 10/11
• x2 = −29/11
• x3 = 25/11

2.5.4 Exercise 2.4


   
1. Suppose that

    1    2    5          3
A = 0.01 0.12 3    B =   4
    3    4    2          2.5

|A| = 4.6

        3   2    5
|A1 | = 4   0.12 3   = 42.22
        2.5 4    2

        1    3   5
|A2 | = 0.01 4   3   = −32.435
        3    2.5 2

        1    2    3
|A3 | = 0.01 0.12 4   = 7.29
        3    4    2.5

x = |A1 |/|A| = 42.22/4.6 ≈ 9.178,  y = |A2 |/|A| = −32.435/4.6 ≈ −7.051,
z = |A3 |/|A| = 7.29/4.6 ≈ 1.585

2. suppose that    
4.25 13 10 15.4
A = 2.25 3 1.25 B =  12 
3.5 0 1 20
 
        15.4 13 10
|A1 | = 12   3  1.25  = −384.8
        20   0  1
 
4.25 15.4 10
|A2 | = 2.25 12 1.25 = 7.475
3.5 20 1
 
4.25 13 15.4
|A3 | = 2.25 3 12  = 54.3
3.5 0 20
x = |A1 |/|A|,  y = |A2 |/|A|,  z = |A3 |/|A|,  where |A| = −64.625
3. suppose that    
1 2 3 −10
A = 4 5 6 B = 12/13
7 8 9 7.5
 
1 2 3
|A| = 4 5 6 = 0

7 8 9
Since |A| = 0 the coefficient matrix is singular, so Cramer’s Rule cannot be applied;
this particular system is inconsistent and has no solution.

Chapter 3
Special Determinants and Matrices in
Economics

3.1 Introduction
Objectives

This chapter will help students to:

• Understand the Jacobian determinant

• Understand the Hessian determinant and unconstrained op-


timization

• Understand the Bordered Hessian determinant and con-


strained optimization

• How to compute Eigen Values and Eigen Vectors

3.2 The Jacobian Determinant


Definition: A Jacobian determinant |J| is the determinant of the matrix of first order
partial derivatives of a system of equations, arranged in ordered sequence. |J| helps us
to test the existence of functional dependence both for linear and non linear functions.

Given
y1 = f1 (x1 , x2 , x3 )

y2 = f2 (x1 , x2 , x3 )
y3 = f3 (x1 , x2 , x3 )
|J| = ∂(y1 , y2 , y3 )/∂(x1 , x2 , x3 ) =

∂y1 /∂x1  ∂y1 /∂x2  ∂y1 /∂x3
∂y2 /∂x1  ∂y2 /∂x2  ∂y2 /∂x3
∂y3 /∂x1  ∂y3 /∂x2  ∂y3 /∂x3
Note:
• Elements of each row are the partial derivatives of one of function yi with respect
to each of the independent variables x1 ,x2 ,x3 .
• Elements of each column are the partial derivatives of one of function y1 ,y2 ,y3
with respect to one of the independent variables xj
• If |J| = 0, the equations are functionally dependent.
• If |J| ≠ 0, the equations are functionally independent.
Example Use the Jacobian to test for functional dependence
y 1 = x1 + x2
y2 = 2x1 + 3x2
Solution
∂y1 /∂x1 = 1,  ∂y1 /∂x2 = 1
∂y2 /∂x1 = 2,  ∂y2 /∂x2 = 3

      ∂y1 /∂x1  ∂y1 /∂x2     1 1
|J| =                     =       = 3 − 2 = 1
      ∂y2 /∂x1  ∂y2 /∂x2     2 3
Since |J| ≠ 0, y1 and y2 are functionally independent!
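Because the partial derivatives here are constants, the whole test is one 2 × 2 determinant. A tiny Python sketch (illustrative, not part of the module):

```python
def jacobian_det_2x2(J):
    """Determinant of a 2x2 Jacobian matrix of first order partials."""
    return J[0][0] * J[1][1] - J[0][1] * J[1][0]

# y1 = x1 + x2 and y2 = 2x1 + 3x2 have constant partials
J = [[1, 1],
     [2, 3]]
print(jacobian_det_2x2(J))  # 1, nonzero: y1 and y2 are functionally independent
```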

Self Test Exercise Use the Jacobian to test for functional dependence
1. y1 = 2x1 + 4x2
y2 = 8x1 + 16x2
2. y1 = x1 + x22
y 2 = x1 + x2
3. y1 = 2x1 + 3x2
y2 = 4x21 + 12x1 x2 + 9x22
4. y1 = 13 x1 + 4x2
y2 = 2.4x1 + 3.5x2

3.3 The Hessian Determinant
Definition: A Hessian |H| is a determinant composed of all the second order partial
derivatives with the second order direct partials on the principal diagonal and the
second order cross partials off the principal diagonal. Suppose the first order or
necessary conditions for a multivariate function z = f (x1 , x2 ) to be at an optimum
are met. The second order or sufficient conditions are

• zx1 x1 , zx2 x2 > 0 for a minimum

• zx1 x1 , zx2 x2 < 0 for a maximum

together with, in both cases,

zx1 x1 zx2 x2 > (zx1 x2 )2

The equivalent form of the Hessian determinant |H|

zx1 x1 zx1 x2
|H| =
zx2 x1 zx2 x2

Where
zx1 x2 = zx2 x1

1. If the first element on the principal diagonal, the first principal minor, is |H1 | =
zx1 x1 > 0 and the second principal minor

zx1 x1 zx1 x2
|H2 | = = zx1 x1 zx2 x2 − (zx1 x2 )2 > 0
zx2 x1 zx2 x2

meets the second order condition for a minimum. The |H| is called positive
definite.

2. If the first element on the principal diagonal, the first principal minor, is |H1 | =
zx1 x1 < 0 and the second principal minor

zx1 x1 zx1 x2
|H2 | = = zx1 x1 zx2 x2 − (zx1 x2 )2 > 0
zx2 x1 zx2 x2

meets the second order condition for a maximum. The |H| is called negative
definite.

3.3.1 Unconstrained optimization and |H|


For a multivariate function z = f (x1 , x2 , . . . , xn ), the Hessian determinant is

f11 f12 . . . f1n
f21 f22 . . . f2n
|H| =  ...  ...  ...  ...
fn1 fn2 . . . fnn
The necessary condition for the function to be at a point of relative extremum is
that all the first derivatives should vanish.
f1 = f2 = · · · = fn = 0
The second order sufficient condition for extremum is that
1. All the principal minors should be positive for the function to be a minimum.
|H1 | > 0, |H2 | > 0. . . , |Hn | > 0 , this is equivalent to say the quadratic form of
the discriminant described before is positive definite.
2. For a maximum the principal minor should start with negative and alternate in
sign |H1 | < 0, |H2 | > 0 , |H3 | < 0 . . .
Example : Find the extreme value of
z = 2x21 + x1 x2 + 4x22 + x1 x3 + x23 + 2
Solution

First order necessary conditions

f1 = 0 4x1 + x2 + x3 = 0

f2 = 0 x1 + 8x2 + 0 = 0

f3 = 0 x1 + 0 + 2x3 = 0

We find a unique solution, i.e., x1 = x2 = x3 = 0 and z = 2

Second order sufficient conditions

f11 = 4 f12 = f21 = 1

f22 = 8 f23 = f32 = 0

f33 = 2 f31 = f13 = 1

f11 f12 f13 4 1 1


|H| = f21 f22 f23 = 1 8 0
f31 f32 f33 1 0 2

So that
|H1 | = 4 > 0
4 1
|H2 | = = 31 > 0
1 8
4 1 1
|H3 | = 1 8 0 = 54 > 0
1 0 2
Thus we can conclude that z̄ = 2 is a minimum point.
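The sign pattern of |H1 |, |H2 |, |H3 | can be checked numerically. A Python sketch (illustrative) computing the leading principal minors of the Hessian from this example:

```python
def leading_minors(H):
    """Leading principal minors |H1|, |H2|, |H3| of a 3x3 matrix."""
    h1 = H[0][0]
    h2 = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    h3 = (H[0][0] * (H[1][1] * H[2][2] - H[1][2] * H[2][1])
          - H[0][1] * (H[1][0] * H[2][2] - H[1][2] * H[2][0])
          + H[0][2] * (H[1][0] * H[2][1] - H[1][1] * H[2][0]))
    return h1, h2, h3

H = [[4, 1, 1],
     [1, 8, 0],
     [1, 0, 2]]
print(leading_minors(H))  # (4, 31, 54): all positive, so z = 2 is a minimum
```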

Example Given the following profit function for a competitive firm that produces
two products as
Π = P1 Q1 + P2 Q2 − 2Q21 − Q1 Q2 − 2Q22
Does the firm maximize profit by choosing Q1 and Q2 ?

Solution

Π1 = 0 P1 − 4Q1 − Q2 = 0

Π2 = 0 P2 − Q1 − 4Q2 = 0

Π11 = −4, Π12 = Π21 = −1 and Π22 = −4

So that
|H1 | = −4 < 0
−4 −1
|H2 | = |H| = = 15 > 0
−1 −4
Therefore we can conclude that the firm maximizes profit by this choice of Q1 and Q2 .

3.4 Eigen vectors and Eigen values


A non-zero vector x ∈ Rn that solves Ax = λx is called an Eigen vector or charac-
teristic vector, and the associated λ is called an Eigen value or characteristic value.

Given a matrix A which is n × n with a special property

Ax = λx

(A − λI)x = 0
Where I denotes the identity matrix of order n, this homogeneous system of
equations has a non-trivial solution x ≠ 0 if and only if the coefficient matrix has

determinant equal to zero. That is, iff

|A − λI| = 0
Example Given a 2 × 2 matrix

    a11 a12
A =
    a21 a22
1. Determine the sign of its Eigen Values?

2. Show when the Eigen Values are real?

Solution First find the characteristic equation as follows:

|A − λI| = 0

a11 − λ   a12
               = 0
a21       a22 − λ
λ2 − (a11 + a22 )λ + (a11 a22 − a12 a21 ) = 0
The roots of this quadratic equation known as the characteristic equation is
λ = (1/2)(a11 + a22 ) ± √[ (1/4)(a11 + a22 )2 − (a11 a22 − a12 a21 ) ]
So that the roots are real when

(a11 + a22 )2 ≥ 4(a11 a22 − a12 a21 )

If the real Eigen Values are λ1 and λ2 , then the sum λ1 + λ2 of the Eigen Values is
equal to a11 + a22 , the sum of the diagonal elements (i.e., the trace of the matrix),
and the product λ1 λ2 of the Eigen Values is equal to the determinant
a11 a22 − a12 a21 = |A|. The following points can be deduced from the characteristic
equation:

• Both Eigen Values are positive if and only if a11 + a22 > 0 and |A| > 0

• Both Eigen Values are negative if and only if a11 + a22 < 0 and |A| > 0

• The two Eigen Values have different signs if and only if |A| < 0

Example Find the Eigen Values and Eigen Vectors of a 2 × 2 matrix
 
4 2
A=
1 3

First we compute

     
4 2 λ 0 4−λ 2
|A − λI| = − =
1 3 0 λ 1 3−λ
|A − λI| = (4 − λ)(3 − λ) − (2)(1)
We have to set this equal to zero to find the values of λ that make this true:

(4 − λ)(3 − λ) − 2 · 1 = 10 − 7λ + λ2 = (2 − λ)(5 − λ) = 0
This means that λ = 2 and λ = 5 are solutions.

Now if we want to find the Eigen Vectors that correspond to these values we look at
vectors v such that
4−λ 2
v=0
1 3−λ
For λ = 5     
4−5 2 x 0
=
1 3−5 y 0
    
−1 2 x 0
=
1 −2 y 0
This gives us the equalities
−x + 2y = 0
x − 2y = 0
 
These equations give us the line y = (1/2)x. Any point on this line, for example
the vector (2, 1), is an Eigen Vector with Eigen Value λ = 5.

Now let’s find the Eigen Vector for λ = 2


  
4−2 2 x
1 3−2 y
    
2 2 x 0
=
1 1 y 0

which gives the equalities
2x + 2y = 0
x+y =0
These two equations are not independent of one another. This means any vector
v = (x, y) with y = −x, such as (1, −1), or any scalar multiple of this vector on the
line y = −x, is an Eigen Vector with Eigen Value 2. This solution could be written
neatly as

λ1 = 5, v1 = (2, 1),  and  λ2 = 2, v2 = (1, −1)
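For a 2 × 2 matrix the Eigen Values follow from the trace and determinant via the characteristic equation derived above. A Python sketch (illustrative; real roots assumed) that recovers λ1 = 5 and λ2 = 2 and checks Av = λv for v1 = (2, 1):

```python
import math

def eigen_2x2(A):
    """Eigenvalues of a 2x2 matrix from lambda^2 - tr(A) lambda + det(A) = 0
    (assumes the discriminant is non-negative, i.e. real roots)."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = math.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

A = [[4, 2], [1, 3]]
print(eigen_2x2(A))  # (5.0, 2.0)

# Check A v = lambda v for the eigenvector v1 = (2, 1) of lambda = 5
v = (2, 1)
Av = (A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1])
print(Av)  # (10, 5), which is 5 * (2, 1)
```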
Exercise 3.1 Given the following matrices find the characteristic polynomial and
characteristic roots
 
4 1
1. A=
2 −1
 
5 −6
2. A =
1 −5

 
9 −3
3. A=
−1 7
 
−2 −9
4. A=
−4 −2
 
−6 0
5.
−3 8
 
8 7
6.
8 −4

3.5 Quadratic Forms


Quadratic forms are special matrix functions that are very important for the deriva-
tion of second order conditions for maxima and minima.
   
Let A be the 2 × 2 matrix with entries a11 , a12 , a21 , a22 and let x = (x1 , x2 )T be a
2 × 1 vector.
Then q(x) = xT Ax is said to be a quadratic form, where xT represents the transpose
of the vector x. Written out, the quadratic form is as follows:

   
  a11 a12 x1
q(x) = (x1 x2 )1×2 A2×2 (x1 , x2 )T2×1

q(x) = a11 x21 + a12 x1 x2 + a21 x2 x1 + a22 x22
The general form can be written as

Q(x) = xT Ax = ΣΣ aij xi xj   (i, j = 1, . . . , n)

Suppose we have the following second order total differential

d2 z = fxx dx2 + 2fxy dx dy + fyy dy 2

Now let’s consider dx and dy as variables

Let dx = u and dy = v, and consider the partial derivatives as constants, letting
a = fxx , b = fyy and h = fxy = fyx , so that the quadratic form of the above differen-
tial equation can be rewritten as follows:

qz = au2 + 2huv + bv 2

Then we can form symmetric matrix from the above equation by placing the squared
terms on the diagonal and splitting 2huv into two equal parts and placing it on the
off diagonal   
  a h u
q(x) = u v
h b v
 
a h
The form D = is known as the discriminant of the quadratic form. Here
h b

a h
1. q is positive definite if and only if |a| > 0 and >0
h b

a h
2. q is negative definite if and only if |a| < 0 and <0
h b

Where |a| = a is the first leading principal minor and

a h
h b

is the second leading principal minor of D. Using these two terms we determine the
sign in the total differential case.
d2 z = fxx dx2 + 2fxy dx dy + fyy dy 2

The discriminant of d2 z is composed of the second order partial derivatives. Such a
discriminant is known as the Hessian determinant and it is given by
 
fxx fxy
|H| =
fyx fyy

Since fxy = fyx the Hessian matrix is symmetric

Example Determine the sign definiteness of 5u2 + 3uv + 2v 2 ?

Solution
5 1.5
D= = 7.75 > 0
1.5 2
Therefore, q is positive definite.

Example If fxx = −2 and fxy = 1 and fyy = −1, what is the sign for d2 z if
z = f (x, y)

Solution
−2 1
D=
1 −1
|D1 | = −2 < 0
And
−2 1
|D2 | = =1>0
1 −1
Therefore, d2 z is negative definite.
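The leading-principal-minor test is mechanical enough to code. A Python sketch (illustrative; the classification strings are informal labels, and the semidefinite borderline cases are lumped together) applied to both discriminants above:

```python
def definiteness_2x2(D):
    """Classify a symmetric 2x2 discriminant by its leading principal minors."""
    d1 = D[0][0]
    d2 = D[0][0] * D[1][1] - D[0][1] * D[1][0]
    if d1 > 0 and d2 > 0:
        return "positive definite"
    if d1 < 0 and d2 > 0:
        return "negative definite"
    return "indefinite or semidefinite"

# q = 5u^2 + 3uv + 2v^2 has discriminant [[5, 1.5], [1.5, 2]]
print(definiteness_2x2([[5, 1.5], [1.5, 2]]))  # positive definite
# d2z with fxx = -2, fxy = 1, fyy = -1
print(definiteness_2x2([[-2, 1], [1, -1]]))    # negative definite
```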

For three variable

1. The quadratic form q will be positive definite if and only if |D1 | > 0,|D2 | > 0
and |D3 | > 0

2. The quadratic form q will be negative definite if and only if |D1 | < 0,|D2 | > 0
and |D3 | < 0

Exercise 3.2 State whether the following quadratic forms are positive or negative
definite.

1. Q = 5u2 − 4uv + 2v 2 ?

2. Q = u21 + 6u22 + 3u23 − 2u1 u2 − 4u2 u3

3.5.1 Positive and negative definiteness
1. The quadratic form q is said to be positive definite if q is invariably positive
(i.e., q(x) > 0)

2. The quadratic form q is said to be positive semi-definite if q is invariably non


negative (i.e., q(x) ≥ 0)

3. The quadratic form q is said to be negative definite if q is invariably negative


(i.e., q(x) < 0)

4. The quadratic form q is said to be negative semi- definite if q is invariably non


positive (i.e., q(x) ≤ 0)

5. If q changes sign when the variables assume different values, q is said to be
sign indefinite.

CHAPTER SUMMARY
The main ideas of chapter three are summarized below.

A Jacobian determinant |J| is the determinant of the matrix of first order partial


derivatives of a system of equations arranged in ordered sequence. |J| helps us
to test the existence of functional dependence both for linear and non linear
functions.

A Hessian |H| is a determinant composed of all the second order partial


derivatives with the second order direct partials on the principal diagonal and
the second order cross partials off the principal diagonal.

A non zero vector x ∈ Rn that solves Ax = λx is called an Eigen vec-


tor or characteristic vector and the associated λ is called an Eigne value
characteristic value.

Reading Materials

1. Alpha C. Chiang, (1984). Fundamental Methods of Mathematical Economics.
3rd edition, Singapore.

2. Alpha C. Chiang & Wainwright K., (2005). Fundamental Methods of Mathematical
Economics. 4th edition, Singapore.

3. Sydsaeter K., (2011). Mathematics Essentials for Economic Analysis. Mekelle
University, Mekelle, Ethiopia.

4. Sydsaeter K. & Hammond Peter J., (2010). Mathematics for Economic Analysis.
5th edition, New Delhi.

3.5.2 Exercise 3.1

1. Characteristic polynomial: λ² − 3λ − 6; eigenvalues λ ≈ −1.3723, λ ≈ 4.3723

2. Characteristic polynomial: λ² − 19
Real eigenvalues: (−4.359, 4.359)
Eigenvector for eigenvalue −4.359: (0.5397, 0.842)
Eigenvector for eigenvalue 4.359: (0.994, 0.106)

3. Characteristic polynomial: λ² − 16λ + 60
Real eigenvalues: (6, 10)
Eigenvector for eigenvalue 6: (1, 1)
Eigenvector for eigenvalue 10: (−3, 1)

4. Characteristic polynomial: λ² + 4λ − 32
Real eigenvalues: (−8, 4)
Eigenvector for eigenvalue −8: (1, 0.667)
Eigenvector for eigenvalue 4: (1, −0.667)

5. Characteristic polynomial: λ² − 2λ − 48
Real eigenvalues: (−6, 8)
Eigenvector for eigenvalue −6: (1, 0.214)
Eigenvector for eigenvalue 8: (0, 1)

6. Characteristic polynomial: λ² − 4λ − 88
Real eigenvalues: (−7.592, 11.592)
Eigenvector for eigenvalue −7.592: (1, −2.227)
Eigenvector for eigenvalue 11.592: (1, 0.513)
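Answers like these can be verified numerically with NumPy. The exercise matrices themselves are not reproduced in this excerpt, so the matrix below is an illustrative reconstruction consistent with answer 3 (trace 16 and determinant 60, giving characteristic polynomial λ² − 16λ + 60):

```python
import numpy as np

# Illustrative matrix consistent with answer 3 above (an assumption:
# the original exercise matrices are not shown in this section).
A = np.array([[9.0, -3.0], [-1.0, 7.0]])

eigvals, eigvecs = np.linalg.eig(A)
order = np.argsort(eigvals)      # sort so eigenvalues come out as (6, 10)
eigvals = eigvals[order]
eigvecs = eigvecs[:, order]

print(eigvals)                   # ≈ [ 6. 10.]
# Each column of eigvecs solves A x = λ x; check the defining equation.
for lam, x in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ x, lam * x)
```

NumPy returns normalized eigenvectors, so they match the tabulated ones only up to a scalar multiple: the column for λ = 6 is proportional to (1, 1) and the column for λ = 10 to (−3, 1).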

3.5.3 Exercise 3.2

1. $D = \begin{pmatrix} 5 & -2 \\ -2 & 2 \end{pmatrix}$, with $|D_1| = 5 > 0$ and $|D_2| = \begin{vmatrix} 5 & -2 \\ -2 & 2 \end{vmatrix} = 6 > 0$
Therefore, q is positive definite.

2. $D = \begin{pmatrix} 1 & -1 & 0 \\ -1 & 6 & -2 \\ 0 & -2 & 3 \end{pmatrix}$
$$|D_1| = 1 > 0, \qquad |D_2| = \begin{vmatrix} 1 & -1 \\ -1 & 6 \end{vmatrix} = 5 > 0 \qquad\text{and}\qquad |D_3| = \begin{vmatrix} 1 & -1 & 0 \\ -1 & 6 & -2 \\ 0 & -2 & 3 \end{vmatrix} = 11 > 0$$

Therefore, the given quadratic form is positive definite.

Chapter 4
Input-Output and Linear Programming

4.1 Introduction
Objectives

This chapter will help students to:

• Understand the Leontief Input-Output Model

• Understand the Closed Economy Input-Output model

• Understand the Open Economy Input-Output model

• Understand limitations of the Input-Output model

4.2 Input-Output Model (Leontief Model)


4.3 Introduction
Professor Wassily Leontief, in his famous book, treated the economy as an input-output
system and received the Nobel Prize in Economics in 1973 for this work. He was the
first to put to use the concept of the economic system as a working aggregation
of interrelated parts, in which all the parts have their place. His work has been
applied in more than fifty nations and international agencies as a predictive tool for
economic planning. In order to produce something, each sector needs to consume some of
its own output and some of the output of the other sectors. In the model there are n
industries producing n different products such that input equals output or,
in other words, consumption equals production. One distinguishes two models:

Open model: some production is consumed internally by the industries, the rest is
consumed by external bodies.
Problem: find the production level if external demand is given.

Closed model: the entire production is consumed by the industries.
Problem: find the relative price of each product.

4.3.1 Assumptions of Input-Output Models

Since a Leontief input-output model can involve a large number of industries and can
become quite complicated, the following simplifying assumptions are adopted:

1. Each industry produces only one homogeneous commodity.

2. Each industry uses a fixed input ratio for the production of its output.

3. Production in every industry is subject to constant returns to scale (constant
returns to scale means a k-fold change in every input will result in an exactly
k-fold change in output).

4.3.2 The Closed Economy Model

Assume that an economy consists of n interdependent industries (or sectors) S1, S2, ...,
Sn. Each industry will consume some of the goods produced by the other industries,
including itself (for example, a power-generating plant uses some of its own power
for production). An economy is called closed if it satisfies its own needs; that is, no
goods leave or enter the system.

4.3.3 The Open Economy Model

In the closed Leontief model no goods leave or enter the economy. However, in the
real economic world this does not happen very often: normally, an economy faces
outside demand, for example from government agencies. We therefore use the open
Leontief model. In the open Leontief model there are n industries in an economy. Each
industry has a demand for products from the other industries (internal demand), and
there is also external demand from outside. We will find a production level for the
industries that satisfies both internal and external demands. In an open economy,
the sectors cooperate to satisfy an external demand for each sector.

If the inverse of the matrix (I − A) exists, (I − A)⁻¹ is called the Leontief inverse.
For a given realistic economy, a solution obviously must exist.

• Let aij: the number of units of industry Si's output required to produce one unit of
industry Sj's output

• xi: the production level of industry Si

• aij xj: the number of units produced by industry Si and consumed by industry Sj

• Di: external demand for the output of the i-th industry

• Then, the total number of units produced by industry Si is

xi = ai1 x1 + ai2 x2 + · · · + ain xn + Di

$$A = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix} \qquad D = \begin{pmatrix} D_1 \\ \vdots \\ D_n \end{pmatrix} \qquad X = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}$$
Matrix A is called the input-output matrix or consumption matrix. A consumption
matrix shows the quantity of inputs needed to produce one unit of a good. The rows of
the matrix represent the producing sectors of the economy; the columns represent the
consuming sectors. The entry aij in the consumption matrix represents what fraction
of the total production value of sector j is spent on products from sector i. D is the
demand vector; it represents demand from the non-producing sector of the economy.
The vector X represents the total amount of product produced.

Example: Assume there are 5 sectors of an economy:

Sector 1: Auto
Sector 2: Steel
Sector 3: Electricity
Sector 4: Coal
Sector 5: Chemical
In general, let x1, x2, ..., xn be the total output of industries S1, S2, ..., Sn respectively.
Then
x1 = a11 x1 + a12 x2 + · · · + a1n xn + D1
x2 = a21 x1 + a22 x2 + · · · + a2n xn + D2
...
xn = an1 x1 + an2 x2 + · · · + ann xn + Dn
Assume further that aij represents the dollar amount of sector i used to produce $1.00
of sector j. The matrix containing these terms is given as

Since aij xj is the number of units produced by industry Si and consumed by
industry Sj, total consumption equals total production for the product of
each industry Si.

The first column of this matrix can be interpreted as the dollar amounts of each
industry needed to produce 1 dollar of Auto: 15 cents worth of Auto, 40 cents worth
of Steel, 10 cents worth of Electricity, 10 cents worth of Coal, and 5 cents worth of
Chemical go into producing 1 dollar of Auto.

   
$$A = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix} \qquad D = \begin{pmatrix} D_1 \\ D_2 \\ \vdots \\ D_n \end{pmatrix} \qquad X = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}$$
A is called the input-output matrix, D the external demand vector and X the pro-
duction level vector. The above system of linear equations is equivalent to the matrix
equation
X = AX + D
X − AX = D
[I − A]X = D
X = [I − A]⁻¹D
If we have two sectors:
$$\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 1-a_{11} & -a_{12} \\ -a_{21} & 1-a_{22} \end{pmatrix}^{-1} \begin{pmatrix} D_1 \\ D_2 \end{pmatrix}$$
If we have three sectors:
$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 1-a_{11} & -a_{12} & -a_{13} \\ -a_{21} & 1-a_{22} & -a_{23} \\ -a_{31} & -a_{32} & 1-a_{33} \end{pmatrix}^{-1} \begin{pmatrix} D_1 \\ D_2 \\ D_3 \end{pmatrix}$$
The consumption of Steel as it produces x1 dollars is
$$\begin{pmatrix} 0.1x_1 \\ 0.4x_1 \\ 0.7x_1 \end{pmatrix}$$
The consumption of Electricity as it produces x2 dollars is
$$\begin{pmatrix} 0.2x_2 \\ 0.5x_2 \\ 0.8x_2 \end{pmatrix}$$
The consumption of Coal as it produces x3 dollars is
$$\begin{pmatrix} 0.3x_3 \\ 0.6x_3 \\ 0.1x_3 \end{pmatrix}$$
Therefore, the total consumption of all 3 sectors is
$$x = \begin{pmatrix} 0.1x_1 \\ 0.4x_1 \\ 0.7x_1 \end{pmatrix} + \begin{pmatrix} 0.2x_2 \\ 0.5x_2 \\ 0.8x_2 \end{pmatrix} + \begin{pmatrix} 0.3x_3 \\ 0.6x_3 \\ 0.1x_3 \end{pmatrix}$$
The assumption in a closed economy is that production equals total consumption.
This yields
$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \underbrace{\begin{pmatrix} 0.1x_1 \\ 0.4x_1 \\ 0.7x_1 \end{pmatrix} + \begin{pmatrix} 0.2x_2 \\ 0.5x_2 \\ 0.8x_2 \end{pmatrix} + \begin{pmatrix} 0.3x_3 \\ 0.6x_3 \\ 0.1x_3 \end{pmatrix}}_{\text{Total consumption}}$$

Example: Calculate the total demand for sectors A, B and C given the matrix of
technical coefficients A and the final demand vector D as follows:
$$A = \begin{pmatrix} 0.1 & 0.4 & 0.3 \\ 0.2 & 0.2 & 0.1 \\ 0.1 & 0.1 & 0.3 \end{pmatrix} \qquad D = \begin{pmatrix} 10 \\ 15 \\ 20 \end{pmatrix}$$
$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0.9 & -0.4 & -0.3 \\ -0.2 & 0.8 & -0.1 \\ -0.1 & -0.1 & 0.7 \end{pmatrix}^{-1} \begin{pmatrix} 10 \\ 15 \\ 20 \end{pmatrix}$$
$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 1.35802 & 0.76543 & 0.69136 \\ 0.37037 & 1.48148 & 0.37037 \\ 0.24691 & 0.32099 & 1.58025 \end{pmatrix} \begin{pmatrix} 10 \\ 15 \\ 20 \end{pmatrix} = \begin{pmatrix} 38.889 \\ 33.333 \\ 38.889 \end{pmatrix}$$
If final demand decreases by 2 and 4 for industries 1 and 2, and increases by 5 for
industry 3, the change in output is
$$\Delta X = \begin{pmatrix} 1.35802 & 0.76543 & 0.69136 \\ 0.37037 & 1.48148 & 0.37037 \\ 0.24691 & 0.32099 & 1.58025 \end{pmatrix} \begin{pmatrix} -2 \\ -4 \\ 5 \end{pmatrix} = \begin{pmatrix} -2.321 \\ -4.815 \\ 6.123 \end{pmatrix}$$
and the new output levels are
$$X + \Delta X = \begin{pmatrix} 38.889 \\ 33.333 \\ 38.889 \end{pmatrix} + \begin{pmatrix} -2.321 \\ -4.815 \\ 6.123 \end{pmatrix} = \begin{pmatrix} 36.568 \\ 28.519 \\ 45.012 \end{pmatrix}$$
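Computations of this kind can be reproduced with NumPy. A minimal sketch, taking A to be the coefficient matrix consistent with the (I − A) inverted in the example:

```python
import numpy as np

# Technical-coefficient matrix consistent with the (I - A) used in the example.
A = np.array([[0.1, 0.4, 0.3],
              [0.2, 0.2, 0.1],
              [0.1, 0.1, 0.3]])
D = np.array([10.0, 15.0, 20.0])

L = np.linalg.inv(np.eye(3) - A)   # Leontief inverse (I - A)^(-1)
X = L @ D                          # gross output needed to satisfy final demand

dD = np.array([-2.0, -4.0, 5.0])   # change in final demand
dX = L @ dD                        # resulting change in output

print(np.round(X, 3))              # ≈ [38.889 33.333 38.889]
print(np.round(X + dX, 3))         # new output levels
```

In practice `np.linalg.solve(np.eye(3) - A, D)` is preferred over forming the inverse explicitly; the inverse is computed here only because the example tabulates it.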
For our example we have:
$$A = \begin{pmatrix} 0.05 & 0.5 \\ 0.1 & 0 \end{pmatrix} \qquad D = \begin{pmatrix} 8{,}000 \\ 2{,}000 \end{pmatrix} \qquad X = \begin{pmatrix} x \\ y \end{pmatrix}$$
We obtain therefore the solution
$$X = (I-A)^{-1}D = \left[\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} - \begin{pmatrix} 0.05 & 0.5 \\ 0.1 & 0 \end{pmatrix}\right]^{-1} \begin{pmatrix} 8{,}000 \\ 2{,}000 \end{pmatrix}$$
$$X = \begin{pmatrix} 0.95 & -0.5 \\ -0.1 & 1 \end{pmatrix}^{-1} \begin{pmatrix} 8{,}000 \\ 2{,}000 \end{pmatrix} = \frac{1}{9}\begin{pmatrix} 10 & 5 \\ 1 & 9.5 \end{pmatrix} \begin{pmatrix} 8{,}000 \\ 2{,}000 \end{pmatrix} = \begin{pmatrix} 10{,}000 \\ 3{,}000 \end{pmatrix}$$
If the external demand changes, e.g. $\Delta D = \begin{pmatrix} 7{,}300 \\ 2{,}500 \end{pmatrix}$, then the change in x and y is
given by
$$\begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix} = (I-A)^{-1}\Delta D = \frac{1}{9}\begin{pmatrix} 10 & 5 \\ 1 & 9.5 \end{pmatrix} \begin{pmatrix} 7{,}300 \\ 2{,}500 \end{pmatrix} = \begin{pmatrix} 9{,}500 \\ 3{,}450 \end{pmatrix}$$

4.3.4 Exercise 4.1

An economy has two industries, R and S. The current consumption is given by
the table

                          R     S     External
Industry R production     50    50    20
Industry S production     60    40    100

Assume the new external demand is 100 units of R and 100 units of S. Determine
the new production levels.

Solution: The total production is 120 units for R and 200 units for S. We obtain
$$X = \begin{pmatrix} 120 \\ 200 \end{pmatrix} \qquad B = \begin{pmatrix} 20 \\ 100 \end{pmatrix} \qquad A = \begin{pmatrix} 50/120 & 50/200 \\ 60/120 & 40/200 \end{pmatrix} \qquad B_{new} = \begin{pmatrix} 100 \\ 100 \end{pmatrix}$$
The new production levels are
$$X_{new} = (I-A)^{-1}B_{new} = \frac{1}{41}\begin{pmatrix} 96 & 30 \\ 60 & 70 \end{pmatrix} \begin{pmatrix} 100 \\ 100 \end{pmatrix} = \begin{pmatrix} 307.3 \\ 317.1 \end{pmatrix}$$
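The exercise can also be checked by building A directly from the transactions table; a minimal NumPy sketch:

```python
import numpy as np

# Transactions table from the exercise: entry (i, j) is the flow from
# industry i to industry j. Totals are each industry's gross output
# (row sum including external demand: 120 for R, 200 for S).
flows = np.array([[50.0, 50.0],
                  [60.0, 40.0]])
totals = np.array([120.0, 200.0])

A = flows / totals                  # a_ij = flow(i -> j) / total output of j
B_new = np.array([100.0, 100.0])    # new external demand

X_new = np.linalg.solve(np.eye(2) - A, B_new)
print(np.round(X_new, 1))           # ≈ [307.3 317.1]
```

The division `flows / totals` broadcasts over rows, so each column j is divided by industry j's total output, which is exactly how the coefficient matrix in the solution is formed.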

4.4 Linear Programming


4.4.1 Introduction
A linear programming problem may be defined as the problem of maximizing or min-
imizing a linear function subject to linear constraints. Linear programming problems
involving only two variables can be effectively solved by a graphical technique which
provides a pictorial representation of the solution.

4.4.2 Formulating Linear Programming Problems
We now consider a number of problems, showing how to model them by the appropriate
choice of decision variables, objective, and constraints. Any linear programming
problem involving more than two variables may be expressed as follows:

Find the values of the variables x1, x2, ..., xn which maximize (or minimize)
the objective function

Z = c1 x1 + c2 x2 + · · · + cn xn

subject to the constraints

a11 x1 + a12 x2 + · · · + a1n xn ≤ b1

a21 x1 + a22 x2 + · · · + a2n xn ≤ b2

am1 x1 + am2 x2 + · · · + amn xn ≤ bm

and the non-negativity restrictions

x1, x2, ..., xn ≥ 0

A set of values x1, x2, ..., xn which satisfies the constraints of a linear programming
problem is called its solution. Any solution to a linear programming problem which
satisfies the non-negativity restrictions of the problem is called a feasible solution.
Any feasible solution which maximizes (or minimizes) the objective function of the
linear programming problem is called an optimal solution.

4.4.3 Solving Linear Programming Problems

4.4.4 The Graphic Method
Easy steps
♠ Step 1. Denote the unknowns in the given linear program by x and y.

♠ Step 2. Formulate the objective function.

♠ Step 3. Translate all the constraints into equalities.

♠ Step 4. Solve these equalities simultaneously.

♠ Step 5. Find the values of x and y for which the objective function z = ax + by has
a maximum or minimum value (as the case may be).

Example
A carpenter makes tables and chairs. Each table can be sold for a profit of 30 and each
chair for a profit of 10. The carpenter can afford to spend up to 40 hours per week
working, and takes six hours to make a table and three hours to make a chair. Cus-
tomer demand requires that he make at least three times as many chairs as tables.
Tables take up four times as much storage space as chairs, and there is room for at
most four tables each week. Formulate this problem as a linear programming problem.

Solution: Let
xT = number of tables made per week
xC = number of chairs made per week

Constraints: total work time 6xT + 3xC ≤ 40; customer demand xC ≥ 3xT; storage
space (xC/4) + xT ≤ 4; and xT, xC ≥ 0.

Objective: maximize 30xT + 10xC

The solution lies at the intersection of

(xC/4) + xT = 4 and 6xT + 3xC = 40

Solve for xC and xT:

xC/4 + xT = 4 ................ (i)

6xT + 3xC = 40 ................ (ii)

From (i), xT = 4 − xC/4; substituting into (ii):

6(4 − xC/4) + 3xC = 40

24 − 3xC/2 + 3xC = 40

24 + 3xC/2 = 40

3xC/2 = 40 − 24 = 16

xC = 32/3 = 10.667

Plug the value of xC into equation (i):

10.667/4 + xT = 4

xT = 4 − 10.667/4 = 1.333

Solving these two equations simultaneously we get xC = 10.667, xT = 1.333.
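The same answer can be obtained with an LP solver; a minimal sketch using SciPy's `linprog` (SciPy is an assumption here — it is not used elsewhere in the module):

```python
from scipy.optimize import linprog

# Maximize 30*xT + 10*xC; linprog minimizes, so negate the objective.
# Constraints rewritten in "A_ub @ x <= b_ub" form:
#   6*xT + 3*xC  <= 40   (work time)
#   3*xT -   xC  <= 0    (demand: xC >= 3*xT)
#     xT + xC/4  <= 4    (storage)
c = [-30.0, -10.0]
A_ub = [[6.0, 3.0],
        [3.0, -1.0],
        [1.0, 0.25]]
b_ub = [40.0, 0.0, 4.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x, -res.fun)   # ≈ [1.333 10.667], profit ≈ 146.67
```

The solver confirms that the vertex found graphically, (xT, xC) = (4/3, 32/3), is in fact the optimum; note that fractional tables and chairs only make sense as weekly averages.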

Example Maximize p = 3x + y

Subject to

2x − y ≤ 4
2x + 3y ≤ 12
y ≤ 3
x, y ≥ 0

Vertex      Lines Through Vertex          Value of Objective
(3, 2)      2x − y = 4, 2x + 3y = 12      11   Maximum
(2, 0)      2x − y = 4, y = 0             6
(1.5, 3)    2x + 3y = 12, y = 3           7.5
(0, 3)      y = 3, x = 0                  3
(0, 0)      x = 0, y = 0                  0
Example Maximize p = 4x + 5y

Subject to

2x − 3y ≤ 4
4x + 9y ≤ 2
y ≤ 12
x, y ≥ 0

Vertex          Lines Through Vertex      Value of Objective
(0, 0.2222)     4x + 9y = 2, x = 0        1.1111
(0.5, 0)        4x + 9y = 2, y = 0        2   Maximum
(0, 0)          x = 0, y = 0              0

4.4.5 The Simplex Method

The simplex method is a method for solving problems in linear programming. Before
discussing the simplex method, we convert the linear program into standard form:

Max c1 x1 + c2 x2 + · · · + cn xn

Subject to a11 x1 + a12 x2 + · · · + a1n xn = b1

a21 x1 + a22 x2 + · · · + a2n xn = b2

...

am1 x1 + am2 x2 + · · · + amn xn = bm

x1 ≥ 0, x2 ≥ 0, ..., xn ≥ 0

where the objective is maximized, the constraints are equalities and the variables are
all non-negative.

Note:

• If the problem is min z, convert it to max −z.

• If any constraint contains ≤, convert it into an equality constraint by adding a
non-negative slack variable.

Let us explain the steps of the simplex method through an example.

Max Z = 3x1 + 2x2
Subject to x1 + x2 ≤ 4
x1 − x2 ≤ 2
x1 ≥ 0, x2 ≥ 0

Step 1: Introduce non-negative slack variables x3 and x4:
x1 + x2 + x3 + 0x4 = 4
x1 − x2 + 0x3 + x4 = 2
x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, x4 ≥ 0

Step 2: Now maximize 3x1 + 2x2 + 0x3 + 0x4
$$\text{Subject to}\quad \begin{pmatrix} 1 & 1 & 1 & 0 \\ 1 & -1 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} 4 \\ 2 \end{pmatrix}$$

Step 3: Put x1 = x2 = 0 and obtain the values of the basic variables XB (i.e. x3, x4).

Step 4: Compute the net evaluations ∆j for each column and test for optimality:

Case 1: If all ∆j ≥ 0, then the solution is optimal.
Case 2: If for any negative ∆j all elements of column Xj are negative or zero, then the
solution will be unbounded.
Case 3: If at least one ∆j is negative, then the solution under test is not optimal;
proceed to the next step.

Step 5: Choose the most negative ∆j and mark its column with an upward arrow; this
column is called the incoming vector (here the column of x1, with coefficient 3).
Example Maximize
p = (1/3)x + 4y + 2z + 4w
Subject to
2x + 3y + 4z + w ≤ 20
4x + 2y − 4z − w ≥ 10
w − y ≥ 10
x, y, z, w ≥ 0
Optimal Solution: p = 125/3; x = 5, y = 0, z = 0, w = 10
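The stated optimum can be verified with an LP solver; a sketch using SciPy's `linprog` (an assumption, not part of the text):

```python
from scipy.optimize import linprog

# linprog minimizes, so negate p; ">=" rows are multiplied by -1.
c = [-1 / 3, -4.0, -2.0, -4.0]
A_ub = [[ 2.0,  3.0,  4.0,  1.0],   # 2x + 3y + 4z + w <= 20
        [-4.0, -2.0,  4.0,  1.0],   # 4x + 2y - 4z - w >= 10
        [ 0.0,  1.0,  0.0, -1.0]]   # w - y >= 10
b_ub = [20.0, -10.0, -10.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print(res.x, -res.fun)   # ≈ [ 5.  0.  0. 10.],  41.667 (= 125/3)
```

At the optimum the first two constraints are binding (both equal their bounds), which is why solving them simultaneously with y = z = 0 gives x = 5, w = 10.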

4.4.6 The Duality Theorem


4.4.7 Limitations of Linear Programming
Limitations of linear programming are as follows:

• Linear programming treats all relationships as linear, but this is not true in many
real-life situations.

• The decision variables in an LPP would be meaningful only if they are integers.

• The problems become complex if the number of variables and constraints is quite
large.

• Factors such as uncertainty, weather conditions, etc. are not taken into consid-
eration.

• Parameters are assumed to be constants, but in reality they may not be so.

• An LPP deals with only a single objective, whereas in real-life situations there
may be more than one objective.

4.4.8 Summary Questions

1. $\mathrm{Adj}\,A = \begin{pmatrix} 1 & -7 & 9 \\ -2 & 5 & -6 \\ 1 & -1 & 0 \end{pmatrix}$, $|A| = -3$, so
$$A^{-1} = \frac{1}{-3}\,\mathrm{Adj}\,A = \begin{pmatrix} -1/3 & 7/3 & -3 \\ 2/3 & -5/3 & 2 \\ -1/3 & 1/3 & 0 \end{pmatrix}$$
 
2. $\mathrm{Adj}\,B = \begin{pmatrix} -3 & 6 & -3 \\ 6 & -12 & 6 \\ -3 & 6 & -3 \end{pmatrix}$, $|B| = 0$, so B has no inverse!
 
3. $C = \begin{pmatrix} -2 & 4 & 0 \\ 1 & -2 & 0 \\ 0 & 0 & 0 \end{pmatrix}$, $|C| = 0$; C has no inverse because row 1 is −2 times row 2!
 
4. $\mathrm{Adj}\,D = \begin{pmatrix} 7 & -13 & 9 \\ 0 & 0 & 0 \\ -7 & 13 & -9 \end{pmatrix}$, $|D| = 0$; D has no inverse because columns 1 and 3 are identical!
   
5. Transpose: $\begin{pmatrix} 1 & 4 & 5 \\ 3 & 3 & 2 \\ 1 & 4 & 5 \end{pmatrix}$, $E = \begin{pmatrix} 7 & 0 & -7 \\ -13 & 0 & 13 \\ 9 & 0 & -9 \end{pmatrix}$, $|E| = 0$

6. trace (F ) = 9
 
7. $\begin{pmatrix} 151/6 & 217/12 & 28 & 173/6 \\ 385/4 & 415/8 & 126 & 105 \\ 479/6 & 637/12 & 241/3 & 1141/9 \end{pmatrix}$

Sample element calculations:
(3 × 6) + (1/3 × 7/2) + (1 × 4) + (1/2 × 4) = 151/6
(3 × 5) + (7 × 1/4) + (1/3 × 1) + (9 × 4) = 637/12
(3 × 6) + (7 × 7/2) + (1/3 × 4) + (9 × 4) = 479/6
(3 × 6) + (7 × 6) + (1/3 × 7) + (9 × 2) = 241/3

8. $AB = \begin{pmatrix} 38/3 & 39 & 113/2 \\ 25/3 & 26 & 35 \\ 11/2 & 89/2 & 173/4 \end{pmatrix}$

(2 × 1/3) + (3/2 × 2) + (9 × 1) = 38/3
(2 × 6) + (3/2 × 6) + (9 × 2) = 39
(2 × 5) + (3/2 × 7) + (9 × 4) = 113/2
(1 × 1/3) + (2 × 2) + (4 × 1) = 25/3
(1 × 5) + (2 × 7) + (4 × 4) = 35
(6 × 1/3) + (3/4 × 2) + (2 × 1) = 11/2
(6 × 5) + (3/4 × 7) + (2 × 4) = 173/4

9. $AB = \begin{pmatrix} 241/15 & 133/3 & 92/3 \\ 286/5 & 50 & 17 \\ 99/10 & 67/2 & 15 \end{pmatrix}$

(2 × 1/5) + (6 × 2) + (1/3 × 11) = 241/15
(2 × 12) + (6 × 3) + (1/3 × 7) = 133/3
(2 × 3) + (6 × 4) + (1/3 × 2) = 92/3
(1 × 1/5) + (1 × 2) + (5 × 11) = 286/5
(1 × 3) + (1 × 4) + (5 × 2) = 17
(2 × 1/5) + (2 × 2) + (1/2 × 11) = 99/10
(2 × 12) + (2 × 3) + (1/2 × 7) = 67/2
(2 × 3) + (2 × 4) + (1/2 × 2) = 15
10. |A| = −45

The cofactors of the matrix are:
$$C_{11} = (-1)^{1+1}\begin{vmatrix} 6 & 5 \\ 8 & 2 \end{vmatrix} = -28 \qquad C_{12} = (-1)^{1+2}\begin{vmatrix} 5 & 5 \\ 7 & 2 \end{vmatrix} = 25 \qquad C_{13} = (-1)^{1+3}\begin{vmatrix} 5 & 6 \\ 7 & 8 \end{vmatrix} = -2$$
$$C_{21} = (-1)^{2+1}\begin{vmatrix} 1 & 7 \\ 8 & 2 \end{vmatrix} = 54 \qquad C_{22} = (-1)^{2+2}\begin{vmatrix} 2 & 7 \\ 7 & 2 \end{vmatrix} = -45 \qquad C_{23} = (-1)^{2+3}\begin{vmatrix} 2 & 1 \\ 7 & 8 \end{vmatrix} = -9$$
$$C_{31} = (-1)^{3+1}\begin{vmatrix} 1 & 7 \\ 6 & 5 \end{vmatrix} = -37 \qquad C_{32} = (-1)^{3+2}\begin{vmatrix} 2 & 7 \\ 5 & 5 \end{vmatrix} = 25 \qquad C_{33} = (-1)^{3+3}\begin{vmatrix} 2 & 1 \\ 5 & 6 \end{vmatrix} = 7$$
$$C = \begin{pmatrix} -28 & 25 & -2 \\ 54 & -45 & -9 \\ -37 & 25 & 7 \end{pmatrix} \qquad C^{T} = \begin{pmatrix} -28 & 54 & -37 \\ 25 & -45 & 25 \\ -2 & -9 & 7 \end{pmatrix}$$

 
$$A^{-1} = \frac{C^{T}}{\det A} = \begin{pmatrix} 28/45 & -6/5 & 37/45 \\ -5/9 & 1 & -5/9 \\ 2/45 & 1/5 & -7/45 \end{pmatrix}$$
2/45 1/5 −7/45

REFERENCES

1. Chiang, Alpha C. (1984), Fundamental Methods of Mathematical Economics, McGraw-Hill, Inc.
2. Bhardwaj, R. S. (2005), Mathematics for Economics and Business, Excel Books.
3. Knut Sydsaeter and Peter Hammond, Mathematics Essentials for Economic Analysis, Ethiopian Edition.
4. Knut Sydsaeter and Peter Hammond, Further Mathematics for Economists, Ethiopian Edition.
5. Yamane, T. (2002), Mathematics for Economists: An Elementary Survey, 2nd ed., Prentice-Hall.
6. Dowling, E.T. (1980), Mathematics for Economists (Schaum's Outline Series), McGraw-Hill.
7. Kapoor, V. K. (2002), Introductory Mathematics for Business and Economics, Sultan Sons: New Delhi.
8. Monga, G.S. (1972), Mathematics and Statistics for Economics, Vikas Publishing House.
9. Bowen, E.K., et al. (1987), Mathematics with Applications in Management and Economics, 6th ed., Irwin Inc.
