Algebra Module for Economists
TADELE BAYU
Aksum University
Ethiopia
2014
DEPARTMENT OF ECONOMICS
Preface
Contents
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i
1 Matrix Algebra 1
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Matrix Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.1 Matrix Addition and Subtraction . . . . . . . . . . . . . . . . 3
1.2.2 Matrix Multiplication . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.3 Inner Product and Outer Product . . . . . . . . . . . . . . . . 8
1.2.4 Transpose of a matrix . . . . . . . . . . . . . . . . . . . . . . 8
1.2.5 Special Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.3 Determinants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.3.1 Minors and Cofactors . . . . . . . . . . . . . . . . . . . . . . . 14
1.3.2 Higher order determinants . . . . . . . . . . . . . . . . . . . . 16
1.3.3 Adjoint (Adjugate) of Matrices . . . . . . . . . . . . . . . . . 17
1.4 Matrix Inversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.4.1 Derivation of a second order matrix inverse . . . . . . . . . . 20
1.4.2 Gauss Jordan Elimination Through Pivoting . . . . . . . . . . 22
1.5 Partitioned Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.6 Rank of a Matrix and Linear Independence . . . . . . . . . . . . . . . 25
1.6.1 Rank of a Matrix . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.7 Vectors and Vector Spaces . . . . . . . . . . . . . . . . . . . . . . . . 27
1.7.1 Vector space . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.7.2 Length of a column vector . . . . . . . . . . . . . . . . . . . . 28
1.7.3 Linear dependence . . . . . . . . . . . . . . . . . . . . . . . . 28
1.8 Powers and Trace of a Square Matrix . . . . . . . . . . . . . . . . . . 29
1.8.1 Trace of a Square Matrix . . . . . . . . . . . . . . . . . . . . . 30
1.9 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
1.10 SOLUTION FOR EXERCISES . . . . . . . . . . . . . . . . . . . . . 33
1.10.1 Exercise 1.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
1.10.2 Exercise 1.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.10.3 Exercise 1.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.10.4 Exercise 1.4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.10.5 Exercise 1.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
1.10.6 Exercise 1.6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
1.10.7 Exercise 1.7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
1.10.8 Exercise 1.8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
1.10.9 Exercise 1.9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
1.10.10 Exercise 1.10 . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
1.10.11 Exercise 1.11 . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
1.10.12 Exercise 1.12 . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
1.11 Summary Question . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.3.3 The Open Economy Model . . . . . . . . . . . . . . . . . . . . 97
4.3.4 Exercise 4.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
4.4 Linear Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
4.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
4.4.2 Formulating Linear Programming Problems . . . . . . . . . . 102
4.4.3 Solving Linear Programming Problems . . . . . . . . . . . . . 102
4.4.4 The Graphic Method . . . . . . . . . . . . . . . . . . . . . . . 102
4.4.5 The Simplex Method . . . . . . . . . . . . . . . . . . . . . . . 104
4.4.6 The Duality Theorem . . . . . . . . . . . . . . . . . . . . . . . 106
4.4.7 Limitations of Linear Programming . . . . . . . . . . . . . . . 106
4.4.8 Summary Questions . . . . . . . . . . . . . . . . . . . . . . . 108
Chapter 1
Matrix Algebra
Objectives
1.1 Introduction
A matrix is an array of numbers or parameters arranged in rows and columns:
A = [ a11  a12  ...  a1n ]
    [ a21  a22  ...  a2n ]
    [ ...  ...       ... ]
    [ am1  am2  ...  amn ]
Matrix A is an m × n matrix (i.e., m rows and n columns) where the entry in the ith row
and jth column is aij. A matrix that contains m rows and n columns can therefore be
expressed as
A = (aij)m×n
The number of rows m and the number of columns n give the dimension of matrix
A. The row number always precedes the column number.
Two matrices are said to be of the same size if they have the same number of rows and
the same number of columns. Hence, matrix equality is defined only for two matrices of
the same size. Given two m × n matrices A and B, A = B if aij = bij for every i, j.
Example
Suppose A = [ 1  −9   7 ]      B = [ 1  −9 ]
            [ 0   1  −5 ]          [ 0   1 ]
                                   [ 7  −5 ]
Since the size of matrix A is 2 × 3 and that of B is 3 × 2, A ≠ B.
Example
Find all values of x and y so that
[ x²   y − x ]   [   1     x − y ]
[ 2      y²  ] = [ x + 1      1  ]
We see that the size of each matrix is 2 × 2. So we set the corresponding entries equal:
x² = 1    y − x = x − y    2 = x + 1    y² = 1
These equations give x = 1 and y = 1.
5. E = [ 2  4 ]
       [ 6  5 ]
       [ 4  7 ]
6. F = [ 2  4  7 ]
       [ 6  5  4 ]
       [ 4  7  0 ]
The sum of two matrices is defined when both matrices have the same size, and the
result is a new matrix of the same size, in which each entry is obtained by adding the
entries in the same position in the two matrices. Matrices of different sizes cannot be added.
A + B = [ 1+2        −3+9      −5+6      1+(−7)  ]   [  3   6   1  −6 ]
        [ 9+6        6+(−9)    −4+2      −9+4    ] = [ 15  −3  −2  −5 ]
        [ −3+(−3)    7+(−8)    −9+1      −4+(−3) ]   [ −6  −1  −8  −7 ]
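As an added illustration (not part of the original module), the following short NumPy sketch checks this entry-by-entry addition; the matrices A and B are read off from the sums shown above, and NumPy is assumed to be available.

    import numpy as np

    # A and B reconstructed from the entry-wise sums shown above
    A = np.array([[ 1, -3, -5,  1],
                  [ 9,  6, -4, -9],
                  [-3,  7, -9, -4]])
    B = np.array([[ 2,  9,  6, -7],
                  [ 6, -9,  2,  4],
                  [-3, -8,  1, -3]])

    print(A + B)                            # defined: both are 3 x 4
    print(np.array_equal(A + B, B + A))     # True: addition is commutative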
Exercise 1.2
Let A = [  2  3 ]    B = [ −1   2 ]    and C = [  1   2   3 ]
        [ −1  2 ]        [  6  −2 ]            [ −1  −2  −3 ]
Compute each of the following, if possible.
1. A + B
2. B − A
3. B + C
4. 4C
5. 2A − 3B
♥ (A + B) + C = A + (B + C)
♥ A+B =B+A
♥ A+0=A
♥ A + (−A) = A − A = 0
♥ (α + β)A = αA + βA
♥ α(A + B) = αA + αB
Exercise 1.3: Given the following matrices, answer the questions below.
A = [ 2  4  7 ]    B = [ 0  8  7 ]    C = [ 1  0  3 ]
    [ 6  5  4 ]        [ 0  5  3 ]        [ 1  0  2 ]
    [ 4  7  0 ]        [ 1  5  0 ]        [ 1  0  0 ]
1. 2A+3B
2. 3A − 3B
3. 3A − A (try it!)
4. Compute A + B, A − B, B + A, and A + B + C for the matrices A, B, and C given above.
and
BA = [ x ] [ a  b  c ] = [ xa  xb  xc ]
     [ y ]               [ ya  yb  yc ]
     [ z ]               [ za  zb  zc ]
(AB)ij = Σ (k = 1 to m) Aik Bkj
Thus the product AB is defined only if the number of columns in A is equal to the
number of rows in B, in this case m. Each entry may be computed one at a time.
Exercise 1.4
Dear learners, you need to now that matrix multiplication is possible only if the
number of columns in the lead matrix is equal with the number of rows in the lag
matrix.In this case we say that matrix AB is conformable. Moreover; if A and B are
two matrices, then Ab might be defined , even if BA is not.
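A minimal NumPy sketch (illustrative only; the matrices are made up, not taken from the module) of this conformability rule: AB exists when the lead matrix has as many columns as the lag matrix has rows, while BA need not exist.

    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 5, 6]])          # 2 x 3 (lead matrix)
    B = np.array([[1, 0, 2],
                  [0, 1, 1],
                  [2, 2, 0]])          # 3 x 3 (lag matrix)

    print(A @ B)                       # defined: columns of A (3) == rows of B (3)
    try:
        B @ A                          # not defined: columns of B (3) != rows of A (2)
    except ValueError as err:
        print("BA is not defined:", err)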
Properties of Matrix multiplication
The new matrix obtained by interchanging the rows and columns of a matrix, so that
the first row of the old matrix becomes the first column of the new matrix, is called the
transpose of the matrix. This new matrix has order n × m and is denoted by
AT or A′. In other words, if matrix A has dimension m × n, then its transpose AT
has dimension n × m.
A = [ a11  a12  ...  a1n ]        AT = [ a11  a21  ...  am1 ]
    [ a21  a22  ...  a2n ]             [ a12  a22  ...  am2 ]
    [ ...            ... ]             [ ...            ... ]
    [ am1  am2  ...  amn ]             [ a1n  a2n  ...  amn ]
Example Find the transpose of the following matrices
1. Given A = [ 1  2  3 ]
             [ 4  5  6 ]
             [ 7  8  9 ] (3×3)
   Answer: AT = [ 1  4  7 ]
                [ 2  5  8 ]
                [ 3  6  9 ] (3×3)
2. Given A = [ 3  4 ]
             [ 1  7 ] (2×2)
   Answer: AT = [ 3  1 ]
                [ 4  7 ] (2×2)
♥(A + B)T = AT + B T
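A quick numerical check of the transpose rules (an illustrative sketch with arbitrary test matrices, assuming NumPy; not part of the original module):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.integers(-5, 6, size=(3, 3))
    B = rng.integers(-5, 6, size=(3, 3))

    print(np.array_equal((A + B).T, A.T + B.T))   # (A + B)^T = A^T + B^T
    print(np.array_equal((A @ B).T, B.T @ A.T))   # (AB)^T = B^T A^T (order reverses)
    print(np.array_equal(A.T.T, A))               # transpose of a transpose is A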
Exercise 1.5
1. Suppose
A = [  7  −4  −2 ]
    [ −7  −6  −3 ]
    [  5   8  −4 ]
    [  4   2   3 ]
    [ −8   0   8 ]
(a) Compute AT
(b) State the dimension of AT
2. Given two matrices A and B as
A = [ −1  −5   9 ]        B = [ 6  −6   5 ]
    [ −4   3   4 ]            [ 7   4   4 ]
    [  5   8   7 ]            [ 3  −9  −9 ]
    [ −5  −9   3 ]
    [ −1   1  −5 ]
Find C = (AB)T
3. A = [ −5   6   9 ]    B = [  0  −6  −4 ]    Find BT AT
       [ −7   0  −4 ]        [ −6   8   1 ]
       [  4   5   6 ]        [ −5  −5  −3 ]
4. If r is a scalar element and A and B represent two different 2 × 2 matrices
(a) show that the transpose of a transpose matrix is the original matrix
(b) show that the transpose of two added matrices is the same as the addition
of the two transpose matrices
(c) show that when a scalar element is multiplied to a matrix, the order of
transposition is irrelevant.
(d) show that the transpose of a product of matrices equals the product of
their transposes in reverse order
2. Row vector or column vector. If a matrix is composed of a single column, so that
its dimension is m × 1, it is a column vector:
A = [ a11 ]
    [ a21 ]
    [ ... ]
    [ am1 ]
Example: A = [ 10 ]
             [  5 ]
             [ ...]
             [  2 ]
Similarly, if a matrix is composed of a single row, so that its dimension is 1 × n,
it is a row vector:
A = [ a11  a12  ...  a1n ]
Example: A = [ 10  5  ...  2 ]
3. Diagonal matrix A matrix is said to be diagonal if its off-diagonal elements (i.e.,
aij , i 6= j) are all zeros and at least one of its diagonal elements is non-zero, i.e.,
aii 6= 0 for some i = 1, ..., n .
Example
A = [ 4  0  0 ]    B = [ 1  0  0 ]
    [ 0  1  0 ]        [ 0  5  0 ]
    [ 0  0  2 ]        [ 0  0  8 ]
4. Identity matrix. An identity matrix of order n, denoted by In (most of the time
simply by I), is the n × n matrix having ones along the main diagonal and zeros off
the principal diagonal.
In = [ 1  0  0  ...  0 ]
     [ 0  1  0  ...  0 ]
     [ 0  0  1  ...  0 ]
     [ ...          ... ]
     [ 0  0  0  ...  1 ]
If A is any m × n matrix , then AIn = A. This is because an identity matrix is
equivalent to 1 in the real number system.
5. Triangular matrix
A matrix A is said to be lower (upper) triangular if aij = 0 for i < (>)j.
Example
A = [ 1  2  3 ]    B = [ 1  0  0 ]
    [ 0  5  3 ]        [ 2  5  0 ]
    [ 0  0  2 ]        [ 3  4  8 ]
6. Symmetric Matrix
A square matrix with the property A = AT is called a symmetric matrix. In
other words, a matrix A = (aij)n×n is symmetric if and only if aij = aji for all i, j.
Example
[ −3  2 ]    [ a  b  c ]    [  2  −1  5 ]
[  2  0 ]    [ b  d  e ]    [ −1  −3  2 ]
             [ c  e  f ]    [  5   2  8 ]
1.3 Determinants
If a matrix is square (that is, if it has the same number of rows as columns), then we
can associate with it a unique number called its determinant. Determinants are therefore defined only for
square matrices and are denoted by |A|. Using determinants we can solve matrix equations,
and they are also useful for determining whether a matrix has an inverse without actually
going through the process of trying to find it.
Note:
Given a 2 × 2 matrix A = [ a11  a12 ]
                         [ a21  a22 ]
its determinant is given by |A| = a11 a22 − a12 a21.
2. A = [ 4  5 ]
       [ 4  5 ]
|A| = (4)(5) − (5)(4) = 0
Given a 3 × 3 matrix A as
A = [ a11  a12  a13 ]
    [ a21  a22  a23 ]
    [ a31  a32  a33 ]
then its determinant is given by
|A| = a11 det[ a22 a23 ; a32 a33 ] − a12 det[ a21 a23 ; a31 a33 ] + a13 det[ a21 a22 ; a31 a32 ]
|A| = a11 a22 a33 − a11 a32 a23 − a12 a21 a33 + a12 a31 a23 + a13 a21 a32 − a13 a31 a22
Example Find the determinant of the following matrix
A = [ 2  1  3 ]
    [ 4  5  6 ]
    [ 7  8  9 ]
|A| = 2 det[ 5 6 ; 8 9 ] − 1 det[ 4 6 ; 7 9 ] + 3 det[ 4 5 ; 7 8 ]
|A| = −9
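As a numerical cross-check (illustrative only, assuming NumPy; not part of the original module), the cofactor expansion above can be compared with NumPy's determinant routine:

    import numpy as np

    A = np.array([[2, 1, 3],
                  [4, 5, 6],
                  [7, 8, 9]])

    # Laplace (cofactor) expansion along the first row
    det = (2 * np.linalg.det(A[1:, 1:])          # minor of a11
           - 1 * np.linalg.det(A[1:, [0, 2]])    # minor of a12
           + 3 * np.linalg.det(A[1:, :2]))       # minor of a13
    print(round(det), round(np.linalg.det(A)))   # both give -9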
Exercise 1.6
1. A = [ 5  2  8 ]
       [ 8  0  6 ]
       [ 7  9  0 ]
2. A = [ 0  2  8 ]
       [ 0  0  6 ]
       [ 0  9  0 ]
3. A = [ 4  8  8 ]
       [ 2  4  6 ]
       [ 1  1  0 ]
4. D = [ −1  −5  −8   5 ]
       [  0   0  −5  −1 ]
       [  6  −4  −5  −2 ]
       [ −4   9  −8  −7 ]
Self Test Exercise
Suppose matrices A and B are
A = [ 2  4  7 ]    B = [ 0  8  7 ]
    [ 6  5  4 ]        [ 0  5  3 ]
    [ 4  7  0 ]        [ 1  5  0 ]
1. Compute |AB|
2. Find |BA|
3. Find |DT| if matrix D = [ 1  0  4 ]
                           [ 0  3  7 ]
                           [ 4  7  2 ]
Here |M11| is the minor of a11, |M12| is the minor of a12 and |M13| is the minor of a13.
A minor with an associated sign is called a cofactor. The rule for the cofactor is given by
|Cij| = (−1)^(i+j) |Mij|
This implies that if the sum of the subscripts is an even number, |Cij| = |Mij|, since
−1 raised to an even power is positive. But if the sum of the subscripts is an odd
number, |Cij| = −|Mij|. It is irrelevant which row or column we choose when expanding
the determinant of a square matrix; we always obtain the same result. The sign pattern is given by
[ +  −  +  ... ]
[ −  +  −  ... ]
[ +  −  +  ... ]
[ ...          ]
The cofactor of the entry located in the i-th row and j-th column is defined to be the
determinant of the submatrix that remains after the i-th row and j-th column are
deleted from the matrix, with the sign changed if i + j is odd.
Example: Compute the cofactor matrix of
A = [ −6   0  −8 ]
    [  1   2   3 ]
    [ −8  −7  −1 ]
Step 1: Compute the cofactor for each entry of the matrix A.
C11 = det[ 2  3 ; −7  −1 ] = 2(−1) − 3(−7) = 19
C12 = −det[ 1  3 ; −8  −1 ] = −(1(−1) − 3(−8)) = −23
C13 = det[ 1  2 ; −8  −7 ] = 1(−7) − 2(−8) = 9
C21 = −det[ 0  −8 ; −7  −1 ] = −(0(−1) − (−8)(−7)) = 56
C22 = det[ −6  −8 ; −8  −1 ] = (−6)(−1) − (−8)(−8) = −58
C23 = −det[ −6  0 ; −8  −7 ] = −((−6)(−7) − 0(−8)) = −42
C31 = det[ 0  −8 ; 2  3 ] = 0(3) − (−8)(2) = 16
C32 = −det[ −6  −8 ; 1  3 ] = −((−6)(3) − (−8)(1)) = 10
C33 = det[ −6  0 ; 1  2 ] = (−6)(2) − 0(1) = −12
Step 2: Compose the matrix using the cofactors previously computed.
Cof(A) = [ 19  −23    9 ]
         [ 56  −58  −42 ]
         [ 16   10  −12 ]
The determinant can be computed by multiplying the entries in any row (or column)
by their cofactors and adding the resulting products, where the cofactor of the entry
located on i-th row and j-th column, is defined to be the determinant of the submatrix
that remains after the i-th row and j-th column are deleted from the matrix, changing
the sign if i + j is odd. From this definition it can be seen that cofactors
involve determinants of lower order. Using this technique recursively, together with
the formulas for determinants of order 2 and 3, we have a method for calculating
determinants of any order. In practice, when expanding cofactors along a row or column to
calculate the determinant, use the row or column with the most zeros,
since the cofactors of the zero entries need not be computed. When a matrix is triangular,
its determinant is the product of the entries on its main diagonal.
|A| = a11 a22 a33 + a12 a23 a31 + a13 a21 a32 − a13 a22 a31 − a12 a21 a33 − a11 a23 a32
Example: A = [ −2  −7   7 ]
             [  4   8  −5 ]
             [  5   1  −4 ]
|A| = (−2)8(−4)+(−7)(−5)5+(4)(1)(7)−(7)(8)(5)−(−7)(4)(−4)−(−5)(1)(−2) = −135
Exercise 1.7 Find the cofactor matrix for each of the following matrices
1. A = [ −7  0  −5 ]
       [ −9  0   5 ]
       [  5  8  −3 ]
2. B = [ −5  −7  5 ]
       [  2  −9  9 ]
       [ −8   8  5 ]
3. C = [ −3   9  −6  −3 ]
       [  8  −8  −1  −1 ]
       [ −7  −3  −4   5 ]
       [ −5   2   1   2 ]
where |Cij| is a cofactor based on a second-order determinant. In this way the Laplace
expansion allows evaluation of a determinant along any row or column. For the n-th order case,
|A| = a11 |C11| + a12 |C12| + a13 |C13| + . . . + a1n |C1n|
PROPERTIES OF DETERMINANT
1. Adding or subtracting any multiple of one row (column) to or from another
row (column) will have no effect on the determinant.
2. Interchanging any two rows or columns of a matrix will change the sign ,
but not the absolute value of the determinant.
6. If all the elements of any row or column are zero , then the determinant
is zero.
Example: Find the adjoint of A = [ 2  3  1 ]
                                 [ 4  1  2 ]
                                 [ 5  3  4 ]
Replacing the elements aij with their cofactors |Cij|:
C11 = det[ 1 2 ; 3 4 ] = −2     C12 = −det[ 4 2 ; 5 4 ] = −6    C13 = det[ 4 1 ; 5 3 ] = 7
C21 = −det[ 3 1 ; 3 4 ] = −9    C22 = det[ 2 1 ; 5 4 ] = 3      C23 = −det[ 2 3 ; 5 3 ] = 9
C31 = det[ 3 1 ; 1 2 ] = 5      C32 = −det[ 2 1 ; 4 2 ] = 0     C33 = det[ 2 3 ; 4 1 ] = −10
C = [ −2  −6    7 ]
    [ −9   3    9 ]
    [  5   0  −10 ]
The transpose of the cofactor matrix, known as the adjoint of matrix A, is given by
Adj A = CT = [ −2  −9    5 ]
             [ −6   3    0 ]
             [  7   9  −10 ]
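A small helper (an illustrative sketch, not from the module; NumPy assumed) that builds the cofactor matrix and the adjoint exactly as in the steps above; applied to the matrix A of this example it reproduces Adj A.

    import numpy as np

    def cofactor_matrix(A):
        # C[i, j] = (-1)**(i + j) * (minor from deleting row i and column j)
        n = A.shape[0]
        C = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
                C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
        return C

    A = np.array([[2, 3, 1],
                  [4, 1, 2],
                  [5, 3, 4]])
    adj = cofactor_matrix(A).T      # adjoint = transpose of the cofactor matrix
    print(np.round(adj))            # [[-2 -9 5], [-6 3 0], [7 9 -10]]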
Exercise 1.8 Find the adjoint of the following matrices
1. A = [  3   3  −2 ]
       [ −5   0  −3 ]
       [ −6  −7   0 ]
2. B = [ 2   2  4 ]
       [ 3  −7  9 ]
       [ 9   4  0 ]
3. C = [  3  −2   2 ]
       [ −8  −9  −3 ]
       [  0  −5   6 ]
4. D = [ −3   2  −5 ]
       [  6   2   7 ]
       [  8  −5   7 ]
1.4 Matrix Inversion
Given a square matrix A = (aij)n×n with determinant |A| ≠ 0, A has a unique inverse
A−1 satisfying AA−1 = A−1A = I, which is given by
A−1 = (1/|A|) adj(A)
Consider
A = [ a  b ]        B = [  d  −b ]
    [ c  d ]            [ −c   a ]
Multiplying these matrices gives
AB = [ ad − bc      0    ] = (ad − bc) I
     [    0      ad − bc ]
so that
A−1 = 1/(ad − bc) [  d  −b ]
                  [ −c   a ]
so long as ad − bc ≠ 0, where 1/(ad − bc) is the reciprocal of the determinant of the
matrix in question.
PROPERTIES OF MATRIX INVERSION
(A−1 )−1 = A
(AB)−1 = B −1 A−1
Then much like the transpose, taking the inverse of a product reverses
the order of the product.
so that applying matrix multiplication gives
ax + bz = 1    ay + bw = 0
cx + dz = 0    cy + dw = 1
Solving these using Cramer's rule yields
x = d/(ad − bc)     y = −b/(ad − bc)
z = −c/(ad − bc)    w = a/(ad − bc)
so that the inverse becomes
A−1 = 1/(ad − bc) [  d  −b ]
                  [ −c   a ]
Similarly, we can have the second method or the adjoint method as follows
1.4.2 Gauss Jordan Elimination Through Pivoting
There are three types of elementary matrices, which correspond to three types of row
operations (respectively, column operations):
1. Row switching
A row within the matrix can be switched with another row.
Ri ↔ Rj
2. Row multiplication
Each element in a row can be multiplied by a non-zero constant.
kRi → Ri , where k 6= 0
3. Row addition
A row can be replaced by the sum of that row and a multiple of another row.
Ri + kRj → Ri , where i 6= j
For each row in a matrix, if the row does not consist of only zeros, then the left-most
non-zero entry is called the leading coefficient (or pivot) of that row. A matrix is
said to be in row echelon form if the lower left part of the matrix contains only zeros,
and all of the zero rows are below the non-zero rows. The word echelon is used here
because one can roughly think of the rows being ranked by their size, with the largest
being at the top and the smallest being at the bottom.
A matrix is said to be in reduced row echelon form if furthermore all of the lead-
ing coefficients are equal to 1 and in every column containing a leading coefficient all
of the other entries in that column are zero.
Example Given a 3 × 3 matrix A below find its inverse using the above method
A = [  2  −1   0 ]
    [ −1   2  −1 ]
    [  0  −1   2 ]
To find the inverse of this matrix, one takes the following matrix augmented by the
identity, and row reduces it as a 3 by 6 matrix:
[A|I] = [  2  −1   0 | 1  0  0 ]
        [ −1   2  −1 | 0  1  0 ]
        [  0  −1   2 | 0  0  1 ]
By performing row operations, one can check that the reduced row echelon form of
this augmented matrix is:
[I|A−1] = [ 1  0  0 | 3/4  1/2  1/4 ]
          [ 0  1  0 | 1/2   1   1/2 ]
          [ 0  0  1 | 1/4  1/2  3/4 ]
The matrix on the left is an identity matrix, which shows A is invertible. The 3 by 3
matrix on the right, A−1, is the inverse of A. This procedure for finding the inverse
works for square matrices of any size.¹
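The augmented-matrix procedure just described can be coded directly. The sketch below is an illustration only (it adds simple partial pivoting, a detail the text does not discuss) and assumes NumPy; it row-reduces [A | I] to [I | A⁻¹].

    import numpy as np

    def gauss_jordan_inverse(A):
        n = A.shape[0]
        M = np.hstack([A.astype(float), np.eye(n)])        # augment A with the identity
        for col in range(n):
            pivot = col + np.argmax(np.abs(M[col:, col]))  # pick a usable pivot row
            if np.isclose(M[pivot, col], 0.0):
                raise ValueError("matrix is singular")
            M[[col, pivot]] = M[[pivot, col]]              # row switching
            M[col] /= M[col, col]                          # row multiplication
            for r in range(n):                             # row addition
                if r != col:
                    M[r] -= M[r, col] * M[col]
        return M[:, n:]                                    # right block is the inverse

    A = np.array([[ 2, -1,  0],
                  [-1,  2, -1],
                  [ 0, -1,  2]])
    print(gauss_jordan_inverse(A))   # [[0.75 0.5 0.25] [0.5 1. 0.5] [0.25 0.5 0.75]]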
Example Determine whether or not the matrix is invertible and if so find its in-
verse.
1. A = [ 2   1 ]
       [ 1  −1 ]
2. B = [  1  0   2 ]
       [ −1  1  −2 ]
       [  2  2   1 ]
3. C = [  1  −1 ]
       [ −1   1 ]
4. Check that B is the inverse of A, where
A = [ −1   2  −3 ]    B = [ −5   4  −3 ]
    [  2   1   0 ]        [ 10  −7   6 ]
    [  4  −2   5 ]        [  8  −6   5 ]
Solution
1. [ 2   1 | 1  0 ]  →  [ 1  −1 | 0  1 ]  →  [ 1  −1 | 0   1 ]
   [ 1  −1 | 0  1 ]     [ 2   1 | 1  0 ]     [ 0   3 | 1  −2 ]
   →  [ 1  −1 | 0     1   ]  →  [ 1  0 | 1/3   1/3 ]
      [ 0   1 | 1/3  −2/3 ]     [ 0  1 | 1/3  −2/3 ]
So we see that the reduced echelon form of A is the identity. Thus A is invertible and
A−1 = [ 1/3   1/3 ]
      [ 1/3  −2/3 ]
We can rewrite this inverse a bit more nicely by factoring out the 1/3:
A−1 = (1/3) [ 1   1 ]
            [ 1  −2 ]
¹ The Cayley–Hamilton method (valid for a 2 × 2 matrix): A−1 = (1/det(A)) [(tr A) I − A]
2. [  1  0   2 | 1  0  0 ]     [ 1  0   2 |  1  0  0 ]     [ 1  0   2 |  1   0  0 ]
   [ −1  1  −2 | 0  1  0 ]  →  [ 0  1   0 |  1  1  0 ]  →  [ 0  1   0 |  1   1  0 ]
   [  2  2   1 | 0  0  1 ]     [ 0  2  −3 | −2  0  1 ]     [ 0  0  −3 | −4  −2  1 ]
   →  [ 1  0  2 |  1    0     0   ]     [ 1  0  0 | −5/3  −4/3   2/3 ]
      [ 0  1  0 |  1    1     0   ]  →  [ 0  1  0 |  1     1      0  ]
      [ 0  0  1 | 4/3  2/3  −1/3  ]     [ 0  0  1 | 4/3   2/3  −1/3  ]
So we see that B is invertible and
B−1 = [ −5/3  −4/3   2/3 ]
      [   1     1     0  ]
      [  4/3   2/3  −1/3 ]
Notice all of those thirds in the inverse? Factoring out 1/3, we get
B−1 = (1/3) [ −5  −4   2 ]
            [  3   3   0 ]
            [  4   2  −1 ]
3. [  1  −1 | 1  0 ]  →  [ 1  −1 | 1  0 ]
   [ −1   1 | 0  1 ]     [ 0   0 | 1  1 ]
Since the reduced echelon form of C is not I, C is not invertible.
4. We check that AB = I:
AB = [ −1   2  −3 ] [ −5   4  −3 ]   [ 1  0  0 ]
     [  2   1   0 ] [ 10  −7   6 ] = [ 0  1  0 ]
     [  4  −2   5 ] [  8  −6   5 ]   [ 0  0  1 ]
so B = A−1.
Exercise 1.9 Find the inverse of the following matrices using row reduction method
1. A = [ 1  2  3 ]
       [ 4  5  3 ]
       [ 7  8  9 ]
2. B = [ 2  8  3 ]
       [ 1  7  3 ]
       [ 9  8  9 ]
3. A = [ 0  0  3 ]
       [ 1  0  1 ]
       [ 0  8  9 ]
4. A = [ 1  0   4 ]
       [ 1  0   1 ]
       [ 1  5  10 ]
5. A = [ 2    2  0 ]
       [ 1.5  1  1 ]
       [ 4    0  0 ]
6. A = [ 1  2 ]
       [ 3  4 ]
7. A = [ 1/4  2/9 ]
       [ 1/5  4/3 ]
Example
The matrix P = [ 1  1  2  2 ]
               [ 1  1  2  2 ]
               [ 3  3  4  4 ]
               [ 3  3  4  4 ]
can be partitioned into four 2 × 2 blocks
P11 = [ 1  1 ]   P12 = [ 2  2 ]   P21 = [ 3  3 ]   P22 = [ 4  4 ]
      [ 1  1 ]         [ 2  2 ]         [ 3  3 ]         [ 4  4 ]
The partitioned matrix can then be written as
P = [ P11  P12 ]
    [ P21  P22 ]
dependent set is called the rank of A, usually written r(A). In other words, the
maximum number of linearly independent column vectors in A is called the rank of
the matrix; equivalently, the rank of a matrix is equal to the order of the largest minor of A
that is different from 0.
Example: Find the rank of A = [ 1  4  7 ]
                              [ 2  5  8 ]
                              [ 3  6  9 ]
R2 − 2R1 → R2
R3 − 3R1 → R3
[ 1   4    7 ]
[ 0  −3   −6 ]
[ 0  −6  −12 ]
Divide the 2nd row by −3
[ 1   4    7 ]
[ 0   1    2 ]
[ 0  −6  −12 ]
R1 − 4R2 → R1
R3 + 6R2 → R3
[ 1  0  −1 ]
[ 0  1   2 ]
[ 0  0   0 ]
Since there are 2 non-zero rows, Rank(A) = 2.
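For comparison (an added illustration, assuming NumPy), the rank can be computed directly:

    import numpy as np

    A = np.array([[1, 4, 7],
                  [2, 5, 8],
                  [3, 6, 9]])
    print(np.linalg.matrix_rank(A))   # 2, matching the row reduction above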
Exercise 1.10
1. A = [ 1  2  3 ]
       [ 4  5  6 ]
       [ 7  8  9 ]
2. B = [ 1  4  7 ]
       [ 2  5  8 ]
       [ 3  6  9 ]
3. C = [ 2  8  4 ]
       [ 4  1  1 ]
       [ 6  0  8 ]
4. Given D = [ 9    7/9  4/3 ]
             [ 0.5  1.2  1.6 ]
             [ 0.6  1.5   0  ]
   find the rank of DT.
Definition: The sum of vector u and vector v is the vector of the sums of the
corresponding entries of the two vectors:
u + v = [ u1 ]   [ v1 ]   [ u1 + v1 ]
        [ ...] + [ ...] = [   ...   ]
        [ un ]   [ vn ]   [ un + vn ]
Note that for the addition to be defined the vectors must have the same number of
entries. This entry-by-entry addition works for any pair of matrices, not just vectors,
provided that they have the same number of rows and columns.
The scalar multiplication of the real number λ and the vector v is given by
λv = λ [ v1 ]   [ λv1 ]
       [ ...] = [ ... ]
       [ vn ]   [ λvn ]
If P = (x1 , y1 ) and Q = (x2 , y2 ) are two points on the plane then the coordinate form
of the vector v represented by P Q is given by v = (x2 − x1 , y2 − y1 ) and the length
of v denoted by |v| is given by
|v| = √((x2 − x1)² + (y2 − y1)²)
Example 1.19 Find the coordinate form and the length of the vector v that has
initial point (3, −7) and terminal point (−2, 5).
Here v = (−2 − 3, 5 − (−7)) = (−5, 12) and |v| = √(25 + 144) = 13.
More generally, the length (norm) of a vector V = (V1, V2, . . . , Vn) is
||V|| = √(V1² + V2² + · · · + Vn²)
For example, for V = (1, 2, 3, 4),
||V|| = √(1² + 2² + 3² + 4²) = √30
Example Check whether the following vectors are linearly independent or dependent:
V1 = [ 2 ]    V2 = [ 1 ]    V3 = [ 4 ]
     [ 7 ]         [ 8 ]         [ 5 ]
Solution
3V1 − 2V2 = [  6 ] − [  2 ] = [ 4 ] = V3
            [ 21 ]   [ 16 ]   [ 5 ]
so the vectors are linearly dependent.
Exercise 1.11 Check whether the vectors V1 = (1, 0) and V2 = (0, 1) are linearly
dependent or independent.
An = A · A · · · A  (n times)
• Scalar multiplication:
(λA)k = λk Ak
• Determinant
det(Ak ) = det(A)k
A special case is the power of a diagonal matrix. Since the product of diagonal
matrices amounts to simply multiplying corresponding diagonal elements together,
the power k of a diagonal matrix A will have entries raised to the power.
Ak = [ A11   0   ...   0  ]^k   [ A11^k    0    ...    0    ]
     [  0   A22  ...   0  ]   = [   0    A22^k  ...    0    ]
     [ ...            ... ]     [  ...                ...   ]
     [  0    0   ...  Ann ]     [   0      0    ...  Ann^k  ]
This implies that it is easy to raise a diagonal matrix to a power. When raising an
arbitrary matrix (not necessarily a diagonal matrix) to a power, it is often helpful to
29
exploit this property by diagonalizing the matrix first.
The matrix C obtained by multiplying the square matrix A by itself is always defined,
because the number of columns and the number of rows of A are equal. The matrix C will have the
same size as A. To find the entry associated to row i and column j: Cij , multiply the
entries of the i-th row by the corresponding entries in the j-th column of A and then
add up the resulting products.
Step 1: Multiply each row by each column of the matrix A. The first index in C
indicates the row index and the second one indicates the column index in A.
A · A = [ 105  35 ]
        [  40  60 ]
Exercise 1.12 Find A · A for the following matrices
1. A = [  8  −2 ]
       [ −7  −5 ]
2. A = [ −3   7   7 ]
       [  5  −4   8 ]
       [ −5  −1  −5 ]
Example Find the trace of
A = [  1  −7  8 ]
    [ −4  −5  2 ]
    [ −4   4  5 ]
trace(A) = Σ (i = 1 to n) aii = 1 − 5 + 5 = 1
1. trace(A)=trace(AT )
2. trace (cA + dB)= c trace(A) + d trace (B), where c and d are scalars.
3. trace (AB)=trace (BA) ,provided that both AB and BA are defined. The
trace of a product AB is independent of the order of A and B
4. trace(A ⊗ B) = trace(A) trace(B), where ⊗ denotes the Kronecker product.
(A quick numerical check of these properties is sketched below.)
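An illustrative numerical check of these trace properties (arbitrary test matrices, assuming NumPy; not part of the original module):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.integers(-3, 4, size=(3, 3))
    B = rng.integers(-3, 4, size=(3, 3))

    print(np.trace(A) == np.trace(A.T))                             # property 1
    print(np.trace(2*A + 3*B) == 2*np.trace(A) + 3*np.trace(B))     # property 2
    print(np.trace(A @ B) == np.trace(B @ A))                       # property 3
    print(np.trace(np.kron(A, B)) == np.trace(A) * np.trace(B))     # property 4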
1.9 Summary
An m × n matrix is a rectangular array of numbers with m rows and n columns. Each
number in the matrix is an entry. Matrix addition is also defined for two matrices of
the same size. Given two m × n matrices A and B, their sum, C = A + B, is the m × n
matrix with (i, j)th element cij = aij + bij. Note that matrix addition, if defined,
is commutative: A + B = B + A, and associative: A + (B + C) = (A + B) + C. Moreover,
A+0=A
If all entries of A below the main diagonal are zero, A is called an upper triangular
matrix. Similarly if all entries of A above the main diagonal are zero, A is called a
lower triangular matrix. If all entries outside the main diagonal are zero, A is called
a diagonal matrix.
A vector (or column vector ) is a matrix with a single column. A matrix with a
single row is a row vector . The entries of a vector are its components. A column or
row vector whose components are all zeros is a zero vector.
The identity matrix In of size n is the n-by-n matrix in which all the elements on the
main diagonal are equal to 1 and all other elements are equal to 0.
A matrix in which the number of rows are identical with the number of columns
is called a square matrix. A square matrix A that is equal to its transpose( i.e.,
A = AT ), is called a symmetric matrix. If instead, A was equal to the negative of its
transpose (i.e., A = −AT ), then A is called skew-symmetric matrix.
The determinant of the submatrix remaining after the deletion of the ith row and the jth
column of a matrix is called a minor. A minor with an associated sign is called a cofactor.
The rule for the cofactor is given by |Cij| = (−1)^(i+j)|Mij|. This implies that if the sum of the
subscripts is an even number, |Cij| = |Mij|, since −1 raised to an even power is positive.
But if the sum of the subscripts is an odd number, |Cij| = −|Mij|. A square matrix A
is invertible (nonsingular) if there exists a matrix A−1 such that A−1A = I = AA−1.
A set of vectors v1, . . . , vn is said to be linearly dependent if and only if one of them can be
expressed as a linear combination of the remaining vectors. Otherwise, they are linearly
independent.
4. Sydsæter, K. & Hammond, P. J. (2010). Mathematics for Economic Analysis, 5th edition. New Delhi.
2. B = 3 × 1
3. C = 2 × 2
4. D = 2 × 3
5. E = 3 × 2
6. F = 3 × 3
33
1.10.2 Exercise 1.2
1. Since A and B are both 2 × 2 matrices, we can add them as follows:
A + B = [  2  3 ] + [ −1   2 ] = [ 2 + (−1)    3 + 2    ] = [ 1  5 ]
        [ −1  2 ]   [  6  −2 ]   [ −1 + 6     2 + (−2)  ]   [ 5  0 ]
2. Since A and B are both 2 × 2 matrices, we can subtract them:
B − A = [ −1   2 ] − [  2  3 ] = [ −1 − 2       2 − 3   ] = [ −3  −1 ]
        [  6  −2 ]   [ −1  2 ]   [ 6 − (−1)    −2 − 2   ]   [  7  −4 ]
3. Impossible!! B and C have different sizes: B is 2 × 2 and C is 2 × 3.
= [ 4(1) + (−3)(3)    4(2) + (−3)(4) ] = [ −5  −4 ]
  [ −2(1) + 1(3)      −2(2) + 1(4)   ]   [  1   0 ]
Did you notice what just happened? We have that AB ≠ BA! Yes, it's true:
matrix multiplication is not commutative.
2. No. The first matrix is 2 × 3 and the second is also 2 × 3. The number of
columns of the first is not the same as the number of rows of the second.
3. Yes, the first is 2 × 3 and the second is 3 × 2, so their product is 2 × 2.
[  2  2  9 ] [  1  2 ]   [ 2(1) + 2(5) + 9(−1)      2(2) + 2(2) + 9(3)   ]   [  3  35 ]
[ −1  0  8 ] [  5  2 ] = [ −1(1) + 0(5) + 8(−1)    −1(2) + 0(2) + 8(3)   ] = [ −9  22 ]
             [ −1  3 ]
4. AB = [ a  b  c ] [ x ]   [ ax + by + cz ]
        [ d  e  f ] [ y ] = [ dx + ey + fz ]
        [ g  h  i ] [ z ]   [ gx + hy + iz ]
BA is not defined.
5. AB = [ a  b  c ] [ α  β  γ ]   [ aα + bλ + cρ    aβ + bµ + cσ    aγ + bν + cτ ]
        [ d  e  f ] [ λ  µ  ν ] = [ dα + eλ + fρ    dβ + eµ + fσ    dγ + eν + fτ ]
        [ g  h  i ] [ ρ  σ  τ ]   [ gα + hλ + iρ    gβ + hµ + iσ    gγ + hν + iτ ]
2. Step 1: Multiply each row of the matrix A by each column of the matrix B (To
multiply a row by a column just multiply the corresponding entries and then
add up the resulting products). The first index in C indicates the row index in
A and the second one indicates the column index in B.
C21 = (−4)(6) + 3(7) + 4(3) = 9
C22 = (−4)(−6) + 3(4) + 4(−9) = 0
C23 = (−4)(5) + 3(4) + 4(−9) = −44
C31 = 5(6) + 8(7) + 7(3) = 107
C32 = 5(−6) + 8(4) + 7(−9) = −61
C33 = 5(5) + 8(4) + 7(−9) = −6
3. Step 1: Multiply each row of the matrix A by each column of the matrix B (To
multiply a row by a column just multiply the corresponding entries and then
add up the resulting products). The first index in C indicates the row index in
A and the second one indicates the column index in B.
AB = [ −81   33   −1 ]
     [  20   62   40 ]
     [ −60  −14  −29 ]
(AB)T = [ −81   20  −60 ]
        [  33   62  −14 ] = BT AT
        [  −1   40  −29 ]
Expanding |D| along its second row (which contains two zeros):
|D| = −5(−1) det[ −1  −5   5 ]  −  1 det[ −1  −5  −8 ]
                [  6  −4  −2 ]          [  6  −4  −5 ]
                [ −4   9  −7 ]          [ −4   9  −8 ]
The two 3 × 3 determinants evaluate to −106 and −721 respectively, so
|D| = −5(106) − 1(−721) = 191
1. For A, the cofactors are
C11 = det[ 0  5 ; 8  −3 ] = (0)(−3) − (5)(8) = −40
C12 = −det[ −9  5 ; 5  −3 ] = −((−9)(−3) − (5)(5)) = −2
C13 = det[ −9  0 ; 5  8 ] = (−9)(8) − (0)(5) = −72
C21 = −det[ 0  −5 ; 8  −3 ] = −((0)(−3) − (−5)(8)) = −40
C22 = det[ −7  −5 ; 5  −3 ] = (−7)(−3) − (−5)(5) = 46
C23 = −det[ −7  0 ; 5  8 ] = −((−7)(8) − (0)(5)) = 56
C31 = det[ 0  −5 ; 0  5 ] = (0)(5) − (−5)(0) = 0
C32 = −det[ −7  −5 ; −9  5 ] = −((−7)(5) − (−5)(−9)) = 80
C33 = det[ −7  0 ; −9  0 ] = (−7)(0) − (0)(−9) = 0
Step 2: Compose the matrix using the cofactors previously computed.
Cof(A) = [ −40  −2  −72 ]
         [ −40  46   56 ]
         [   0  80    0 ]
2. Cof(B) = [ −117  −82  −56 ]
            [   75   15   96 ]
            [  −18   55   59 ]
3. Cof(C) = [  83   66  −11   147 ]
            [ 228  186  −36   402 ]
            [ −75  −60   15  −135 ]
1. For A, the cofactors are
C11 = det[ 0  −3 ; −7  0 ] = (0)(0) − (−3)(−7) = −21
C12 = −det[ −5  −3 ; −6  0 ] = −((−5)(0) − (−3)(−6)) = 18
C13 = det[ −5  0 ; −6  −7 ] = (−5)(−7) − (0)(−6) = 35
C21 = −det[ 3  −2 ; −7  0 ] = −((3)(0) − (−2)(−7)) = 14
C22 = det[ 3  −2 ; −6  0 ] = (3)(0) − (−2)(−6) = −12
C23 = −det[ 3  3 ; −6  −7 ] = −((3)(−7) − (3)(−6)) = 3
C31 = det[ 3  −2 ; 0  −3 ] = (3)(−3) − (−2)(0) = −9
C32 = −det[ 3  −2 ; −5  −3 ] = −((3)(−3) − (−2)(−5)) = 19
C33 = det[ 3  3 ; −5  0 ] = (3)(0) − (3)(−5) = 15
Step 2: Compose the matrix using the cofactors previously computed.
Cof(A) = [ −21   18  35 ]
         [  14  −12   3 ]
         [  −9   19  15 ]
Step 3: Transpose the matrix of cofactors.
Adj(A) = Cof(A)T = [ −21   14   −9 ]
                   [  18  −12   19 ]
                   [  35    3   15 ]
2. Adj(B) = Cof(B)T = [ −36   16   46 ]
                      [  81  −36   −6 ]
                      [  75   10  −20 ]
3. Adj(C) = Cof(C)T = [ −69    2   24 ]
                      [  48   18   −7 ]
                      [  40   15  −43 ]
4. Adj(D) = Cof(D)T = [  49   11   24 ]
                      [  14   19   −9 ]
                      [ −46    1  −18 ]
R2 − 4R1 → R2
39
R3 − 7R1 → R3
1 2 3 1 0 0
[A|I] = 0 −3 −9 −4 1 0
0 −6 −12 −7 0 1
Divide the 2nd row by -3
1 2 3 1 0 0
[A|I] = 0 1 3 4/3 −1/3 0
0 −6 −12 −7 0 1
R1 − 2R2 → R1
R3 + 6R2 → R3
1 0 −3 −5/3 2/3 0
[A|I] = 0 1 3 4/3 −1/3 0
0 0 6 1 −2 1
Divide the 3rd row by 6
1 0 −3 −5/3 2/3 0
[A|I] = 0 1 3 4/3 −1/3 0
0 0 1 1/6 −1/3 1/6
R1 + 2R3 → R1
R2 − 3R3 → R2
1 0 0 −7/6 −1/3 0.5
[A|I] = 0 1 0 5/6 2/3 −0.5
0 0 1 1/6 −1/3 1/6
−7/6 −1/3 0.5
A−1 = 5/6 2/3 −0.5
1/6 −1/3 1/6
2. Augmented matrix:
2 8 3 1 0 0
1 7 3 0 1 0
[A|I] =
9
8 9 0 0 1
40
1 4 1.5 0.5 0 0
[A|I] = 1 7 3 0 1 0
9 8 9 0 0 1
R2 − 1R1 → R2
R3 − 9R1 → R3
1 4 1.5 0.5 0 0
[A|I] = 0 3 1.5 −0.5 1 0
0 −28 −4.5 −4.5 0 1
Divide the 2nd row by 3
1 4 1.5 0.5 0 0
[A|I] = 0 1 0.5 −1/6 1/3 0
0 −28 −4.5 −4.5 0 1
R1 − 4R2 → R1
R3 + 28R2 → R3
1 0 −0.5 7/6 −4/3 0
[A|I] = 0 1 0.5 −1/6 1/3 0
0 0 9.5 −55/6 28/3 1
Divide the 3rd row by 9.5
1 0 −0.5 7/6 −4/3 0
[A|I] = 0 1 0.5 −1/6 1/3 0
0 0 1 −55/57 56/57 2/19
R1 + 0.5R3 → R1
R2 − 0.5R3 → R2
1 0 0 13/19 −16/19 1/19
[A|I] = 0 1 0 6/19 −3/19 −1/19
0 0 1 −55/57 56/57 2/19
13/19 −16/19 1/19
A−1 = 6/19 −3/19 −1/19
−55/57 56/57 2/19
0 0 3 1 0 0
3. [A|I] = 1 0 1 0 1 0
0 8 9 0 0 1
41
Change places the 1st and the 2nd rows
1 0 1 0 1 0
[A|I] = 0 0 3
1 0 0
0 8 9 0 0 1
Change places the 2nd and the 3rd rows
1 0 1 0 1 0
[A|I] = 0 8 9
0 0 1
0 0 3 1 0 0
Divide the 2nd row by 8
1 0 1 0 1 0
[A|I] = 0 1 1.125 0 0 0.125
0 0 3 1 0 0
Divide the 3rd row by 3
1 0 1 0 1 0
[A|I] = 0 1 1.125 0 0 0.125
0 0 1 1/3 0 0
R1 − R3 → R1
R2 − 1.125R3 → R2
1 0 0 −1/3 1 0
[A|I] = 0 1 0 −0.375 0 0.125
0 0 1 1/3 0 0
−1/3 1 0
A−1 = −0.375 0 0.125
1/3 0 0
1 0 4 1 0 0
4. [A|I] = 1 0 1 0 1 0
1 5 10 0 0 1
R2 − R1 → R2
R3 − R1 → R3
1 0 4 1 0 0
[A|I] = 0 0 −3 −1 1 0
0 5 6 −1 0 1
42
Change places the 2nd and the 3rd rows
1 0 4 1 0 0
[A|I] = 0 5 6 −1 0 1
0 0 −3 −1 1 0
Divide the 2nd row by 5
1 0 4 1 0 0
[A|I] = 0 1 1.2 −0.2 0 0.2
0 0 −3 −1 1 0
Divide the 3rd row by -3
1 0 4 1 0 0
[A|I] = 0 1 1.2 −0.2 0 0.2
0 0 1 1/3 −1/3 0
R1 − 4R3 → R1
R2 − 1.2R3 → R2
1 0 0 −1/3 4/3 0
[A|I] = 0 1 0 −0.6 0.4 0.2
0 0 1 1/3 −1/3 0
−1/3 4/3 0
A−1 = −0.6 0.4 0.2
1/3 −1/3 0
2 2 0 1 0 0
5. [A|I] = 1.5 1 1 0 1 0
4 0 0 0 0 1
Divide the 1st row by 2
1 1 0 0.5 0 0
[A|I] = 1.5
1 1 0 1 0
4 0 0 0 0 1
R2 − 1.5R1 → R2
R3 − 4R1 → R3
1 1 0 0.5 0 0
[A|I] = 0 −0.5 1 −0.75 1 0
0 −4 0 −2 0 1
43
Divide the 2nd row by -0.5
1 1 0 0.5 0 0
[A|I] = 0 1 −2 1.5 −2 0
0 −4 0 −2 0 1
R1 − R2 → R1
R3 + 4R2 → R3
1 0 2 −1 2 0
[A|I] = 0 1 −2 1.5 −2 0
0 0 −8 4 −8 1
Divide the 3rd row by -8
1 0 2 −1 2 0
[A|I] = 0 1 −2 1.5 −2 0
0 0 1 −0.5 1 −0.125
R1 − 2R3 → R1
R2 + 2R3 → R2
1 0 0 0 0 0.25
[A|I] = 0 1 0 0.5 0 −0.25
0 0 1 −0.5 1 −0.125
0 0 0.25
A−1 = 0.5 0 −0.25
−0.5 1 −0.125
1 2 1 0
6. [A|I] =
3 4 0 1
R2 − 3R1 → R2
1 2 1 0
[A|I] =
0 −2 −3 1
Divide the 2nd row by -2
1 2 1 0
[A|I] =
0 1 1.5 −0.5
R1 − 2R2 → R1
44
1 0 −2 1
[A|I] =
0 1 1.5 −0.5
−1 −2 1
A =
1.5 −0.5
1/4 2/9 1 0
7. [A|I] =
1/5 4/3 0 1
Divide the 1st row by 1/4
1 8/9 4 0
[A|I] =
1/5 4/3 0 1
R2 − 1/5R1 → R2
1 8/9 4 0
[A|I] =
0 52/45 −4/5 1
Divide the 2nd row by 52/45
1 8/9 4 0
[A|I] =
0 1 −9/13 45/52
R1 − 8/9R2 → R1
1 0 60/13 −10/13
[A|I] =
0 1 −9/13 45/52
−1 60/13 −10/13
A =
−9/13 45/52
1 2 3
8. A = 4 5 6
7 8 9
R2 − 4R1 → R2
R3 − 7R1 → R3
1 2 3
0 −3 −6
0 −6 −12
Divide the 2nd row by -3
45
1 2 3
0 1 2
0 −6 −12
R1 − 2R2 → R1
R3 + 6R2 → R3
1 0 −1
0 1 2
0 0 0
Since the non-zero rows are 2, then Rank(A) = 2.
1 4 7
9. B = 2 5 8
3 6 9
R2 − 2R1 → R2
R3 − 3R1 → R3
1 4 7
0 −3 −6
0 −6 −12
Divide the 2nd row by -3
1 4 7
0 1 2
0 −6 −12
R1 − 4R2 → R1
R3 + 6R2 → R3
1 0 −1
0 1 2
0 0 0
Since there are 2 non-zero rows, Rank(B) = 2.
2 8 4
10. C = 4 1 1
6 0 8
Divide the 1st row by 2
1 4 2
4 1 1
6 0 8
R2 − 4R1 → R2
R3 − 6R1 → R3
46
1 4 2
0 −15 −7
0 −24 −4
Divide the 2nd row by -15
1 4 2
0 1 7/15
0 −24 −4
R1 − 4R2 → R1
R3 + 24R2 → R3
1 0 2/15
0 1 7/15
0 0 7.2
Divide the 3rd row by 7.2
1 0 2/15
0 1 7/15
0 0 1
R1 − 2/15R3 → R1
R2 − 7/15 R3 → R2
1 0 0
0 1 0
0 0 1
Since there are 3 non-zero rows, Rank(C) = 3.
9 7/9 4/3
11. D = 0.5 1.2 1.6
0.6 1.5 0
Since rank(D)=rank(DT ) we can check the answer via D
47
Divide the 2nd row by 73/90
1 7/9 4/27
0 1 25/219
0 31/30 −4/45
R1 − 7/9R2 → R1
R3 − 31/30 R2 → R3
1 0 13/219
0 1 25/219
0 0 −151/730
Divide the 3rd row by -151/730
1 0 13/219
0 1 25/219
0 0 1
R1 − 13/219R3 → R1
R2 − 25/219 R3 → R2
1 0 0
0 1 0
0 0 1
Since there are 3 non-zero rows, Rank(D) = 3.
λ1 (1, 0) + λ2 (0, 1) = (λ1, λ2) = (0, 0)
implies λ1 = 0 and λ2 = 0.
Since the only scalars that work are zero (λ1 = λ2 = 0), the two vectors are linearly independent.
48
C11 = (8)(8) + (−2)(−7) = 78
C12 = (8)(−2) + (−2)(−5) = −6
C21 = (−7)(8) + (−5)(−7) = −21
C22 = (−7)(−2) + (−5)(−5) = 39
Step 2: Final solution
A · A = [  78  −6 ]
        [ −21  39 ]
2. Multiply each row by each column of the matrix A. The first index in C indicates
the row index and the second one indicates the column index in A.
49
1.11 Summary Question
Given the following matrices find
2 3 1
1. A = 2 3 4
1 2 3
1 2 3
2. B = 4 5 6
7 8 9
2 4 8
3. C = 1 2 4
1 2 3
1 3 1
4. D = 4 3 4
5 2 5
1 3 1
5. E = 4 3 4
5 2 5
1 3 1
6. F = 4 3 4
5 2 5
50
2 3/2 9 1/3 6 5
8. A = 1
2 4 B= 2
6 7
6 3/4 2 1 2 4
2 6 1/3 1/5 12 3
9. A = 1 1 5 B= 2 3 4
2 2 1/2 11 7 2
2 1 7
10. Find the inverse of the following matrices through cofactors A=5 6 5
7 8 2
51
Chapter 2
Systems of Linear Equations
Objectives
52
or complex numbers, but integers and rational numbers are also seen, as are polyno-
mials and elements of an abstract algebraic structure.
In the equation system there are three essential ingredients. These are
1. the set of coefficients, 2. the set of variables, and 3. the set of constant terms.
For example, consider the system
6x1 + 3x2 + x3 = 22
x1 + 4x2 − 2x3 = 12
4x1 − x2 + 5x3 = 10
A = [ 6   3   1 ]    X = [ x1 ]    D = [ 22 ]
    [ 1   4  −2 ]        [ x2 ]        [ 12 ]
    [ 4  −1   5 ]        [ x3 ]        [ 10 ]
Exercise 2.1 Given the following system of linear equations represent in matrix form
1. x1 + x2 + x3 + x4 = 3
   x1 − x2 + x3 + x4 = 5
   x2 − x3 − x4 = −4
   x1 + x2 − x3 − x4 = −3
53
We have made the distinction in the definition because a system with the same num-
ber of equations as variables behaves in one of two ways, depending on whether its
matrix of coefficients is nonsingular or singular. Where the matrix of coefficients
is nonsingular the system has a unique solution for any constant on the right side.
Example
x + 2y = a
3x + 4y = b
has the unique solution x = b − 2a and y = (3a − b)/2. On the other hand, where the
matrix of coefficients is singular the system never has a unique solution;it has either
no solutions or else has infinitely many, as with these.
Example
x + 3y = 3
4x + 12y = 10
has no solution.
Example
2x + 3y = 5
4x + 6y = 10
has infinitely many solutions. (Figure 2.3: Infinitely many solutions.)
To solve a given system of linear equations we can find the unknowns through
one of the following methods:
• Elimination (simultaneous equations)
• Gauss-Jordan Row Reduction
• Cramer's Rule
• Inverse Method
2x + 3y = 4
x + 2y = 1
Multiply the second equation by −2 and add it to the first equation; then you will get
−y = 2 ⇒ y = −2
If you substitute y = −2 either in the first equation or in the second equation you will
get x = 5.
x + 3y − 2z = 5
3x + 5y + 6z = 7
2x + 4y + 3z = 8
Solving the first equation for x gives x = 5 + 2z − 3y, and plugging this into the
second and third equation yields
−4y + 12z = −8
−2y + 7z = −2
Solving the first of these equations for y yields y = 2 + 3z, and plugging this into the
second equation yields z = 2. We now have:
x = 5 + 2z − 3y
y = 2 + 3z
z=2
Substituting z = 2 into the second equation gives y = 8, and substituting z = 2 and
y = 8 into the first equation yields x = −15. Therefore, the solution set is the single
point (x, y, z) = (−15, 8, 2).
1. 2x1 + 3x2 = 5
x1 + 2x2 = 10
2. 0.6x1 + 0.2x2 = 1
0.5x1 + 0.3x2 = 4
4. 4x1 + 5x2 = 30
2x1 + x2 = 13
56
2.2.2 Gauss-Jordan Method
If a linear system is changed to another by one of these operations
1. an equation is swapped with another
2. an equation has both sides multiplied by a nonzero constant
3. an equation is replaced by the sum of itself and a multiple of another then the
two systems have the same set of solutions.
Each of the three Gauss Method operations has a restriction. Multiplying a row by
0 is not allowed because that can obviously change the solution set. Similarly, adding
a multiple of a row to itself is not allowed, because adding −1 times the row to itself
has the effect of multiplying the row by 0. We disallow swapping a row with itself,
both to make some later results easier to state and because it is pointless.
In each row of a system, the first variable with a nonzero coefficient is the row's
leading variable. A system is in echelon form if each leading variable is to the right
of the leading variable in the row above it (except for the leading variable in the first
row), and any all-zero rows are at the bottom.
The Gauss-Jordan row reduction method can be facilitated by the following easy
steps:
1. Express the system of equations as an augmented matrix.
Reconsider Example 2.2:
[ 2  3 | 4 ]
[ 1  2 | 1 ]
2. Use elementary row operations to find a row equivalent matrix in reduced row
echelon form. There are three types of elementary row operations:
• Type 1: Swap the positions of two rows.
• Type 2: Multiply a row by a nonzero scalar.
• Type 3: Add to one row a scalar multiple of another.
Reconsider Example 2.2:
[ 1  0 |  5 ]
[ 0  1 | −2 ]
3. Solve the variables in the columns with leading entries in terms of free variables.
Example 2.4 Reconsider Example 2.3
57
[ 1  3  −2 | 5 ]
[ 3  5   6 | 7 ]
[ 2  4   3 | 8 ]
This matrix is then modified using elementary row operations until it reaches reduced
row echelon form.
Because these operations are reversible, the augmented matrix produced always rep-
resents a linear system that is equivalent to the original.
There are several specific algorithms to row-reduce an augmented matrix, the sim-
plest of which are Gaussian elimination and Gauss-Jordan elimination. The following
computation shows Gauss-Jordan elimination applied to the matrix above:
[ 1  3  −2 | 5 ]    [ 1   3  −2 |  5 ]    [ 1   3  −2 |  5 ]    [ 1   3  −2 |  5 ]
[ 3  5   6 | 7 ] →  [ 0  −4  12 | −8 ] →  [ 0  −4  12 | −8 ] →  [ 0   1  −3 |  2 ]
[ 2  4   3 | 8 ]    [ 2   4   3 |  8 ]    [ 0  −2   7 | −2 ]    [ 0  −2   7 | −2 ]
[ 1  3  −2 | 5 ]    [ 1  3  −2 | 5 ]    [ 1  3  0 | 9 ]    [ 1  0  0 | −15 ]
[ 0  1  −3 | 2 ] →  [ 0  1   0 | 8 ] →  [ 0  1  0 | 8 ] →  [ 0  1  0 |   8 ]
[ 0  0   1 | 2 ]    [ 0  0   1 | 2 ]    [ 0  0  1 | 2 ]    [ 0  0  1 |   2 ]
The last matrix is in reduced row echelon form, and represents the system x = -15, y
= 8, z = 2. A comparison with the example in the previous section on the algebraic
elimination of variables shows that these two methods are in fact the same; the dif-
ference lies in how the computations are written down.
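As an added illustration (assuming NumPy; not part of the original module), the same elimination can be delegated to a library solver; for the system above it returns the point (−15, 8, 2).

    import numpy as np

    A = np.array([[1, 3, -2],
                  [3, 5,  6],
                  [2, 4,  3]], dtype=float)
    b = np.array([5, 7, 8], dtype=float)

    print(np.linalg.solve(A, b))   # [-15.   8.   2.]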
Example 2.5 Use Gauss Jordan Elimination method to solve the following prob-
lem
[ 6  6  3/2 | 9 ]
[ 1  6   6  | 3 ]
[ 6  5   3  | 2 ]
Divide row 1 by 6:
[ 1  1  1/4 | 3/2 ]
[ 1  6   6  |  3  ]
[ 6  5   3  |  2  ]
Add (−1 × row 1) to row 2 and (−6 × row 1) to row 3:
[ 1   1   1/4 | 3/2 ]
[ 0   5  23/4 | 3/2 ]
[ 0  −1  3/2  | −7  ]
Divide row 2 by 5 and add (1 × row 2) to row 3:
[ 1  1   1/4  |  3/2   ]
[ 0  1  23/20 |  3/10  ]
[ 0  0  53/20 | −67/10 ]
Divide row 3 by 53/20:
[ 1  1   1/4  |   3/2    ]
[ 0  1  23/20 |   3/10   ]
[ 0  0    1   | −134/53  ]
Add (−23/20 × row 3) to row 2:
[ 1  1  1/4 |   3/2    ]
[ 0  1   0  |  170/53  ]
[ 0  0   1  | −134/53  ]
Finally, adding (−1/4 × row 3) and then (−1 × row 2) to row 1 gives the solution
x1 = −57/53, x2 = 170/53, x3 = −134/53.
2. x1 + 3x2 + 2x3 = 4
2x1 + x2 + 2x3 = 5
x1 + 2x2 + x3 = 4
3. 4x1 + 2x2 = 5
x1 + 2x2 = 2
4. x1 + 0.5x2 = 4
2x1 + 3x2 = 1
59
5. x1 + 2x2 + 4x3 = 12
x1 + x2 + 5x3 = 10
3x2 + 6x3 = 8
x + 3y − 2z = 5
3x + 5y + 6z = 7
2x + 4y + 3z = 8
By Cramer's rule, x = |A1|/|A|, y = |A2|/|A| and z = |A3|/|A|, where Ai is obtained from the
coefficient matrix A by replacing its i-th column with the constants:
|A| = det[ 1  3  −2 ; 3  5  6 ; 2  4  3 ] = −4
|A1| = det[ 5  3  −2 ; 7  5  6 ; 8  4  3 ] = 60,    so x = 60/(−4) = −15
|A2| = det[ 1  5  −2 ; 3  7  6 ; 2  8  3 ] = −32,   so y = −32/(−4) = 8
|A3| = det[ 1  3   5 ; 3  5  7 ; 2  4  8 ] = −8,    so z = −8/(−4) = 2
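A short sketch of Cramer's rule as used above (illustrative only, assuming NumPy): replace one column of A at a time with the constant vector and take ratios of determinants.

    import numpy as np

    def cramer(A, b):
        det_A = np.linalg.det(A)
        if np.isclose(det_A, 0.0):
            raise ValueError("|A| = 0: Cramer's rule does not apply")
        x = np.empty(len(b))
        for j in range(len(b)):
            Aj = A.copy()
            Aj[:, j] = b                      # replace the j-th column with the constants
            x[j] = np.linalg.det(Aj) / det_A
        return x

    A = np.array([[1, 3, -2],
                  [3, 5,  6],
                  [2, 4,  3]], dtype=float)
    b = np.array([5, 7, 8], dtype=float)
    print(cramer(A, b))                       # [-15.   8.   2.]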
Exercise 2.4 Use Cramer’s Rule to solve the following equations
1. x + 2y + 5z = 3
0.01x + 0.12y + 3z = 4
3x + 4y + 2z = 2.5
3. x + 2y + 3z = −10
   4x + 5y + 6z = 12/13
   7x + 8y + 9z = 7.5
But if you get a row of all zeros except for the right hand side, then there is no
solution to the system. Moreover; if you get a row of all zeros, and the number of
non-zero rows is less than the number of variables, then the system is dependent, you
will have many answers, and you need to write your answer in parametric form.
61
2.3 Homogeneous Systems of Linear Equations
Definition: A linear equation is homogeneous if it has a constant of zero, so that it
can be written as a11 x1 + a12 x2 + ... + a1n xn = 0
Linear dependence
Definition: Vectors a1, a2, . . . , an in Rm are linearly dependent if there exist numbers
K1, K2, . . . , Kn, not all zero, such that
K1 a1 + K2 a2 + · · · + Kn an = 0
If this equation holds only when K1 = K2 = . . . = Kn = 0, then the vectors are said
to be linearly independent.
Given the set S = { [ 1 ] , [ 4 ] , [ 7 ] }
                    [ 2 ]   [ 5 ]   [ 8 ]
                    [ 3 ]   [ 6 ]   [ 9 ]
of vectors in the vector space R³, determine whether S is linearly independent or
linearly dependent.
c1 v1 + c2 v2 + c3 v3 = 0 (*)
C1 [ 1 ] + C2 [ 4 ] + C3 [ 7 ] = [ 0 ]
   [ 2 ]      [ 5 ]      [ 8 ]   [ 0 ]
   [ 3 ]      [ 6 ]      [ 9 ]   [ 0 ]
The matrix equation above is equivalent to the following homogeneous system of
equations (**)
c1 + 4c2 + 7c3 = 0
2c1 + 5c2 + 8c3 = 0
3c1 + 6c2 + 9c3 = 0
Step 2: Transform the coefficient matrix of the system to the reduced row
echelon form
We now transform the coefficient matrix of the homogeneous system above to the
reduced row echelon form to determine whether the system has the trivial solution
only (meaning that S is linearly independent), or the trivial solution as well as non-
trivial ones (S is linearly dependent).
1 4 7
2 5 8
3 6 9
R2 − 2R1 → R2
1 4 7
0 −3 −6
3 6 9
R3 − 3R1 → R3
1 4 7
0 −3 −6
0 −6 −12
−1/3R2 → R2
1 4 7
0 1 2
0 −6 −12
R3 + 6R2 → R3
1 4 7
0 1 2
0 0 0
63
R1 − 4R2 → R1
1 0 −1
0 1 2
0 0 0
Step 3: Interpret the reduced row echelon form
The reduced row echelon form of the coefficient matrix of the homogeneous system
(**) is
1 0 −1
0 1 2
0 0 0
which corresponds to the system
c1 − c3 = 0
c2 + 2c3 = 0
0=0
Since some columns do not contain leading entries, then the system has nontrivial
solutions, so that some of the values c1 , c2 , c3 solving (*) may be nonzero. Therefore
the set S = {v1 , v2 , v3 } is linearly dependent.
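The same conclusion can be reached numerically (added illustration, assuming NumPy): the matrix whose columns are v1, v2, v3 has rank less than 3, so the homogeneous system (*) has nontrivial solutions.

    import numpy as np

    V = np.column_stack([[1, 2, 3], [4, 5, 6], [7, 8, 9]])   # columns are v1, v2, v3
    print(np.linalg.matrix_rank(V))      # 2 < 3  ->  linearly dependent
    print(V @ np.array([1, -2, 1]))      # [0 0 0]: one nontrivial relation v1 - 2*v2 + v3 = 0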
Example Given the set S = { [ 1/4 ] , [ 1/3 ] , [ 1 ] }
                            [  4  ]   [  1  ]   [ 6 ]
                            [  3  ]   [  0  ]   [ 2 ]
of vectors in the vector space R³, determine whether S is linearly independent or linearly dependent.
c1 v1 + c2 v2 + c3 v3 = 0 (*)
This is equivalent to the homogeneous system
(1/4)c1 + (1/3)c2 + c3 = 0
4c1 + c2 + 6c3 = 0
3c1 + 0c2 + 2c3 = 0
Step 2: Transform the coefficient matrix of the system to the reduced row
echelon form
We now transform the coefficient matrix of the homogeneous system above to the
reduced row echelon form to determine whether the system has the trivial solution
only (meaning that S is linearly independent), or the trivial solution as well as non-
trivial ones (S is linearly dependent).
[ 1/4  1/3  1 ]
[  4    1   6 ]
[  3    0   2 ]
65
Add −30/13 times the 3rd row to the 2nd row
1 4/3 4
0 1 0
0 0 1
Add −4 times the 3rd row to the 1st row
1 4/3 0
0 1 0
0 0 1
Add −4/3 times the 2nd row to the 1st row
1 0 0
0 1 0
0 0 1
Step 3:Interpret the reduced row echelon form
The reduced row echelon form of the coefficient matrix of the homogeneous system
(**) is
1 0 0
0 1 0
0 0 1
which corresponds to the system
1c1 =0
1c2 = 0
1c3 = 0
Since each column contains a leading entry, the system has only the trivial solution,
so that the only solution of (*) is c1 = c2 = c3 = 0.
Therefore the set S = {v1, v2, v3} is linearly independent!
Exercise 2.5 Given the set S = { [ 1 ] , [ 2 ] , [ 3 ] }
                                 [ 4 ]   [ 5 ]   [ 6 ]
                                 [ 7 ]   [ 8 ]   [ 9 ]
of vectors in the vector space R³, determine whether S is linearly independent or linearly dependent.
66
prior to the discount , find the value v2 after the 10 percent discount,if
67
Step 1: Multiply each entry of the matrix A by the scalar k = 1.2.
kA = [ 1.2 × 200  1.2 × 0    1.2 × 540  1.2 × 600 ]   [  240    0  648  720 ]
     [ 1.2 × 900  1.2 × 500  1.2 × 200  1.2 × 200 ] = [ 1080  600  240  240 ]
     [ 1.2 × 200  1.2 × 100  1.2 × 400  1.2 × 600 ]   [  240  120  480  720 ]
Example
The quantity of goods sold Q, the selling price of the goods P, and the unit cost of the goods C
are given for a hypothetical ABC company:
Q = [ 100 ]    P = [ 10.50 ]    C = [ 1.25 ]
    [ 200 ]        [ 20.25 ]        [ 2.25 ]
    [ 300 ]        [ 30    ]        [ 3.50 ]
Calculate
1. Total revenue
2. Total cost
3. Per-unit profit
4. Total profit
Solution
1. Total revenue
PT Q = [ 10.50  20.25  30 ] [ 100 ] = 14100
                            [ 200 ]
                            [ 300 ]
2. Total cost
CT Q = [ 1.25  2.25  3.50 ] [ 100 ] = 1625
                            [ 200 ]
                            [ 300 ]
3. Per-unit profit
AP = P − C = [ 10.50 ] − [ 1.25 ] = [  9.25 ]
             [ 20.25 ]   [ 2.25 ]   [ 18    ]
             [ 30    ]   [ 3.50 ]   [ 26.50 ]
4. Total profit
APT Q = [ 9.25  18  26.50 ] [ 100 ] = 12475
                            [ 200 ]
                            [ 300 ]
Note
Total profit = 14100 − 1625 = 12475
68
CHAPTER SUMMARY
The main ideas of chapter two are summarized below.
In the equation system there are three essential ingredients. These are the set
of coefficients,the set of variables and the set of constant terms.
To solve a given system of linear equations we can find the unknowns
through one of the following methods: Elimination, Gauss-Jordan
Row Reduction, or Cramer's Rule. There are three types of elementary row
operations:
x = A−1 b
where A−1 is the inverse of A.
Reading Materials
4. Sydsæter, K. & Hammond, P. J. (2010). Mathematics for Economic Analysis, 5th edition. New Delhi.
x1 = −1.5x2 + 2.5
x1 + 2x2 = 10
In the 2nd equation we substitute x1:
x1 = −1.5x2 + 2.5
1(−1.5x2 + 2.5) + 2x2 = 10
after simplification we get:
x1 = −1.5x2 + 2.5
0.5x2 = 7.5
70
Divide the 2nd equation by 0.5 and express x2 by other variables
x1 = −1.5x2 + 2.5
x2 = +15
Now, moving from the last to the first equation can find the values of the other
variables.
Answer:
• x1 = −20
• x2 = 15
x1 = −(1/3)x2 + (5/3)
5x1 + 3x2 = 40
In the 2nd equation we substitute x1:
x1 = −(1/3)x2 + (5/3)
x1 = −(1/3)x2 + (5/3)
(4/3)x2 = 95/3
Divide the 2nd equation by 4/3 and express x2 by other variables
x1 = −(1/3)x2 + (5/3)
x2 = +23.75
Now, moving from the last to the first equation can find the values of the other
variables.
Answer:
x1 = −6.25
x2 = 23.75
71
3. Simplify the system:
3x1 + 4x2 = 3
5x1 + 30x2 = 46
Divide the 1st equation by 3 and express x1 by other variables
x1 = −(4/3)x2 + 1
5x1 + 30x2 = 46
In the 2nd equation we substitute x1:
x1 = −(4/3)x2 + 1
5(−(4/3)x2 + 1) + 30x2 = 46
after simplification we get:
x1 = −(4/3)x2 + 1
(70/3)x2 = 41
Divide the 2nd equation by 70/3 and express x2 by other variables
x1 = −(4/3)x2 + 1
x2 = +(123/70)
Now, moving from the last to the first equation can find the values of the other
variables.
Answer:
x1 = −47/35
x2 = 123/70
x1 = −1.25x2 + 7.5
2x1 + x2 = 13
In the 2nd equation we substitute x1:
x1 = −1.25x2 + 7.5
2(−1.25x2 + 7.5) + x2 = 13
after simplification we get:
x1 = −1.25x2 + 7.5
72
−1.5x2 = −2
Divide the 2nd equation by -1.5 and express x2 by other variables
x1 = −1.25x2 + 7.5
x2 = +(4/3)
Now, moving from the last to the first equation can find the values of the other
variables.
Answer:
x1 = 35/6
x2 = 4/3
x2 = −20.4x3 + 14
73
−(35/11)x2 + (265/11)x3 = 863/11
In the 3rd equation we substitute x2:
x2 = −20.4x3 + 14
89x3 = 123
Divide the 3rd equation by 89 and express x3 by other variables
x2 = −20.4x3 + 14
x3 = +(123/89)
Now, moving from the last to the first equation can find the values of the other
variables.
Answer:
x1 = 2776/89
x2 = −6316/445
x3 = 123/89
74
10(−4.5x2 − 3x3 + 9.5) + 43x2 + 12x3 = 270
2(−4.5x2 − 3x3 + 9.5) + x2 + 2x3 = 30
after simplification we get:
x2 = −9x3 − 87.5
−8(−9x3 − 87.5) − 4x3 = 11
after simplification we get:
x2 = −9x3 − 87.5
x3 = −(689/68)
Now, moving from the last to the first equation can find the values of the other
variables.
Answer:
x1 = 3167/136
x2 = 251/68
x3 = −689/68
75
2.5.3 Exercise 2.3
2 3 4 10
1. 2 7 1 9
1 4 5 6
Divide the 1st row by 2
1 1.5 2 5
2 7 1 9
1 4 5 6
R2 − 2R1 → R2
R3 − R1 → R3
1 1.5 2 5
0 4 −3 −1
0 2.5 3 1
Divide the 2nd row by 4
1 1.5 2 5
0 1 −0.75 −0.25
0 2.5 3 1
R1 − 1.5R2 → R1
R3 − 2.5R2 → R3
1 0 3.125 5.375
0 1 −0.75 −0.25
0 0 4.875 1.625
Divide the 3rd row by 4.875
1 0 3.125 5.375
0 1 −0.75 −0.25
0 0 1 1/3
R1 − 3.125R3 → R1
R2 + 0.75R3 → R2
1 0 0 13/3
0 1 0 0
0 0 1 1/3
Answer:
• x1 = 13/3
• x2 = 0
76
• x3 = 1/3
1 3 2 4
2. 2 1 2 5
1 2 1 4
R2 − 2R1 → R2
R3 − R1 → R3
1 3 2 4
0 −5 −2 −3
0 −1 −1 0
Divide the 2nd row by -5
1 3 2 4
0 1 0.4 0.6
0 −1 −1 0
R1 − 3R2 → R1
R3 + R2 → R3
1 0 0.8 2.2
0 1 0.4 0.6
0 0 −0.6 0.6
Divide the 3rd row by -0.6
1 0 0.8 2.2
0 1 0.4 0.6
0 0 1 −1
R1 − 0.8R3 → R1
R2 − 0.4R3 → R2
1 0 0 3
0 1 0 1
0 0 1 −1
Answer:
• x1 = 3
• x2 = 1
• x3 = −1
4 2 5
3.
1 2 2
Divide the 1st row by 4
77
1 0.5 1.25
1 2 2
R2 − R1 → R2
1 0.5 1.25
0 1.5 0.75
Divide the 2nd row by 1.5
1 0.5 1.25
0 1 0.5
R1 − 0.5R2 → R1
1 0 1
0 1 0.5
Answer:
• x1 = 1
• x2 = 0.5
1 0.5 4
4.
2 3 1
R2 − 2R1 → R2
1 0.5 4
0 2 −7
Divide the 2nd row by 2
1 0.5 4
0 1 −3.5
R1 − 0.5R2 → R1
1 0 5.75
0 1 −3.5
Answer:
• x1 = 5.75
• x2 = −3.5
1 2 4 12
5. 1 1 5 10
0 3 6 8
R2 − R1 → R2
78
1 2 4 12
0 −1 1 −2
0 3 6 8
Divide the 2nd row by -1
1 2 4 12
0 1 −1 2
0 3 6 8
R1 − 2R2 → R1
R3 − 3R2 → R3
1 0 6 8
0 1 −1 2
0 0 9 2
Divide the 3rd row by 9
1 0 6 8
0 1 −1 2
0 0 1 2/9
R1 − 6R3 → R1
R2 + R3 → R2
1 0 0 20/3
0 1 0 20/9
0 0 1 2/9
Answer:
• x1 = 20/3
• x2 = 20/9
• x3 = 2/9
3 2 1 1
6. 4 1 2 10
5 3 3 5
Divide the 1st row by 3
1 2/3 1/3 1/3
4 1 2 10
5 3 3 5
R2 − 4R1 → R2
R3 − 5R1 → R3
79
1 2/3 1/3 1/3
0 −5/3 2/3 26/3
0 −1/3 4/3 10/3
Divide the 2nd row by -5/3
1 2/3 1/3 1/3
0 1 −0.4 −5.2
0 −1/3 4/3 10/3
R1 − 2/3R2 → R1
R3 + 1/3R2 → R3
1 0 0.6 3.8
0 1 −0.4 −5.2
0 0 1.2 1.6
Divide the 3rd row by 1.2
1 0 0.6 3.8
0 1 −0.4 −5.2
0 0 1 4/3
R1 − 0.6R3 → R1
R2 + 0.4R3 → R2
1 0 0 3
0 1 0 −14/3
0 0 1 4/3
Answer:
• x1 = 3
• x2 = −14/3
• x3 = 4/3
10 1 2 11
7. 1.5 1 1 1
3 0 1 5
Divide the 1st row by 10
1 0.1 0.2 1.1
1.5 1 1 1
3 0 1 5
R2 − 1.5R1 → R2
R3 − 3R1 → R3
80
1 0.1 0.2 1.1
0 0.85 0.7 −0.65
0 −0.3 0.4 1.7
Divide the 2nd row by 0.85
1 0.1 0.2 1.1
0 1 14/17 −13/17
0 −0.3 0.4 1.7
R1 − 0.1R2 → R1
R3 + 0.3R2 → R3
1 0 2/17 20/17
0 1 14/17 −13/17
0 0 11/17 25/17
Divide the 3rd row by 11/17
1 0 2/17 20/17
0 1 14/17 −13/17
0 0 1 25/11
R1 − 2/17R3 → R1
R2 − 14/17R3 → R2
1 0 0 10/11
0 1 0 −29/11
0 0 1 25/11
Answer:
• x1 = 10/11
• x2 = −29/11
• x3 = 25/11
81
2. Suppose that
A = [ 4.25  13  10   ]    B = [ 15.4 ]
    [ 2.25   3   1.25 ]       [ 12   ]
    [ 3.5    0   1    ]       [ 20   ]
|A| = det[ 4.25  13  10 ; 2.25  3  1.25 ; 3.5  0  1 ] = −64.625
|A1| = det[ 15.4  13  10 ; 12  3  1.25 ; 20  0  1 ] = −384.8
|A2| = det[ 4.25  15.4  10 ; 2.25  12  1.25 ; 3.5  20  1 ] = 7.475
|A3| = det[ 4.25  13  15.4 ; 2.25  3  12 ; 3.5  0  20 ] = 54.3
x = |A1|/|A|    y = |A2|/|A|    z = |A3|/|A|
3. Suppose that
A = [ 1  2  3 ]    B = [ −10   ]
    [ 4  5  6 ]        [ 12/13 ]
    [ 7  8  9 ]        [  7.5  ]
|A| = det[ 1  2  3 ; 4  5  6 ; 7  8  9 ] = 0
Since |A| = 0, Cramer's rule cannot be applied; the system is in fact inconsistent, so there is no solution.
82
Chapter 3
Special Determinants and Matrices in
Economics
3.1 Introduction
Objectives
Given
y1 = f1(x1, x2, x3)
y2 = f2(x1, x2, x3)
y3 = f3(x1, x2, x3)
|J| = |∂(y1, y2, y3)/∂(x1, x2, x3)| = | ∂y1/∂x1  ∂y1/∂x2  ∂y1/∂x3 |
                                      | ∂y2/∂x1  ∂y2/∂x2  ∂y2/∂x3 |
                                      | ∂y3/∂x1  ∂y3/∂x2  ∂y3/∂x3 |
Note:
• Elements of each row are the partial derivatives of one of function yi with respect
to each of the independent variables x1 ,x2 ,x3 .
• Elements of each column are the partial derivatives of one of function y1 ,y2 ,y3
with respect to one of the independent variables xj
• If |J| = 0, the equations are functionally dependent.
• If |J| =
6 0, the equations are functionally independent.
Example Use the Jacobian to test for functional dependence
y 1 = x1 + x2
y2 = 2x1 + 3x2
Solution
∂y1/∂x1 = 1    ∂y1/∂x2 = 1
∂y2/∂x1 = 2    ∂y2/∂x2 = 3
|J| = | ∂y1/∂x1  ∂y1/∂x2 | = | 1  1 | = 3 − 2 = 1
      | ∂y2/∂x1  ∂y2/∂x2 |   | 2  3 |
Since |J| ≠ 0, y1 and y2 are functionally independent!
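A numerical companion to this example (illustrative only, assuming NumPy; not part of the original module): since both functions are linear, the Jacobian is a constant matrix.

    import numpy as np

    # Partial derivatives of y1 = x1 + x2 and y2 = 2*x1 + 3*x2
    J = np.array([[1, 1],      # dy1/dx1, dy1/dx2
                  [2, 3]])     # dy2/dx1, dy2/dx2
    print(np.linalg.det(J))    # 1.0 (non-zero), so y1 and y2 are functionally independent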
Self Test Exercise Use the Jacobian to test for functional dependence
1. y1 = 2x1 + 4x2
   y2 = 8x1 + 16x2
2. y1 = x1 + x2²
   y2 = x1 + x2
3. y1 = 2x1 + 3x2
   y2 = 4x1² + 12x1x2 + 9x2²
4. y1 = (1/3)x1 + 4x2
   y2 = 2.4x1 + 3.5x2
84
3.3 The Hessian Determinant
Definition: A Hessian |H| is a determinant composed of all the second-order partial
derivatives, with the second-order direct partials on the principal diagonal and the
second-order cross partials off the principal diagonal. Suppose the first-order (necessary)
conditions for a multivariate function z = f(x1, x2) to be at an optimum are satisfied. Then
|H| = | zx1x1  zx1x2 |
      | zx2x1  zx2x2 |
where zx1x2 = zx2x1.
1. If the first principal minor |H1| = zx1x1 > 0 and the second principal minor
   |H2| = zx1x1 zx2x2 − (zx1x2)² > 0,
   the second-order condition for a minimum is met; |H| is called positive definite.
2. If the first principal minor |H1| = zx1x1 < 0 and the second principal minor
   |H2| = zx1x1 zx2x2 − (zx1x2)² > 0,
   the second-order condition for a maximum is met; |H| is called negative definite.
85
|H| = | f11  f12  ...  f1n |
      | f21  f22  ...  f2n |
      | ...            ... |
      | fn1  fn2  ...  fnn |
The necessary condition for the function to be at point of relative extremum is that
all the first derivatives should vanish.
f1 = f2 = · · · = fn = 0
The second order sufficient condition for extremum is that
1. All the principal minors should be positive for the function to be a minimum.
|H1 | > 0, |H2 | > 0. . . , |Hn | > 0 , this is equivalent to say the quadratic form of
the discriminant described before is positive definite.
2. For a maximum the principal minor should start with negative and alternate in
sign |H1 | < 0, |H2 | > 0 , |H3 | < 0 . . .
Example : Find the extreme value of
z = 2x21 + x1 x2 + 4x22 + x1 x3 + x23 + 2
Solution
f1 = 0 4x1 + x2 + x3 = 0
f2 = 0 x1 + 8x2 + 0 = 0
f3 = 0 x1 + 0 + 2x3 = 0
86
So that
|H1| = 4 > 0
|H2| = det[ 4  1 ; 1  8 ] = 31 > 0
|H3| = det[ 4  1  1 ; 1  8  0 ; 1  0  2 ] = 54 > 0
Thus we can conclude that z̄ = 2 is a minimum point.
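A numerical companion to this example (an added sketch, assuming NumPy): build the Hessian of z and inspect its leading principal minors.

    import numpy as np

    H = np.array([[4, 1, 1],     # second-order partials of z
                  [1, 8, 0],
                  [1, 0, 2]])
    minors = [np.linalg.det(H[:k, :k]) for k in (1, 2, 3)]
    print(np.round(minors))      # [ 4. 31. 54.] -- all positive, so z is at a minimum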
Example Given the following profit function for a competitive firm that produces
two products as
Π = P1 Q1 + P2 Q2 − 2Q21 − Q1 Q2 − 2Q22
Does the firm maximize profit by choosing Q1 and Q2 ?
Solution
Π1 = P1 − 4Q1 − Q2 = 0
Π2 = P2 − Q1 − 4Q2 = 0
So that
|H1| = −4 < 0
|H2| = |H| = det[ −4  −1 ; −1  −4 ] = 15 > 0
Therefore we can conclude that the firm can maximize profit.
Ax = λx
(A − λI)x = 0
where I denotes the identity matrix of order n. This homogeneous system of
equations has a non-trivial solution x ≠ 0 if and only if the coefficient matrix has
determinant equal to zero. That is, if and only if
|A − λI| = 0
Example Given a 2 × 2 matrix
A = [ a11  a12 ]
    [ a21  a22 ]
1. Determine the sign of its Eigen Values.
|A − λI| = 0
det[ a11 − λ    a12 ; a21    a22 − λ ] = 0
λ² − (a11 + a22)λ + (a11 a22 − a12 a21) = 0
The roots of this quadratic equation, known as the characteristic equation, are
λ = (1/2)(a11 + a22) ± √( (1/4)(a11 + a22)² − (a11 a22 − a12 a21) )
so the roots are real when (1/4)(a11 + a22)² ≥ a11 a22 − a12 a21.
The sum λ1 + λ2 of the Eigen Values is equal to a11 + a22, the sum of the diagonal
elements (i.e., the trace of the matrix). The product λ1 λ2 of the Eigen Values is
equal to the determinant a11 a22 − a12 a21 = |A|. So the following are some of the
points deduced from the above characteristic equation.
• Both Eigen Values are positive if and only if a11 + a22 > 0 and |A| > 0
• Both Eigen Values are negative if and only if a11 + a22 < 0 and |A| > 0
• The two Eigen Values have different signs if and only if |A| < 0
88
Example Find the Eigen Values and Eigen Vectors of the 2 × 2 matrix
A = [ 4  2 ]
    [ 1  3 ]
First we compute
|A − λI| = det( [ 4  2 ] − [ λ  0 ] ) = det[ 4 − λ     2    ]
                [ 1  3 ]   [ 0  λ ]        [   1     3 − λ  ]
|A − λI| = (4 − λ)(3 − λ) − (2)(1)
We have to set this equal to zero to find the values of λ that make this true:
(4 − λ)(3 − λ) − 2 · 1 = 10 − 7λ + λ² = (2 − λ)(5 − λ) = 0
This means that λ = 2 and λ = 5 are solutions.
Now if we want to find the Eigen Vectors that correspond to these values we look at
vectors v such that
[ 4 − λ     2    ] v = 0
[   1     3 − λ  ]
For λ = 5:
[ 4 − 5     2    ] [ x ]   [ 0 ]
[   1     3 − 5  ] [ y ] = [ 0 ]
[ −1   2 ] [ x ]   [ 0 ]
[  1  −2 ] [ y ] = [ 0 ]
This gives us the equalities
−x + 2y = 0
x − 2y = 0
These equations give us the line y = (1/2)x. Any point on this line, for example
(2, 1), is an Eigen Vector with Eigen Value λ = 5.
For λ = 2:
[ 2  2 ] [ x ]   [ 0 ]
[ 1  1 ] [ y ] = [ 0 ]
which gives the equalities
2x + 2y = 0
x + y = 0
These two equations are not independent of one another. This means any vector
v = (x, y) with y = −x, such as (1, −1), or any scalar multiple of this vector on the
line y = −x, is an Eigen Vector with Eigen Value 2. This solution could be written
neatly as
λ1 = 5, v1 = [ 2 ] ,    and    λ2 = 2, v2 = [  1 ]
             [ 1 ]                          [ −1 ]
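NumPy reproduces these results (an added illustration, not part of the original module); note that it returns eigenvectors scaled to unit length, so they are scalar multiples of the ones found above.

    import numpy as np

    A = np.array([[4, 2],
                  [1, 3]])
    values, vectors = np.linalg.eig(A)
    print(values)                              # 5 and 2 (order may vary)
    print(vectors)                             # columns proportional to (2, 1) and (1, -1)
    print(values.sum(), np.trace(A))           # sum of eigenvalues = trace
    print(values.prod(), np.linalg.det(A))     # product of eigenvalues = determinant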
Exercise 3.1 Given the following matrices, find the characteristic polynomial and
characteristic roots
1. A = [ 4   1 ]
       [ 2  −1 ]
2. A = [ 5  −6 ]
       [ 1  −5 ]
3. A = [  9  −3 ]
       [ −1   7 ]
4. A = [ −2  −9 ]
       [ −4  −2 ]
5. A = [ −6  0 ]
       [ −3  8 ]
6. A = [ 8   7 ]
       [ 8  −4 ]
q(x) = [ x1  x2 ] [ a11  a12 ] [ x1 ]
                  [ a21  a22 ] [ x2 ]
q(x) = a11 x1² + a12 x1 x2 + a21 x2 x1 + a22 x2²
The general form can be written as
Q(x) = xT A x = Σi Σj aij xi xj
Consider the quadratic form
q = au² + 2huv + bv²
Then we can form a symmetric matrix from the above equation by placing the squared
terms on the diagonal and splitting 2huv into two equal parts placed on the off-diagonal:
q(x) = [ u  v ] [ a  h ] [ u ]
                [ h  b ] [ v ]
The form D = [ a  h ]
             [ h  b ]
is known as the discriminant of the quadratic form. Here
1. q is positive definite if and only if |a| > 0 and det[ a  h ; h  b ] > 0
2. q is negative definite if and only if |a| < 0 and det[ a  h ; h  b ] > 0
where |a| = a is the first leading principal minor and det[ a  h ; h  b ] is the second leading
principal minor of D. Using these two terms we determine the sign in the total
differential case:
d²z = fxx dx² + 2fxy dx dy + fyy dy²
91
The discriminant of d²z is formed from the second-order partial derivatives. Such a
discriminant is known as the Hessian determinant and is given by
|H| = | fxx  fxy |
      | fyx  fyy |
Example Determine whether q = 5u² + 3uv + 2v² is positive or negative definite.
Solution
D = det[ 5  1.5 ; 1.5  2 ] = 7.75 > 0
and |D1| = 5 > 0. Therefore, q is positive definite.
Example If fxx = −2, fxy = 1 and fyy = −1, what is the sign of d²z if
z = f(x, y)?
Solution
D = [ −2   1 ]
    [  1  −1 ]
|D1| = −2 < 0
And
|D2| = det[ −2  1 ; 1  −1 ] = 1 > 0
Therefore, d²z is negative definite.
More generally, for a quadratic form q in three variables with leading principal minors |D1|, |D2| and |D3|:
1. The quadratic form q will be positive definite if and only if |D1| > 0, |D2| > 0 and |D3| > 0
2. The quadratic form q will be negative definite if and only if |D1| < 0, |D2| > 0 and |D3| < 0
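These sign conditions can be checked mechanically. Below is a minimal Python sketch (assuming NumPy; the helper name check_definiteness is my own) that computes the leading principal minors of a symmetric matrix and applies the rules above:

import numpy as np

def check_definiteness(D):
    """Classify a symmetric matrix via its leading principal minors."""
    n = D.shape[0]
    # leading principal minors |D1|, |D2|, ..., |Dn|
    minors = [np.linalg.det(D[:k, :k]) for k in range(1, n + 1)]
    if all(m > 0 for m in minors):
        return minors, "positive definite"
    if all((m < 0) if k % 2 == 1 else (m > 0)
           for k, m in enumerate(minors, start=1)):
        return minors, "negative definite"
    return minors, "neither positive nor negative definite"

# the discriminant D = [5 1.5; 1.5 2] used in the example above
D = np.array([[5.0, 1.5],
              [1.5, 2.0]])
print(check_definiteness(D))   # minors approx [5.0, 7.75] -> positive definite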
Exercise 3.2 State whether the following quadratic forms are positive or negative
definite.
1. Q = 5u² − 4uv + 2v²?
3.5.1 Positive and negative definiteness
1. The quadratic form q is said to be positive definite if q is invariably positive (i.e., q(x) > 0 for every x ≠ 0).
2. The quadratic form q is said to be negative definite if q is invariably negative (i.e., q(x) < 0 for every x ≠ 0).
3. The quadratic form q is said to be positive semidefinite if q is invariably non-negative (i.e., q(x) ≥ 0).
4. The quadratic form q is said to be negative semidefinite if q is invariably non-positive (i.e., q(x) ≤ 0).
5. If q changes sign when the variables assume different values, q is said to be sign indefinite.
CHAPTER SUMMARY
The main ideas of chapter three are summarized below.
Reading Materials
4. Sydsaeter, K. & Hammond, P. J. (2010), Mathematics for Economic Analysis, 5th edition, New Delhi.
3.5.3 Exercise 3.2
1. D = [5 −2; −2 2], |D1| = 5 > 0 and |D2| = |5 −2; −2 2| = 6 > 0
Therefore, q is positive definite.
2. D = [1 −1 0; −1 6 −2; 0 −2 3]
|D1| = 1 > 0, |D2| = |1 −1; −1 6| = 5 > 0 and
|D3| = |1 −1 0; −1 6 −2; 0 −2 3| = 11 > 0
Therefore, q is positive definite.
Chapter 4
Input-Output and Linear Programming
4.1 Introduction
Objectives
Input-output analysis is a basic tool of economic planning. In order to produce something, each sector needs to consume some of its own output and some of the output of the other sectors. In the model there are n industries producing n different products such that input equals output or, in other words, consumption equals production. One distinguishes two models:
open model: some production consumed internally by industries, rest consumed
by external bodies.
Problem: Find production level if external demand is given.
closed model: entire production consumed by industries.
Problem: Find relative price of each product.
2. Each industry uses a fixed input ratio for the production of its output
If the inverse of the matrix (In − A) exists, (In − A)⁻¹ is called the Leontief inverse. For a given realistic economy, such a solution obviously must exist.
• Let aij denote the number of units of industry Si's output used by industry Sj to produce one unit of its output.
Sector 1: Auto
Sector 2: Steel
Sector 3: Electricity
In general, let x1, x2, ..., xn, be the total output of industry S1 , S2 , . . . , Sn respectively.
Then
x1 = a11 x1 + a12 x2 + · · · + a1n xn + D1
x2 = a21 x1 + a22 x2 + · · · + a2n xn + D2
...
xn = an1 x1 + an2 x2 + · · · + ann xn + Dn
Assume further that aij represents the dollar amount of sector i's output used to produce $1.00 of sector j's output. The matrix containing these terms is the input-output matrix A defined below.
Here aij xj is the number of units produced by industry Si and consumed by industry Sj, and total consumption equals total production for the product of each industry Si.
The first column of this matrix can be interpreted as the dollar amounts of each industry needed to produce 1 dollar of Auto: 15 cents worth of Auto, 40 cents worth of Steel, 10 cents worth of Electricity, 10 cents worth of Coal, and 5 cents worth of Chemical go into producing 1 dollar of Auto. More generally, the consumption of Auto as it produces x1 dollars is the column vector [0.15x1; 0.40x1; 0.10x1; 0.10x1; 0.05x1].
In general, define
A = [a11 ... a1n; ... ; an1 ... ann],   D = [D1; D2; ... ; Dn],   X = [x1; x2; ... ; xn]
A is called the input-output matrix, D the external demand vector and X the production level vector. The above system of linear equations is equivalent to the matrix
equation
X = AX + D
X − AX = D
[I − A]X = D
X = [I − A]−1 D
If we have two sectors
[x1; x2] = [1 − a11   0 − a12; 0 − a21   1 − a22]⁻¹ [D1; D2] = [1 − a11   −a12; −a21   1 − a22]⁻¹ [D1; D2]
If we have three sectors
[x1; x2; x3] = [1 − a11   −a12   −a13; −a21   1 − a22   −a23; −a31   −a32   1 − a33]⁻¹ [D1; D2; D3]
The consumption of Steel as it produces x1 dollars is [0.1x1; 0.4x1; 0.7x1].
The consumption of Electricity as it produces x2 dollars is [0.2x2; 0.5x2; 0.8x2].
The consumption of Coal as it produces x3 dollars is [0.3x3; 0.6x3; 0.1x3].
Therefore, the total consumption of all 3 sectors is
x = [0.1x1; 0.4x1; 0.7x1] + [0.2x2; 0.5x2; 0.8x2] + [0.3x3; 0.6x3; 0.1x3]
The assumption in a closed economy is that production equals total consumption.
This yields
[x1; x2; x3] = [0.1x1; 0.4x1; 0.7x1] + [0.2x2; 0.5x2; 0.8x2] + [0.3x3; 0.6x3; 0.1x3]   (total consumption)
Example: Calculate the total output of sectors A, B and C given the matrix of technical coefficients A and the final demand vector D as follows:
A = [0.1 0.4 0.3; 0.2 0.2 0.1; 0.1 0.1 0.3],   D = [10; 15; 20]
[x1; x2; x3] = [0.9 −0.4 −0.3; −0.2 0.8 −0.1; −0.1 −0.1 0.7]⁻¹ [10; 15; 20]
[x1; x2; x3] = [1.3580 0.7654 0.6914; 0.3704 1.4815 0.3704; 0.2469 0.3210 1.5802] [10; 15; 20] = [38.89; 33.33; 38.89]
If final demand decreases by 2 and 4 for industries 1 and 2, and increases by 5 for industry 3, calculate the new level of output.
ΔX = [1.3580 0.7654 0.6914; 0.3704 1.4815 0.3704; 0.2469 0.3210 1.5802] [−2; −4; 5] = [−2.32; −4.81; 6.12]
New output levels = [38.89; 33.33; 38.89] + [−2.32; −4.81; 6.12] = [36.57; 28.52; 45.01]
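This matrix arithmetic is easy to reproduce. Here is a minimal Python sketch (assuming NumPy), using the technical coefficients and demand vector of the example above:

import numpy as np

# technical coefficients and final demand from the three-sector example
A = np.array([[0.1, 0.4, 0.3],
              [0.2, 0.2, 0.1],
              [0.1, 0.1, 0.3]])
D = np.array([10.0, 15.0, 20.0])

leontief_inverse = np.linalg.inv(np.eye(3) - A)   # (I - A)^(-1)
X = leontief_inverse @ D                          # total output levels
print(X)                                          # approx [38.89, 33.33, 38.89]

# effect of the change in final demand (-2, -4, +5)
dD = np.array([-2.0, -4.0, 5.0])
print(X + leontief_inverse @ dD)                  # new output levels, approx [36.57, 28.52, 45.01]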
For our example we have:
A = [0.05 0.5; 0.1 0],   D = [8,000; 2,000],   X = [x; y]
We obtain therefore the solution
X = (I − A)−1 D
X = ([1 0; 0 1] − [0.05 0.5; 0.1 0])⁻¹ [8,000; 2,000]
X = [0.95 −0.5; −0.1 1]⁻¹ [8,000; 2,000] = (1/9) [10 5; 1 9.5] [8,000; 2,000]
X = [10,000; 3,000]
If the external demand changes, e.g. ΔD = [7,300; 2,500], then the change in x and y is given by
[Δx; Δy] = (I − A)⁻¹ ΔD = (1/9) [10 5; 1 9.5] [7,300; 2,500] = [9,500; 3,450]
Example Consider an economy with two industries R and S whose current transactions (in units) are given in the following table:

                              R      S      External
Industry R production        50     50          20
Industry S production        60     40         100

Assume the new external demand is 100 units of R and 100 units of S. Determine the new production levels.
Solution: The total production is 120 units for R and 200 units for S. We obtain
X = [120; 200],   B = [20; 100],   B_new = [100; 100],   A = [50/120  50/200; 60/120  40/200]
The new production levels are
X_new = (I − A)⁻¹ B_new = (1/41) [96 30; 60 70] [100; 100] = [307.3; 317.1]
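The same steps (computing A by dividing each column of the transactions table by the total output of the industry that column belongs to, then applying the Leontief inverse to the new demand) can be sketched in Python as follows, assuming NumPy:

import numpy as np

# transactions table: rows = producing industry (R, S), columns = purchasing industry (R, S)
flows = np.array([[50.0, 50.0],
                  [60.0, 40.0]])
external = np.array([20.0, 100.0])

# total output of each industry = intermediate sales + external demand
total_output = flows.sum(axis=1) + external       # [120, 200]

# technical coefficients: a_ij = flow from i to j divided by the total output of j
A = flows / total_output                          # divides column j by total_output[j]

new_demand = np.array([100.0, 100.0])
X_new = np.linalg.inv(np.eye(2) - A) @ new_demand
print(X_new)                                      # approx [307.3, 317.1]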
4.4.2 Formulating Linear Programming Problems
This section presents a number of problems and shows how to model them by an appropriate choice of decision variables, objective, and constraints. Any linear programming problem involving two or more variables may be expressed as follows.
Find the values of the variables x1, x2, . . . , xn which maximize (or minimize) the objective function
Z = c1 x1 + c2 x2 + · · · + cn xn
subject to the constraints
a11 x1 + a12 x2 + · · · + a1n xn ≤ (or = or ≥) b1
a21 x1 + a22 x2 + · · · + a2n xn ≤ (or = or ≥) b2
...
am1 x1 + am2 x2 + · · · + amn xn ≤ (or = or ≥) bm
and the non-negativity restrictions
x1, x2, . . . , xn ≥ 0.
A set of values x1, x2, . . . , xn which satisfies the constraints of a linear programming problem is called a solution. Any solution which also satisfies the non-negativity restrictions of the problem is called a feasible solution. Any feasible solution which maximizes (or minimizes) the objective function of the linear programming problem is called an optimal solution.
♠ Step 5. Find the values of x and y for which the objective function z = ax + by has its maximum or minimum value (as the case may be).
Example
A carpenter makes tables and chairs. Each table can be sold for a profit of 30 and each
chair for a profit of 10. The carpenter can afford to spend up to 40 hours per week
working and takes six hours to make a table and three hours to make a chair. Customer demand requires that he makes at least three times as many chairs as tables.
Tables take up four times as much storage space as chairs and there is room for at
most four tables each week. Formulate this problem as a linear programming problem.
Solution: Let
xT = number of tables made per week
xC = number of chairs made per week
Objective: maximize profit P = 30xT + 10xC.
Constraints:
total work time: 6xT + 3xC ≤ 40
customer demand: xC ≥ 3xT
storage space: (xC/4) + xT ≤ 4
and xT, xC ≥ 0.
The corner at which the work-time and storage constraints both bind is found by solving
(xC/4) + xT = 4 .................. (i)
6xT + 3xC = 40 .................. (ii)
From (i), xT = 4 − xC/4. Substituting into (ii):
6(4 − xC/4) + 3xC = 40
24 − (3xC/2) + 3xC = 40
24 + (3xC/2) = 40
3xC/2 = 40 − 24 = 16
xC = 32/3 = 10.667
Plug the value of xC into equation (i):
(10.667/4) + xT = 4
xT = 4 − (10.667/4) = 1.333
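For reference, the same problem can be handed to a numerical solver. Below is a minimal sketch using SciPy's linprog routine (linprog minimises, so the profit is negated; the particular encoding of the constraints is my own, not part of the original text):

from scipy.optimize import linprog

# decision variables: x = [tables per week, chairs per week]
# maximise 30*xT + 10*xC  <=>  minimise -30*xT - 10*xC
c = [-30.0, -10.0]

# inequality constraints written as A_ub @ x <= b_ub
A_ub = [[6.0, 3.0],     # work time: 6 xT + 3 xC <= 40
        [3.0, -1.0],    # demand: xC >= 3 xT  ->  3 xT - xC <= 0
        [1.0, 0.25]]    # storage: xT + xC/4 <= 4
b_ub = [40.0, 0.0, 4.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)     # approx [1.333, 10.667], matching the graphical solution
print(-res.fun)  # maximum profit, approx 146.67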
Example Maximize p = 3x + y
Subject to
2x − y ≤ 4
2x + 3y ≤ 12
y ≤ 3
x ≥ 0, y ≥ 0

Vertex       Lines through vertex            Value of objective
(3, 2)       2x − y = 4, 2x + 3y = 12        11   (maximum)
(2, 0)       2x − y = 4, y = 0                6
(1.5, 3)     2x + 3y = 12, y = 3              7.5
(0, 3)       y = 3, x = 0                     3
(0, 0)       x = 0, y = 0                     0
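The corner-point table can be generated mechanically: intersect each pair of constraint boundary lines, keep the intersections that satisfy every constraint, and evaluate the objective at each feasible vertex. A rough Python sketch of this idea for the example above (assuming NumPy; the encoding of the constraints is my own):

import numpy as np
from itertools import combinations

# each row [a, b, c] encodes the constraint a*x + b*y <= c
constraints = np.array([[2.0, -1.0, 4.0],    # 2x - y <= 4
                        [2.0, 3.0, 12.0],    # 2x + 3y <= 12
                        [0.0, 1.0, 3.0],     # y <= 3
                        [-1.0, 0.0, 0.0],    # x >= 0
                        [0.0, -1.0, 0.0]])   # y >= 0

def objective(x, y):
    return 3 * x + y                         # p = 3x + y

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    M = np.array([[a1, b1], [a2, b2]])
    if abs(np.linalg.det(M)) < 1e-9:         # parallel boundary lines: no vertex
        continue
    x, y = np.linalg.solve(M, [c1, c2])
    # keep the intersection only if it satisfies every constraint (feasible vertex)
    if np.all(constraints[:, :2] @ np.array([x, y]) <= constraints[:, 2] + 1e-9):
        value = objective(x, y)
        if best is None or value > best[0]:
            best = (value, x, y)

print(best)   # (11.0, 3.0, 2.0): the maximum found in the table above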
Example Maximize p = 4x + 5y
Subject to
2x − 3y ≤ 4
4x + 9y ≤ 2
y ≤ 1/2
x ≥ 0, y ≥ 0

Vertex             Lines through vertex        Value of objective
(0, 0.2222)        4x + 9y = 2, x = 0          1.1111
(0.5, 0)           4x + 9y = 2, y = 0          2   (maximum)
(0, 0)             x = 0, y = 0                0
The standard form of a linear programming problem is
Max c1 x1 + c2 x2 + · · · + cn xn
Subject to
a11 x1 + a12 x2 + · · · + a1n xn = b1
a21 x1 + a22 x2 + · · · + a2n xn = b2
...
am1 x1 + am2 x2 + · · · + amn xn = bm
x1 ≥ 0, x2 ≥ 0, . . . , xn ≥ 0,
where the objective is maximized, the constraints are equalities and the variables are all non-negative.
Note:
Step 5: Choose the biggest negative Δj and mark its column with an upward arrow; this column is called the incoming vector (i.e., 3).
Example Maximize
p = (1/3)x + 4y + 2z + 4w
subject to
2x + 3y + 4z + w ≤ 20
4x + 2y − 4z − w ≥ 10
w − y ≥ 10
x, y, z, w ≥ 0
Optimal Solution: p = 125/3; x = 5, y = 0, z = 0, w = 10
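This optimal solution can be verified with a solver after the two "≥" constraints are rewritten as "≤" constraints (multiplying both sides by −1). A minimal sketch assuming SciPy:

from scipy.optimize import linprog

# maximise p = (1/3)x + 4y + 2z + 4w  <=>  minimise the negated objective
c = [-1.0 / 3.0, -4.0, -2.0, -4.0]

# all constraints rewritten in "<=" form
A_ub = [[2.0, 3.0, 4.0, 1.0],      # 2x + 3y + 4z + w <= 20
        [-4.0, -2.0, 4.0, 1.0],    # 4x + 2y - 4z - w >= 10
        [0.0, 1.0, 0.0, -1.0]]     # w - y >= 10
b_ub = [20.0, -10.0, -10.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print(res.x)      # approx [5, 0, 0, 10]
print(-res.fun)   # approx 41.67 = 125/3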
Linear programming has, however, a number of limitations:
• Linear programming treats all relationships as linear, but this is not true in many real-life situations.
• In many problems the decision variables are meaningful only if they take integer values, which linear programming does not guarantee.
• The problems become complex if the number of variables and constraints is quite large.
• Factors such as uncertainty, weather conditions, etc. are not taken into consideration.
• Parameters are assumed to be constants, but in reality they may not be so.
• LPP deals only with single-objective problems, whereas in real-life situations there may be more than one objective.
4.4.8 Summary Questions
1. Adj A = [1 −7 9; −2 5 −6; 1 −1 0], |A| = −3
A⁻¹ = (1/|A|) Adj A = [−1/3 7/3 −3; 2/3 −5/3 2; −1/3 1/3 0]
2. Adj B = [−3 6 −3; 6 −12 6; −3 6 −3], |B| = 0, so B has no inverse.
3. C = [−2 4 0; 1 −2 0; 0 0 0], |C| = 0; C has no inverse because row 1 is −2 times row 2.
4. Adj D = [7 −13 9; 0 0 0; −7 13 −9], |D| = 0; D has no inverse because columns 1 and 3 are identical.
5. Transpose: Eᵀ = [1 4 5; 3 3 2; 1 4 5]; cofactor matrix of E = [7 0 −7; −13 0 13; 9 0 −9]; |E| = 0
6. trace (F ) = 9
7. [151/6 217/12 28 173/6; 385/4 415/8 126 105; 479/6 637/12 241/3 1141/9]
8. AB = [38/3 39 113/2; 25/3 26 35; 11/2 89/2 173/4]
(2 × 1/3) + (3/2 × 2) + (9 × 1) = 38/3
(2 × 6) + (3/2 × 6) + (9 × 2) = 39
(2 × 5) + (3/2 × 7) + (9 × 4) = 113/2
(1 × 1/3) + (2 × 2) + (4 × 1) = 25/3
(6 × 1/3) + (3/4 × 2) + (2 × 1) = 11/2
(6 × 5) + (3/4 × 7) + (2 × 4) = 173/4
(1 × 5) + (2 × 7) + (4 × 4) = 35
9. AB = [241/15 133/3 92/3; 286/5 50 17; 99/10 67/2 15]
(2 × 1/5) + (6 × 2) + (1/3 × 11) = 241/15
(1 × 1/5) + (1 × 2) + (5 × 11) = 286/5
(2 × 1/5) + (2 × 2) + (1/2 × 11) = 99/10
(2 × 12) + (6 × 3) + (1/3 × 7) = 133/3
(2 × 12) + (2 × 3) + (1/2 × 7) = 67/2
(2 × 3) + (2 × 4) + (1/2 × 2) = 15
(1 × 3) + (1 × 4) + (5 × 2) = 17
(2 × 3) + (6 × 4) + (1/3 × 2) = 92/3
10. |A| = −45
C13 = (−1)^(1+3) |5 6; 7 8| = −2      C21 = (−1)^(2+1) |1 7; 8 2| = 54
C22 = (−1)^(2+2) |2 7; 7 2| = −45     C23 = (−1)^(2+3) |2 1; 7 8| = −9
C31 = (−1)^(3+1) |1 7; 6 5| = −37     C32 = (−1)^(3+2) |2 7; 5 5| = 25
C33 = (−1)^(3+3) |2 1; 5 6| = 7
C = [−28 25 −2; 54 −45 −9; −37 25 7]
Cᵀ = [−28 54 −37; 25 −45 25; −2 −9 7]
A⁻¹ = Cᵀ / det A = [28/45 −6/5 37/45; −5/9 1 −5/9; 2/45 1/5 −7/45]
REFERENCES
1. Chiang, Alpha C. (1984), Fundamental Methods of Mathematical Economics, McGraw-Hill, Inc.
2. Bhardwaj, R. S. (2005), Mathematics for Economics and Business, Excel Books.
3. Sydsaeter, Knut and Hammond, Peter, Essential Mathematics for Economic Analysis, Ethiopian Edition.
4. Sydsaeter, Knut and Hammond, Peter, Further Mathematics for Economists, Ethiopian Edition.
5. Yamane, T. (2002), Mathematics for Economists: An Elementary Survey, 2nd ed., Prentice-Hall.
6. Dowling, E. T. (1980), Mathematics for Economists (Schaum's Outline Series), McGraw-Hill.
7. Kapoor, V. K. (2002), Introductory Mathematics for Business and Economics, Sultan Sons: New Delhi.
8. Monga, G. S. (1972), Mathematics and Statistics for Economics, Vikas Publishing House.
9. Bowen, E. K., et al. (1987), Mathematics with Applications in Management and Economics, 6th ed., Irwin Inc.