Linear Algebra
Course Outline
Chapter One: Linear Algebra
I. Introduction
• Linear algebra:
• Permits expression of a complicated system of equations in a succinct, simplified way
• Provides a shorthand method to determine whether a solution exists before it is attempted
• Furnishes the means of solving the equation system
• Can be applied to systems of linear equations (many economic relationships can be approximated by linear equations; if not, they can often be converted to linear form)
I. Introduction: Definition and Terms
• A matrix is a rectangular array of numbers, parameters, or variables, each of which
has a carefully ordered place within the matrix.
• The numbers (parameters, or variables) are referred to as elements of the matrix. The
numbers in a horizontal line are called rows; the numbers in a vertical line are called
columns.
• The number of rows r and columns c defines the dimensions of the matrix (r×c), which is read "r by c". The row number always precedes the column number. In a square matrix, the number of rows equals the number of columns (that is, r = c).
• If the matrix is composed of a single column, such that its dimensions are r×1, it is a column vector; if the matrix is a single row, with dimensions 1×c, it is a row vector.
• A matrix which converts the rows of A to columns and the columns of A to rows is called the transpose of A and is designated by A′ or Aᵀ.
I. Introduction: Definition and Terms
• The transpose of A is obtained by interchanging its rows and columns: if A is r×c, then Aᵀ is c×r, and the element in row i, column j of A appears in row j, column i of Aᵀ.
I. Introduction: Addition and Subtraction of Matrices
I. Introduction: Scalar Multiplication
I. Introduction: Vector Multiplication
I. Introduction: Matrix Multiplication
• The matrices must be conformable: the number of columns in the first (lead) matrix must equal the number of rows in the second (lag) matrix
• Each row vector in the lead matrix is then multiplied by each column vector of the lag matrix
• The row-column products, called inner products or dot products, are then used as elements in the formation of the product matrix, such that each element cij of the product matrix C is a scalar derived from the multiplication of the ith row of the lead matrix and the jth column of the lag matrix.
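These row-by-column rules can be sketched in NumPy (the matrices below are illustrative, not taken from the slides):

```python
import numpy as np

# Lead matrix A is 2x3, lag matrix B is 3x2: conformable because
# A has 3 columns and B has 3 rows. The product C = AB is 2x2.
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[7, 8],
              [9, 10],
              [11, 12]])

# Each element c_ij is the dot product of row i of A and column j of B.
C = A @ B
print(C)               # [[ 58  64] [139 154]]
print(A[0] @ B[:, 0])  # c_11 = 1*7 + 2*9 + 3*11 = 58
```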
I. Commutative, Associative, and Distributive Laws in Matrix Algebra
I. Identity and Null Matrices
• An identity matrix (I) is a square matrix which has 1 for every element on the principal diagonal (from upper left to lower right) and 0 everywhere else.
• When a subscript is used, as in Iₙ, n denotes the dimensions of the matrix (n×n).
• The identity matrix is like the number 1 in algebra, since multiplication of a matrix by an identity matrix leaves the original matrix unchanged (AI = IA = A)
• Multiplication of an identity matrix by itself leaves the identity matrix unchanged: I×I = I² = I
• Any matrix for which A = Aᵀ is a symmetric matrix
• A symmetric matrix for which A×A = A is an idempotent matrix. The identity matrix is symmetric and idempotent.
I. Identity and Null Matrices
• A null matrix is composed of all 0s and can be of any dimension
• It is not necessarily a square matrix
• Addition or subtraction of a null matrix leaves the original matrix unchanged
• Multiplication by a null matrix produces a null matrix.
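A quick NumPy check of the identity- and null-matrix properties above (the matrix A is illustrative):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
I = np.eye(2)          # 2x2 identity matrix
Z = np.zeros((2, 2))   # a conformable null matrix (need not be square in general)

# AI = IA = A: the identity behaves like the number 1
assert np.allclose(A @ I, A) and np.allclose(I @ A, A)

# I is symmetric (I = I^T) and idempotent (I x I = I)
assert np.allclose(I, I.T) and np.allclose(I @ I, I)

# Adding a null matrix leaves A unchanged; multiplying by one yields a null matrix
assert np.allclose(A + Z, A)
assert np.allclose(A @ Z, Z)
print("all identity/null properties verified")
```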
Matrix Expression of a System of Linear Equations
• Matrix algebra allows concise expression of a system of linear equations
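A minimal sketch of this idea in NumPy, using a hypothetical two-equation system (the coefficients are illustrative):

```python
import numpy as np

# Hypothetical system (illustrative numbers):
#   2x + 3y = 12
#   4x -  y = 10
# The whole system is written compactly as Ax = b.
A = np.array([[2.0, 3.0],
              [4.0, -1.0]])
b = np.array([12.0, 10.0])

x = np.linalg.solve(A, b)
print(x)   # [3. 2.]
```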
Third order determinant
Minors and Cofactors
• The elements of a matrix remaining after the deletion process described above
form a sub-determinant of the matrix called a minor.
• The minor |Mij| is the determinant of a submatrix formed by deleting ith row
and jth column of the matrix.
• Here |M11| is the minor of a11, |M12| is the minor of a12, and |M13| is the minor of a13.
• Thus, the determinant of the previous 3×3 matrix equals |A| = a11|M11| − a12|M12| + a13|M13|
Minors and Cofactors
• A cofactor is a minor with a prescribed sign.
• The rule for the sign of the cofactor is |Cij| = (−1)^(i+j) |Mij|: positive when i + j is even, negative when i + j is odd
Minors and Cofactors
• Example: the cofactors |C11|, |C12|, and |C13| for the matrix
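The minor-and-cofactor definitions above can be sketched in NumPy; the 3×3 matrix here is hypothetical, chosen only for illustration:

```python
import numpy as np

def minor(A, i, j):
    """Determinant of the submatrix formed by deleting row i and column j."""
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return np.linalg.det(sub)

def cofactor(A, i, j):
    """Minor with the prescribed sign (-1)^(i+j)."""
    return (-1) ** (i + j) * minor(A, i, j)

# Hypothetical 3x3 matrix (illustrative values only)
A = np.array([[2.0, 3.0, 1.0],
              [4.0, 1.0, 2.0],
              [5.0, 3.0, 4.0]])

# Cofactors along the first row (0-based indices i=0, j=0..2)
print([round(cofactor(A, 0, j)) for j in range(3)])   # [-2, -6, 7]
```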
Laplace Expansion and Higher-order Determinants
• Laplace expansion is a method for evaluating determinants in terms of
cofactors.
• It thus simplifies matters by permitting higher-order determinants to
be established in terms of lower-order determinants.
• Laplace expansion of a third-order determinant can be expressed as |A| = a11|C11| + a12|C12| + a13|C13|, an expansion along the first row
Laplace Expansion and Higher-order Determinants
• Laplace expansion permits evaluation of a determinant along any row or
column.
• Selection of a row or column with more zeros than others simplifies evaluation
of the determinant by eliminating terms.
• Laplace expansion also serves as the basis for evaluating determinants of orders
higher than three.
Laplace Expansion and Higher-order Determinants
• Example
Laplace Expansion and Higher-order Determinants
• Laplace expansion for a fourth-order determinant follows the same pattern: |A| = a11|C11| + a12|C12| + a13|C13| + a14|C14|, where each cofactor is itself a third-order determinant evaluated by a further expansion
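The expansion can be sketched as a short recursive routine in plain Python; the 4×4 matrix is illustrative:

```python
def det_laplace(M):
    """Evaluate a determinant by Laplace expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j
        sub = [row[:j] + row[j+1:] for row in M[1:]]
        # Cofactor sign (-1)^(0+j)
        total += (-1) ** j * M[0][j] * det_laplace(sub)
    return total

# Hypothetical 4x4 matrix; each 3x3 minor is in turn expanded recursively
A = [[1, 2, 0, 1],
     [0, 3, 1, 2],
     [2, 0, 1, 1],
     [1, 1, 2, 0]]
print(det_laplace(A))   # -14
```

Choosing a row or column with many zeros (here the third entry of the first row) drops whole terms from the sum, exactly as the slides note.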
Solving Simultaneous Equations Using Matrices
CRAMER’S RULE FOR MATRIX SOLUTIONS
• Cramer’s rule provides a simplified method of solving
a system of equations through the use of determinants.
• It states that x̄ᵢ = |Aᵢ| / |A|,
• where xi is the ith unknown variable in a series of equations,
|A| is the determinant of the coefficient matrix, and |𝐴𝑖 | is
the determinant of a special matrix formed from the original
coefficient matrix by replacing the column of coefficients of xi
with the column vector of constants.
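Cramer's rule can be sketched in NumPy as follows; the two-equation system is hypothetical:

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b by Cramer's rule: x_i = |A_i| / |A|."""
    detA = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b          # replace column i with the constants
        x[i] = np.linalg.det(Ai) / detA
    return x

# Hypothetical system (coefficients are illustrative):
#   2x + 3y = 12
#   4x -  y = 10
A = np.array([[2.0, 3.0], [4.0, -1.0]])
b = np.array([12.0, 10.0])
print(cramer(A, b))   # [3. 2.]
```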
Gauss Elimination Method
• The idea is to add multiples of one equation to the others in order
to eliminate a variable and to continue this process until one
variable is left.
• Once this final variable is determined, its value is substituted back
into the other equations in order to evaluate the remaining
unknowns.
• This method, characterized by step‐by‐step elimination of the
variables, is called Gaussian elimination.
Gauss Elimination Method
• Since the coefficient matrix has been transformed into echelon form, the "forward" part of Gaussian elimination is complete.
• What remains now is to use the third row to evaluate the third unknown, then to
back‐substitute into the second row to evaluate the second unknown, and,
finally, to back‐substitute into the first row to evaluate the first unknown
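The forward-elimination and back-substitution steps can be sketched in NumPy, assuming nonzero pivots throughout (the 3×3 system is illustrative):

```python
import numpy as np

def gauss_solve(A, b):
    """Forward elimination to echelon form, then back-substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward part: eliminate entries below each pivot
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]      # multiplier (assumes nonzero pivot)
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Backward part: solve from the last row upward
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

# Hypothetical 3x3 system (illustrative values only)
A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
b = np.array([4.0, 10.0, 24.0])
print(gauss_solve(A, b))   # [1. 1. 1.]
```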
Gauss Jordan Method
• The Gauss-Jordan method reduces (or eliminates entirely) the computations involved in back-substitution by performing additional row operations to transform the matrix from echelon to reduced echelon form.
• A matrix is in reduced echelon form when, in addition to being in echelon form, each column that contains a non-zero entry (usually made to be 1) has zeros not just below that entry but also above it.
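A sketch of Gauss-Jordan reduction on an augmented matrix, assuming nonzero pivots at each stage (the system is a hypothetical 3×3 example):

```python
import numpy as np

def gauss_jordan(A, b):
    """Reduce the augmented matrix [A | b] to reduced echelon form,
    so the solution can be read off with no back-substitution."""
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    n = A.shape[0]
    for k in range(n):
        M[k] /= M[k, k]                 # scale pivot row so the pivot is 1
        for i in range(n):
            if i != k:                  # zero entries above AND below the pivot
                M[i] -= M[i, k] * M[k]
    return M[:, -1]                     # last column now holds the solution

# Hypothetical system (illustrative values only)
A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
b = np.array([4.0, 10.0, 24.0])
print(gauss_jordan(A, b))   # [1. 1. 1.]
```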
Homogeneous System of Linear Equations
• The constant term in every equation is equal to zero (0); no equation in such a system has a constant term.
• A system with n unknowns
• In the Jacobian matrix, the elements of each row are the partial derivatives of one function yi with respect to each of the independent variables x1, x2, x3, and the elements of each column are the partial derivatives of each of the functions y1, y2, y3 with respect to one of the independent variables xj.
• Example
Higher Order HESSIANS
• If |H̄3| < 0, |H̄4| > 0, etc. (alternating in sign), the bordered Hessian is negative definite, and a negative definite bordered Hessian always meets the sufficient condition for a relative maximum.
Eigenvalues and Eigenvectors
• An eigenvalue is a number λ which, when subtracted from the diagonal of a square matrix A, makes the resulting matrix A − λI singular.
• We say that a matrix is singular when its inverse does not exist. This means the determinant is 0 and the rank is less than n.
• Consider the following matrix
Eigenvalues and Eigenvectors
• The resulting matrix is singular, as all its columns are exactly the same, and its rank is thus less than 3.
• Therefore, the number 2 is an eigenvalue of this matrix.
Eigenvalues and Eigenvectors
• Example :
Eigenvalues and Eigenvectors
Eigenvalues and Eigenvectors
Eigenvalues and Eigenvectors
Eigenvectors
• Eigenvectors are vectors that correspond to a certain eigenvalue and can be found by solving the equation (A − λI)x = 0
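A minimal NumPy sketch of both ideas, using a hypothetical 2×2 matrix:

```python
import numpy as np

# Hypothetical symmetric matrix (illustrative values only)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Eigenvalues lam make A - lam*I singular: det(A - lam*I) = 0
lam, V = np.linalg.eig(A)
print(np.sort(lam))   # [1. 3.]

# Check singularity: det(A - lam*I) vanishes at each eigenvalue
for l in lam:
    assert abs(np.linalg.det(A - l * np.eye(2))) < 1e-9

# Each column of V is an eigenvector solving (A - lam*I)x = 0, i.e. Ax = lam*x
for l, x in zip(lam, V.T):
    assert np.allclose(A @ x, l * x)
```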
Economic Applications
INPUT-OUTPUT MODEL(LEONTIEF MODEL)
• Input-output analysis is a type of economic model that
describes the interdependent relationships between
industrial sectors within an economy.
• It shows how the outputs of one sector flow into another
sector as inputs.
• I-O analysis is a technique invented by Professor Wassily W. Leontief in 1951; he won the Nobel Prize in Economics in 1973.
INPUT-OUTPUT MODEL: Features of the Model
INPUT-OUTPUT MODEL: Assumptions
INPUT-OUTPUT MODEL
• Suppose an economic system consists of four sectors: three inter-industry sectors (agriculture, manufacturing, service) and one final demand sector, the household sector.
• The table below provides a simplified example of such an economy.
INPUT-OUTPUT MODEL
• The data in any row show the distribution of output to
various sectors and uses
• The data in any column indicate the source of inputs needed
for production.
• For example, reading across the first row, the agricultural output is valued at 300 birr altogether. Of this total, 100 birr worth of goods goes to final consumption; the remaining output from agriculture goes as inputs: 50 birr to itself, 150 birr to manufacturing, and none to the service sector.
INPUT-OUTPUT MODEL
• An I-O table is based upon a “transaction matrix”.
• A transaction matrix shows how the total output of one industry is distributed
to all other industries as inputs and to the final demand sector.
• Now we can convert the above input output table in the form of “transaction
matrix” as given below.
INPUT-OUTPUT MODEL
INPUT-OUTPUT MODEL
• We can add the entries in the transaction matrix row wise to get the
output of each industry. Thus,
INPUT-OUTPUT MODEL
• Now we can construct a table indicating the input requirement for the production of one birr worth of output. For example, to produce 300 birr worth of agricultural goods, agriculture requires 50 birr worth of agricultural goods, 100 birr worth of manufacturing, and 150 birr worth of labor.
• Because of the assumption of fixed proportions, producing 1 birr worth of agricultural output requires 50/300 = 1/6 birr of agricultural inputs, 100/300 = 1/3 birr of manufacturing inputs, and 150/300 = 1/2 birr of labor.
INPUT-OUTPUT MODEL
• Applying the same analysis to the other sectors, i.e. manufacturing and service, we find what worth of agricultural, manufacturing, and service inputs is required to produce 1 birr worth of manufacturing and service output.
INPUT-OUTPUT MODEL
• For our four-sector economy, the technical coefficients would be:
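The mechanics of the model can be sketched in NumPy. Only the agriculture column (1/6 and 1/3) follows from the table above; the remaining coefficients and the final-demand vector are hypothetical placeholders:

```python
import numpy as np

# Technical-coefficient matrix A for three sectors
# (agriculture, manufacturing, service); a_ij is the birr of input from
# sector i needed per birr of sector j's output. First column from the
# table (1/6, 1/3); the rest are illustrative assumptions.
A = np.array([[1/6, 0.2, 0.0],
              [1/3, 0.1, 0.2],
              [0.0, 0.3, 0.1]])

# Hypothetical final demand (household sector), in birr
d = np.array([100.0, 200.0, 150.0])

# Total output X must satisfy X = AX + d, so X = (I - A)^(-1) d
X = np.linalg.solve(np.eye(3) - A, d)
print(np.round(X, 1))

# Check: output exactly covers inter-industry use plus final demand
assert np.allclose(X, A @ X + d)
```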
INPUT-OUTPUT MODEL: Determination Of Employment Levels
Linear Programming
• Definition of LP
• Formulate LPP
• Solve LPP using Graphic Method
• Solve LPP using Simplex Method
• Exercise: application of LP in different areas of economics
Definition of LP
• LP is a composite of two words: Linear and
Programming
• Linear means all mathematical functions in the model are required to be linear functions
• Programming refers to the planning of activities
• LP is a tool or technique by which one can select the optimal solution to problems like distribution of commodities, allocation of machines, scheduling of transport services, etc.
Steps in LP
1. Identify the Objective function as a linear function of its variables and
state all the limitations on resources as linear equations or inequalities
2. Use mathematical techniques to find all possible sets of values of the
variables or unknowns, satisfying the constraints
3. Select the particular set of values of variables obtained in step two that
leads to the achievement of the objective function
▪ The result of step one is called a linear programming problem.
▪ The set of solutions in step two is known as the set of feasible solutions
▪ The solution finally selected in step three is called the optimum (best) solution of the LPP.
Steps in LP
• Therefore, a typical LPP has three parts:
• The objective function
• The constraints
• The non-negative principle(constraint)
The Objective Function
• The objective lies at one of two extremes: either maximizing something (profit, sales, equity, efficiency, production, ...) or minimizing something (cost, time, pollution, etc.)
• The objective function can be stated as: Maximize (or Minimize) Z = c1x1 + c2x2 + ... + cnxn
The Constraints
• Constraints are situations under which the objective function should be
maximized or minimized
• It answers questions like how much money, time, resource, technical
capacity, etc are available to achieve the objective function
• A constraint is formulated in linear form and is expressed using <, ≤, ≥, =, or >.
• It can be formulated as ai1x1 + ai2x2 + ... + ainxn (≤, =, ≥) bi for each resource i
The Non-negativity Constraint
• The non-negativity constraint is a principle which prohibits any one of the variables from being negative
• In economics, generally, and in linear programming specifically, negative
numbers are rarely used. This is because using negative numbers may be
misleading
• For example, what does it mean to produce -20 quintals of teff or -25
quintals of maize?
• Any producer cannot produce negative output from use of inputs
• Generally, the decision (choice) variables should be non-negative.
• In linear programming, the non-negativity of the variables is written as x1, x2, ..., xn ≥ 0
Assumptions of Linear Programming
• Proportionality: holds in both the objective function and the constraints
• If you want to double the output, you simply need to double the resources
• Additivity:
The total quantity of each resource used equals the sum of the amounts used by the individual activities. In other words, there is no interaction among the activities in their use of resources.
• Divisibility: This assumption implies that the solution need not be in
whole numbers(integers). Instead, they are divisible and may take any
fractional value.
Assumptions of Linear Programming
• Certainty:
We assume that conditions of certainty exist. That is, coefficients in the objective function and constraints are completely known (deterministic) and do not change during the period being studied. For example, profit per unit of product and the amount of resources available are fixed during the planning period.
• Finiteness: an optimum solution cannot be computed in situations where there is an infinite number of alternative activities and resource restrictions
• Optimality: In LP, the maximum profit solution or the minimum
cost solution always occur at the corner point of the set of feasible
solutions
Formulation of Linear Programming Problems
• Identify the objective function
• Identify the variables
• State the feasible alternatives. Non-negative constraint of
variables should be imposed
• Express the constraints in terms of the variables
• Express the objective function quantitatively in terms of
the variables
Formulation of Linear Programming Problems
• The standard form of the model (LPP) can be stated as: Maximize Z = c1x1 + c2x2 + ... + cnxn, subject to ai1x1 + ai2x2 + ... + ainxn ≤ bi (i = 1, ..., m) and x1, x2, ..., xn ≥ 0
Formulation of Linear Programming Problems
• The function being maximized is called the objective function
and the restrictions are called the constraints.
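Under this standard form, a small LPP can be handed to SciPy's `linprog`; the numbers below are illustrative, and since `linprog` minimizes, the objective is negated to maximize:

```python
from scipy.optimize import linprog

# Hypothetical LPP (illustrative numbers):
#   Maximize Z = 3x1 + 5x2
#   subject to  x1        <= 4
#               2x2       <= 12
#               3x1 + 2x2 <= 18
#               x1, x2    >= 0
# linprog minimizes, so maximize Z by minimizing -Z.
c = [-3, -5]
A_ub = [[1, 0],
        [0, 2],
        [3, 2]]
b_ub = [4, 12, 18]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)   # bounds default to x >= 0
print(res.x, -res.fun)   # optimum at x1 = 2, x2 = 6 with Z = 36
```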
Methods of Solving Linear Programming Problems
•The graphic and
•The simplex (algebraic) methods
Methods of Solving LPP: Graphic Method
• An LPP involving only two variables can be solved easily by the graphic method.
• It entails the following procedures
• Graph the constraints
• Identify the solution space
• Locate the solution points
• Identify the corner points of the solution space. This is because the optimal solution of a linear programming problem is always found at a corner point of the solution space under normal circumstances
• Determine the optimal solution: the solution that optimizes the objective function.
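The corner-point logic of the graphic method can be sketched numerically: enumerate the intersections of constraint boundaries, keep the feasible ones, and evaluate the objective at each. The LPP below is hypothetical:

```python
import numpy as np
from itertools import combinations

# Hypothetical LPP (illustrative): maximize Z = 3x1 + 5x2 subject to
#   x1 <= 4,  2x2 <= 12,  3x1 + 2x2 <= 18,  x1, x2 >= 0
# All constraints written as a_i . x <= b_i (non-negativity included)
A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0],
              [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 12.0, 18.0, 0.0, 0.0])
c = np.array([3.0, 5.0])

corners = []
for i, j in combinations(range(len(b)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue                      # parallel boundaries: no intersection
    x = np.linalg.solve(M, b[[i, j]])
    if np.all(A @ x <= b + 1e-9):     # keep only feasible intersections
        corners.append(x)

best = max(corners, key=lambda x: c @ x)
print(best, c @ best)    # optimum at corner (2, 6) with Z = 36
```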
Methods of Solving LPP: Graphic Method
• After graphing the constraints, the next step is to locate the corner points, since the optimal solution is obtained at one of the corner-point (basic feasible) solutions
Methods of Solving LPP: Graphic Method
• The optimum solution is multiple: it lies on the segment between (3, 6) and (6, 3), meaning all points on this segment are optimal. For example, the points (4, 5) and (5, 4) belong to this segment, and the profit at each is 990. Therefore, the linear programming problem has multiple optimal solutions, i.e. more than one optimal solution.
Methods of Solving LPP: Graphic Method
Unbounded solution: this is a situation that occurs whenever a maximization LPP has an unbounded solution space. No finite solution can be determined, because one or more of the variables can be increased indefinitely without violating feasibility.
This situation mostly occurs when the assumption of finiteness is violated.
Example: solve the following linear programming problem using the graphic method.
Methods of Solving LPP: Graphic Method
(Graph: unbounded solution space)
Procedures of Simplex Method
From the above table we can read the initial solution of the problem as S1 = 48, S2 = 15, S3 = 10, and all other variables zero, with zero profit.
Procedures of Simplex Method
• Step 3: Test the solution for optimality. The test can be formalized in terms of the signs of the elements in the index row. The initial solution is optimal if all elements of the index row are non-negative. If at least one element of the index row is negative, the given solution is not optimal, and we proceed to the next step.
Procedures of Simplex Method: Minimization
• In the minimization case, even though the basic process of the simplex method is the same, some adjustments are needed.
• One of these adjustments is the use of artificial variables
• Let's discuss artificial variables before going through minimization
Procedures of Simplex Method: Minimization
• The steps which differ in minimization from maximization are the test for optimality and the means of selecting the entering variable.
• For minimization, the test for optimality is carried out until all elements of the net evaluation (index) row are non-positive.
• That means a given solution is not optimal if at least one element of the index row is positive.
• The variable with the largest positive value is selected as the entering variable, and the column containing this variable is the pivot column.
• There is no difference from maximization, however, in selecting the leaving variable and the pivot element.
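As a numerical check, a small minimization LPP can be handed to SciPy's `linprog`, which handles the artificial-variable bookkeeping internally; the numbers are illustrative, and each ≥ constraint is negated into ≤ form:

```python
from scipy.optimize import linprog

# Hypothetical minimization LPP (illustrative numbers):
#   Minimize C = 2x1 + 3x2
#   subject to  x1 + x2 >= 10
#               x1      >= 2
#               x1, x2  >= 0
# linprog takes <= constraints, so each >= row is multiplied by -1.
c = [2, 3]
A_ub = [[-1, -1],
        [-1, 0]]
b_ub = [-10, -2]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)   # bounds default to x >= 0
print(res.x, res.fun)   # optimum at x1 = 10, x2 = 0 with C = 20
```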