CHAPTER 10
LU Decomposition and Matrix Inversion
This chapter deals with a class of elimination methods called LU decomposition techniques.
The primary appeal of LU decomposition is that the time-consuming elimination step can be
formulated so that it involves only operations on the matrix of coefficients, [A].
Thus, it is well suited for those situations where many right-hand-side vectors {B} must be
evaluated for a single value of [A]. Although there are a variety of ways in which this is done,
we will focus on showing how the Gauss elimination method can be implemented as an LU
decomposition.
One motive for introducing LU decomposition is that it provides an efficient means to compute
the matrix inverse.
The inverse has a number of valuable applications in engineering practice. It also provides a
means for evaluating system condition.
10.1 LU DECOMPOSITION
As described in Chap. 9, Gauss elimination is designed to solve systems of linear algebraic
equations,
[A]{X} = {B} (10.1)
Although it certainly represents a sound way to solve such systems, it becomes inefficient when
solving equations with the same coefficients [A], but with different right-hand-side constants
(the b's).
Recall that Gauss elimination involves two steps: forward elimination and back substitution (Fig.
9.3).
Of these, the forward-elimination step comprises the bulk of the computational effort (recall
Table 9.1). This is particularly true for large systems of equations.
LU decomposition methods separate the time-consuming elimination of the matrix [A] from the
manipulations of the right-hand side {B}.
Thus, once [A] has been "decomposed," multiple right-hand-side vectors can be evaluated in an
efficient manner.
Interestingly, Gauss elimination itself can be expressed as an LU decomposition.
Before showing how this can be done, let us first provide a mathematical overview of the
decomposition strategy.
10.1.1 Overview of LU Decomposition
Just as was the case with Gauss elimination, LU decomposition requires pivoting to avoid
division by zero.
However, to simplify the following description, we will defer the issue of pivoting until after the
fundamental approach is elaborated.
In addition, the following explanation is limited to a set of three simultaneous equations.
The results can be directly extended to n-dimensional systems.
Equation (10.1) can be rearranged to give
[A]{X}-{B} = 0 (10.2)
Suppose that Eq. (10.2) could be expressed as an upper triangular system:
[u11 u12 u13] {x1}   {d1}
[0   u22 u23] {x2} = {d2}     (10.3)
[0   0   u33] {x3}   {d3}
Recognize that this is similar to the manipulation that occurs in the first step of Gauss
elimination.
That is, elimination is used to reduce the system to upper triangular form.
Equation (10.3) can also be expressed in matrix notation and rearranged to give
[U]{X}-{D} = 0 (10.4)
Now, assume that there is a lower diagonal matrix with 1's on the diagonal,

      [1   0   0]
[L] = [l21 1   0]     (10.5)
      [l31 l32 1]
that has the property that when Eq. (10.4) is premultiplied by it, Eq. (10.2) is the result.
That is,
[L]{[U]{X} - { D } } = [A]{X} - {B} (10.6)
If this equation holds, it follows from the rules for matrix multiplication that
[L][U] = [A] (10.7)
and
[L]{D} = {B} (10.8)
A two-step strategy (see Fig. 10.1) for obtaining solutions can be based on Eqs. (10.4), (10.7),
and (10.8):
1. LU decomposition step. [A] is factored or "decomposed" into lower [L] and upper [U]
triangular matrices.
2. Substitution step. [L] and [U] are used to determine a solution {X} for a right-hand side {B}.
This step itself consists of two steps.
First, Eq. (10.8) is used to generate an intermediate vector {D} by forward substitution.
Then, the result is substituted into Eq. (10.4), which can be solved by back substitution for {X}.
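The two substitution phases can be sketched in a few lines of code. The following is a minimal illustration in Python (the function names and the list-of-lists matrix representation are my own, not the book's); it assumes [L] is unit lower triangular and [U] is upper triangular with nonzero diagonal elements:

```python
def forward_substitution(L, b):
    # Solve [L]{D} = {B} (Eq. 10.8), where L has 1's on its diagonal,
    # sweeping from the first row downward.
    n = len(b)
    d = [0.0] * n
    for i in range(n):
        d[i] = b[i] - sum(L[i][j] * d[j] for j in range(i))
    return d

def back_substitution(U, d):
    # Solve [U]{X} = {D} (Eq. 10.4), sweeping from the last row upward.
    n = len(d)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (d[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x
```

Once [A] has been decomposed, each new right-hand side {B} costs only these two substitution sweeps, which is the efficiency argument made above.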
Now, let us show how Gauss elimination can be implemented in this way.
FIGURE 10.1
The steps in LU decomposition.
10.1.2 LU Decomposition Version of Gauss Elimination
Although it might appear at face value to be unrelated to LU decomposition, Gauss elimination
can be used to decompose [A] into [L] and [U].
This can be easily seen for [U], which is a direct product of the forward elimination.
Recall that the forward-elimination step is intended to reduce the original coefficient matrix [A]
to the form
[a11 a12  a13  ]
[0   a'22 a'23 ]     (10.9)
[0   0    a''33]
which is in the desired upper triangular format.
Though it might not be as apparent, the matrix [L] is also produced during this step.
This can be readily illustrated for a three-equation system,

[a11 a12 a13] {x1}   {b1}
[a21 a22 a23] {x2} = {b2}
[a31 a32 a33] {x3}   {b3}

The first step in Gauss elimination is to multiply row 1 by the factor [recall Eq. (9.13)]

f21 = a21/a11

and subtract the result from the second row to eliminate a21. Similarly, row 1 is multiplied by

f31 = a31/a11

and the result subtracted from the third row to eliminate a31. The final step is to multiply the
modified second row by

f32 = a'32/a'22

and subtract the result from the third row to eliminate a'32.
Now suppose that we merely perform all these manipulations on the matrix [A].
Clearly, if we do not want to change the equation, we also have to do the same to the right-hand
side {B}.
But there is absolutely no reason that we have to perform the manipulations simultaneously.
Thus, we could save the f's and manipulate {B} later.
Where do we store the factors f21, f31, and f32? Recall that the whole idea behind the
elimination was to create zeros in a21, a31, and a32. Thus, we can store f21 in a21, f31 in a31,
and f32 in a32.
After elimination, the [A] matrix can therefore be written as

[a11 a12  a13  ]
[f21 a'22 a'23 ]     (10.10)
[f31 f32  a''33]

This matrix, in fact, represents an efficient storage of the LU decomposition of [A],

[A] → [L][U]     (10.11)

where

      [a11 a12  a13  ]
[U] = [0   a'22 a'23 ]
      [0   0    a''33]

and

      [1   0   0]
[L] = [f21 1   0]
      [f31 f32 1]
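The compact storage scheme, in which each factor overwrites the element it zeroes, can be sketched as a short routine. This is a minimal illustration without pivoting (the function name `lu_decompose` and the list-of-lists representation are my own, not the book's algorithm listing):

```python
def lu_decompose(A):
    # Gauss elimination performed on [A] alone; each elimination factor f
    # is stored in the position it zeroes, so [L] (below the diagonal)
    # and [U] (on and above it) share one matrix, as in Eq. (10.10).
    n = len(A)
    M = [row[:] for row in A]  # work on a copy of [A]
    for k in range(n - 1):
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]      # elimination factor
            M[i][k] = f                # store f where the zero would go
            for j in range(k + 1, n):
                M[i][j] -= f * M[k][j]
    return M
```

Applied to the coefficient matrix of Example 9.5, this reproduces the factors 0.0333333, 0.100000, and -0.0271300 below the diagonal.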
The following example confirms that [A] = [L][U] .
EXAMPLE 10.1 LU Decomposition with Gauss Elimination
Problem Statement. Derive an LU decomposition based on the Gauss elimination performed in
Example 9.5.
Solution.
In Example 9.5, we used Gauss elimination to solve the system

3x1 - 0.1x2 - 0.2x3 = 7.85
0.1x1 + 7x2 - 0.3x3 = -19.3
0.3x1 - 0.2x2 + 10x3 = 71.4

with the coefficient matrix

[A] = [3   -0.1 -0.2]
      [0.1 7    -0.3]
      [0.3 -0.2 10  ]

After forward elimination, the following upper triangular matrix was obtained:

[U] = [3 -0.1    -0.2     ]
      [0 7.00333 -0.293333]
      [0 0       10.0120  ]

The elements a21 and a31 were eliminated by using the factors

f21 = 0.1/3 = 0.0333333          f31 = 0.3/3 = 0.100000

and the element a'32 was eliminated by using the factor

f32 = -0.19/7.00333 = -0.0271300

Thus, the lower triangular matrix is

[L] = [1         0          0]
      [0.0333333 1          0]
      [0.100000  -0.0271300 1]

Consequently, the LU decomposition is

[A] = [L][U] = [1         0          0] [3 -0.1    -0.2     ]
               [0.0333333 1          0] [0 7.00333 -0.293333]
               [0.100000  -0.0271300 1] [0 0       10.0120  ]

This result can be verified by performing the multiplication [L][U], which gives back [A] with
minor discrepancies due to roundoff.
EXAMPLE 10.2 The Substitution Steps

Problem Statement. Complete the problem initiated in Example 10.1 by generating the final
solution with forward and back substitution.

Solution. As stated above, the intent of forward substitution is to impose the elimination
manipulations on the right-hand-side vector {B}. For the system being solved,

{B}^T = [7.85  -19.3  71.4]

The forward-substitution phase is implemented by applying Eq. (10.8), [L]{D} = {B}:

[1         0          0] {d1}   {  7.85}
[0.0333333 1          0] {d2} = { -19.3}
[0.100000  -0.0271300 1] {d3}   {  71.4}

which can be solved by forward substitution for

d1 = 7.85
d2 = -19.3 - 0.0333333(7.85) = -19.5617
d3 = 71.4 - 0.100000(7.85) + 0.0271300(-19.5617) = 70.0843

Thus,

{D}^T = [7.85  -19.5617  70.0843]

This result can then be substituted into Eq. (10.4), [U]{X} = {D}:

[3 -0.1    -0.2     ] {x1}   {   7.85 }
[0 7.00333 -0.293333] {x2} = {-19.5617}
[0 0       10.0120  ] {x3}   { 70.0843}

which can be solved by back substitution for the final solution:

x3 = 70.0843/10.0120 = 7.00003
x2 = [-19.5617 + 0.293333(7.00003)]/7.00333 = -2.50000
x1 = [7.85 + 0.1(-2.50000) + 0.2(7.00003)]/3 = 3.00000
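The arithmetic of this example is easy to check numerically. A sketch in plain Python, using the [L] and [U] from Example 10.1 (the variable names are my own):

```python
L = [[1.0,       0.0,        0.0],
     [0.0333333, 1.0,        0.0],
     [0.100000, -0.0271300,  1.0]]
U = [[3.0, -0.1,     -0.2],
     [0.0,  7.00333, -0.293333],
     [0.0,  0.0,     10.0120]]
b = [7.85, -19.3, 71.4]
n = 3

# Forward substitution: [L]{D} = {B}
d = []
for i in range(n):
    d.append(b[i] - sum(L[i][j] * d[j] for j in range(i)))

# Back substitution: [U]{X} = {D}
x = [0.0] * n
for i in range(n - 1, -1, -1):
    x[i] = (d[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]

# d is approximately [7.85, -19.5617, 70.0843];
# x is approximately [3, -2.5, 7.00003]
```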
10.2 T H E MATRIX INVERSE
In our discussion of matrix operations (Sec. PT3.2.2), we introduced the notion that if a matrix
[A] is square, there is another matrix, [A]^-1, called the inverse of [A], for which

[A][A]^-1 = [A]^-1[A] = [I]
Now we will focus on how the inverse can be computed numerically. Then we will explore how
it can be used for engineering analysis.
10.2.1 Calculating the Inverse
The inverse can be computed in a column-by-column fashion by generating solutions with unit
vectors as the right-hand-side constants. For example, if the right-hand-side constant has a 1 in
the first position and zeros elsewhere,

{b} = {1}
      {0}
      {0}
the resulting solution will be the first column of the matrix inverse. Similarly, if a unit vector
with a 1 at the second row is used,

{b} = {0}
      {1}
      {0}
the result will be the second column of the matrix inverse.
The best way to implement such a calculation is with the LU decomposition algorithm described
at the beginning of this chapter. Recall that one of the great strengths of LU decomposition is
that it provides a very efficient means to evaluate multiple right-hand-side vectors. Thus, it is
ideal for evaluating the multiple unit vectors needed to compute the inverse.
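The column-by-column scheme can be sketched as a single routine that decomposes once and then reuses the stored factors for each unit vector. This is a minimal illustration without pivoting (the function name `lu_inverse` is my own):

```python
def lu_inverse(A):
    n = len(A)
    M = [row[:] for row in A]
    # Decompose once: each elimination factor is stored in the slot it
    # zeroes, so M holds both [L] and [U] (as in Eq. 10.10).
    for k in range(n - 1):
        for i in range(k + 1, n):
            M[i][k] /= M[k][k]
            for j in range(k + 1, n):
                M[i][j] -= M[i][k] * M[k][j]
    inv = [[0.0] * n for _ in range(n)]
    # Solve [A]{x} = {e_col} for each unit vector; each solution vector
    # is one column of the inverse.
    for col in range(n):
        d = [0.0] * n
        for i in range(n):                      # forward substitution
            e = 1.0 if i == col else 0.0
            d[i] = e - sum(M[i][j] * d[j] for j in range(i))
        for i in range(n - 1, -1, -1):          # back substitution
            s = sum(M[i][j] * inv[j][col] for j in range(i + 1, n))
            inv[i][col] = (d[i] - s) / M[i][i]
    return inv
```

The decomposition is done only once; each additional column costs only the two cheap substitution sweeps, which is why this approach is preferred over redoing the full elimination for every unit vector.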
EXAMPLE 10.3 Matrix Inversion

Problem Statement. Employ LU decomposition to determine the matrix inverse for the system
from Example 10.2,
[A] = [3   -0.1 -0.2]
      [0.1 7    -0.3]
      [0.3 -0.2 10  ]
Recall that the decomposition resulted in the following lower and upper triangular matrices:
[L] = [1         0          0]     [U] = [3 -0.1    -0.2     ]
      [0.0333333 1          0]           [0 7.00333 -0.293333]
      [0.100000  -0.0271300 1]           [0 0       10.0120  ]
Solution. The first column of the matrix inverse can be determined by performing the
forward-substitution solution procedure with a unit vector (with 1 in the first row) as the
right-hand-side vector. Thus, Eq. (10.8), [L]{D} = {B}, becomes

[1         0          0] {d1}   {1}
[0.0333333 1          0] {d2} = {0}
[0.100000  -0.0271300 1] {d3}   {0}

which can be solved by forward substitution for

d1 = 1
d2 = 0 - 0.0333333(1) = -0.0333333
d3 = 0 - 0.100000(1) + 0.0271300(-0.0333333) = -0.1009

Thus,

{D}^T = [1  -0.0333333  -0.1009]

This vector can then be used as the right-hand side of Eq. (10.4),

[3 -0.1    -0.2     ] {x1}   { 1        }
[0 7.00333 -0.293333] {x2} = {-0.0333333}
[0 0       10.0120  ] {x3}   {-0.1009   }

which can be solved by back substitution for

x3 = -0.1009/10.0120 = -0.01008
x2 = [-0.0333333 + 0.293333(-0.01008)]/7.00333 = -0.00518
x1 = [1 + 0.1(-0.00518) + 0.2(-0.01008)]/3 = 0.33249

Thus, {X}^T = [0.33249  -0.00518  -0.01008] is the first column of the matrix inverse:

[A]^-1 = [ 0.33249  0  0]
         [-0.00518  0  0]
         [-0.01008  0  0]
To determine the second column, Eq. (10.8) is formulated as

[1         0          0] {d1}   {0}
[0.0333333 1          0] {d2} = {1}
[0.100000  -0.0271300 1] {d3}   {0}

This can be solved by forward substitution for {D}^T = [0  1  0.0271300], and the results can be
used with Eq. (10.4) to determine by back substitution {X}^T = [0.004944  0.142903  0.002710],
which gives the second column of the matrix inverse:

[A]^-1 = [ 0.33249  0.004944  0]
         [-0.00518  0.142903  0]
         [-0.01008  0.002710  0]
Finally, the same procedures can be implemented with {B}^T = [0  0  1] to solve for
{X}^T = [0.006798  0.004183  0.09988], which yields the final result:

[A]^-1 = [ 0.33249  0.004944  0.006798]
         [-0.00518  0.142903  0.004183]
         [-0.01008  0.002710  0.09988 ]
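As a check, the computed inverse can be multiplied by [A]; according to the defining relation [A][A]^-1 = [I], the product should reproduce the identity matrix to within roundoff. A quick sketch in Python using the values above:

```python
A = [[3.0, -0.1, -0.2],
     [0.1,  7.0, -0.3],
     [0.3, -0.2, 10.0]]
Ainv = [[ 0.33249, 0.004944, 0.006798],
        [-0.00518, 0.142903, 0.004183],
        [-0.01008, 0.002710, 0.09988]]

# Form the product [A][A]^-1 element by element.
prod = [[sum(A[i][k] * Ainv[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]

# Each entry of prod differs from the identity matrix by less than
# about 1e-4, confirming the five-digit inverse.
```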