Lectia I - Algebra
Lesson I
Determinants, matrices, systems of linear equations
Determinants, minors, Laplace's rule, Cramer's rule.
The vector space, over the commutative field K, designated by M_{m,n}(K) (resp. M_n(K) in the case m = n), of the matrices of type (m, n) with elements from K (in the following, K may sometimes be only a commutative ring with unit).
The multiplication of matrices, invertible matrices, the rank of a matrix.
Systems of linear equations (Kronecker-Capelli theorem, Rouché's theorem).
Systems of linear homogeneous equations.
Completions for matrices
Binet-Cauchy formula. Let A := [a_ik] be from M_{m,n}(K), B := [b_ik] from M_{n,m}(K) and m ≤ n. Then

det AB = Σ_{1 ≤ k_1 < k_2 < … < k_m ≤ n} A(1 2 … m | k_1 k_2 … k_m) B(k_1 k_2 … k_m | 1 2 … m).

To shorten the writing one uses the notation: M(i_1 … i_p | j_1 … j_p) designates the minor of the matrix M formed with the lines i_1, …, i_p and the columns j_1, …, j_p.
C := AB = [c_ij], c_ij = Σ_{r=1}^{n} a_ir b_rj, so the j-th column of C is Σ_{r=1}^{n} b_rj K_r, K_r the r-th column of A. Following the columns of det C, and observing the common factor at each column, one can write

det C = Σ_{r_1=1}^{n} … Σ_{r_m=1}^{n} A(1 … m | r_1 … r_m) b_{r_1 1} … b_{r_m m}.

In the sum there are n^m terms, and those for which in the sequence r_1, …, r_m two or more terms are equal are equal to 0, so det C is the sum of the A_n^m terms A(1 … m | r_1 … r_m) b_{r_1 1} … b_{r_m m} at which r_1, …, r_m are pairwise different. One distributes them in C_n^m groups, taking in a group those which differ only by the succession of the indices r_1, …, r_m; consequently

det C = Σ_{1 ≤ k_1 < … < k_m ≤ n} T_{k_1, …, k_m},   T_{k_1, …, k_m} := Σ_σ A(1 … m | i_1 … i_m) b_{i_1 1} … b_{i_m m},

where σ is the permutation (k_1 … k_m ; i_1 … i_m). But A(1 … m | i_1 … i_m) = ε(σ) A(1 … m | k_1 … k_m), ε(σ) the signature of σ (= (-1)^t, t the number of inversions presented by σ, equally the parity of the number of transpositions in which σ is decomposed), hence

T_{k_1, …, k_m} = A(1 … m | k_1 … k_m) Σ_σ ε(σ) b_{i_1 1} … b_{i_m m}.

On the other hand, Σ_σ ε(σ) b_{i_1 1} … b_{i_m m} = B(k_1 … k_m | 1 … m), so T_{k_1, …, k_m} = A(1 … m | k_1 … k_m) B(k_1 … k_m | 1 … m), whence the conclusion.
Example. A = [ 0 1 2 3 ; 2 1 0 1 ] (of type (2, 4)), B^t = [ 1 0 2 3 ; 2 1 0 1 ], det AB = ? The terms of the sum are given by the couples (1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4):

det AB = A(1 2 | 1 2) B(1 2 | 1 2) + A(1 2 | 1 3) B(1 3 | 1 2) + A(1 2 | 1 4) B(1 4 | 1 2) + A(1 2 | 2 3) B(2 3 | 1 2) + A(1 2 | 2 4) B(2 4 | 1 2) + A(1 2 | 3 4) B(3 4 | 1 2) = (-2)(1) + (-4)(-4) + (-6)(-5) + (-2)(-2) + (-2)(-3) + (2)(2) = 58.
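The Binet-Cauchy sum can be checked mechanically. Below is a minimal Python sketch (pure integer arithmetic, helper names are mine) that compares the sum of products of 2x2 minors with det AB for a 2x4 / 4x2 pair of matrices.

```python
from itertools import combinations

def det2(m):
    # determinant of a 2x2 matrix [[a, b], [c, d]]
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul(A, B):
    # plain matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def binet_cauchy_2x4(A, B):
    # sum of A(1 2 | k1 k2) * B(k1 k2 | 1 2) over the couples k1 < k2
    s = 0
    for k1, k2 in combinations(range(4), 2):
        minor_A = [[A[0][k1], A[0][k2]], [A[1][k1], A[1][k2]]]
        minor_B = [[B[k1][0], B[k1][1]], [B[k2][0], B[k2][1]]]
        s += det2(minor_A) * det2(minor_B)
    return s

A = [[0, 1, 2, 3], [2, 1, 0, 1]]        # type (2, 4)
B = [[1, 2], [0, 1], [2, 0], [3, 1]]    # type (4, 2)
```

Both `det2(matmul(A, B))` and `binet_cauchy_2x4(A, B)` give the same value, as the formula asserts.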
Corollary. A, B ∈ M_n(K) ⇒ det AB = det A · det B.
A^t. Let A := [a_ik] be from M_n(K). A^t := [b_ik], where b_ik = a_ki, i, k = 1, …, n (the transpose matrix of A; when A = A^t, A is named symmetrical).

A*. Let A := [a_ik] be a matrix from M_n(K). One notes A* := [b_ik], where b_ik = A_ki, i, k = 1, …, n, A_ki the cofactor of the element a_ki. We have A A* = A* A = (det A) E, E the unit matrix; therefore, when A is nonsingular, A^{-1} = (1/det A) A*.
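The relation A^{-1} = (1/det A) A* can be illustrated directly in code. A hedged Python sketch with exact fractions follows (helper names are mine; cofactor expansion is exponential, so this is only meant for small matrices):

```python
from fractions import Fraction

def det(M):
    # Laplace expansion along the first row (fine for small n)
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def adjugate(M):
    # entry (i, k) of A* is A_ki, the cofactor of a_ki
    n = len(M)
    def cof(i, j):
        minor = [row[:j] + row[j + 1:] for r, row in enumerate(M) if r != i]
        return (-1) ** (i + j) * det(minor)
    return [[cof(k, i) for k in range(n)] for i in range(n)]

def inverse(M):
    d = Fraction(det(M))
    return [[Fraction(x) / d for x in row] for row in adjugate(M)]
```

For instance, `inverse([[2, 1], [3, 2]])` returns the matrix [[2, -1], [-3, 2]], since det = 1.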
Complex matrix (real matrix). All its elements are complex numbers (resp. real numbers).
Triangular matrix. The matrix A := [a_ik] from M_n(K) is by definition upper triangular (resp. lower triangular) when i > k ⇒ a_ik = 0 (resp. i < k ⇒ a_ik = 0). If A is both upper and lower triangular, it is named diagonal matrix and is designated by diag {a_11, a_22, …, a_nn}.
Nilpotent matrix. A from M_n(K) is called nilpotent if there exists p in N so that A^p = 0.
f(A). Let A be a matrix from M_n(K) and f a polynomial from K[X], f(X) = Σ_{k=0}^{p} a_k X^{p-k}. By definition f(A) = a_0 A^p + a_1 A^{p-1} + … + a_{p-1} A + a_p E, E the unit matrix. If f = g + h resp. f = g h, g and h from K[X], then f(A) = g(A) + h(A) resp. f(A) = g(A) h(A).
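The definition of f(A) can be evaluated with a Horner-style scheme; here is a small Python sketch (pure stdlib, names are mine):

```python
def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def poly_at_matrix(coeffs, A):
    # coeffs = [a_0, a_1, ..., a_p] for f(X) = a_0 X^p + ... + a_p;
    # Horner: f(A) = (...((a_0 E) A + a_1 E) A + ...) A + a_p E
    n = len(A)
    E = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    R = [[coeffs[0] * E[i][j] for j in range(n)] for i in range(n)]
    for a in coeffs[1:]:
        R = mat_mul(R, A)
        R = [[R[i][j] + a * E[i][j] for j in range(n)] for i in range(n)]
    return R
```

As a check, for f(X) = X^2 - 2X + 1 = (X - 1)^2 and A = [[1, 1], [0, 1]], f(A) is the zero matrix, since A - E is nilpotent.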
Identifications. x_i, i = 1, …, n, being elements from K, one passes without warning and without changing the notation from the vector x := (x_1, …, x_n) in K^n to the row matrix x = [x_1 … x_n], or to the column matrix x = [x_1 … x_n]^t, or to the row or column of a matrix with the elements in the sequence x_1, …, x_n.
Partition in blocks. Let A := [a_ik] be a matrix from M_{m,n}(K). Fix the following packages L_1, L_2, …, L_p of consecutive rows: L_1 with the first i_1 rows, L_2 with the next i_2 rows, L_3 with the next i_3 rows, …, L_p with the last i_p rows. Remark that i_1 + i_2 + … + i_p = m. Fix also the packages C_1, C_2, …, C_q of consecutive columns: C_1 with the first k_1 columns, C_2 with the next k_2 columns, …, C_q with the last k_q columns. One observes that k_1 + k_2 + … + k_q = n. The rows from L_1 and the columns from C_1 determine the sub-matrix A_11, the rows from L_1 and the columns from C_2 the sub-matrix A_12, …, the rows from L_1 and the columns from C_q the sub-matrix A_1q. Passing to L_2 one obtains in the same manner the sub-matrices A_21, …, A_2q, …, and passing to L_p the sub-matrices A_p1, …, A_pq. The A_uv, u = 1, …, p, v = 1, …, q, are named blocks; by definition they achieve a partition of A.
For instance, [ 4 2 0 0 ; 4 1 2 0 ] = [A_11 A_12], where A_11 = [ 4 2 0 ; 4 1 2 ], A_12 = [ 0 ; 0 ].
The matrix A from M_n(K) is by definition quasi-diagonal if it admits a partition in blocks A_uv, u, v = 1, …, m, m ≥ 2, such that u ≠ v ⇒ A_uv = 0 and the matrices A_uu, u = 1, …, m, are square. In this case one uses the notation A = diag {A_11, …, A_mm}. We have det A = Π_{i=1}^{m} det A_ii.
The multiplication rule can be generalized to matrices partitioned in blocks.
1.1 Theorem. Let A := [a_ik] be from M_{m,l}(K) with the blocks A_uv of type (m_u, l_v), u = 1, …, p, v = 1, …, q, and B := [b_ik] from M_{l,n}(K) with the blocks B_vw of type (l_v, n_w), v = 1, …, q, w = 1, …, r. Then the product matrix AB is partitioned in the blocks

C_uw := Σ_{v=1}^{q} A_uv B_vw,   u = 1, …, p, w = 1, …, r.
C_uw is of type (m_u, n_w) because each term of the sum has this type. Consider the matrix C := [C_uw] formed with these blocks (p block rows, r block columns); it is correctly formed, as C_u1, C_u2, …, C_ur have m_u rows and C_1w, C_2w, …, C_pw have n_w columns. It remains to show that C = AB, i.e., if C := [c_ij], then c_ij = Σ_{s=1}^{l} a_is b_sj, i = 1, …, m, j = 1, …, n. Indeed, let c_ij be in the position (k, t) of the block C_uw; then c_ij is the sum of the elements in the position (k, t) in the matrices A_uv B_vw, v = 1, …, q. But the element in the position (k, t) of the matrix A_uv B_vw is the sum of the products of the l_v elements from the row k of A_uv with the elements of the column t of B_vw. The elements of the row k of A_uv are identical with certain elements of the row i of A, namely with the elements a_is where s varies in accordance with the inequalities: s ≤ l_1 if v = 1; l_1 + … + l_{v-1} < s ≤ l_1 + … + l_v if v > 1. Reasoning likewise about the elements of the column t of B_vw, one obtains

c_ij = Σ_{s=1}^{l_1} a_is b_sj + Σ_{s=l_1+1}^{l_1+l_2} a_is b_sj + … = Σ_{s=1}^{l} a_is b_sj   (l_1 + l_2 + … + l_q = l).
Example. A = diag {A_1, …, A_m} ⇒ A^k = diag {A_1^k, …, A_m^k}, k in N.
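Theorem 1.1 can be exercised numerically. The sketch below (pure Python, helper names are mine) multiplies two matrices blockwise, with the row packages, common-dimension packages and column packages given as lists of index lists, and compares with the direct product.

```python
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(len(X[0]))] for i in range(len(X))]

def block(M, rows, cols):
    # sub-matrix of M with the given row and column indices
    return [[M[i][j] for j in cols] for i in rows]

def blockwise_product(A, B, m_parts, l_parts, n_parts):
    # m_parts / l_parts / n_parts: index packages partitioning the rows of A,
    # the common dimension, and the columns of B
    C = [[0] * len(B[0]) for _ in range(len(A))]
    for rows in m_parts:
        for cols in n_parts:
            # C_uw = sum over v of A_uv B_vw
            acc = [[0] * len(cols) for _ in range(len(rows))]
            for mid in l_parts:
                acc = mat_add(acc, mat_mul(block(A, rows, mid), block(B, mid, cols)))
            for a, i in enumerate(rows):
                for b, j in enumerate(cols):
                    C[i][j] = acc[a][b]
    return C
```

The blockwise result coincides with the ordinary product for any admissible partition of the common dimension.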
The reduction method. It gives an excellent algorithm to calculate the inverse of a matrix, an algorithm which gets blocked when the matrix has no inverse. The method is based on the remark:

If C [A E] = [E B], A, B, C from M_n(K), E the unit matrix, then B = C = A^{-1}.

Indeed, [E B] = C [A E] = [CA CE] = [CA C], hence CA = E, i.e. C = A^{-1}, and B = C.
Consider the matrix of the type (n, 2n)

U := [A E] =
[ a_11 a_12 … a_1n | 1 0 … 0 ]
[ a_21 a_22 … a_2n | 0 1 … 0 ]
[ …                |  …      ]
[ a_n1 a_n2 … a_nn | 0 0 … 1 ]
(the vertical line helps in the development of the algorithm; at the end of it, on the right, there stands the sought matrix A^{-1}). One makes elementary transformations on the matrix U, only on its rows (multiplication of a row by λ ≠ 0 from K, addition to a row of another row multiplied by λ from K, interchange of two rows; each of these reverts to the multiplication of U on the left by an invertible matrix of type (n, n), see 4.3^7, 4.3^8, 4.3^9), until one arrives at a matrix of the form [E B]: T_p T_{p-1} … T_2 T_1 [A E] = [E B]. According to the last affirmation, B = A^{-1}. Here is the unfolding of the calculus. Suppose a_11 ≠ 0; put it in a border and it becomes the pivot (if a_11 = 0 and even the whole first column is 0, the algorithm is blocked, A is not invertible; if a_l1 ≠ 0, one interchanges row 1 with row l). The following matrix of the algorithm is U_1 := [b_ik] of type (n, 2n), where the first row is the pivot row divided by the pivot and the others are obtained from the row of the pivot with the rectangle rule, namely

row 2: b_21 = 0, b_22 = a_22 - λ a_12, b_23 = a_23 - λ a_13 etc., where λ = a_21 / a_11
row 3: b_31 = 0, b_32 = a_32 - λ a_12, b_33 = a_33 - λ a_13 etc., λ = a_31 / a_11
. . .
row n: b_n1 = 0, b_n2 = a_n2 - λ a_12, b_n3 = a_n3 - λ a_13 etc., λ = a_n1 / a_11

(on each row λ is constant; visualize the rule!). So the first column of U_1 has the elements 1, 0, …, 0. The algorithm continues. If b_j2 = 0 for every j, 2 ≤ j ≤ n — blockade, A is not invertible; if b_l2 ≠ 0, 2 < l ≤ n, one interchanges row 2 with row l, and so one can assume b_22 ≠ 0. One puts this in a square, getting a pivot, and passes to the following matrix U_2 := [c_ik], which has the rows

row 2: the row of the pivot divided by it
row 1: c_11 = 1, c_12 = 0, c_13 = b_13 - λ b_23, c_14 = b_14 - λ b_24 etc., λ = b_12 / b_22
. . .
row n: c_n1 = 0, c_n2 = 0, c_n3 = b_n3 - λ b_23, c_n4 = b_n4 - λ b_24 etc., λ = b_n2 / b_22

(remark that up to the column which corresponds to the pivot the rows do not suffer any change, and that above and below the pivot element 1 there are only zeros) and so on. If A is invertible, U_n = [E A^{-1}].
Example. A =
[ 2  1 0 0 ]
[ 3  2 0 0 ]
[ 1  1 3 4 ]
[ 2 -1 2 3 ],   A^{-1} = ?

U =
[ 2  1 0 0 | 1 0 0 0 ]
[ 3  2 0 0 | 0 1 0 0 ]
[ 1  1 3 4 | 0 0 1 0 ]
[ 2 -1 2 3 | 0 0 0 1 ],

U_1 =
[ 1 1/2 0 0 |  1/2 0 0 0 ]
[ 0 1/2 0 0 | -3/2 1 0 0 ]
[ 0 1/2 3 4 | -1/2 0 1 0 ]
[ 0 -2  2 3 |  -1  0 0 1 ],

U_2 =
[ 1 0 0 0 |  2 -1 0 0 ]
[ 0 1 0 0 | -3  2 0 0 ]
[ 0 0 3 4 |  1 -1 1 0 ]
[ 0 0 2 3 | -7  4 0 1 ],

U_3 =
[ 1 0 0  0  |   2     -1    0    0 ]
[ 0 1 0  0  |  -3      2    0    0 ]
[ 0 0 1 4/3 |  1/3   -1/3  1/3   0 ]
[ 0 0 0 1/3 | -23/3  14/3 -2/3   1 ],

U_4 =
[ 1 0 0 0 |   2  -1  0  0 ]
[ 0 1 0 0 |  -3   2  0  0 ]
[ 0 0 1 0 |  31 -19  3 -4 ]
[ 0 0 0 1 | -23  14 -2  3 ].

A^{-1} is on the right of the vertical line.
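A hedged Python sketch of the reduction method over exact fractions: it builds [A E], reduces by rows, and signals the blockade (no inverse) via an exception. Helper names are mine.

```python
from fractions import Fraction

def reduction_inverse(A):
    # Gauss-Jordan on U = [A | E]; returns A^{-1}, raises if A is singular
    n = len(A)
    U = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        # find a pivot in column k (interchange of rows if needed)
        p = next((i for i in range(k, n) if U[i][k] != 0), None)
        if p is None:
            raise ValueError("blockade: A is not invertible")
        U[k], U[p] = U[p], U[k]
        piv = U[k][k]
        U[k] = [x / piv for x in U[k]]      # pivot row divided by the pivot
        for i in range(n):
            if i != k and U[i][k] != 0:     # rectangle rule on the other rows
                lam = U[i][k]
                U[i] = [x - lam * y for x, y in zip(U[i], U[k])]
    return [row[n:] for row in U]
```

Applied to the 4x4 matrix of the worked example above (with the signs as I read them), it reproduces the inverse standing on the right of the vertical line.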
Completions for systems of linear equations
Consider the system of m linear equations with n unknowns (1) Σ_{s=1}^{n} a_is x_s = b_i, i = 1, …, m, a_ij, b_i ∈ K, K commutative field. If A := [a_ij], b := [b_1 … b_m]^t, and one adopts the matrix notation x := [x_1 … x_n]^t for the unknowns, then (1) may be transcribed in the matrix form Ax = b. The set of the solutions of the linear and homogeneous system Ax = 0 is a vector subspace of K^n; one notes it Ker A, the nucleus (kernel) of A.
1.2 dim Ker A = n - rk A.
The proof takes into account the solution of the homogeneous linear system. Note r := rk A. One can admit the partition A = [ A_11 A_12 ; A_21 A_22 ] with det A_11 as principal minor (A_11 of type (r, r)), and one considers e_j the vector from K^{n-r} which in the position j has 1 and in the others 0, φ_j the vector from K^n equal to [ -A_11^{-1} A_12 e_j ; e_j ], j = 1, …, n - r, and E the vector subspace of K^n generated by φ_1, …, φ_{n-r}. These being linearly independent, dim E = n - r, and we prove that E = Ker A.

E ⊂ Ker A. A φ_j = [ A_11 A_12 ; A_21 A_22 ][ -A_11^{-1} A_12 e_j ; e_j ] = [ -A_11 A_11^{-1} A_12 e_j + A_12 e_j ; -A_21 A_11^{-1} A_12 e_j + A_22 e_j ] = [ 0 ; 0 ] = 0, because A_11 x' + A_12 e_j = 0 ⇒ A_21 x' + A_22 e_j = 0 (the last m - r rows of A being linear combinations of the first r) and, as x' = -A_11^{-1} A_12 e_j, it results -A_21 A_11^{-1} A_12 e_j + A_22 e_j = 0.

Ker A ⊂ E. Indeed, if (2) Ax = 0, let x^t = [x_1^t x_2^t] with x_2 ∈ K^{n-r}; hence x_2 = Σ_{j=1}^{n-r} λ_j e_j, λ_j ∈ K, and, as x_1 = -A_11^{-1} A_12 x_2, we have x = Σ_{j=1}^{n-r} λ_j φ_j.
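The proof of 1.2 is constructive: assuming the principal minor sits in the leading (r, r) corner, the vectors φ_j can be computed directly. A small Python sketch with exact fractions (helper names are mine):

```python
from fractions import Fraction

def solve_square(M, rhs):
    # Gaussian elimination for an invertible square system M z = rhs
    n = len(M)
    aug = [[Fraction(x) for x in row] + [Fraction(v)] for row, v in zip(M, rhs)]
    for k in range(n):
        p = next(i for i in range(k, n) if aug[i][k] != 0)
        aug[k], aug[p] = aug[p], aug[k]
        piv = aug[k][k]
        aug[k] = [x / piv for x in aug[k]]
        for i in range(n):
            if i != k:
                lam = aug[i][k]
                aug[i] = [x - lam * y for x, y in zip(aug[i], aug[k])]
    return [row[n] for row in aug]

def kernel_basis(A, r):
    # basis phi_j = (-A11^{-1} A12 e_j ; e_j), j = 1, ..., n - r,
    # assuming the leading r x r minor of A is nonsingular
    n = len(A[0])
    A11 = [row[:r] for row in A[:r]]
    basis = []
    for j in range(n - r):
        rhs = [A[i][r + j] for i in range(r)]   # A12 e_j
        top = solve_square(A11, rhs)            # A11^{-1} A12 e_j
        basis.append([-x for x in top] + [Fraction(int(k == j)) for k in range(n - r)])
    return basis
```

For A = [[1, 2, 3], [2, 4, 6]] (rank 1) this produces the two kernel vectors (-2, 1, 0) and (-3, 0, 1), in agreement with dim Ker A = n - rk A = 2.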
Gauss method. Consider the system of n linear equations with n unknowns

(3)  a_11 x_1 + a_12 x_2 + … + a_1n x_n = a_{1 n+1}
     a_21 x_1 + a_22 x_2 + … + a_2n x_n = a_{2 n+1}
     . . .
     a_n1 x_1 + a_n2 x_2 + … + a_nn x_n = a_{n n+1},   a_ik ∈ K, K commutative field.
If a_11 ≠ 0 (principal coefficient), dividing the first equation from (3) by a_11, one obtains (4) x_1 + b_2 x_2 + … + b_n x_n = b_{n+1}. One multiplies (4) successively by -a_21, -a_31, …, -a_n1 and adds it respectively to the second, the third, …, the n-th equation from (3); it results

(5)  a_22^(1) x_2 + a_23^(1) x_3 + … + a_2n^(1) x_n = a_{2 n+1}^(1)
     a_32^(1) x_2 + a_33^(1) x_3 + … + a_3n^(1) x_n = a_{3 n+1}^(1)
     . . .
     a_n2^(1) x_2 + a_n3^(1) x_3 + … + a_nn^(1) x_n = a_{n n+1}^(1).
In (5) the unknown x_1 was eliminated, and (4) + (5) is equivalent with (3). If a_22^(1) ≠ 0 (principal coefficient), one repeats the procedure to eliminate the unknown x_2; one obtains

(6)  x_2 + b_3^(1) x_3 + … + b_n^(1) x_n = b_{n+1}^(1),

(7)  a_33^(2) x_3 + a_34^(2) x_4 + … + a_3n^(2) x_n = a_{3 n+1}^(2)
     a_43^(2) x_3 + a_44^(2) x_4 + … + a_4n^(2) x_n = a_{4 n+1}^(2)
     . . .
     a_n3^(2) x_3 + a_n4^(2) x_4 + … + a_nn^(2) x_n = a_{n n+1}^(2)
and (4) + (6) + (7) is equivalent with (3) etc., and, in the hypothesis that all the principal coefficients are ≠ 0, one obtains the system, equivalent to (3),

(8)  x_1 + b_2 x_2 + b_3 x_3 + … + b_n x_n = b_{n+1}
     x_2 + b_3^(1) x_3 + … + b_n^(1) x_n = b_{n+1}^(1)
     . . .
     x_{n-1} + b_n^(n-2) x_n = b_{n+1}^(n-2)
     x_n = b_{n+1}^(n-1).

(8) is solved bottom up.
Let B be the matrix of the coefficients of the unknowns from (8). Taking into account the operations effectuated, we have 1 = det B = det A / (a_11 a_22^(1) a_33^(2) … a_nn^(n-1)), consequently the Gauss method can be used in the calculus of determinants. It can be used also to invert a matrix. Indeed, if A := [a_ik] and A^{-1} := [x_ik], then A A^{-1} = E ⟺ Σ_{s=1}^{n} a_is x_sk = δ_ik, i, k = 1, …, n, i.e. n systems of linear equations with n^2 unknowns in all, to which the Gauss method can be applied simultaneously, all the systems having the same matrix A.
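A minimal Python sketch of the Gauss method: forward elimination to the form (8), back substitution, and the determinant recovered as the product of the principal coefficients. No row interchanges are done, matching the hypothesis that all principal coefficients are ≠ 0 (names are mine):

```python
from fractions import Fraction

def gauss_solve(A, b):
    # returns (solution, det A); assumes every principal coefficient != 0
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(v)] for row, v in zip(A, b)]
    det = Fraction(1)
    for k in range(n):
        det *= M[k][k]                      # product of the principal coefficients
        piv = M[k][k]
        M[k] = [x / piv for x in M[k]]      # divide row k by its pivot
        for i in range(k + 1, n):
            lam = M[i][k]
            M[i] = [x - lam * y for x, y in zip(M[i], M[k])]
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):          # system (8) solved bottom up
        x[i] = M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))
    return x, det
```

For instance, for 2x_1 + x_2 = 3, x_1 + 3x_2 = 4 the solution is (1, 1) and det A = 5.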
The square root method. Consider the system of linear equations in matrix writing (9) Ax = b, A ∈ M_n(C) symmetrical, b ∈ C^n. Put A in the form A = T^t T, with T = [t_ij] upper triangular. The elements t_ij are obtained from

t_11 = √a_11;   t_1j = a_1j / t_11, j = 2, …, n;
t_ii = √(a_ii - Σ_{s=1}^{i-1} t_si^2), i = 2, …, n;
t_ij = (1/t_ii)(a_ij - Σ_{s=1}^{i-1} t_si t_sj), 1 < i < j ≤ n;   t_ij = 0, 1 ≤ j < i ≤ n.

If t_ii ≠ 0 for i = 1, …, n, (9) has a unique solution because det A = det T^t · det T = (det T)^2. (9) is split in (10) T^t y = b (change of unknown y = Tx), which is solved top down, and (11) Tx = y, which is solved bottom up.
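A hedged Python sketch of the square root method, using complex arithmetic (`cmath.sqrt`) so that it also covers symmetric complex matrices as in (9); for a real positive definite A it reduces to the usual Cholesky-type factorization A = T^t T. Names are mine.

```python
import cmath

def square_root_method(A, b):
    # A symmetric (a_ik = a_ki); factor A = T^t T, T upper triangular,
    # then solve T^t y = b top down and T x = y bottom up
    n = len(A)
    T = [[0j] * n for _ in range(n)]
    for i in range(n):
        s = sum(T[k][i] ** 2 for k in range(i))
        T[i][i] = cmath.sqrt(A[i][i] - s)
        for j in range(i + 1, n):
            s = sum(T[k][i] * T[k][j] for k in range(i))
            T[i][j] = (A[i][j] - s) / T[i][i]
    y = [0j] * n
    for i in range(n):                      # (10): T^t y = b, top down
        y[i] = (b[i] - sum(T[k][i] * y[k] for k in range(i))) / T[i][i]
    x = [0j] * n
    for i in range(n - 1, -1, -1):          # (11): T x = y, bottom up
        x[i] = (y[i] - sum(T[i][j] * x[j] for j in range(i + 1, n))) / T[i][i]
    return x
```

For A = [[4, 2], [2, 5]], b = [8, 9] one gets T = [[2, 1], [0, 2]] and the solution x = (1.375, 1.25).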
Cholescky method. Consider the system of linear equations in matrix writing (12) Ax = b, A ∈ M_n(K), b ∈ K^n, det A ≠ 0. Put A in the form A = BC,

B =
[ b_11  0    …  0    ]
[ b_21  b_22 …  0    ]
[ …                  ]
[ b_n1  b_n2 …  b_nn ],

C =
[ 1  c_12 …  c_1n ]
[ 0  1    …  c_2n ]
[ …               ]
[ 0  0    …  1    ],

where, obviously, b_i1 = a_i1, i = 1, …, n; b_ij = a_ij - Σ_{s=1}^{j-1} b_is c_sj, 1 < j ≤ i ≤ n; c_1j = a_1j / b_11, j = 1, …, n; c_ij = (1/b_ii)(a_ij - Σ_{s=1}^{i-1} b_is c_sj), 1 < i < j ≤ n.
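The formulas for b_ij and c_ij translate directly into code. A short Python sketch over exact fractions (this computes the factorization A = BC, B lower triangular, C upper triangular with unit diagonal; names are mine):

```python
from fractions import Fraction

def bc_factor(A):
    # A = B C, B lower triangular, C upper triangular with unit diagonal
    n = len(A)
    B = [[Fraction(0)] * n for _ in range(n)]
    C = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j, n):       # column j of B
            B[i][j] = Fraction(A[i][j]) - sum(B[i][s] * C[s][j] for s in range(j))
        for k in range(j + 1, n):   # row j of C
            C[j][k] = (Fraction(A[j][k])
                       - sum(B[j][s] * C[s][k] for s in range(j))) / B[j][j]
    return B, C
```

For A = [[2, 1], [4, 5]] this gives B = [[2, 0], [4, 3]] and C = [[1, 1/2], [0, 1]], and indeed BC = A.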
Consider now the system of n linear equations with n unknowns (13) Σ_{s=1}^{n} a_is y_s + a_{i n+1} = 0, i = 1, …, n, a_ik, a_{i n+1} ∈ K, K commutative field, a_ik = a_ki, i, k = 1, …, n, det A ≠ 0. Consider, for each p, 2 ≤ p ≤ n + 1, the system of p - 1 linear equations with p - 1 unknowns

a_11 y_1 + … + a_{1 p-1} y_{p-1} + a_1p = 0
. . .
a_{p-1 1} y_1 + … + a_{p-1 p-1} y_{p-1} + a_{p-1 p} = 0

and let t_1p, t_2p, …, t_{p-1 p} be a solution of this (suppose that it exists). So (14) Σ_{s=1}^{p-1} a_is t_sp + a_ip = 0, i = 1, …, p - 1, and obviously t_{1 n+1}, t_{2 n+1}, …, t_{n n+1} is the solution of (13). One takes the matrices

Ã =
[ a_11     a_12     …  a_1n     a_{1 n+1}   ]
[ a_21     a_22     …  a_2n     a_{2 n+1}   ]
[ …                                         ]
[ a_{n+1 1} a_{n+1 2} … a_{n+1 n} a_{n+1 n+1} ],

T =
[ 1  t_12  t_13  …  t_{1 n+1} ]
[ 0  1     t_23  …  t_{2 n+1} ]
[ 0  0     1     …  t_{3 n+1} ]
[ …                           ]
[ 0  0     0     …  1         ],

where a_{n+1 n+1} is any element from K submitted only to the condition det Ã ≠ 0 (it exists, as det A ≠ 0). Obviously Ã^t = Ã. Let (15) C = [c_ik] := Ã T. Taking into account (14), from (15) one obtains c_i1 = a_i1, i = 1, …, n + 1, and

(16) 1 < l ≤ n + 1:  c_il = Σ_{r=1}^{l-1} a_ir t_rl + a_il, l ≤ i ≤ n + 1;  c_il = 0, i < l.

So C is lower triangular and, as det C ≠ 0 (see (15)), it results c_ii ≠ 0, i = 1, …, n + 1. Finally, put C in the form (17) C = Γ C_0, where, if Γ = [γ_ik], i < k ⇒ γ_ik = 0, and C_0 = diag {c_11, …, c_{n+1 n+1}}; therefore, from (17),

(18) γ_il = c_il / c_ll, 1 ≤ l ≤ i ≤ n + 1.

From (15) and (17), Γ C_0 = Ã T and, as det T = 1 ≠ 0, Ã = Γ C_0 T^{-1}; but Ã^t = Ã, hence (T^{-1})^t C_0 Γ^t = Γ C_0 T^{-1}, consequently (by the uniqueness of such a factorization into unitriangular and diagonal factors) Γ^t = T^{-1}, T Γ^t = E, E the unit matrix, i.e.

(19) γ_{p+1 p} + t_{p p+1} = 0
     γ_{p+1 p-1} + t_{p-1 p} γ_{p+1 p} + t_{p-1 p+1} = 0
     . . .
     γ_{p+1 1} + t_{12} γ_{p+1 2} + … + t_{1p} γ_{p+1 p} + t_{1 p+1} = 0,   p = 1, …, n.

With (16), (18) and (19) one calculates stepwise (by "escalade", each step being attached to the index l of γ_il) the solutions of the intermediate systems, until one arrives at t_{1 n+1}, t_{2 n+1}, …, t_{n n+1}, the solution of (13).

Step 1. From (18), γ_i1 = a_i1 / a_11, i = 1, …, n + 1, and from (19) where p = 1 (a single relation) it results t_12: γ_21 + t_12 = 0.

Step 2. From (18) and (16), γ_i2 = (a_i1 t_12 + a_i2) / (a_21 t_12 + a_22), i = 2, …, n + 1, and then t_13, t_23 from (19) where p = 2 (two relations): γ_31 + t_12 γ_32 + t_13 = 0, γ_32 + t_23 = 0.

. . .

Step n. γ_{n+1 n} = (Σ_{s=1}^{n-1} a_{n+1 s} t_sn + a_{n+1 n}) / (Σ_{s=1}^{n-1} a_ns t_sn + a_nn) and finally t_{1 n+1}, …, t_{n n+1} from (19) where p = n.
Matrix operators
Let A be a matrix from M_{m,n}(K), K commutative field.

Definition. If in the matrix A the elements of a row or of a column are multiplied by λ ≠ 0 from K, an elementary transformation of type I was effected over A.

If in the matrix A the elements of a row (resp. column) are added with the elements of another row (resp. column) multiplied by the scalar λ from K, an elementary transformation of type II was effected over A.

If in the matrix A two rows or two columns are interchanged, an elementary transformation of type III was effected over A.
The square matrix ε_uv := [x_ij], with x_ij = 0 for (i, j) ≠ (u, v) and x_uv = 1, is named fundamental unitary matrix. Obviously det ε_uv = 0.
1.3 Let A be from M_{m,n}(K). ε_uv A is the matrix of type (m, n) whose row of index u is equal with the row of index v from A and whose other rows are equal with 0. A ε_uv is the matrix of type (m, n) whose column of index v is equal with the column of index u of A and whose other columns are equal with 0. Calculate!
From 1.3, (0_1) u ≠ v ⇒ ε_uv^2 = 0, while (0_2) ε_uu^2 = ε_uu.
Example. For u ≠ v the inverse of the matrix E + λ ε_uv, λ ∈ K, is E - λ ε_uv:
(E + λ ε_uv)(E - λ ε_uv) = E - λ^2 ε_uv^2 = E   (ε_uv^2 = 0).
1.4 Let A be from M_{m,n}(K), E the unit matrix from M_m(K) (resp. M_n(K)) and λ from K. Then (E + (λ - 1) ε_uu) A (resp. A (E + (λ - 1) ε_uu)) is obtained from A by multiplying the elements of the row (resp. column) u with λ.

For instance, (E + (λ - 1) ε_uu) A = A + (λ - 1) ε_uu A; apply 1.3!
1.5 Let A be from M_{m,n}(K), E the unit matrix from M_m(K) (resp. M_n(K)) and λ from K. Then (E + λ ε_uv) A (resp. A (E + λ ε_vu)) is obtained from A by adding to the elements of the row (resp. column) u the elements of the row (resp. column) v multiplied with λ.

For instance, A (E + λ ε_vu) = A + λ A ε_vu; apply 1.3!
One obtains as above

1.6 Let A be from M_{m,n}(K), E the unit matrix from M_m(K) (resp. M_n(K)). Then (E - (ε_uu + ε_vv) + ε_uv + ε_vu) A (resp. A (E - (ε_uu + ε_vv) + ε_uv + ε_vu)) is obtained from A by interchanging the rows (resp. columns) of indices u and v.
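Statements 1.4 and 1.5 can be checked by direct computation. A small Python sketch building ε_uv and the operator matrices (indices 0-based in code; helper names are mine):

```python
def eps(n, u, v):
    # fundamental unitary matrix: 1 in position (u, v), 0 elsewhere
    return [[1 if (i, j) == (u, v) else 0 for j in range(n)] for i in range(n)]

def eye(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

def madd(X, Y, c=1):
    # X + c * Y
    return [[X[i][j] + c * Y[i][j] for j in range(len(X[0]))] for i in range(len(X))]

def mmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]
```

For instance, (E + λ ε_uv) A adds λ times row v to row u, and (E + (λ - 1) ε_uu) A multiplies row u by λ.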
The square matrices E + λ ε_uv, E - (ε_uu + ε_vv) + ε_uv + ε_vu are called operator matrices. We have

(0_3) det (E + λ ε_uv) = 1 + λ δ_uv (δ_uv = 1 for u = v, 0 for u ≠ v)
(0_4) det (E - (ε_uu + ε_vv) + ε_uv + ε_vu) = -1

and so these matrices are nonsingular (in the first case, when u = v, it must be λ ≠ -1, which happens in the case of an elementary transformation of type I). We sketch the calculus for (0_4).
The determinant in (0_4) is that of the unit matrix in which the rows of indices u and v (say v < u) were interchanged. This is successively developed after the first v - 1 rows and then after the last m - u rows; one obtains a determinant having 1 in the two opposite corners and the unit matrix in between, which, developed after the first row and then after the last row, leads to an upper triangular determinant; counting the signs, the value is -1.
The last three statements show that any elementary transformation on A reverts to the multiplication of A, on the left or on the right, with an operator matrix, which is nonsingular; consequently,

1.7 Effecting on a matrix any elementary transformation, the rank of the matrix remains the same.

And moreover

1.8 For any matrix A from M_{m,n}(K), effecting a finite number of elementary transformations over the rows and the columns of A, one obtains a matrix of the form

[ E  0' ]
[ 0'' 0''' ],

E the unit matrix of type (r, r), where r = rk A, and 0', 0'', 0''' blocks with the elements equal to 0.
Remark. Taking into account 4.3^11 and the connection between the elementary transformations and the operator matrices, one can affirm:

∀ A ∈ M_{m,n}(K) ∃ U, V nonsingular square matrices such that U A V = [ E 0' ; 0'' 0''' ], E the unit matrix of type (r, r), r = rk A.

Now we can complete
1.9 Sylvester law of nullity. For every A from M_{m,n}(K) and B from M_{n,p}(K),

rk A + rk B - n ≤ rk (AB) ≤ min (rk A, rk B).
The particular case B = [ E 0' ; 0'' 0''' ], E the unit matrix of type (r_2, r_2), r_2 := rk B. We partition A accordingly and apply 1.1:

AB = [ A_11 A_12 ; A_21 A_22 ][ E 0' ; 0'' 0''' ] = [ A_11 0' ; A_21 0''' ],

and so the first r_2 columns of AB coincide with the first r_2 columns of A and the other columns are equal to 0. Let S_1 be the sequence of the columns of A and S'_1 the subsequence of the first r_2 columns, and let σ_1 be a maximal subsequence of linearly independent columns of S'_1 (their number is rk (AB)). Add to σ_1 other columns until one obtains Σ_1, a maximal subsequence of linearly independent columns of S_1 (their number is equal to r_1 := rk A). The columns from S_1 outside Σ_1, which are linearly expressed by Σ_1, number n - r_1, and among them there are also the r_2 - rk (AB) columns of S'_1 outside σ_1, which are linearly expressed by σ_1; it results r_2 - rk (AB) ≤ n - r_1, i.e. r_1 + r_2 - n ≤ rk (AB).

The general case. Let U, V be nonsingular square matrices such that U B V = [ E 0' ; 0'' 0''' ] (remark to 4.3^11). Then A B V = A U^{-1} [ E 0' ; 0'' 0''' ], rk (A B V) = rk (A B) and, using 4.3^5 and the particular case considered,

rk (A B) = rk ( (A U^{-1}) [ E 0' ; 0'' 0''' ] ) ≥ rk (A U^{-1}) + r_2 - n = r_1 + r_2 - n.
Staircase form

Let A be a matrix from M_{m,n}(K), K commutative field. Performing certain elementary transformations of types II and III on the rows of A (see above), one obtains from A a matrix with a particular form, the staircase form, which generalizes the triangular form of a square matrix. The exquisite importance of this notion will result from the following. Consider the first column of A which has not all the elements equal to 0. Suppose that this is column 1. One interchanges two rows in order to place in the position (1, 1) an element ≠ 0, which becomes the pivot, and then under the pivot one makes zeros with transformations of type II. Go to column 2. If all the elements in the positions (2, 2), (3, 2), …, (m, 2) are equal with 0, go to column 3; in the contrary case one interchanges two of the rows 2, …, m to bring in the position (2, 2) an element ≠ 0, which becomes pivot, and then under the pivot one makes zeros with transformations of type II etc. After a certain finite number of steps, the possibilities of action as indicated are exhausted, and the matrix U thus obtained is by definition a staircase form (or trapezoidal form) of A. Obviously it is not unique (see the various possibilities of interchanging two rows to obtain the pivot), but the analysis of the method used shows that between A and U we have the relation

(20) P A = T U,

P a permuting matrix, T a lower triangular matrix with the elements on the principal diagonal equal with 1. Remark now that from (20) it results (21) Ker A = Ker U, because Ax = 0 ⟺ Ux = 0, x from K^n, P and T being nonsingular matrices. The visualization of the matrix U is the following, for instance in the case m = 5, n = 6:
(22) U =
[ 0 ⊛ * * * * ]
[ 0 0 0 ⊛ * * ]
[ 0 0 0 0 ⊛ * ]
[ 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 ]

(the star in a circle means a pivot, a plain star an arbitrary element; the non-zero rows are 1, 2, 3; the columns with pivot are 2, 4, 5; one descends a step only in front of a pivot; under the stair there are only elements equal to 0).
Example 2. (K = R)

A =
[ 0 1 0 1 0 ]
[ 2 0 2 0 2 ]
[ 2 1 0 2 1 ]
[ 0 1 0 1 0 ]
→
[ 2 0 2 0 2 ]
[ 0 1 0 1 0 ]
[ 2 1 0 2 1 ]
[ 0 1 0 1 0 ]
→
[ 2 0  2 0  2 ]
[ 0 1  0 1  0 ]
[ 0 1 -2 2 -1 ]
[ 0 1  0 1  0 ]
→
[ 2 0  2 0  2 ]
[ 0 1  0 1  0 ]
[ 0 0 -2 1 -1 ]
[ 0 0  0 0  0 ]
= U.
In the following, properties of the staircase form U.

(23) The number of the non-zero rows of U = the number of the columns with pivot of U.

(24) The non-zero rows of U, and similarly the columns with pivot of U, are linearly independent.

Let, for instance, u_1, u_2, u_3 be the non-zero rows of U and u := α u_1 + β u_2 + γ u_3 = 0, α, β, γ from K. The projection of u on the first pivot column is α · pivot + 0 + 0, so α · pivot = 0, α = 0. The projection on the second pivot column is then β · pivot + 0, hence β · pivot = 0, β = 0 etc. The same reasoning holds for the second affirmation.
(25) Corollary. U being a staircase form of A, rk A = rk U = the number of the non-zero rows of U = the number of the columns with pivot of U.

Indeed, taking into account (24), the number from the statement is equal with rk U, and rk U = rk A by 1.7.

For instance, in the case of Example 2, rk A = 3.
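The staircase algorithm uses only transformations of types II and III on rows; a compact Python sketch returning a staircase form and the rank (= number of non-zero rows), applied in the test to the matrix of Example 2 (names are mine):

```python
from fractions import Fraction

def staircase(A):
    # row echelon ("staircase") form via row interchanges (type III)
    # and adding multiples of the pivot row (type II)
    U = [[Fraction(x) for x in row] for row in A]
    m, n = len(U), len(U[0])
    r = 0                                   # index of the row awaiting a pivot
    for c in range(n):
        piv = next((i for i in range(r, m) if U[i][c] != 0), None)
        if piv is None:
            continue                        # no pivot in this column
        U[r], U[piv] = U[piv], U[r]
        for i in range(r + 1, m):           # make zeros under the pivot
            lam = U[i][c] / U[r][c]
            U[i] = [x - lam * y for x, y in zip(U[i], U[r])]
        r += 1
        if r == m:
            break
    return U, r                             # r = rank = number of non-zero rows
```

On the matrix of Example 2 it performs exactly the steps shown above: one interchange of rows 1 and 2, then eliminations, ending with one zero row and rank 3.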