PROBLEMS AND THEOREMS
IN LINEAR ALGEBRA

V. Prasolov
Abstract. This book contains the basics of linear algebra with an emphasis on nonstandard and neat proofs of known theorems. Many of the theorems of linear algebra
obtained mainly during the past 30 years are usually ignored in text-books but are
quite accessible for students majoring or minoring in mathematics. These theorems
are given with complete proofs. There are about 230 problems with solutions.
Typeset by AMS-TEX
CONTENTS
Preface
Main notations and conventions
Chapter I. Determinants
Historical remarks: Leibniz and Seki Kowa. Cramer, L'Hospital,
Cauchy and Jacobi
1. Basic properties of determinants
The Vandermonde determinant and its application. The Cauchy determinant. Continued fractions and the determinant of a tridiagonal matrix.
Certain other determinants.
Problems
2. Minors and cofactors
Binet–Cauchy's formula. Laplace's theorem. Jacobi's theorem on minors
of the adjoint matrix. The generalized Sylvester's identity. Chebotarev's
theorem on the matrix $\|\varepsilon^{ij}\|_1^{p-1}$, where $\varepsilon = \exp(2\pi i/p)$.
Problems
3. The Schur complement
Given $A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}$, the matrix $(A|A_{11}) = A_{22} - A_{21}A_{11}^{-1}A_{12}$ is
called the Schur complement (of $A_{11}$ in $A$).
3.1. $\det A = \det A_{11} \det(A|A_{11})$.
3.2. Theorem. $(A|B) = ((A|C)|(B|C))$.
Problems
4. Symmetric functions, sums $x_1^k + \cdots + x_n^k$, and Bernoulli numbers
Problems
Solutions
Chapter II. Linear spaces
Historical remarks: Hamilton and Grassmann
5. The dual space. The orthogonal complement
Linear equations and their application to the following theorem:
5.4.3. Theorem. If a rectangle with sides a and b is arbitrarily cut into
squares with sides $x_1, \ldots, x_n$, then $\dfrac{x_i}{a} \in \mathbb{Q}$ and $\dfrac{x_i}{b} \in \mathbb{Q}$ for all $i$.
Problems
6. The kernel (null space) and the image (range) of an operator.
The quotient space
6.2.1. Theorem. $\operatorname{Ker} A^* = (\operatorname{Im} A)^\perp$ and $\operatorname{Im} A^* = (\operatorname{Ker} A)^\perp$.
Fredholm's alternative. The Kronecker–Capelli theorem. Criteria for solvability of the matrix equation $C = AXB$.
Problem
7. Bases of a vector space. Linear independence
Change of basis. The characteristic polynomial.
7.2. Theorem. Let $x_1, \ldots, x_n$ and $y_1, \ldots, y_n$ be two bases, $1 \le k \le n$.
Then k of the vectors y1 , . . . , yn can be interchanged with some k of the
vectors x1 , . . . , xn so that we get again two bases.
7.3. Theorem. Let $T: V \to V$ be a linear operator such that the
vectors $x, Tx, \ldots, T^nx$ are linearly dependent for every $x \in V$. Then the
operators $I, T, \ldots, T^n$ are linearly dependent.
Problems
8. The rank of a matrix
The Frobenius inequality. The Sylvester inequality.
8.3. Theorem. Let U be a linear subspace of the space $M_{n,m}$ of $n \times m$
matrices, and $r \le m \le n$. If $\operatorname{rank} X \le r$ for any $X \in U$ then $\dim U \le rn$.
A description of subspaces $U \subset M_{n,m}$ such that $\dim U = rn$.
Problems
9. Subspaces. The Gram-Schmidt orthogonalization process
Orthogonal projections.
9.5. Theorem. Let $e_1, \ldots, e_n$ be an orthogonal basis for a space V,
$d_i = |e_i|$. The projections of the vectors $e_1, \ldots, e_n$ onto an m-dimensional
subspace of V have equal lengths if and only if $d_i^2\,(d_1^{-2} + \cdots + d_n^{-2}) \ge m$ for
every $i = 1, \ldots, n$.
9.6.1. Theorem. A set of k-dimensional subspaces of V is such that
any two of these subspaces have a common (k−1)-dimensional subspace.
Then either all these subspaces have a common (k−1)-dimensional subspace
or all of them are contained in the same (k+1)-dimensional subspace.
Problems
10. Complexification and realification. Unitary spaces
Unitary operators. Normal operators.
10.3.4. Theorem. Let B and C be Hermitian operators. Then the
operator A = B + iC is normal if and only if BC = CB.
Complex structures.
Problems
Solutions
Chapter III. Canonical forms of matrices and linear operators
11. The trace and eigenvalues of an operator
The eigenvalues of an Hermitian operator and of a unitary operator. The
eigenvalues of a tridiagonal matrix.
Problems
12. The Jordan canonical (normal) form
12.1. Theorem. If A and B are matrices with real entries and $A = PBP^{-1}$ for some matrix P with complex entries then $A = QBQ^{-1}$ for some
matrix Q with real entries.
The existence and uniqueness of the Jordan canonical form (Väliaho's
simple proof).
The real Jordan canonical form.
12.5.1. Theorem. a) For any operator A there exist a nilpotent operator
$A_n$ and a semisimple operator $A_s$ such that $A = A_s + A_n$ and $A_sA_n = A_nA_s$.
b) The operators $A_n$ and $A_s$ are unique; besides, $A_s = S(A)$ and $A_n = N(A)$ for some polynomials S and N.
12.5.2. Theorem. For any invertible operator A there exist a unipotent
operator $A_u$ and a semisimple operator $A_s$ such that $A = A_sA_u = A_uA_s$.
Such a representation is unique.
Problems
13. The minimal polynomial and the characteristic polynomial
13.1.2. Theorem. For any operator A there exists a vector v such that
the minimal polynomial of v (with respect to A) coincides with the minimal
polynomial of A.
13.3. Theorem. The characteristic polynomial of a matrix A coincides
with its minimal polynomial if and only if for any vector $(x_1, \ldots, x_n)$ there
exist a column P and a row Q such that $x_k = QA^kP$.
The Hamilton–Cayley theorem and its generalization for polynomials of matrices.
Problems
14. The Frobenius canonical form
Existence of Frobenius's canonical form (H. G. Jacob's simple proof)
Problems
15. How to reduce the diagonal to a convenient form
15.1. Theorem. If $A \ne \lambda I$ then A is similar to a matrix with the
diagonal elements $(0, \ldots, 0, \operatorname{tr} A)$.
15.2. Theorem. Any matrix A is similar to a matrix with equal diagonal
elements.
15.3. Theorem. Any nonzero square matrix A is similar to a matrix
all diagonal elements of which are nonzero.
Problems
16. The polar decomposition
The polar decomposition of noninvertible and of invertible matrices. The
uniqueness of the polar decomposition of an invertible matrix.
16.1. Theorem. If A = S1 U1 = U2 S2 are polar decompositions of an
invertible matrix A then U1 = U2 .
16.2.1. Theorem. For any matrix A there exist unitary matrices U, W
and a diagonal matrix D such that A = U DW .
Problems
17. Factorizations of matrices
17.1. Theorem. For any complex matrix A there exist a unitary matrix
U and a triangular matrix T such that $A = UTU^*$. The matrix A is a
normal one if and only if T is a diagonal one.
Gauss', Gram's, and Lanczos' factorizations.
17.3. Theorem. Any matrix is a product of two symmetric matrices.
Problems
18. Smith's normal form. Elementary factors of matrices
Problems
Solutions
Chapter IV. Matrices of special form
19. Symmetric and Hermitian matrices
Problems
20. Simultaneous diagonalization of a pair of Hermitian forms
Simultaneous diagonalization of two Hermitian matrices A and B when
A > 0. An example of two Hermitian matrices which cannot be simultaneously diagonalized. Simultaneous diagonalization of two semidefinite matrices. Simultaneous diagonalization of two Hermitian matrices A and B such
that there is no $x \ne 0$ for which $x^*Ax = x^*Bx = 0$.
Problems
21. Skew-symmetric matrices
21.1.1. Theorem. If A is a skew-symmetric matrix then $A^2 \le 0$.
21.1.2. Theorem. If A is a real matrix such that $(Ax, x) = 0$ for all x,
then A is a skew-symmetric matrix.
21.2. Theorem. Any skew-symmetric bilinear form can be expressed as
$$\sum_{k=1}^{r}(x_{2k-1}y_{2k} - x_{2k}y_{2k-1}).$$
Problems
22. Orthogonal matrices. The Cayley transformation
The standard Cayley transformation of an orthogonal matrix which does
not have −1 as its eigenvalue. The generalized Cayley transformation of an
orthogonal matrix which has −1 as its eigenvalue.
Problems
23. Normal matrices
23.1.1. Theorem. If an operator A is normal then $\operatorname{Ker} A^* = \operatorname{Ker} A$ and
$\operatorname{Im} A^* = \operatorname{Im} A$.
23.1.2. Theorem. An operator A is normal if and only if any eigenvector of A is an eigenvector of $A^*$.
23.2. Theorem. If an operator A is normal then there exists a polynomial P such that $A^* = P(A)$.
Problems
24. Nilpotent matrices
24.2.1. Theorem. Let A be an $n \times n$ matrix. The matrix A is nilpotent
if and only if $\operatorname{tr}(A^p) = 0$ for each $p = 1, \ldots, n$.
Nilpotent matrices and Young tableaux.
Problems
25. Projections. Idempotent matrices
25.2.1&2. Theorem. An idempotent operator P is an Hermitian one
if and only if a) $\operatorname{Ker} P \perp \operatorname{Im} P$; or b) $|Px| \le |x|$ for every x.
25.2.3. Theorem. Let $P_1, \ldots, P_n$ be Hermitian, idempotent operators.
The operator $P = P_1 + \cdots + P_n$ is an idempotent one if and only if $P_iP_j = 0$
whenever $i \ne j$.
25.4.1. Theorem. Let $V_1 \oplus \cdots \oplus V_k = V$, let $P_i : V \to V_i$ be Hermitian
idempotent operators, and $A = P_1 + \cdots + P_k$. Then $0 < \det A \le 1$ and $\det A = 1$
if and only if $V_i \perp V_j$ whenever $i \ne j$.
Problems
26. Involutions
Problems
Solutions
Chapter V. Multilinear algebra
27. Multilinear maps and tensor products
An invariant definition of the trace. Kronecker's product of matrices,
$A \otimes B$; the eigenvalues of the matrices $A \otimes B$ and $A \otimes I + I \otimes B$. Matrix
equations $AX - XB = C$ and $AX - XB = \lambda X$.
Problems
28. Symmetric and skew-symmetric tensors
The Grassmann algebra. Certain canonical isomorphisms. Applications
of Grassmann algebra: proofs of Binet–Cauchy's formula and Sylvester's identity.
28.5.4. Theorem. Let $\Lambda_B(t) = 1 + \sum_{q=1}^{n}\operatorname{tr}(\Lambda_B^q)\,t^q$ and $S_B(t) = 1 + \sum_{q=1}^{n}\operatorname{tr}(S_B^q)\,t^q$. Then $S_B(t) = (\Lambda_B(-t))^{-1}$.
Problems
29. The Pfaffian
2n
The Pfaffian of principal submatrices of the matrix M = mij 1 , where
mij = (1)i+j+1 .
29.2.2. Theorem. Given a skew-symmetric matrix A we have
2
Pf (A + M ) =
n
X
2k
k=0
pk , where pk =
1
1
...
...
2(nk)
2(nk)
Problems
30. Decomposable skew-symmetric and symmetric tensors
30.1.1. Theorem. $x_1 \wedge \cdots \wedge x_k = y_1 \wedge \cdots \wedge y_k \ne 0$ if and only if
$\operatorname{Span}(x_1, \ldots, x_k) = \operatorname{Span}(y_1, \ldots, y_k)$.
30.1.2. Theorem. $S(x_1 \otimes \cdots \otimes x_k) = S(y_1 \otimes \cdots \otimes y_k) \ne 0$ if and only
if $\operatorname{Span}(x_1, \ldots, x_k) = \operatorname{Span}(y_1, \ldots, y_k)$.
Plücker relations.
Problems
31. The tensor rank
Strassen's algorithm. The set of all tensors of rank $\le 2$ is not closed. The
rank over $\mathbb{R}$ is not equal, generally, to the rank over $\mathbb{C}$.
Problems
32. Linear transformations of tensor products
A complete description of the following types of transformations of
$V^m \otimes (V^*)^n = M_{m,n}$:
1) rank-preserving;
2) determinant-preserving;
3) eigenvalue-preserving;
4) invertibility-preserving.
Problems
Solutions
Chapter VI. Matrix inequalities
33. Inequalities for symmetric and Hermitian matrices
33.1.1. Theorem. If $A > B > 0$ then $A^{-1} < B^{-1}$.
33.1.3. Theorem. If $A > 0$ is a real matrix then
$$(A^{-1}x, x) = \max_y\big(2(x, y) - (Ay, y)\big).$$
33.2. Theorem. If $A = \begin{pmatrix} A_1 & B \\ B^* & A_2 \end{pmatrix} > 0$, then $|A| \le |A_1|\cdot|A_2|$.
Hadamard's inequality and Szász's inequality.
33.3.1. Theorem. Suppose $\alpha_i > 0$, $\sum_{i=1}^{n}\alpha_i = 1$ and $A_i > 0$. Then
$$|\alpha_1A_1 + \cdots + \alpha_kA_k| \ge |A_1|^{\alpha_1}\cdots|A_k|^{\alpha_k}.$$
33.3.2. Theorem. Suppose $A_i \ge 0$, $\alpha_i \in \mathbb{C}$. Then
$$|\det(\alpha_1A_1 + \cdots + \alpha_kA_k)| \le \det(|\alpha_1|A_1 + \cdots + |\alpha_k|A_k).$$
Problems
34. Inequalities for eigenvalues
Schurs inequality. Weyls inequality
(foreigenvalues of A + B).
B C
> 0 be an Hermitian matrix,
34.2.2. Theorem. Let A =
C B
1 n and 1 m the eigenvalues of A and B, respectively.
Then i i n+im .
34.3. Theorem. Let A and B be Hermitian idempotents, any eigenvalue of AB. Then 0 1.
34.4.1. Theorem. Let the i and i be the eigenvalues of A and AA,
Problems
35. Inequalities for matrix norms
The spectral norm $\|A\|_s$ and the Euclidean norm $\|A\|_e$, the spectral radius
$\rho(A)$.
35.1.2. Theorem. If a matrix A is normal then $\rho(A) = \|A\|_s$.
Problems
36. Schurs complement and Hadamards product. Theorems of
Emily Haynsworth
$$|A + B| \ge |A|\left(1 + \sum_{k=1}^{n-1}\frac{|B_k|}{|A_k|}\right) + |B|\left(1 + \sum_{k=1}^{n-1}\frac{|A_k|}{|B_k|}\right).$$
Hadamard's product $A \circ B$.
36.2.1. Theorem. If $A > 0$ and $B > 0$ then $A \circ B > 0$.
Oppenheim's inequality.
Problems
37. Nonnegative matrices
Wielandt's theorem
Problems
38. Doubly stochastic matrices
Birkhoff's theorem. H. Weyl's inequality.
Solutions
Chapter VII. Matrices in algebra and calculus
39. Commuting matrices
The space of solutions of the equation AX = XA for X with the given A
of order n.
39.2.2. Theorem. Any set of commuting diagonalizable operators has
a common eigenbasis.
39.3. Theorem. Let A, B be matrices such that AX = XA implies
BX = XB. Then B = g(A), where g is a polynomial.
Problems
40. Commutators
40.2. Theorem. If $\operatorname{tr} A = 0$ then there exist matrices X and Y such
that $[X, Y] = A$ and either (1) $\operatorname{tr} Y = 0$ and X is an Hermitian matrix or (2)
X and Y have prescribed eigenvalues.
40.3. Theorem. Let A, B be matrices such that $\operatorname{ad}_A^s X = 0$ implies
$\operatorname{ad}_X^s B = 0$ for some $s > 0$. Then $B = g(A)$ for a polynomial g.
40.4. Theorem. Matrices A1 , . . . , An can be simultaneously triangularized over C if and only if the matrix p(A1 , . . . , An )[Ai , Aj ] is a nilpotent one
for any polynomial p(x1 , . . . , xn ) in noncommuting indeterminates.
40.5. Theorem. If $\operatorname{rank}[A, B] \le 1$, then A and B can be simultaneously
triangularized over $\mathbb{C}$.
Problems
41. Quaternions and Cayley numbers. Clifford algebras
Isomorphisms $so(3, \mathbb{R}) \cong su(2)$ and $so(4, \mathbb{R}) \cong so(3, \mathbb{R}) \oplus so(3, \mathbb{R})$. The
vector products in $\mathbb{R}^3$ and $\mathbb{R}^7$. Hurwitz–Radon families of matrices. The Hurwitz–Radon number $\rho(2^{c+4d}(2a+1)) = 2^c + 8d$.
41.7.1. Theorem. The identity of the form
$$(x_1^2 + \cdots + x_n^2)(y_1^2 + \cdots + y_n^2) = z_1^2 + \cdots + z_n^2,$$
where each $z_i$ is a bilinear function of the $x_j$ and $y_j$, holds if and only if $n = 1, 2, 4$ or $8$.
Problems
42. Representations of matrix algebras
Complete reducibility of finite-dimensional representations of $\operatorname{Mat}(V^n)$.
Problems
43. The resultant
Sylvester's matrix, Bézout's matrix and Barnett's matrix
Problems
44. The general inverse matrix. Matrix equations
44.3. Theorem. a) The equation $AX - XB = C$ is solvable if and only
if the matrices $\begin{pmatrix} A & O \\ O & B \end{pmatrix}$ and $\begin{pmatrix} A & C \\ O & B \end{pmatrix}$ are similar.
b) The equation $AX - YB = C$ is solvable if and only if
$$\operatorname{rank}\begin{pmatrix} A & O \\ O & B \end{pmatrix} = \operatorname{rank}\begin{pmatrix} A & C \\ O & B \end{pmatrix}.$$
Problems
45. Hankel matrices and rational functions
46. Functions of matrices. Differentiation of matrices
The differential equation $\dot X = AX$ and the Jacobi formula for $\det A$.
Problems
47. Lax pairs and integrable systems
48. Matrices with prescribed eigenvalues
48.1.2. Theorem. For any polynomial $f(x) = x^n + c_1x^{n-1} + \cdots + c_n$ and
any matrix B of order $n-1$ whose characteristic and minimal polynomials
coincide there exists a matrix A such that B is a submatrix of A and the
characteristic polynomial of A is equal to f.
48.2. Theorem. Given all off-diagonal elements in a complex matrix A
it is possible to select diagonal elements x1 , . . . , xn so that the eigenvalues
of A are given complex numbers; there are finitely many sets {x1 , . . . , xn }
satisfying this condition.
Solutions
Appendix
Eisenstein's criterion, Hilbert's Nullstellensatz.
Bibliography
Index
PREFACE
There are very many books on linear algebra, among them many really wonderful
ones (see e.g. the list of recommended literature). One might think that one does
not need any more books on this subject. Choosing one's words more carefully, it
is possible to deduce that these books contain all that one needs and in the best
possible form, and therefore any new book will, at best, only repeat the old ones.
This opinion is manifestly wrong, but nevertheless almost ubiquitous.
New results in linear algebra appear constantly and so do new, simpler and
neater proofs of the known theorems. Besides, more than a few interesting old
results are ignored, so far, by text-books.
In this book I tried to collect the most attractive problems and theorems of linear
algebra still accessible to first year students majoring or minoring in mathematics.
The computational algebra was left somewhat aside. The major part of the book
contains results known from journal publications only. I believe that they will be
of interest to many readers.
I assume that the reader is acquainted with the main notions of linear algebra:
linear space, basis, linear map, the determinant of a matrix. Apart from that,
all the essential theorems of the standard course of linear algebra are given here
with complete proofs, and some definitions from the above list of prerequisites are
recalled. I made the prime emphasis on nonstandard neat proofs of known
theorems.
In this book I only consider finite dimensional linear spaces.
The exposition is mostly performed over the fields of real or complex numbers.
The peculiarity of fields of finite characteristic is mentioned when needed.
Cross-references inside the book are natural: 36.2 means subsection 2 of sec. 36;
Problem 36.2 is Problem 2 from sec. 36; Theorem 36.2.2 stands for Theorem 2
from 36.2.
Acknowledgments. The book is based on a course I read at the Independent
University of Moscow, 1991/92. I am thankful to the participants for comments and
to D. V. Beklemishev, D. B. Fuchs, A. I. Kostrikin, V. S. Retakh, A. N. Rudakov
and A. P. Veselov for fruitful discussions of the manuscript.
MAIN NOTATIONS AND CONVENTIONS
$A = \begin{pmatrix} a_{11} & \ldots & a_{1n} \\ \ldots & \ldots & \ldots \\ a_{m1} & \ldots & a_{mn} \end{pmatrix}$ denotes a matrix of size $m \times n$; we say that a square
$n \times n$ matrix is of order n;
$a_{ij}$, sometimes denoted by $a_{i,j}$ for clarity, is the element or the entry from the
intersection of the i-th row and the j-th column;
$(a_{ij})$ is another notation for the matrix A;
$\|a_{ij}\|_p^n$ is still another notation for the matrix $(a_{ij})$, where $p \le i, j \le n$;
$\det(A)$, $|A|$ and $\det(a_{ij})$ all denote the determinant of the matrix A;
$\operatorname{sign}\sigma = (-1)^\sigma = \begin{cases} 1 & \text{if $\sigma$ is even,} \\ -1 & \text{if $\sigma$ is odd;} \end{cases}$
Span(e1 , . . . , en ) is the linear space spanned by the vectors e1 , . . . , en ;
Given bases $e_1, \ldots, e_n$ and $\varepsilon_1, \ldots, \varepsilon_m$ in spaces $V^n$ and $W^m$, respectively, we
assign to a matrix A the operator $A: V^n \to W^m$ which sends the vector $\begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}$
into the vector
$$\begin{pmatrix} y_1 \\ \vdots \\ y_m \end{pmatrix} = \begin{pmatrix} a_{11} & \ldots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \ldots & a_{mn} \end{pmatrix}\begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}.$$
Since $y_i = \sum_{j=1}^{n} a_{ij}x_j$, then
$$A\Big(\sum_{j=1}^{n} x_je_j\Big) = \sum_{i=1}^{m}\sum_{j=1}^{n} a_{ij}x_j\varepsilon_i;$$
in particular, $Ae_j = \sum_i a_{ij}\varepsilon_i$;
CHAPTER I

DETERMINANTS

1. Basic properties of determinants
2. If A and B are square matrices, $\det\begin{pmatrix} A & C \\ 0 & B \end{pmatrix} = \det A \cdot \det B$.
3. $|a_{ij}|_1^n = \sum_{j=1}^{n}(-1)^{i+j}a_{ij}M_{ij}$, where $M_{ij}$ is the determinant of the matrix
obtained from A by crossing out the ith row and the jth column of A (the expansion of the determinant with respect to the ith row).
(To prove this formula one has to group the factors of $a_{ij}$, where $j = 1, \ldots, n$,
for a fixed i.)
4.
$$\begin{vmatrix} \alpha_1 + \beta_1 & a_{12} & \ldots & a_{1n} \\ \vdots & \vdots & & \vdots \\ \alpha_n + \beta_n & a_{n2} & \ldots & a_{nn} \end{vmatrix} = \begin{vmatrix} \alpha_1 & a_{12} & \ldots & a_{1n} \\ \vdots & \vdots & & \vdots \\ \alpha_n & a_{n2} & \ldots & a_{nn} \end{vmatrix} + \begin{vmatrix} \beta_1 & a_{12} & \ldots & a_{1n} \\ \vdots & \vdots & & \vdots \\ \beta_n & a_{n2} & \ldots & a_{nn} \end{vmatrix}.$$
5. det(AB) = det A det B.
6. $\det(A^T) = \det A$.
1.1. Before we start computing determinants, let us prove Cramer's rule. It
appeared already in the first published paper on determinants.
Theorem (Cramers rule). Consider a system of linear equations
x1 ai1 + + xn ain = bi (i = 1, . . . , n),
i.e.,
x1 A1 + + xn An = B,
where $A_j$ is the jth column of the matrix $A = \|a_{ij}\|_1^n$. Then
xi det(A1 , . . . , An ) = det (A1 , . . . , B, . . . , An ) ,
where the column B is inserted instead of Ai .
Proof. Since for $j \ne i$ the determinant of the matrix $\det(A_1, \ldots, A_j, \ldots, A_n)$,
a matrix with two identical columns, vanishes,
$$\det(A_1, \ldots, B, \ldots, A_n) = \det\Big(A_1, \ldots, \sum_j x_jA_j, \ldots, A_n\Big) = \sum_j x_j\det(A_1, \ldots, A_j, \ldots, A_n) = x_i\det(A_1, \ldots, A_n).$$
If det(A1 , . . . , An ) 6= 0 the formula obtained can be used to find solutions of a
system of linear equations.
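Cramer's rule is easy to check numerically. The following sketch (Python with NumPy — neither is part of the book; the helper name `cramer_solve` is ours) computes each $x_i$ as $\det(A_1, \ldots, B, \ldots, A_n)/\det(A_1, \ldots, A_n)$ and compares with a standard solver:

```python
import numpy as np

# Sketch of Cramer's rule: x_i = det(A with column i replaced by b) / det(A).
def cramer_solve(A, b):
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("det(A) = 0: Cramer's rule does not apply")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b          # insert the column B instead of A_i
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
print(np.allclose(cramer_solve(A, b), np.linalg.solve(A, b)))  # True
```

For large systems Gaussian elimination is far cheaper; this is purely a check of the identity.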
1.2. One of the most often encountered determinants is the Vandermonde determinant, i.e., the determinant of the Vandermonde matrix
$$V(x_1, \ldots, x_n) = \begin{vmatrix} 1 & x_1 & x_1^2 & \ldots & x_1^{n-1} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_n & x_n^2 & \ldots & x_n^{n-1} \end{vmatrix} = \prod_{i>j}(x_i - x_j).$$
To compute this determinant, let us subtract the (k−1)-st column multiplied
by $x_1$ from the kth one for $k = n, n-1, \ldots, 2$. The first row takes the form
$(1, 0, 0, \ldots, 0)$, and therefore
$$V(x_1, \ldots, x_n) = \prod_{i>1}(x_i - x_1)\begin{vmatrix} 1 & x_2 & x_2^2 & \ldots & x_2^{n-2} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_n & x_n^2 & \ldots & x_n^{n-2} \end{vmatrix}.$$
For n = 2 the identity $V(x_1, x_2) = x_2 - x_1$ is obvious; hence,
$$V(x_1, \ldots, x_n) = \prod_{i>j}(x_i - x_j).$$
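The product formula can be verified numerically; a minimal sketch (Python/NumPy assumed, not part of the book):

```python
import numpy as np
from itertools import combinations

# Check det(Vandermonde) == prod over i > j of (x_i - x_j).
x = np.array([1.0, 2.0, 4.0, 7.0])
V = np.vander(x, increasing=True)            # row i is (1, x_i, x_i^2, ...)
prod = 1.0
for j, i in combinations(range(len(x)), 2):  # pairs with j < i
    prod *= x[i] - x[j]
print(np.isclose(np.linalg.det(V), prod))  # True
```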
1.3. The Cauchy determinant is $|a_{ij}|_1^n$, where $a_{ij} = (x_i + y_j)^{-1}$. For a base of
induction take $|a_{ij}|_1^1 = (x_1 + y_1)^{-1}$.
The step of induction will be performed in two stages.
First, let us subtract the last column from each of the preceding ones. We get
$$\begin{vmatrix} 0 & -1 & 0 & \ldots & 0 & 0 \\ 0 & 0 & -1 & \ldots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \ldots & -1 & 0 \\ 0 & 0 & 0 & \ldots & 0 & -1 \\ a_0 & a_1 & a_2 & \ldots & a_{n-2} & a_{n-1} \end{vmatrix}$$
1.5. Let $b_i$, $i \in \mathbb{Z}$, such that $b_k = b_l$ if $k \equiv l \pmod{n}$ be given; the matrix
$\|a_{ij}\|_1^n$, where $a_{ij} = b_{i-j}$, is called a circulant matrix.
Let $\varepsilon_1, \ldots, \varepsilon_n$ be distinct nth roots of unity; let
$$f(x) = b_0 + b_1x + \cdots + b_{n-1}x^{n-1}.$$
Let us prove that the determinant of the circulant matrix $|a_{ij}|_1^n$ is equal to
$f(\varepsilon_1)f(\varepsilon_2)\ldots f(\varepsilon_n)$.
It is easy to verify that for n = 3
$$\begin{vmatrix} 1 & 1 & 1 \\ 1 & \varepsilon_1 & \varepsilon_1^2 \\ 1 & \varepsilon_2 & \varepsilon_2^2 \end{vmatrix}\cdot\begin{vmatrix} b_0 & b_2 & b_1 \\ b_1 & b_0 & b_2 \\ b_2 & b_1 & b_0 \end{vmatrix} = \begin{vmatrix} f(1) & f(1) & f(1) \\ f(\varepsilon_1) & \varepsilon_1f(\varepsilon_1) & \varepsilon_1^2f(\varepsilon_1) \\ f(\varepsilon_2) & \varepsilon_2f(\varepsilon_2) & \varepsilon_2^2f(\varepsilon_2) \end{vmatrix} = f(1)f(\varepsilon_1)f(\varepsilon_2)\begin{vmatrix} 1 & 1 & 1 \\ 1 & \varepsilon_1 & \varepsilon_1^2 \\ 1 & \varepsilon_2 & \varepsilon_2^2 \end{vmatrix}.$$
Taking into account that the Vandermonde determinant $V(1, \varepsilon_1, \varepsilon_2)$ does not
vanish, we have
$$|a_{ij}|_1^3 = f(1)f(\varepsilon_1)f(\varepsilon_2).$$
The proof of the general case is similar.
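The factorization over the roots of unity can be checked numerically; a sketch (Python/NumPy assumed, not part of the book):

```python
import numpy as np

# det of a circulant matrix equals the product of f(eps_k) over the n-th
# roots of unity, with f(x) = b0 + b1 x + ... + b_{n-1} x^{n-1}.
b = np.array([2.0, 5.0, 3.0, 1.0])           # b0, b1, b2, b3
n = len(b)
C = np.array([[b[(i - j) % n] for j in range(n)] for i in range(n)])  # a_ij = b_{i-j}
eps = np.exp(2j * np.pi * np.arange(n) / n)  # the n-th roots of unity
vals = np.polyval(b[::-1], eps)              # f(eps_k); polyval wants highest power first
print(np.isclose(np.linalg.det(C), np.prod(vals).real))  # True
```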
1.6. A tridiagonal matrix is a square matrix $J = \|a_{ij}\|_1^n$, where $a_{ij} = 0$ for
$|i - j| > 1$.
Let $a_i = a_{ii}$ for $i = 1, \ldots, n$, let $b_i = a_{i,i+1}$ and $c_i = a_{i+1,i}$ for $i = 1, \ldots, n-1$.
Then the tridiagonal matrix takes the form
$$\begin{pmatrix} a_1 & b_1 & 0 & \ldots & 0 & 0 & 0 \\ c_1 & a_2 & b_2 & \ldots & 0 & 0 & 0 \\ 0 & c_2 & a_3 & \ldots & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \ldots & a_{n-2} & b_{n-2} & 0 \\ 0 & 0 & 0 & \ldots & c_{n-2} & a_{n-1} & b_{n-1} \\ 0 & 0 & 0 & \ldots & 0 & c_{n-1} & a_n \end{pmatrix}$$
To compute the determinant of this matrix we can make use of the following
recurrent relation. Let $\Delta_0 = 1$ and $\Delta_k = |a_{ij}|_1^k$ for $k \ge 1$.
Expanding $\|a_{ij}\|_1^k$ with respect to the kth row it is easy to verify that
$$\Delta_k = a_k\Delta_{k-1} - b_{k-1}c_{k-1}\Delta_{k-2} \quad \text{for } k \ge 2.$$
The recurrence relation obtained indicates, in particular, that $\Delta_n$ (the determinant
of J) depends not on the numbers $b_i$, $c_j$ themselves but on their products of the
form $b_ic_i$.
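The recurrence gives a linear-time determinant computation; a sketch (Python/NumPy assumed, not part of the book, helper name `tridiag_det` ours):

```python
import numpy as np

# The recurrence D_k = a_k D_{k-1} - b_{k-1} c_{k-1} D_{k-2}, with D_0 = 1, D_1 = a_1.
def tridiag_det(a, b, c):
    d_prev, d = 1.0, a[0]
    for k in range(1, len(a)):
        d_prev, d = d, a[k] * d - b[k - 1] * c[k - 1] * d_prev
    return d

a = [2.0, 3.0, 4.0, 5.0]       # diagonal
b = [1.0, 1.0, 2.0]            # superdiagonal
c = [3.0, 2.0, 1.0]            # subdiagonal
J = np.diag(a) + np.diag(b, 1) + np.diag(c, -1)
print(np.isclose(tridiag_det(a, b, c), np.linalg.det(J)))  # True
```

Note that only the products $b_ic_i$ enter the recurrence, as the text observes.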
The quantity
$$(a_1 \, a_2 \ldots a_n) = \begin{vmatrix} a_1 & -1 & 0 & \ldots & 0 & 0 \\ 1 & a_2 & -1 & \ldots & 0 & 0 \\ 0 & 1 & a_3 & \ldots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \ldots & a_{n-1} & -1 \\ 0 & 0 & 0 & \ldots & 1 & a_n \end{vmatrix}$$
is associated with the continued fraction
$$a_1 + \cfrac{1}{a_2 + \cfrac{1}{a_3 + \cfrac{1}{\ddots + \cfrac{1}{a_{n-1} + \cfrac{1}{a_n}}}}} = \frac{(a_1a_2 \ldots a_n)}{(a_2a_3 \ldots a_n)},$$
since
$$a_1 + \frac{1}{a_2} = \frac{(a_1a_2)}{(a_2)} \quad \text{and} \quad a_1 + \cfrac{1}{(a_2a_3 \ldots a_n)/(a_3a_4 \ldots a_n)} = \frac{(a_1a_2 \ldots a_n)}{(a_2a_3 \ldots a_n)},$$
as one verifies by expanding the determinants with respect to their first
rows.
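Expanding along the last row gives the continuant recurrence $(a_1 \ldots a_k) = a_k(a_1 \ldots a_{k-1}) + (a_1 \ldots a_{k-2})$, and the continued-fraction identity can then be checked exactly with rationals. A sketch (Python stdlib assumed, not part of the book; helper names `bracket` and `cont_frac` are ours):

```python
from fractions import Fraction

# (a1 ... an) via the continuant recurrence D_k = a_k D_{k-1} + D_{k-2}.
def bracket(a):
    d_prev, d = 1, Fraction(a[0])
    for x in a[1:]:
        d_prev, d = d, x * d + d_prev
    return d

# a1 + 1/(a2 + 1/( ... + 1/an)), evaluated from the inside out.
def cont_frac(a):
    v = Fraction(a[-1])
    for x in reversed(a[:-1]):
        v = x + 1 / v
    return v

a = [2, 3, 1, 4]
print(cont_frac(a) == bracket(a) / bracket(a[1:]))  # True
```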
Consider the matrix $\begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}$, where $A_{11}$ and $A_{22}$ are square matrices of
order m and n, respectively. Let D be a square matrix of order m and B a matrix of size $n \times m$.
Theorem.
$$\begin{vmatrix} DA_{11} & DA_{12} \\ A_{21} & A_{22} \end{vmatrix} = |D|\cdot|A| \quad \text{and} \quad \begin{vmatrix} A_{11} & A_{12} \\ A_{21} + BA_{11} & A_{22} + BA_{12} \end{vmatrix} = |A|.$$
Proof.
$$\begin{pmatrix} DA_{11} & DA_{12} \\ A_{21} & A_{22} \end{pmatrix} = \begin{pmatrix} D & 0 \\ 0 & I \end{pmatrix}\begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} A_{11} & A_{12} \\ A_{21} + BA_{11} & A_{22} + BA_{12} \end{pmatrix} = \begin{pmatrix} I & 0 \\ B & I \end{pmatrix}\begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}.$$
Problems
1.1. Let $A = \|a_{ij}\|_1^n$ be skew-symmetric, i.e., $a_{ij} = -a_{ji}$, and let n be odd.
Prove that $|A| = 0$.
1.2. Prove that the determinant of a skew-symmetric matrix of even order does
not change if to all its elements we add the same number.
1.3. Compute the determinant of a skew-symmetric matrix An of order 2n with
each element above the main diagonal being equal to 1.
1.4. Prove that for $n \ge 3$ the terms in the expansion of a determinant of order
n cannot be all positive.
1.5. Let $a_{ij} = a^{|i-j|}$. Compute $|a_{ij}|_1^n$.
1.6. Let
$$\Delta_3 = \begin{vmatrix} 1 & -1 & 0 & 0 \\ x & h & -1 & 0 \\ x^2 & hx & h & -1 \\ x^3 & hx^2 & hx & h \end{vmatrix}$$
and define $\Delta_n$ accordingly. Prove that $\Delta_n = (x + h)^n$.
1.7. Compute $|c_{ij}|_1^n$, where $c_{ij} = a_ib_j$ for $i \ne j$ and $c_{ii} = x_i$.
1.8. Let $a_{i,i+1} = c_i$ for $i = 1, \ldots, n$, the other matrix elements being zero. Prove
that the determinant of the matrix $I + A + A^2 + \cdots + A^{n-1}$ is equal to $(1-c)^{n-1}$,
where $c = c_1 \ldots c_n$.
1.9. Compute $|a_{ij}|_1^n$, where $a_{ij} = (1 - x_iy_j)^{-1}$.
1.10. Let $a_{ij} = \binom{n+i}{j}$. Prove that $|a_{ij}|_0^m = 1$.
1.11. Prove that for any real numbers a, b, c, d, e and f
$$\begin{vmatrix} (a+b)de - (d+e)ab & ab - de & a+b-d-e \\ (b+c)ef - (e+f)bc & bc - ef & b+c-e-f \\ (c+d)fa - (f+a)cd & cd - fa & c+d-f-a \end{vmatrix} = 0.$$
Vandermonde's determinant.
1.12. Compute
$$\begin{vmatrix} 1 & x_1 & \ldots & x_1^{n-2} & (x_2 + x_3 + \cdots + x_n)^{n-1} \\ 1 & x_2 & \ldots & x_2^{n-2} & (x_1 + x_3 + \cdots + x_n)^{n-1} \\ \vdots & \vdots & & \vdots & \vdots \\ 1 & x_n & \ldots & x_n^{n-2} & (x_1 + x_2 + \cdots + x_{n-1})^{n-1} \end{vmatrix}.$$
1.13. Compute
$$\begin{vmatrix} 1 & x_1 & \ldots & x_1^{n-2} & x_2x_3 \ldots x_n \\ 1 & x_2 & \ldots & x_2^{n-2} & x_1x_3 \ldots x_n \\ \vdots & \vdots & & \vdots & \vdots \\ 1 & x_n & \ldots & x_n^{n-2} & x_1x_2 \ldots x_{n-1} \end{vmatrix}.$$
Let $a_{i,j} = \dfrac{1}{(k_i + j - i)!}$ for $k_i + j - i \ge 0$ and $a_{i,j} = 0$ for $k_i + j - i < 0$;
compute $|a_{i,j}|_1^n$.
Let $s_k = x_1^k + \cdots + x_n^k$. Prove that
$$\begin{vmatrix} s_0 & s_1 & \ldots & s_{n-1} & 1 \\ s_1 & s_2 & \ldots & s_n & y \\ \vdots & \vdots & & \vdots & \vdots \\ s_n & s_{n+1} & \ldots & s_{2n-1} & y^n \end{vmatrix} = \prod_{i>j}(x_i - x_j)^2\prod_{i=1}^{n}(y - x_i),$$
and that an analogous determinant in two sets of variables equals
$\prod_{n \ge i > k \ge 1}(x_i - x_k)(y_k - y_i)$.
Find all solutions of the system
$$\sigma_1 + \cdots + \sigma_n = 0, \quad \ldots, \quad \sigma_1^n + \cdots + \sigma_n^n = 0$$
in $\mathbb{C}$.
1.22. Let $\sigma_k(x_0, \ldots, x_n)$ be the kth elementary symmetric function. Set $\sigma_0 = 1$,
$\sigma_k(\widehat{x_i}) = \sigma_k(x_0, \ldots, x_{i-1}, x_{i+1}, \ldots, x_n)$. Prove that if $a_{ij} = \sigma_i(\widehat{x_j})$ then $|a_{ij}|_0^n = \prod_{i<j}(x_i - x_j)$.
Relations among determinants.
1.23. Let $b_{ij} = (-1)^{i+j}a_{ij}$. Prove that $|a_{ij}|_1^n = |b_{ij}|_1^n$.
1.24. Prove that
$$\begin{vmatrix} a_1c_1 & a_2d_1 & a_1c_2 & a_2d_2 \\ a_3c_1 & a_4d_1 & a_3c_2 & a_4d_2 \\ b_1c_3 & b_2d_3 & b_1c_4 & b_2d_4 \\ b_3c_3 & b_4d_3 & b_3c_4 & b_4d_4 \end{vmatrix} = \begin{vmatrix} a_1 & a_2 \\ a_3 & a_4 \end{vmatrix}\cdot\begin{vmatrix} b_1 & b_2 \\ b_3 & b_4 \end{vmatrix}\cdot\begin{vmatrix} c_1 & c_2 \\ c_3 & c_4 \end{vmatrix}\cdot\begin{vmatrix} d_1 & d_2 \\ d_3 & d_4 \end{vmatrix}.$$
1.25. Prove that
$$\begin{vmatrix} a_1 & 0 & 0 & b_1 & 0 & 0 \\ 0 & a_2 & 0 & 0 & b_2 & 0 \\ 0 & 0 & a_3 & 0 & 0 & b_3 \\ b_{11} & b_{12} & b_{13} & a_{11} & a_{12} & a_{13} \\ b_{21} & b_{22} & b_{23} & a_{21} & a_{22} & a_{23} \\ b_{31} & b_{32} & b_{33} & a_{31} & a_{32} & a_{33} \end{vmatrix} = \begin{vmatrix} a_1a_{11} - b_1b_{11} & a_2a_{12} - b_2b_{12} & a_3a_{13} - b_3b_{13} \\ a_1a_{21} - b_1b_{21} & a_2a_{22} - b_2b_{22} & a_3a_{23} - b_3b_{23} \\ a_1a_{31} - b_1b_{31} & a_2a_{32} - b_2b_{32} & a_3a_{33} - b_3b_{33} \end{vmatrix}.$$
1.26. Let $s_k = \sum_{i=1}^{n} a_{ki}$. Prove that
$$\begin{vmatrix} s_1 - a_{11} & \ldots & s_1 - a_{1n} \\ \vdots & & \vdots \\ s_n - a_{n1} & \ldots & s_n - a_{nn} \end{vmatrix} = (-1)^{n-1}(n-1)\begin{vmatrix} a_{11} & \ldots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \ldots & a_{nn} \end{vmatrix}.$$
Compute
$$\begin{vmatrix} \dbinom{m_1}{n} & \dbinom{m_1}{n+1} & \ldots & \dbinom{m_1}{n+k} \\ \vdots & \vdots & & \vdots \\ \dbinom{m_k}{n} & \dbinom{m_k}{n+1} & \ldots & \dbinom{m_k}{n+k} \end{vmatrix}.$$
Let $\Delta_n(k) = |a_{ij}|_0^n$, where $a_{ij} = \dbinom{k+i}{2j}$. Prove that
$$\Delta_n(k) = \frac{k(k+1)\ldots(k+n-1)}{1\cdot3\ldots(2n-1)}\,\Delta_{n-1}(k-1).$$
1.32. Let A and B be square matrices of order n. Prove that $|A|\cdot|B| = \sum_{k=1}^{n}|A_k|\cdot|B_k|$, where the matrices $A_k$ and $B_k$ are obtained from A and B, respectively, by interchanging the respective first and kth columns, i.e., the first
column of A is replaced with the kth column of B and the kth column of B is
replaced with the first column of A.
2. Minors and cofactors
2.1. There are many instances when it is convenient to consider the determinant
of the matrix whose elements stand at the intersection of certain p rows and p
columns of a given matrix A. Such a determinant is called a pth order minor of A.
For convenience we introduce the following notation:
$$A\begin{pmatrix} i_1 & \ldots & i_p \\ k_1 & \ldots & k_p \end{pmatrix} = \begin{vmatrix} a_{i_1k_1} & a_{i_1k_2} & \ldots & a_{i_1k_p} \\ \vdots & \vdots & & \vdots \\ a_{i_pk_1} & a_{i_pk_2} & \ldots & a_{i_pk_p} \end{vmatrix}.$$
Theorem. If $A\begin{pmatrix} i_1 & \ldots & i_p \\ k_1 & \ldots & k_p \end{pmatrix}$ is a basic minor of a matrix A, then the rows of A
are linear combinations of the rows numbered $i_1, \ldots, i_p$ and these rows are linearly
independent.
Proof. The linear independence of the rows numbered $i_1, \ldots, i_p$ is obvious since
the determinant of a matrix with linearly dependent rows vanishes.
The cases when the size of A is $m \times p$ or $p \times m$ are also clear.
It suffices to carry out the proof for the minor $A\begin{pmatrix} 1 & \ldots & p \\ 1 & \ldots & p \end{pmatrix}$. The determinant
$$\begin{vmatrix} a_{11} & \ldots & a_{1p} & a_{1j} \\ \vdots & & \vdots & \vdots \\ a_{p1} & \ldots & a_{pp} & a_{pj} \\ a_{i1} & \ldots & a_{ip} & a_{ij} \end{vmatrix}$$
vanishes for $j \le p$ as well as for $j > p$. Its expansion with respect to the last column
is a relation of the form
$$a_{1j}c_1 + a_{2j}c_2 + \cdots + a_{pj}c_p + a_{ij}c = 0,$$
where the coefficients $c_1, \ldots, c_p, c$ do not depend on j and $c \ne 0$.
2.2.1. Corollary. If $A\begin{pmatrix} i_1 & \ldots & i_p \\ k_1 & \ldots & k_p \end{pmatrix}$ is a basic minor then all rows of A belong to
the linear space spanned by the rows numbered $i_1, \ldots, i_p$; therefore, the rank of A is
equal to the maximal number of its linearly independent rows.
2.2.2. Corollary. The rank of a matrix is also equal to the maximal number
of its linearly independent columns.
2.3. Theorem (The Binet–Cauchy formula). Let A and B be matrices of size
$n \times m$ and $m \times n$, respectively, and $n \le m$. Then
$$\det AB = \sum_{1 \le k_1 < k_2 < \cdots < k_n \le m} A_{k_1 \ldots k_n}B^{k_1 \ldots k_n},$$
where $A_{k_1 \ldots k_n}$ is the minor obtained from the columns of A whose numbers are
$k_1, \ldots, k_n$ and $B^{k_1 \ldots k_n}$ is the minor obtained from the rows of B whose numbers
are $k_1, \ldots, k_n$.
Proof. Let $C = AB$, $c_{ij} = \sum_{k=1}^{m} a_{ik}b_{kj}$. Then
$$\det C = \sum_\sigma(-1)^\sigma\prod_{i=1}^{n}\Big(\sum_{k_i=1}^{m} a_{ik_i}b_{k_i\sigma(i)}\Big) = \sum_{k_1,\ldots,k_n=1}^{m} a_{1k_1}\ldots a_{nk_n}\sum_\sigma(-1)^\sigma b_{k_1\sigma(1)}\ldots b_{k_n\sigma(n)} = \sum_{k_1,\ldots,k_n=1}^{m} a_{1k_1}\ldots a_{nk_n}B^{k_1 \ldots k_n}.$$
The minor $B^{k_1 \ldots k_n}$ is nonzero only if the numbers $k_1, \ldots, k_n$ are distinct; therefore, the summation can be performed over distinct numbers $k_1, \ldots, k_n$. Since
$B^{\tau(k_1) \ldots \tau(k_n)} = (-1)^\tau B^{k_1 \ldots k_n}$ for any permutation $\tau$ of the numbers $k_1, \ldots, k_n$,
then
$$\sum_{k_1,\ldots,k_n=1}^{m} a_{1k_1}\ldots a_{nk_n}B^{k_1 \ldots k_n} = \sum_{k_1<k_2<\cdots<k_n}\Big(\sum_\tau(-1)^\tau a_{1\tau(1)}\ldots a_{n\tau(n)}\Big)B^{k_1 \ldots k_n} = \sum_{k_1<\cdots<k_n} A_{k_1 \ldots k_n}B^{k_1 \ldots k_n}.$$
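The Binet–Cauchy formula is easy to test numerically by summing over all column subsets; a sketch (Python/NumPy assumed, not part of the book):

```python
import numpy as np
from itertools import combinations

# Binet-Cauchy: det(AB) = sum over n-subsets S of {0..m-1} of
# det(A[:, S]) * det(B[S, :]), for A of size n x m, B of size m x n, n <= m.
rng = np.random.default_rng(0)
n, m = 2, 4
A = rng.integers(-3, 4, (n, m)).astype(float)
B = rng.integers(-3, 4, (m, n)).astype(float)
total = sum(np.linalg.det(A[:, S]) * np.linalg.det(B[S, :])
            for S in map(list, combinations(range(m), n)))
print(np.isclose(np.linalg.det(A @ B), total))  # True
```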
2.4. Consider the expansion
$$(1) \qquad |a_{ij}|_1^n = \sum_{j=1}^{n}(-1)^{i+j}a_{ij}M_{ij},$$
where $M_{ij}$ is the determinant of the matrix obtained from the matrix $A = \|a_{ij}\|_1^n$
by deleting its ith row and jth column. The number $A_{ij} = (-1)^{i+j}M_{ij}$ is called
the cofactor of the element $a_{ij}$ in A.
It is possible to expand a determinant not only with respect to one row, but also
with respect to several rows simultaneously.
Fix rows numbered $i_1, \ldots, i_p$, where $i_1 < i_2 < \cdots < i_p$. In the expansion of
the determinant of A there occur products of terms of the expansion of the minor
$A\begin{pmatrix} i_1 & \ldots & i_p \\ j_1 & \ldots & j_p \end{pmatrix}$ by terms of the expansion of the minor $A\begin{pmatrix} i_{p+1} & \ldots & i_n \\ j_{p+1} & \ldots & j_n \end{pmatrix}$, where $j_1 < \cdots < j_p$; $i_{p+1} < \cdots < i_n$; $j_{p+1} < \cdots < j_n$, and there are no other terms in the expansion
of the determinant of A.
To compute the signs of these products let us shuffle the rows and the columns
so as to place the minor $A\begin{pmatrix} i_1 & \ldots & i_p \\ j_1 & \ldots & j_p \end{pmatrix}$ in the upper left corner. To this end we have to
perform
$$(i_1 - 1) + \cdots + (i_p - p) + (j_1 - 1) + \cdots + (j_p - p) \equiv i + j \pmod 2$$
permutations, where $i = i_1 + \cdots + i_p$, $j = j_1 + \cdots + j_p$.
The number $(-1)^{i+j}A\begin{pmatrix} i_{p+1} & \ldots & i_n \\ j_{p+1} & \ldots & j_n \end{pmatrix}$ is called the cofactor of the minor $A\begin{pmatrix} i_1 & \ldots & i_p \\ j_1 & \ldots & j_p \end{pmatrix}$.
We have proved the following statement:
2.4.1. Theorem (Laplace).
Fix p rows of the matrix A. Then the sum of
products of the minors of order p that belong to these rows by their cofactors is
equal to the determinant of A.
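Laplace's theorem can be verified directly by summing minor-times-cofactor over all column subsets; a sketch (Python/NumPy assumed, not part of the book; 0-based indexing shifts the sign exponent by an even amount, so the parity is unchanged):

```python
import numpy as np
from itertools import combinations

# Laplace expansion along the first p rows: det A equals the sum over
# column p-subsets J of the minor A(rows 1..p, J) times its cofactor.
def laplace_det(A, p):
    n = A.shape[0]
    rows_top, rows_bot = list(range(p)), list(range(p, n))
    total = 0.0
    for J in combinations(range(n), p):
        Jc = [j for j in range(n) if j not in J]
        sign = (-1) ** (sum(rows_top) + sum(J))   # (-1)^(i+j), parity only
        total += sign * np.linalg.det(A[np.ix_(rows_top, list(J))]) \
                      * np.linalg.det(A[np.ix_(rows_bot, Jc)])
    return total

A = np.arange(1.0, 17.0).reshape(4, 4) + np.eye(4)
print(np.isclose(laplace_det(A, 2), np.linalg.det(A)))  # True
```

With p = 1 this reduces to the ordinary expansion with respect to the first row.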
The matrix $\operatorname{adj} A = (A_{ij})^T$ is called the (classical) adjoint of A. Let us prove
that $A\cdot(\operatorname{adj} A) = |A|\cdot I$. To this end let us verify that $\sum_{j=1}^{n} a_{ij}A_{kj} = \delta_{ki}|A|$.
For k = i this formula coincides with (1). If $k \ne i$, replace the kth row of A with
the ith one. The determinant of the resulting matrix vanishes; its expansion with
respect to the kth row results in the desired identity:
$$0 = \sum_{j=1}^{n} a'_{kj}A_{kj} = \sum_{j=1}^{n} a_{ij}A_{kj}.$$
If A is invertible then $A^{-1} = \dfrac{\operatorname{adj} A}{|A|}$.
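The identity $A\cdot(\operatorname{adj} A) = |A|\cdot I$ is easy to check with an explicit cofactor computation; a sketch (Python/NumPy assumed, not part of the book; helper name `adjugate` ours):

```python
import numpy as np

# adj A via cofactors: adj A = (cofactor matrix)^T; then A @ adj(A) = det(A) * I.
def adjugate(A):
    n = A.shape[0]
    C = np.empty_like(A)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [1.0, 0.0, 6.0]])
print(np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(3)))  # True
```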
2.5.1. Theorem.
$$\begin{vmatrix} A_{11} & \ldots & A_{1p} \\ \vdots & & \vdots \\ A_{p1} & \ldots & A_{pp} \end{vmatrix} = |A|^{p-1}\begin{vmatrix} a_{p+1,p+1} & \ldots & a_{p+1,n} \\ \vdots & & \vdots \\ a_{n,p+1} & \ldots & a_{nn} \end{vmatrix}.$$
Proof. For p = 1 the statement coincides with the definition of the cofactor
$A_{11}$. Let p > 1. The identity
$$\begin{pmatrix} A_{11} & \ldots & A_{1p} & A_{1,p+1} & \ldots & A_{1n} \\ \vdots & & \vdots & \vdots & & \vdots \\ A_{p1} & \ldots & A_{pp} & A_{p,p+1} & \ldots & A_{pn} \\ & O & & & I & \end{pmatrix}\begin{pmatrix} a_{11} & \ldots & a_{n1} \\ \vdots & & \vdots \\ a_{1n} & \ldots & a_{nn} \end{pmatrix} = \begin{pmatrix} |A| & & & \\ & \ddots & & O \\ & & |A| & \\ a_{1,p+1} & \ldots & \ldots & a_{n,p+1} \\ \vdots & & & \vdots \\ a_{1n} & \ldots & \ldots & a_{nn} \end{pmatrix}$$
implies that
$$\begin{vmatrix} A_{11} & \ldots & A_{1p} \\ \vdots & & \vdots \\ A_{p1} & \ldots & A_{pp} \end{vmatrix}\cdot|A| = |A|^p\begin{vmatrix} a_{p+1,p+1} & \ldots & a_{p+1,n} \\ \vdots & & \vdots \\ a_{n,p+1} & \ldots & a_{nn} \end{vmatrix}.$$
If $|A| \ne 0$, then dividing by $|A|$ we get the desired conclusion. For $|A| = 0$ the
statement follows from the continuity of both parts of the desired identity with
respect to $a_{ij}$.
Corollary. If A is not invertible then $\operatorname{rank}(\operatorname{adj} A) \le 1$.
Proof. For p = 2 we get
$$\begin{vmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{vmatrix} = |A|\begin{vmatrix} a_{33} & \ldots & a_{3n} \\ \vdots & & \vdots \\ a_{n3} & \ldots & a_{nn} \end{vmatrix} = 0.$$
Besides, the transposition of any two rows of the matrix A induces the same transposition of the columns of the adjoint matrix and all elements of the adjoint matrix
change sign (look what happens with the determinant of A and with the matrix
$A^{-1}$ for an invertible A under such a transposition).
Application of transpositions of rows and columns makes it possible for us to
formulate Theorem 2.5.1 in the following more general form.
2.5.2. Theorem (Jacobi). Let $A = \|a_{ij}\|_1^n$, $(\operatorname{adj} A)^T = \|A_{ij}\|_1^n$, $1 \le p < n$,
and let
$$\sigma = \begin{pmatrix} i_1 & \ldots & i_n \\ j_1 & \ldots & j_n \end{pmatrix}$$
be an arbitrary permutation. Then
$$\begin{vmatrix} A_{i_1j_1} & \ldots & A_{i_1j_p} \\ \vdots & & \vdots \\ A_{i_pj_1} & \ldots & A_{i_pj_p} \end{vmatrix} = (-1)^\sigma\begin{vmatrix} a_{i_{p+1},j_{p+1}} & \ldots & a_{i_{p+1},j_n} \\ \vdots & & \vdots \\ a_{i_n,j_{p+1}} & \ldots & a_{i_n,j_n} \end{vmatrix}\cdot|A|^{p-1}.$$
Proof. Let us consider the matrix $B = \|b_{kl}\|_1^n$, where $b_{kl} = a_{i_kj_l}$. It is clear that
$|B| = (-1)^\sigma|A|$. Since a transposition of any two rows (resp. columns) of A induces
the same transposition of the columns (resp. rows) of the adjoint matrix and all
elements of the adjoint matrix change their signs, $B_{kl} = (-1)^\sigma A_{i_kj_l}$.
Applying Theorem 2.5.1 to the matrix B we get
$$\begin{vmatrix} (-1)^\sigma A_{i_1j_1} & \ldots & (-1)^\sigma A_{i_1j_p} \\ \vdots & & \vdots \\ (-1)^\sigma A_{i_pj_1} & \ldots & (-1)^\sigma A_{i_pj_p} \end{vmatrix} = \big((-1)^\sigma|A|\big)^{p-1}\begin{vmatrix} a_{i_{p+1},j_{p+1}} & \ldots & a_{i_{p+1},j_n} \\ \vdots & & \vdots \\ a_{i_n,j_{p+1}} & \ldots & a_{i_n,j_n} \end{vmatrix}.$$
By dividing both parts of this equality by $((-1)^\sigma)^p$ we obtain the desired statement.
2.6. The rth compound matrix $C_r(A)$ is the matrix of all rth order minors of A;
for a matrix A of order 3, for instance,
$$C_2(A) = \begin{pmatrix} A\begin{pmatrix}1&2\\1&2\end{pmatrix} & A\begin{pmatrix}1&2\\1&3\end{pmatrix} & A\begin{pmatrix}1&2\\2&3\end{pmatrix} \\[2pt] A\begin{pmatrix}1&3\\1&2\end{pmatrix} & A\begin{pmatrix}1&3\\1&3\end{pmatrix} & A\begin{pmatrix}1&3\\2&3\end{pmatrix} \\[2pt] A\begin{pmatrix}2&3\\1&2\end{pmatrix} & A\begin{pmatrix}2&3\\1&3\end{pmatrix} & A\begin{pmatrix}2&3\\2&3\end{pmatrix} \end{pmatrix}.$$
Making use of the Binet–Cauchy formula we can show that $C_r(AB) = C_r(A)C_r(B)$.
For a square matrix A of order n we have the Sylvester identity
$$\det C_r(A) = (\det A)^p, \quad \text{where } p = \binom{n-1}{r-1}.$$
The simplest proof of this statement makes use of the notion of exterior power
(see Theorem 28.5.3).
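Both multiplicativity and the Sylvester identity can be checked numerically; a sketch (Python/NumPy assumed, not part of the book; helper name `compound` ours):

```python
import numpy as np
from itertools import combinations

# The r-th compound matrix of r x r minors, with row/column subsets in
# lexicographic order, and a check that C_r(AB) = C_r(A) C_r(B).
def compound(A, r):
    n = A.shape[0]
    idx = list(combinations(range(n), r))
    return np.array([[np.linalg.det(A[np.ix_(I, J)]) for J in idx] for I in idx])

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
print(np.allclose(compound(A @ B, 2), compound(A, 2) @ compound(B, 2)))  # True
```

For n = 4, r = 2 the Sylvester identity predicts $\det C_2(A) = (\det A)^3$, since $\binom{3}{1} = 3$.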
2.7. Let $1 \le m \le r < n$, $A = \|a_{ij}\|_1^n$. Set $A_n = |a_{ij}|_1^n$, $A_m = |a_{ij}|_1^m$. Consider
the matrix $S^r_{m,n}$ whose elements are the rth order minors of A containing the left
upper corner principal minor $A_m$. The determinant of $S^r_{m,n}$ is a minor of order
$\binom{n-m}{r-m}$ of $C_r(A)$. The determinant of $S^r_{m,n}$ can be expressed in terms of $A_m$ and
$A_n$.
Theorem (Generalized Sylvester's identity, [Mohr, 1953]).
$$(1) \qquad |S^r_{m,n}| = A_m^p\,A_n^q, \quad \text{where } p = \binom{n-m-1}{r-m}, \; q = \binom{n-m-1}{r-m-1}.$$
Proof. Let us prove identity (1) by induction on $n$. For $n = 2$ it is obvious. The matrix $S^r_{0,n}$ coincides with $C_r(A)$ and since $|C_r(A)| = A_n^q$, where $q = \binom{n-1}{r-1}$ (see Theorem 28.5.3), identity (1) holds for $m = 0$ (we assume that $A_0 = 1$). Both sides of (1) are continuous with respect to $a_{ij}$ and, therefore, it suffices to prove the inductive step when $a_{11} \ne 0$.

All minors considered contain the first row and, therefore, from the rows whose numbers are $2, \dots, n$ we can subtract the first row multiplied by an arbitrary factor; this operation does not affect $\det(S^r_{m,n})$. With the help of this operation all elements of the first column of $A$ except $a_{11}$ can be made equal to zero. Let $\overline A$ be the matrix obtained from the new one by striking out the first column and the first row, and let $\overline S^{\,r-1}_{m-1,n-1}$ be the matrix composed of the minors of order $r - 1$ of $\overline A$ containing its left upper corner principal minor of order $m - 1$.

Obviously, $S^r_{m,n} = a_{11}\overline S^{\,r-1}_{m-1,n-1}$, and we can apply the inductive hypothesis to $\overline S^{\,r-1}_{m-1,n-1}$ (the case $m - 1 = 0$ was considered separately). Besides, if $\overline A_{m-1}$ and $\overline A_{n-1}$ are the left upper corner principal minors of orders $m - 1$ and $n - 1$ of $\overline A$, respectively, then $A_m = a_{11}\overline A_{m-1}$ and $A_n = a_{11}\overline A_{n-1}$. Therefore,

$$|S^r_{m,n}| = a_{11}^t\,\overline A_{m-1}^{\,p_1}\overline A_{n-1}^{\,q_1} = a_{11}^{t-p-q}A_m^p A_n^q,$$

where $t = \binom{n-m}{r-m}$, $p_1 = \binom{n-m-1}{r-m} = p$ and $q_1 = \binom{n-m-1}{r-m-1} = q$. Taking into account that $t = p + q$, we get the desired conclusion.
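Mohr's identity can be tested directly on a small integer matrix. The sketch below (the names `det`, `minor` and the way the index sets are enumerated are ours, chosen only for this check) verifies (1) for $n = 4$, $m = 1$, $r = 2$:

```python
from itertools import combinations
from math import comb

def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(len(m)))

def minor(a, rows, cols):
    return det([[a[i][j] for j in cols] for i in rows])

n, m, r = 4, 1, 2
A = [[3, 1, 0, 2], [1, 2, 1, 0], [0, 1, 4, 1], [2, 0, 1, 3]]

A_n = det(A)
A_m = minor(A, range(m), range(m))

# S^r_{m,n}: the r-th order minors of A containing the upper left m x m block,
# indexed by the ways of extending {1, ..., m} to an r-element index set.
index_sets = [tuple(range(m)) + extra for extra in combinations(range(m, n), r - m)]
S = [[minor(A, rows, cols) for cols in index_sets] for rows in index_sets]

p = comb(n - m - 1, r - m)
q = comb(n - m - 1, r - m - 1)
assert det(S) == A_m ** p * A_n ** q
```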
$$\begin{vmatrix}
\varepsilon^{k_1l_1} & \dots & \varepsilon^{k_1l_j}\\
\varepsilon^{k_2l_1} & \dots & \varepsilon^{k_2l_j}\\
\vdots & & \vdots\\
\varepsilon^{k_jl_1} & \dots & \varepsilon^{k_jl_j}
\end{vmatrix} = 0.$$

Then there exist complex numbers $c_1, \dots, c_j$ not all equal to 0 such that the linear combination of the corresponding columns with coefficients $c_1, \dots, c_j$ vanishes, i.e., the numbers $\varepsilon^{k_1}, \dots, \varepsilon^{k_j}$ are roots of the polynomial $c_1x^{l_1} + \dots + c_jx^{l_j}$. Let

$$(1)\qquad (x - \varepsilon^{k_1})\dots(x - \varepsilon^{k_j}) = x^j - b_1x^{j-1} + \dots \pm b_j.$$

Then

$$(2)\qquad b_t = f_t(\varepsilon),$$

where $f_t$ is a polynomial with integer coefficients of $\varepsilon$; hence, $f_t(1) = \binom{j}{t}$. Since $c_{kl} = \pm b_t = \pm f_t(\varepsilon)$, then $|c_{kl}|_0^s = g(\varepsilon)$ and $g(1) = \pm|c'_{kl}|_0^s$, where $c'_{kl} = \binom{j}{t_k-l}$. The polynomial $q(x) = x^{p-1} + \dots + x + 1$ is irreducible over $\mathbb Z$ (see Appendix 2) and $q(\varepsilon) = 0$. Therefore, $g(x) = q(x)\psi(x)$, where $\psi$ is a polynomial with integer coefficients (see Appendix 1). Therefore, $g(1) = q(1)\psi(1) = p\psi(1)$, i.e., $g(1)$ is divisible by $p$.

To get a contradiction it suffices to show that the number $g(1) = |c'_{kl}|_0^s$, where $c'_{kl} = \binom{j}{t_k-l}$, $0 \le t_k \le j + s$ and $0 < j + s \le p - 1$, is not divisible by $p$. It is easy to verify (see Problem 1.27) that $\Delta = |c'_{kl}|_0^s = \pm|a_{kl}|_0^s$, where $a_{kl} = \binom{j+l}{t_k}$. It is also clear that

$$\binom{j+l}{t} = \binom{j+s}{t}\,\frac{(j+l+1-t)\cdots(j+s-t)}{(j+l+1)\cdots(j+s)} = \binom{j+s}{t}\,\frac{\varphi_{s-l}(t)}{(j+l+1)\cdots(j+s)}.$$

Hence,

$$\pm\Delta = \prod_{\nu=0}^{s}\binom{j+s}{t_\nu}\cdot\prod_{l=0}^{s-1}\frac{1}{(j+l+1)\cdots(j+s)}\cdot
\begin{vmatrix}
\varphi_s(t_0) & \varphi_{s-1}(t_0) & \dots & 1\\
\varphi_s(t_1) & \varphi_{s-1}(t_1) & \dots & 1\\
\vdots & \vdots & & \vdots\\
\varphi_s(t_s) & \varphi_{s-1}(t_s) & \dots & 1
\end{vmatrix},$$

and the last determinant is equal to $A_0A_1\cdots A_s\prod_{\mu>\nu}(t_\mu - t_\nu)$, where $A_0, A_1, \dots, A_s$ are the coefficients of the highest powers of $t$ in the polynomials $\varphi_0(t), \varphi_1(t), \dots, \varphi_s(t)$, respectively; here $\varphi_0(t) = 1$ and the degree of $\varphi_i(t)$ is equal to $i$. Clearly, the product obtained has no irreducible fractions with numerators divisible by $p$, because $j + s < p$.
Problems
2.1. Let $A$ be a matrix of size $n \times n$. Prove that

$$|A + \lambda I| = \lambda^n + \sum_{k=1}^{n}S_k\lambda^{n-k},$$

where $S_k$ is the sum of all $\binom nk$ principal $k$th order minors of $A$.
2.2. Prove that

$$\begin{vmatrix}
a_{11} & \dots & a_{1n} & x_1\\
\vdots & & \vdots & \vdots\\
a_{n1} & \dots & a_{nn} & x_n\\
y_1 & \dots & y_n & 0
\end{vmatrix} = -\sum_{i,j}x_iy_jA_{ij},$$

where $A_{ij}$ is the cofactor of $a_{ij}$ in $\|a_{ij}\|_1^n$.
2.3. Prove that the sum of principal $k$-minors of $A^TA$ is equal to the sum of squares of all $k$-minors of $A$.
2.4. Prove that

$$\begin{vmatrix}
u_1a_{11} & \dots & u_na_{1n}\\
a_{21} & \dots & a_{2n}\\
\vdots & & \vdots\\
a_{n1} & \dots & a_{nn}
\end{vmatrix} + \dots + \begin{vmatrix}
a_{11} & \dots & a_{1n}\\
\vdots & & \vdots\\
a_{n-1,1} & \dots & a_{n-1,n}\\
u_1a_{n1} & \dots & u_na_{nn}
\end{vmatrix} = (u_1 + \dots + u_n)|A|.$$
Inverse and adjoint matrices
2.5. Let $A$ and $B$ be square matrices of order $n$. Compute

$$\begin{pmatrix}
I & A & C\\
0 & I & B\\
0 & 0 & I
\end{pmatrix}^{-1}.$$
2.6. Prove that the matrix inverse to an invertible upper triangular matrix is
also an upper triangular one.
2.7. Give an example of a matrix of order n whose adjoint has only one nonzero
element and this element is situated in the ith row and jth column for given i and
j.
2.8. Let $x$ and $y$ be columns of length $n$. Prove that

$$\operatorname{adj}(I - xy^T) = xy^T + (1 - y^Tx)I.$$
2.9. Let A be a skew-symmetric matrix of order n. Prove that adj A is a symmetric matrix for odd n and a skew-symmetric one for even n.
2.10. Let An be a skew-symmetric matrix of order n with elements +1 above
the main diagonal. Calculate adj An .
2.11. The matrix $\operatorname{adj}(A - \lambda I)$ can be expressed in the form $\sum_{k=0}^{n-1}\lambda^kA_k$, where $n$ is the order of $A$. Prove that:

a) for any $k$ ($1 \le k \le n-1$) the matrix $A_kA - A_{k-1}$ is a scalar matrix;

b) the matrix $A_{n-s}$ can be expressed as a polynomial of degree $s - 1$ in $A$.
2.12. Find all matrices A with nonnegative elements such that all elements of
A1 are also nonnegative.
2.13. Let $\varepsilon = \exp(2\pi i/n)$ and $A = \|a_{ij}\|_1^n$, where $a_{ij} = \varepsilon^{ij}$. Calculate the matrix $A^{-1}$.
2.14. Calculate the matrix inverse to the Vandermonde matrix V .
3.1. Let

$$P = \begin{pmatrix} A & B\\ C & D\end{pmatrix}$$

be a block matrix with square matrices $A$ and $D$. In order to facilitate the computation of $\det P$ we can factorize the matrix $P$ as follows:

$$(1)\qquad \begin{pmatrix} A & B\\ C & D\end{pmatrix} = \begin{pmatrix} A & 0\\ C & I\end{pmatrix}\begin{pmatrix} I & Y\\ 0 & X\end{pmatrix} = \begin{pmatrix} A & AY\\ C & CY + X\end{pmatrix},$$

i.e., $B = AY$ and $D = CY + X$. If $A$ is invertible, then $Y = A^{-1}B$ and $X = D - CA^{-1}B = (P|A)$, so that

$$(2)\qquad P = \begin{pmatrix} A & 0\\ C & I\end{pmatrix}\begin{pmatrix} I & A^{-1}B\\ 0 & (P|A)\end{pmatrix} = \begin{pmatrix} I & 0\\ CA^{-1} & I\end{pmatrix}\begin{pmatrix} A & 0\\ 0 & (P|A)\end{pmatrix}\begin{pmatrix} I & A^{-1}B\\ 0 & I\end{pmatrix}.$$

Similarly, if $D$ is invertible, then

$$P = \begin{pmatrix} I & BD^{-1}\\ 0 & I\end{pmatrix}\begin{pmatrix} A - BD^{-1}C & 0\\ 0 & D\end{pmatrix}\begin{pmatrix} I & 0\\ D^{-1}C & I\end{pmatrix}.$$

From (2) it also follows that, for invertible $A$,

$$P^{-1} = \begin{pmatrix} A^{-1} + A^{-1}BX^{-1}CA^{-1} & -A^{-1}BX^{-1}\\ -X^{-1}CA^{-1} & X^{-1}\end{pmatrix},\quad\text{where } X = (P|A).$$
Is the above condition $|A| \ne 0$ necessary? The answer is no, but in certain similar situations the answer is yes. If, for instance, $CD^T = -DC^T$, then

$$|P| = |A - BD^{-1}C|\,|D^T| = |AD^T + BC^T|.$$

This equality holds for any invertible matrix $D$. But if

$$A = \begin{pmatrix}1&0\\0&0\end{pmatrix},\quad B = \begin{pmatrix}0&0\\0&1\end{pmatrix},\quad C = \begin{pmatrix}0&1\\0&0\end{pmatrix}\quad\text{and}\quad D = \begin{pmatrix}0&0\\1&0\end{pmatrix},$$

then $CD^T = DC^T = 0$ and $|AD^T + BC^T| = -1 \ne 1 = |P|$.
Let us return to Theorem 3.1.2. The equality $|P| = |AD - CB|$ is a polynomial identity for the elements of the matrix $P$. Therefore, if there exist invertible matrices $A_\varepsilon$ such that $\lim_{\varepsilon\to0}A_\varepsilon = A$ and $A_\varepsilon C = CA_\varepsilon$, then this equality holds for the matrix $A$ as well. In particular, for a row $u$, a column $v$ and a number $a$,

$$\begin{vmatrix} A & v\\ u & a\end{vmatrix} = a\,|A| - u(\operatorname{adj}A)v.$$

3.2. Theorem. $(A|B) = ((A|C)|(B|C))$.

Proof. Let $A$ be divided into nine blocks so that

$$B = \begin{pmatrix} A_{11} & A_{12}\\ A_{21} & A_{22}\end{pmatrix},\qquad C = A_{11};$$

then

$$(A|C) = \begin{pmatrix} A_{22} & A_{23}\\ A_{32} & A_{33}\end{pmatrix} - \begin{pmatrix} A_{21}\\ A_{31}\end{pmatrix}A_{11}^{-1}\,(A_{12}\ \ A_{13}).$$

By the factorization (2) of 3.1,

$$(1)\qquad A = \begin{pmatrix} A_{11} & 0 & 0\\ A_{21} & I & 0\\ A_{31} & 0 & I\end{pmatrix}\begin{pmatrix} I & X_1 & X_2\\ 0 & X_3 & X_4\\ 0 & X_5 & X_6\end{pmatrix},\quad\text{where } (X_1\ X_2) = A_{11}^{-1}(A_{12}\ A_{13})\ \text{and}\ (A|C) = \begin{pmatrix} X_3 & X_4\\ X_5 & X_6\end{pmatrix}.$$

Similarly,

$$(2)\qquad B = \begin{pmatrix} A_{11} & 0\\ A_{21} & I\end{pmatrix}\begin{pmatrix} I & X_1\\ 0 & X_3\end{pmatrix},$$

i.e., $X_3 = (B|C)$. Finally, taking the upper left corner of $A$ to be $B$ we get

$$(3)\qquad A = \begin{pmatrix} A_{11} & A_{12} & 0\\ A_{21} & A_{22} & 0\\ A_{31} & A_{32} & I\end{pmatrix}\begin{pmatrix} I & 0 & *\\ 0 & I & *\\ 0 & 0 & (A|B)\end{pmatrix}.$$

Factoring the second factor of (1) as

$$\begin{pmatrix} I & X_1 & X_2\\ 0 & X_3 & X_4\\ 0 & X_5 & X_6\end{pmatrix} = \begin{pmatrix} I & X_1 & 0\\ 0 & X_3 & 0\\ 0 & X_5 & I\end{pmatrix}\begin{pmatrix} I & 0 & *\\ 0 & I & *\\ 0 & 0 & X_6 - X_5X_3^{-1}X_4\end{pmatrix}$$

and comparing with (3), we get $(A|B) = X_6 - X_5X_3^{-1}X_4 = ((A|C)|X_3) = ((A|C)|(B|C))$.
Problems
3.1. Let u and v be rows of length n, A a square matrix of order n. Prove that
|A + uT v| = |A| + v(adj A)uT .
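The identity of Problem 3.1 (the matrix determinant lemma) can be checked numerically; in the sketch below the helpers `det` and `adj` are ours, introduced only for this check, with `adj(A)[i][j]` the cofactor of the $(j, i)$ entry:

```python
def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(len(m)))

def adj(m):
    # adjoint (adjugate) matrix: adj(m)[i][j] is the cofactor of the (j, i) entry
    n = len(m)
    return [[(-1) ** (i + j) * det([row[:i] + row[i+1:]
                                    for k, row in enumerate(m) if k != j])
             for j in range(n)] for i in range(n)]

A = [[3, 1, 2], [0, 2, 1], [1, 1, 4]]
u = [1, 2, 3]   # rows of length n
v = [2, 0, 1]
n = len(A)

lhs = det([[A[i][j] + u[i] * v[j] for j in range(n)] for i in range(n)])
rhs = det(A) + sum(v[i] * adj(A)[i][j] * u[j] for i in range(n) for j in range(n))
assert lhs == rhs
```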
3.2. Let $A$ be a square matrix. Prove that

$$\begin{vmatrix} I & A\\ A^T & I\end{vmatrix} = 1 - \sum M_1^2 + \sum M_2^2 - \sum M_3^2 + \dots,$$

where $\sum M_k^2$ is the sum of the squares of all $k$-minors of $A$.
4. Symmetric functions, sums $x_1^k + \dots + x_n^k$, and Bernoulli numbers

In this section we will obtain determinant relations for the elementary symmetric functions $\sigma_k(x_1, \dots, x_n)$, the power sums $s_k(x_1, \dots, x_n) = x_1^k + \dots + x_n^k$, and the sums of homogeneous monomials of degree $k$,

$$p_k(x_1, \dots, x_n) = \sum_{i_1+\dots+i_n=k} x_1^{i_1}\dots x_n^{i_n}.$$
4.1. Let $\sigma_k(x_1, \dots, x_n)$ be the $k$th elementary symmetric function, i.e., the coefficient of $x^{n-k}$ in the standard power series expression of the polynomial $(x + x_1)\dots(x + x_n)$. We will assume that $\sigma_k(x_1, \dots, x_n) = 0$ for $k > n$. First of all, let us prove that

$$s_k - s_{k-1}\sigma_1 + s_{k-2}\sigma_2 - \dots + (-1)^k k\sigma_k = 0.$$

The product $s_{k-p}\sigma_p$ consists of terms of the form $x_i^{k-p}(x_{j_1}\dots x_{j_p})$. If $i \in \{j_1, \dots, j_p\}$, then this term cancels the term $x_i^{k-p+1}(x_{j_1}\dots\hat x_i\dots x_{j_p})$ of the product $s_{k-p+1}\sigma_{p-1}$, and if $i \notin \{j_1, \dots, j_p\}$, then it cancels the term $x_i^{k-p-1}(x_ix_{j_1}\dots x_{j_p})$ of the product $s_{k-p-1}\sigma_{p+1}$.
Consider the relations

$$\begin{aligned}
\sigma_1 &= s_1\\
s_1\sigma_1 - 2\sigma_2 &= s_2\\
s_2\sigma_1 - s_1\sigma_2 + 3\sigma_3 &= s_3\\
&\dots\dots\dots\dots\\
s_{k-1}\sigma_1 - s_{k-2}\sigma_2 + \dots + (-1)^{k+1}k\sigma_k &= s_k
\end{aligned}$$

as a system of linear equations for $\sigma_1, \sigma_2, \dots, \sigma_k$. It is easy to see that

$$\sigma_k = \frac{1}{k!}\begin{vmatrix}
s_1 & 1 & 0 & \dots & \dots & 0\\
s_2 & s_1 & 2 & 0 & \dots & 0\\
s_3 & s_2 & s_1 & 3 & \dots & 0\\
\vdots & \vdots & & \ddots & \ddots & \vdots\\
s_{k-1} & s_{k-2} & \dots & \dots & s_1 & k-1\\
s_k & s_{k-1} & \dots & \dots & s_2 & s_1
\end{vmatrix}.$$

Similarly,

$$s_k = \begin{vmatrix}
\sigma_1 & 1 & 0 & \dots & \dots & 0\\
2\sigma_2 & \sigma_1 & 1 & 0 & \dots & 0\\
3\sigma_3 & \sigma_2 & \sigma_1 & 1 & \dots & 0\\
\vdots & \vdots & & \ddots & \ddots & \vdots\\
(k-1)\sigma_{k-1} & \sigma_{k-2} & \dots & \dots & \sigma_1 & 1\\
k\sigma_k & \sigma_{k-1} & \dots & \dots & \sigma_2 & \sigma_1
\end{vmatrix}.$$
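Both determinant expressions can be verified exactly with rational arithmetic. The sketch below (helper names are ours, introduced only for this check) evaluates them for a concrete set of $x_i$ and compares with the direct definitions of $\sigma_k$ and $s_k$:

```python
from fractions import Fraction
from itertools import combinations
from math import factorial, prod

def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(len(m)))

xs = [Fraction(2), Fraction(-1), Fraction(3), Fraction(5), Fraction(7)]

def s(k):
    return sum(x ** k for x in xs)

def sigma(k):
    return sum(prod(c) for c in combinations(xs, k)) if k else Fraction(1)

def sigma_via_det(k):
    # sigma_k = (1/k!) * det of the almost-triangular matrix built from s_1, ..., s_k
    M = [[Fraction(0)] * k for _ in range(k)]
    for i in range(k):
        for j in range(i + 1):
            M[i][j] = s(i + 1 - j)
        if i + 1 < k:
            M[i][i + 1] = Fraction(i + 1)
    return det(M) / factorial(k)

def s_via_det(k):
    # s_k = det of the matrix built from sigma_1, ..., sigma_k
    M = [[Fraction(0)] * k for _ in range(k)]
    for i in range(k):
        M[i][0] = (i + 1) * sigma(i + 1)
        for j in range(1, i + 1):
            M[i][j] = sigma(i + 1 - j)
        if i + 1 < k:
            M[i][i + 1] = Fraction(1)
    return det(M)

for k in range(1, 5):
    assert sigma_via_det(k) == sigma(k)
    assert s_via_det(k) == s(k)
```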
4.2. Let us obtain first a relation between $p_k$ and $\sigma_k$ and then a relation between $p_k$ and $s_k$. It is easy to verify that

$$1 + p_1t + p_2t^2 + p_3t^3 + \dots = (1 + x_1t + (x_1t)^2 + \dots)\dots(1 + x_nt + (x_nt)^2 + \dots)$$
$$= \frac{1}{(1-x_1t)\dots(1-x_nt)} = \frac{1}{1 - \sigma_1t + \sigma_2t^2 - \dots + (-1)^n\sigma_nt^n},$$

i.e.,

$$\begin{aligned}
p_1 - \sigma_1 &= 0\\
p_2 - p_1\sigma_1 + \sigma_2 &= 0\\
p_3 - p_2\sigma_1 + p_1\sigma_2 - \sigma_3 &= 0\\
&\dots\dots\dots\dots
\end{aligned}$$

Therefore,

$$\sigma_k = \begin{vmatrix}
p_1 & 1 & 0 & \dots & 0\\
p_2 & p_1 & 1 & \dots & 0\\
\vdots & \vdots & & \ddots & \vdots\\
p_{k-1} & p_{k-2} & \dots & & 1\\
p_k & p_{k-1} & \dots & & p_1
\end{vmatrix}
\quad\text{and}\quad
p_k = \begin{vmatrix}
\sigma_1 & 1 & 0 & \dots & 0\\
\sigma_2 & \sigma_1 & 1 & \dots & 0\\
\vdots & \vdots & & \ddots & \vdots\\
\sigma_{k-1} & \sigma_{k-2} & \dots & & 1\\
\sigma_k & \sigma_{k-1} & \dots & & \sigma_1
\end{vmatrix}.$$
To get relations between $p_k$ and $s_k$ is a bit more difficult. Consider the function $f(t) = (1-x_1t)\dots(1-x_nt)$. Then

$$-\frac{f'(t)}{f^2(t)} = \left(\frac{1}{f(t)}\right)' = \left(\frac{1}{1-x_1t}\cdots\frac{1}{1-x_nt}\right)' = \left(\frac{x_1}{1-x_1t} + \dots + \frac{x_n}{1-x_nt}\right)\frac{1}{f(t)}.$$

Therefore,

$$-\frac{f'(t)}{f(t)} = \frac{x_1}{1-x_1t} + \dots + \frac{x_n}{1-x_nt} = s_1 + s_2t + s_3t^2 + \dots$$

On the other hand,

$$-\frac{f'(t)}{f(t)} = \left(\frac{1}{f(t)}\right)'f(t) = \frac{p_1 + 2p_2t + 3p_3t^2 + \dots}{1 + p_1t + p_2t^2 + p_3t^3 + \dots},$$

i.e.,

$$(1 + p_1t + p_2t^2 + p_3t^3 + \dots)(s_1 + s_2t + s_3t^2 + \dots) = p_1 + 2p_2t + 3p_3t^2 + \dots$$
Therefore,

$$s_k = (-1)^{k-1}\begin{vmatrix}
p_1 & 1 & 0 & \dots & 0\\
2p_2 & p_1 & 1 & \dots & 0\\
\vdots & \vdots & & \ddots & \vdots\\
(k-1)p_{k-1} & p_{k-2} & \dots & & 1\\
kp_k & p_{k-1} & \dots & & p_1
\end{vmatrix}$$

and

$$p_k = \frac{1}{k!}\begin{vmatrix}
s_1 & -1 & 0 & \dots & 0\\
s_2 & s_1 & -2 & \dots & 0\\
\vdots & \vdots & & \ddots & \vdots\\
s_{k-1} & s_{k-2} & \dots & & -k+1\\
s_k & s_{k-1} & \dots & & s_1
\end{vmatrix}.$$
4.4. Set $S_m(k) = 1^m + 2^m + \dots + (k-1)^m$. Summing the identities

$$(x+1)^n - x^n = \sum_{i=0}^{n-1}\binom{n}{i}x^i\quad\text{for } x = 1, 2, \dots, k-1,$$

we get

$$k^n - 1 = \sum_{i=0}^{n-1}\binom{n}{i}S_i(k).$$

For $n = 1, 2, \dots$ these relations constitute a triangular system of linear equations whose determinant is $1\cdot2\cdots n = n!$; solving it for $S_{n-1}(k)$ by Cramer's rule we get

$$S_{n-1}(k) = \frac{1}{n!}\begin{vmatrix}
1 & 0 & \dots & 0 & k-1\\
1 & \binom21 & \dots & 0 & k^2-1\\
\vdots & \vdots & & \vdots & \vdots\\
1 & \binom n1 & \dots & \binom{n}{n-2} & k^n-1
\end{vmatrix}.$$

To compute the sums of odd powers, notice that

$$x^r(x+1)^r - (x-1)^rx^r = 2x^r\Bigl[\binom r1x^{r-1} + \binom r3x^{r-3} + \binom r5x^{r-5} + \dots\Bigr];$$

summing over $x = 1, \dots, n-1$ we get

$$[n(n-1)]^r = 2\sum_{x=1}^{n-1}\Bigl[\binom r1x^{2r-1} + \binom r3x^{2r-3} + \binom r5x^{2r-5} + \dots\Bigr],$$

i.e.,

$$(1)\qquad [n(n-1)]^{i+1} = \sum_j 2\binom{i+1}{2(i-j)+1}S_{2j+1}(n).$$

For $i = 1, 2, \dots$ these equalities can be expressed in the matrix form

$$\begin{pmatrix}[n(n-1)]^2\\ [n(n-1)]^3\\ \vdots\end{pmatrix} = 2\begin{pmatrix}2 & 0 & 0 & \dots\\ 1 & 3 & 0 & \dots\\ \vdots & \vdots & \ddots & \end{pmatrix}\begin{pmatrix}S_3(n)\\ S_5(n)\\ \vdots\end{pmatrix}.$$

The principal minors of finite order of the matrix obtained are all nonzero and, therefore,

$$\begin{pmatrix}S_3(n)\\ S_5(n)\\ S_7(n)\\ \vdots\end{pmatrix} = \frac12\,\|a_{ij}\|^{-1}\begin{pmatrix}[n(n-1)]^2\\ [n(n-1)]^3\\ [n(n-1)]^4\\ \vdots\end{pmatrix},\quad\text{where } a_{ij} = \binom{i+1}{2(i-j)+1}.$$

The formula obtained implies that $S_{2k+1}(n)$ can be expressed in terms of $n(n-1) = 2u(n)$ and is divisible by $[n(n-1)]^2$.
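The divisibility is a statement about $S_{2k+1}(n)$ as a polynomial in $n$ with rational coefficients; for the first two odd power sums the closed forms $S_3(n) = [n(n-1)]^2/4$ and $S_5(n) = [n(n-1)]^2(2n^2-2n-1)/12$ make this explicit and are easy to check numerically (a small sketch, with `S` a direct summation helper of ours):

```python
def S(m, n):
    # S_m(n) = 1^m + 2^m + ... + (n-1)^m
    return sum(x ** m for x in range(1, n))

for n in range(2, 30):
    u = n * (n - 1)
    assert 4 * S(3, n) == u * u
    assert 12 * S(5, n) == u * u * (2 * u - 1)
```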
To get an expression for $S_{2k}$ let us make use of the identity

$$n^{r+1}(n-1)^r = \sum_{x=1}^{n-1}\bigl[(x+1)^{r+1}x^r - x^{r+1}(x-1)^r\bigr] = \sum_{x=1}^{n-1}x^r\bigl[(x+1)^{r+1} - x(x-1)^r\bigr]$$
$$= \sum_x\Bigl[\Bigl(\binom{r+1}1+\binom r1\Bigr)x^{2r} + \Bigl(\binom{r+1}3+\binom r3\Bigr)x^{2r-2} + \dots\Bigr] + \sum_x\Bigl[\binom r1x^{2r-1} + \binom r3x^{2r-3} + \dots\Bigr].$$

The sums of odd powers can be eliminated with the help of (1): the second sum equals $\frac12[n(n-1)]^r$. As a result we get

$$n^{r+1}(n-1)^r - \frac{[n(n-1)]^r}{2} = \frac{2n-1}{2}\,[n(n-1)]^r = \sum_x\Bigl[\Bigl(\binom{r+1}1+\binom r1\Bigr)x^{2r} + \Bigl(\binom{r+1}3+\binom r3\Bigr)x^{2r-2} + \dots\Bigr],$$

i.e.,

$$\frac{2n-1}{2}\,n^i(n-1)^i = \sum_j\Bigl[\binom{i+1}{2(i-j)+1} + \binom{i}{2(i-j)+1}\Bigr]S_{2j}(n).$$

For $i = 1, 2, \dots$ these equalities can be expressed in matrix form and, since the matrix involved is triangular with nonzero diagonal,

$$\begin{pmatrix}S_2(n)\\ S_4(n)\\ S_6(n)\\ \vdots\end{pmatrix} = \frac{2n-1}{2}\,\|b_{ij}\|^{-1}\begin{pmatrix}n(n-1)\\ [n(n-1)]^2\\ [n(n-1)]^3\\ \vdots\end{pmatrix},\quad\text{where } b_{ij} = \binom{i+1}{2(i-j)+1} + \binom{i}{2(i-j)+1}.$$
4.5. In many theorems of calculus and number theory we encounter the Bernoulli numbers $B_k$, defined from the expansion

$$\frac{t}{e^t-1} = \sum_{k=0}^{\infty}B_k\frac{t^k}{k!}\qquad(\text{for } |t| < 2\pi).$$
Let us prove that

$$(m+1)S_m(n) = \sum_{k=0}^{m}\binom{m+1}{k}B_k\,n^{m+1-k}.$$

To this end let us compute $\frac{t}{e^t-1}(e^{nt}-1)$ in two ways. On the one hand,

$$\frac{t}{e^t-1}(e^{nt}-1) = \sum_{k=0}^\infty\frac{B_kt^k}{k!}\sum_{s=1}^\infty\frac{(nt)^s}{s!} = nt + \sum_{m=1}^\infty\Bigl[\sum_{k=0}^m\binom{m+1}{k}B_kn^{m+1-k}\Bigr]\frac{t^{m+1}}{(m+1)!}.$$

On the other hand,

$$\frac{t}{e^t-1}(e^{nt}-1) = t\,\frac{e^{nt}-1}{e^t-1} = t\sum_{r=0}^{n-1}e^{rt} = nt + \sum_{m=1}^\infty\sum_{r=1}^{n-1}r^m\frac{t^{m+1}}{m!} = nt + \sum_{m=1}^\infty(m+1)S_m(n)\frac{t^{m+1}}{(m+1)!}.$$

Comparing the coefficients of $t^{m+1}$ we get the desired formula.
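The formula can be verified with exact rational arithmetic: compute the $B_k$ from the standard recurrence $\sum_{k=0}^{m}\binom{m+1}{k}B_k = 0$ (a helper of ours, not the book's notation) and compare with direct summation:

```python
from fractions import Fraction
from math import comb

def bernoulli(N):
    # B_0, ..., B_N via sum_{k=0}^{m} C(m+1, k) B_k = 0 for m >= 1, B_0 = 1.
    B = [Fraction(1)]
    for m in range(1, N + 1):
        B.append(-sum(comb(m + 1, k) * B[k] for k in range(m)) / (m + 1))
    return B

B = bernoulli(10)

def S(m, n):
    return sum(x ** m for x in range(1, n))

for m in range(1, 8):
    for n in range(1, 12):
        rhs = sum(comb(m + 1, k) * B[k] * n ** (m + 1 - k) for k in range(m + 1))
        assert (m + 1) * S(m, n) == rhs
```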
Let $b_k = B_k/k!$. Then

$$x = (e^x-1)\sum_{k=0}^\infty b_kx^k = \Bigl(x + \frac{x^2}{2!} + \frac{x^3}{3!} + \dots\Bigr)(1 + b_1x + b_2x^2 + b_3x^3 + \dots),$$

i.e.,

$$\begin{aligned}
b_1 &= -\frac{1}{2!}\\
\frac{b_1}{2!} + b_2 &= -\frac{1}{3!}\\
\frac{b_1}{3!} + \frac{b_2}{2!} + b_3 &= -\frac{1}{4!}\\
&\dots\dots\dots\dots
\end{aligned}$$

Therefore,

$$B_k = k!\,b_k = (-1)^kk!\begin{vmatrix}
1/2! & 1 & 0 & \dots & 0\\
1/3! & 1/2! & 1 & \dots & 0\\
1/4! & 1/3! & 1/2! & \dots & 0\\
\vdots & \vdots & & \ddots & \vdots\\
1/(k+1)! & 1/k! & \dots & & 1/2!
\end{vmatrix}.$$
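The determinant formula is easy to test with `fractions` (the function names below are ours, written only for this check):

```python
from fractions import Fraction
from math import factorial

def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(len(m)))

def bernoulli_det(k):
    # B_k = (-1)^k k! times the almost-triangular determinant above
    M = [[Fraction(0)] * k for _ in range(k)]
    for i in range(k):
        for j in range(i + 1):
            M[i][j] = Fraction(1, factorial(i - j + 2))
        if i + 1 < k:
            M[i][i + 1] = Fraction(1)
    return (-1) ** k * factorial(k) * det(M)

assert bernoulli_det(1) == Fraction(-1, 2)
assert bernoulli_det(2) == Fraction(1, 6)
assert bernoulli_det(3) == 0
assert bernoulli_det(4) == Fraction(-1, 30)
assert bernoulli_det(6) == Fraction(1, 42)
```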
Let $\frac{x}{e^x-1} = -\frac{x}{2} + f(x)$. Then

$$\frac{x}{e^x-1} + \frac{x}{e^{-x}-1} + x = 0,$$

whence $f(-x) = f(x)$, i.e., $f$ is even and $B_3 = B_5 = \dots = 0$. Let $c_k = B_{2k}/(2k)!$. Then

$$x = \Bigl(x + \frac{x^2}{2!} + \frac{x^3}{3!} + \dots\Bigr)\Bigl(1 - \frac{x}{2} + c_1x^2 + c_2x^4 + c_3x^6 + \dots\Bigr).$$

Equating the coefficients of $x^3, x^5, x^7, \dots$ and taking into account that

$$\frac{1}{(2n+1)!} - \frac{1}{2(2n)!} = -\frac{2n-1}{2(2n+1)!},$$

we get

$$\begin{aligned}
c_1 &= \frac{1}{2\cdot3!}\\
\frac{c_1}{3!} + c_2 &= \frac{3}{2\cdot5!}\\
\frac{c_1}{5!} + \frac{c_2}{3!} + c_3 &= \frac{5}{2\cdot7!}\\
&\dots\dots\dots
\end{aligned}$$

Therefore,

$$B_{2k} = (2k)!\,c_k = \frac{(-1)^{k+1}(2k)!}{2}\begin{vmatrix}
1/3! & 1 & 0 & \dots & 0\\
3/5! & 1/3! & 1 & \dots & 0\\
5/7! & 1/5! & 1/3! & \dots & 0\\
\vdots & \vdots & & \ddots & \vdots\\
\dfrac{2k-1}{(2k+1)!} & \dfrac{1}{(2k-1)!} & \dots & & 1/3!
\end{vmatrix}.$$
Solutions
1.1. Since $A^T = -A$ and $n$ is odd, $|A^T| = (-1)^n|A| = -|A|$. On the other hand, $|A^T| = |A|$. Hence $|A| = -|A|$, i.e., $|A| = 0$.
1.2. If $A$ is a skew-symmetric matrix of even order, then

$$|a_{ij} + x| = \begin{vmatrix}
1 & x & \dots & x\\
0 & a_{11}+x & \dots & a_{1n}+x\\
\vdots & \vdots & & \vdots\\
0 & a_{n1}+x & \dots & a_{nn}+x
\end{vmatrix} = \begin{vmatrix}
1 & x & \dots & x\\
-1 & a_{11} & \dots & a_{1n}\\
\vdots & \vdots & & \vdots\\
-1 & a_{n1} & \dots & a_{nn}
\end{vmatrix} = |A| + x\sum_{i,j}A_{ij},$$

where the middle determinant is obtained by subtracting the first row from all the other rows, and the last equality follows by linearity in the first column. Since $n$ is even, $\operatorname{adj}A$ is skew-symmetric (cf. Problem 2.9), so $\sum_{i,j}A_{ij} = 0$ and $|a_{ij}+x| = |A|$.
1.3. Add the first row to and subtract the second row from the rows 3 to 2n. As
a result, we get |An | = |An1 |.
1.4. Suppose that all terms of the expansion of an $n$th order determinant are positive. If the intersection of two rows and two columns of the determinant singles out a matrix $\begin{pmatrix}x & y\\ u & v\end{pmatrix}$, then the expansion of the determinant has terms of the form $xv\cdot t$ and $-yu\cdot t$ and, therefore, $\operatorname{sign}(xv) = -\operatorname{sign}(yu)$. Let $a_i$, $b_i$ and $c_i$ be the first three elements of the $i$th row ($i = 1, 2$). Then $\operatorname{sign}(a_1b_2) = -\operatorname{sign}(a_2b_1)$, $\operatorname{sign}(b_1c_2) = -\operatorname{sign}(b_2c_1)$, and $\operatorname{sign}(c_1a_2) = -\operatorname{sign}(c_2a_1)$. By multiplying these identities we get $\operatorname{sign} p = -\operatorname{sign} p$, where $p = a_1b_1c_1a_2b_2c_2$. Contradiction.
1.5. For all $i \ge 2$ let us subtract the $(i-1)$st row multiplied by $a$ from the $i$th row. As a result we get an upper triangular matrix with diagonal elements $a_{11} = 1$ and $a_{ii} = 1 - a^2$ for $i > 1$. The determinant of this matrix is equal to $(1 - a^2)^{n-1}$.
1.6. Expanding the determinant $\Delta_{n+1}$ with respect to the last column we get $\Delta_{n+1} = x\Delta_n + h\Delta_n = (x + h)\Delta_n$.
1.7. Let us prove that the desired determinant is equal to

$$\prod_i(x_i - a_ib_i)\Bigl(1 + \sum_i\frac{a_ib_i}{x_i - a_ib_i}\Bigr)$$

by induction on $n$. For $n = 2$ this statement is easy to verify. We will carry out the proof of the inductive step for $n = 3$ (in the general case the proof is similar):

$$\begin{vmatrix}
x_1 & a_1b_2 & a_1b_3\\
a_2b_1 & x_2 & a_2b_3\\
a_3b_1 & a_3b_2 & x_3
\end{vmatrix} = \begin{vmatrix}
x_1-a_1b_1 & a_1b_2 & a_1b_3\\
0 & x_2 & a_2b_3\\
0 & a_3b_2 & x_3
\end{vmatrix} + \begin{vmatrix}
a_1b_1 & a_1b_2 & a_1b_3\\
a_2b_1 & x_2 & a_2b_3\\
a_3b_1 & a_3b_2 & x_3
\end{vmatrix}.$$

The first determinant is computed by the inductive hypothesis and to compute the second one we have to take out of the first row the factor $a_1$ and for all $i \ge 2$ subtract from the $i$th row the first row multiplied by $a_i$.
1.8. It is easy to verify that $\det(I - A) = 1 - c$, where $c = c_1\dots c_n$. The matrix $A$ is the matrix of the transformation $Ae_i = c_{i-1}e_{i-1}$ and, therefore, $A^n = c_1\dots c_nI$. Hence,

$$(I + A + \dots + A^{n-1})(I - A) = I - A^n = (1 - c)I$$

and, therefore, $(I - A)^{-1} = (1 - c)^{-1}(I + A + \dots + A^{n-1})$.
1.10. For a fixed $m$ consider the matrices $A_n = \|a_{ij}\|_0^m$, where $a_{ij} = \binom{n+i}{j}$. The matrix $A_0$ is a triangular matrix with diagonal $(1, \dots, 1)$; therefore, $|A_0| = 1$. Besides, $A_{n+1} = A_nB$, where $b_{i,i+1} = 1$ (for $i \le m-1$), $b_{ii} = 1$ and all other elements $b_{ij}$ are zero. Since $|B| = 1$, it follows that $|A_{n+1}| = |A_n| = \dots = |A_0| = 1$.
1.11. Clearly, points $A, B, \dots, F$ with coordinates $(a^2, a), \dots, (f^2, f)$, respectively, lie on a parabola. By Pascal's theorem the intersection points of the pairs of straight lines $AB$ and $DE$, $BC$ and $EF$, $CD$ and $FA$ lie on one straight line. It is not difficult to verify that the coordinates of the intersection point of $AB$ and $DE$ are

$$\left(\frac{(a+b)de - (d+e)ab}{d+e-a-b},\ \frac{de-ab}{d+e-a-b}\right).$$

It remains to note that if points $(x_1, y_1)$, $(x_2, y_2)$ and $(x_3, y_3)$ belong to one straight line, then

$$\begin{vmatrix}
x_1 & y_1 & 1\\
x_2 & y_2 & 1\\
x_3 & y_3 & 1
\end{vmatrix} = 0.$$

Remark. Recall that Pascal's theorem states that the opposite sides of a hexagon inscribed in a 2nd order curve intersect at three points that lie on one line. Its proof can be found in the books [Berger, 1977] and [Reid, 1988].
1.12. Let $s = x_1 + \dots + x_n$. Then the $k$th element of the last column is of the form

$$(s - x_k)^{n-1} = (-x_k)^{n-1} + \sum_{i=0}^{n-2}p_ix_k^i.$$

Therefore, adding to the last column a linear combination of the remaining columns with coefficients $-p_0, \dots, -p_{n-2}$, respectively, we obtain the determinant

$$\begin{vmatrix}
1 & x_1 & \dots & x_1^{n-2} & (-x_1)^{n-1}\\
\vdots & \vdots & & \vdots & \vdots\\
1 & x_n & \dots & x_n^{n-2} & (-x_n)^{n-1}
\end{vmatrix} = (-1)^{n-1}V(x_1, \dots, x_n).$$
1.13. Let $\Delta$ be the required determinant. Multiplying the first row of the corresponding matrix by $x_1$, ..., and the $n$th row by $x_n$ we get

$$\sigma\Delta = \begin{vmatrix}
x_1 & x_1^2 & \dots & x_1^{n-1} & \sigma\\
\vdots & \vdots & & \vdots & \vdots\\
x_n & x_n^2 & \dots & x_n^{n-1} & \sigma
\end{vmatrix},\quad\text{where } \sigma = x_1\dots x_n.$$
On the other hand, expanding $W$ with respect to the first row we get $\det W = \det V_0 + x\det V_1 + \dots + x^n\det V_{n-1}$.
1.16. Let $x_i = in$. Then

$$a_{i1} = x_i,\quad a_{i2} = \frac{x_i(x_i-1)}{2},\quad \dots,\quad a_{ir} = \frac{x_i(x_i-1)\dots(x_i-r+1)}{r!},$$

i.e., in the $k$th column there stand identical polynomials of $k$th degree in $x_i$. Since the determinant does not vary if to one of its columns we add a linear combination of its other columns, the determinant can be reduced to the form $|b_{ik}|_1^r$, where

$$b_{ik} = \frac{x_i^k}{k!} = \frac{n^k}{k!}\,i^k.$$

Therefore,

$$|a_{ik}|_1^r = |b_{ik}|_1^r = n\cdot\frac{n^2}{2!}\cdots\frac{n^r}{r!}\cdot r!\,V(1, 2, \dots, r) = n^{r(r+1)/2},$$

because $V(1, 2, \dots, r) = \prod_{1\le j<i\le r}(i-j) = 2!\,3!\dots(r-1)!$.
1.17. For $i = 1, \dots, n$ let us multiply the $i$th row of the matrix $\|a_{ij}\|_1^n$ by $m_i!$, where $m_i = k_i + n - i$. We obtain the determinant $|b_{ij}|_1^n$, where

$$b_{ij} = \frac{(k_i + n - i)!}{(k_i + j - i)!} = m_i(m_i - 1)\dots(m_i + j + 1 - n).$$

The elements of the $j$th column of $\|b_{ij}\|_1^n$ are identical polynomials of degree $n - j$ in $m_i$, and the coefficients of the highest terms of these polynomials are equal to 1. Therefore, subtracting from every column linear combinations of the preceding columns we can reduce the determinant $|b_{ij}|_1^n$ to a determinant with rows $(m_i^{n-1}, m_i^{n-2}, \dots, 1)$. This determinant is equal to $\prod_{i<j}(m_i - m_j)$. It is also clear that $|a_{ij}|_1^n = |b_{ij}|_1^n(m_1!m_2!\dots m_n!)^{-1}$.
1.18. For $n = 3$ it is easy to verify that

$$\|a_{ij}\|_0^2 = \begin{pmatrix}
1 & 1 & 1\\
x_1 & x_2 & x_3\\
x_1^2 & x_2^2 & x_3^2
\end{pmatrix}^T? $$

More precisely,

$$\|a_{ij}\|_0^2 = \begin{pmatrix}
1 & 1 & 1\\
x_1 & x_2 & x_3\\
x_1^2 & x_2^2 & x_3^2
\end{pmatrix}\begin{pmatrix}
p_1 & p_1x_1 & p_1x_1^2\\
p_2 & p_2x_2 & p_2x_2^2\\
p_3 & p_3x_3 & p_3x_3^2
\end{pmatrix}.$$
The corresponding matrix can be represented in the form of a product of the two matrices

$$\begin{pmatrix}
1 & \dots & 1 & 1\\
x_1 & \dots & x_n & y\\
\vdots & & \vdots & \vdots\\
x_1^n & \dots & x_n^n & y^n
\end{pmatrix}\quad\text{and}\quad\begin{pmatrix}
1 & x_1 & \dots & x_1^{n-1} & 0\\
\vdots & \vdots & & \vdots & \vdots\\
1 & x_n & \dots & x_n^{n-1} & 0\\
0 & 0 & \dots & 0 & 1
\end{pmatrix}$$

and, therefore, it is equal to

$$\prod_i(y - x_i)\prod_{i>j}(x_i - x_j)^2.$$
$$\|a_{ij}\|_0^2 = \begin{pmatrix}
1 & 2x_0 & x_0^2\\
1 & 2x_1 & x_1^2\\
1 & 2x_2 & x_2^2
\end{pmatrix}\begin{pmatrix}
y_0^2 & y_1^2 & y_2^2\\
y_0 & y_1 & y_2\\
1 & 1 & 1
\end{pmatrix};$$

and in the general case the elements of the first matrix are the numbers $\binom nk x_i^k$.
1.21. Let us suppose that there exists a nonzero solution such that the number of pairwise distinct numbers $\lambda_i$ is equal to $r$. By uniting the equal numbers $\lambda_i$ into $r$ groups we get

$$m_1\lambda_1^k + \dots + m_r\lambda_r^k = 0\quad\text{for } k = 1, \dots, n.$$

Let $x_1 = m_1\lambda_1, \dots, x_r = m_r\lambda_r$; then

$$\lambda_1^{k-1}x_1 + \dots + \lambda_r^{k-1}x_r = 0\quad\text{for } k = 1, \dots, n.$$

Taking the first $r$ of these equations we get a system of linear equations for $x_1, \dots, x_r$ and the determinant of this system is $V(\lambda_1, \dots, \lambda_r) \ne 0$. Hence, $x_1 = \dots = x_r = 0$ and, therefore, $\lambda_1 = \dots = \lambda_r = 0$. The contradiction obtained shows that there is only the zero solution.
1.22. Let us carry out the proof by induction on $n$. For $n = 1$ the statement is obvious.

Subtracting the first column of $\|a_{ij}\|_0^n$ from every other column we get a matrix $\|b_{ij}\|_0^n$, where $b_{ij} = \sigma_i(\hat x_j) - \sigma_i(\hat x_0)$ for $j \ge 1$. Now, let us prove that

$$\sigma_k(\hat x_i) - \sigma_k(\hat x_j) = (x_j - x_i)\sigma_{k-1}(\hat x_i, \hat x_j).$$

Indeed,

$$\sigma_k(x_1, \dots, x_n) = \sigma_k(\hat x_i) + x_i\sigma_{k-1}(\hat x_i) = \sigma_k(\hat x_i) + x_i\sigma_{k-1}(\hat x_i, \hat x_j) + x_ix_j\sigma_{k-2}(\hat x_i, \hat x_j)$$

and, therefore,

$$\sigma_k(\hat x_i) + x_i\sigma_{k-1}(\hat x_i, \hat x_j) = \sigma_k(\hat x_j) + x_j\sigma_{k-1}(\hat x_i, \hat x_j).$$

Hence,

$$|b_{ij}|_0^n = (x_0 - x_1)\dots(x_0 - x_n)\,|c_{ij}|_0^{n-1},\quad\text{where } c_{ij} = \sigma_i(\hat x_0, \hat x_j).$$
1.23. Let $k = [n/2]$. Let us multiply by $-1$ the rows $2, 4, \dots, 2k$ of the matrices

$$\begin{pmatrix}
a_1 & a_2 & 0 & 0\\
a_3 & a_4 & 0 & 0\\
0 & 0 & b_1 & b_2\\
0 & 0 & b_3 & b_4
\end{pmatrix}\quad\text{and}\quad\begin{pmatrix}
c_1 & 0 & c_2 & 0\\
0 & d_1 & 0 & d_2\\
c_3 & 0 & c_4 & 0\\
0 & d_3 & 0 & d_4
\end{pmatrix}.$$
Let $s_i = a_{i1} + \dots + a_{in}$. Then

$$|s_i - a_{ij}|_1^n = \begin{vmatrix}
s_1-a_{11} & \dots & s_1-a_{1n} & s_1\\
\vdots & & \vdots & \vdots\\
s_n-a_{n1} & \dots & s_n-a_{nn} & s_n\\
0 & \dots & 0 & 1
\end{vmatrix} = (-1)^n\begin{vmatrix}
a_{11} & \dots & a_{1n} & s_1\\
\vdots & & \vdots & \vdots\\
a_{n1} & \dots & a_{nn} & s_n\\
1 & \dots & 1 & 1
\end{vmatrix} = (-1)^{n-1}(n-1)\,|a_{ij}|_1^n;$$

for the second equality subtract the last column from each of the first $n$ columns and take the factor $-1$ out of each of them, and for the last equality subtract the sum of the first $n$ columns from the last column.
1.27. Since $\binom pq + \binom{p}{q-1} = \binom{p+1}{q}$, by suitably adding columns of a matrix whose rows are of the form

$$\binom nm\ \ \binom{n}{m-1}\ \dots\ \binom{n}{m-k}$$

we can get a matrix whose rows are of the form

$$\binom nm\ \ \binom{n+1}{m}\ \dots\ \binom{n+1}{m-k+1}.$$

And so on.
1.28. In the determinant $\Delta_n(k)$ subtract from the $(i+1)$st row the $i$th row for every $i = n-1, \dots, 1$. As a result, we get $\Delta_n(k) = \Delta'_{n-1}(k)$, where $\Delta'_m(k) = |a'_{ij}|_0^m$ and $a'_{ij} = \binom{k+i}{2j+1}$. Since $\binom{k+i}{2j+1} = \frac{k+i}{2j+1}\binom{k-1+i}{2j}$, it follows that

$$\Delta'_{n-1}(k) = \frac{k(k+1)\dots(k+n-1)}{1\cdot3\dots(2n-1)}\,\nabla_{n-1}(k-1),$$

where $\nabla_{n-1}(k-1) = \bigl|\binom{k-1+i}{2j}\bigr|_0^{n-1}$. In particular,

$$\Delta'_{n-1}(n+1) = \frac{(n+1)(n+2)\dots2n}{1\cdot3\dots(2n-1)}\,D_{n-1} = 2^nD_{n-1},$$

where $D_{n-1} = \bigl|\binom{n+i}{2j}\bigr|_0^{n-1}$, since $(n+1)(n+2)\dots2n = \frac{(2n)!}{n!}$ and $1\cdot3\dots(2n-1) = \frac{(2n)!}{2\cdot4\dots2n}$.
1.30. Let us carry out the proof for $n = 2$. By Problem 1.23, $|a_{ij}|_0^2 = |a'_{ij}|_0^2$, where $a'_{ij} = (-1)^{i+j}a_{ij}$. Let us add to the last column of $\|a'_{ij}\|_0^2$ its penultimate column and to the last row of the matrix obtained add its penultimate row. As a result we get the matrix

$$\begin{pmatrix}
a_0 & -a_1 & \Delta a_1\\
-a_1 & a_2 & -\Delta a_2\\
\Delta a_1 & -\Delta a_2 & \Delta^2a_2
\end{pmatrix}.$$
Continuing these operations, we arrive at the matrix

$$\begin{pmatrix}
a_0 & \Delta a_0 & \Delta^2a_0\\
\Delta a_0 & \Delta^2a_0 & \Delta^3a_0\\
\Delta^2a_0 & \Delta^3a_0 & \Delta^4a_0
\end{pmatrix}.$$

By induction on $k$ it is easy to verify that $b_k = \Delta^ka_0$. In the general case the proof is similar.
1.31. We can represent the matrices $A$ and $B$ in the form

$$A = \begin{pmatrix} P & PX\\ YP & YPX\end{pmatrix}\quad\text{and}\quad B = \begin{pmatrix} WQV & WQ\\ QV & Q\end{pmatrix}.$$

Then

$$|A + B| = \begin{vmatrix} P + WQV & PX + WQ\\ YP + QV & YPX + Q\end{vmatrix} = \begin{vmatrix} P & WQ\\ YP & Q\end{vmatrix}\cdot\begin{vmatrix} I & X\\ V & I\end{vmatrix},$$

since

$$\begin{pmatrix} P & WQ\\ YP & Q\end{pmatrix}\begin{pmatrix} I & X\\ V & I\end{pmatrix} = \begin{pmatrix} P + WQV & PX + WQ\\ YP + QV & YPX + Q\end{pmatrix}.$$

Besides,

$$\begin{vmatrix} P & WQ\\ YP & Q\end{vmatrix} = |P|\,|Q|\,\begin{vmatrix} I & W\\ Y & I\end{vmatrix}.$$
Consider the auxiliary $2n \times 2n$ matrix $C$ composed of the columns of $\|a_{ij}\|_1^n$ and $\|b_{ij}\|_1^n$ and of zero blocks. Expanding the determinant of $C$ with respect to the columns containing the $b$'s (Laplace's theorem) we get

$$|C| = \sum_{k=1}^{n}(-1)^{k+1}\begin{vmatrix}
b_{1k} & a_{12} & \dots & a_{1n}\\
\vdots & \vdots & & \vdots\\
b_{nk} & a_{n2} & \dots & a_{nn}
\end{vmatrix}\cdot\begin{vmatrix}
a_{11} & b_{11} & \dots & \widehat{b_{1k}} & \dots & b_{1n}\\
\vdots & \vdots & & & & \vdots\\
a_{n1} & b_{n1} & \dots & \widehat{b_{nk}} & \dots & b_{nn}
\end{vmatrix},$$

where the hat means that the corresponding column is omitted.
$$B\begin{pmatrix}i_1 & \dots & i_k\\ i_1 & \dots & i_k\end{pmatrix} = \begin{vmatrix}
b_{i_1i_1} & \dots & b_{i_1i_k}\\
\vdots & & \vdots\\
b_{i_ki_1} & \dots & b_{i_ki_k}
\end{vmatrix} = \det\left[\begin{pmatrix}
a_{i_11} & \dots & a_{i_1n}\\
\vdots & & \vdots\\
a_{i_k1} & \dots & a_{i_kn}
\end{pmatrix}\begin{pmatrix}
a_{i_11} & \dots & a_{i_k1}\\
\vdots & & \vdots\\
a_{i_1n} & \dots & a_{i_kn}
\end{pmatrix}\right],$$

and it remains to apply the Binet–Cauchy formula.
2.5. The answer is

$$\begin{pmatrix}
I & -A & AB-C\\
0 & I & -B\\
0 & 0 & I
\end{pmatrix}.$$
2.6. If $i < j$ then, deleting the $i$th row and the $j$th column of the upper triangular matrix, we get an upper triangular matrix with zeros on the diagonal at all places $i$ to $j - 1$.
2.7. Consider the unit matrix of order n 1. Insert a column of zeros between
its (i 1)st and ith columns and then insert a row of zeros between the (j 1)st
and jth rows of the matrix obtained . The minor Mji of the matrix obtained is
equal to 1 and all the other minors are equal to zero.
2.8. Since $x(y^Tx)y^T = (xy^T)(xy^T)$, we have

$$(I - xy^T)\bigl(xy^T + (1 - y^Tx)I\bigr) = (1 - y^Tx)I.$$

Hence,

$$(I - xy^T)^{-1} = xy^T(1 - y^Tx)^{-1} + I.$$

Besides, according to Problem 8.2,

$$\det(I - xy^T) = 1 - \operatorname{tr}(xy^T) = 1 - y^Tx.$$
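The resulting formula $\operatorname{adj}(I - xy^T) = xy^T + (1 - y^Tx)I$ is easy to check with exact arithmetic (the helpers `det` and `adj` below are ours, introduced only for this check):

```python
from fractions import Fraction

def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(len(m)))

def adj(m):
    # adjugate: adj(m)[i][j] is the cofactor of the (j, i) entry
    n = len(m)
    return [[(-1) ** (i + j) * det([row[:i] + row[i+1:]
                                    for k, row in enumerate(m) if k != j])
             for j in range(n)] for i in range(n)]

x = [Fraction(1), Fraction(2), Fraction(-1)]
y = [Fraction(3), Fraction(0), Fraction(1)]
n = len(x)

M = [[Fraction(int(i == j)) - x[i] * y[j] for j in range(n)] for i in range(n)]
yTx = sum(a * b for a, b in zip(y, x))
expected = [[x[i] * y[j] + (1 - yTx) * Fraction(int(i == j)) for j in range(n)]
            for i in range(n)]
assert adj(M) == expected
assert det(M) == 1 - yTx
```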
2.9. By definition $A_{ij} = (-1)^{i+j}\det B$, where $B$ is a matrix of order $n - 1$. Since $A^T = -A$, then $A_{ji} = (-1)^{i+j}\det(-B^T) = (-1)^{n-1}A_{ij}$.
2.10. The answer depends on the parity of $n$. By Problem 1.3 we have $|A_{2k}| = 1$ and, therefore, $\operatorname{adj}A_{2k} = A_{2k}^{-1}$. For $n = 4$ it is easy to verify that

$$\begin{pmatrix}
0 & 1 & 1 & 1\\
-1 & 0 & 1 & 1\\
-1 & -1 & 0 & 1\\
-1 & -1 & -1 & 0
\end{pmatrix}\begin{pmatrix}
0 & -1 & 1 & -1\\
1 & 0 & -1 & 1\\
-1 & 1 & 0 & -1\\
1 & -1 & 1 & 0
\end{pmatrix} = I.$$
For $n = 2k + 1$ all the rows of the matrix $B = \operatorname{adj}A_{2k+1}$ are proportional to one vector $v$. Besides, $b_{11} = |A_{2k}| = 1$ and, therefore, $B$ is a symmetric matrix (cf. Problem 2.9). Therefore,

$$B = \begin{pmatrix}
1 & -1 & 1 & \dots\\
-1 & 1 & -1 & \dots\\
1 & -1 & 1 & \dots\\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix},\quad\text{i.e., } b_{ij} = (-1)^{i+j}.$$
2.11. a) Since $\operatorname{adj}(A - \lambda I)\cdot(A - \lambda I) = \det(A - \lambda I)\cdot I$, we have

$$\Bigl(\sum_{k=0}^{n-1}\lambda^kA_k\Bigr)(A - \lambda I) = \sum_{k=0}^{n-1}\lambda^kA_kA - \sum_{k=1}^{n}\lambda^kA_{k-1} = A_0A - \lambda^nA_{n-1} + \sum_{k=1}^{n-1}\lambda^k(A_kA - A_{k-1}).$$

Since $\det(A - \lambda I)\cdot I$ is a polynomial in $\lambda$ all of whose coefficients are scalar matrices, every matrix $A_kA - A_{k-1}$ ($1 \le k \le n-1$) is a scalar matrix.

b) Since $A_{n-1} = \pm I$ and $A_{k-1} = A_kA - c_kI$ for scalars $c_k$, downward induction shows that $A_{n-s}$ is a polynomial of degree $s - 1$ in $A$.
3.1.

$$|A + u^Tv| = \begin{vmatrix} A & -u^T\\ v & 1\end{vmatrix} = |A|\,(1 + vA^{-1}u^T) = |A| + v(\operatorname{adj}A)u^T.$$

3.2.

$$\begin{vmatrix} I & A\\ A^T & I\end{vmatrix} = |I - A^TA| = (-1)^n|A^TA - I|.$$

It remains to apply the results of Problem 2.1 (for $\lambda = -1$) and of Problem 2.4.
CHAPTER II
LINEAR SPACES
The notion of a linear space appeared much later than the notion of the determinant. Leibniz's share in the creation of this notion is considerable. He was not satisfied with the fact that the language of algebra only allowed one to describe various quantities of the then geometry, but not the positions of points and not the directions of straight lines. Leibniz began to consider sets of points $A_1 \dots A_n$ and assumed that $\{A_1, \dots, A_n\} = \{X_1, \dots, X_n\}$ whenever the lengths of the segments $A_iA_j$ and $X_iX_j$ are equal for all $i$ and $j$. He, certainly, used a somewhat different notation, namely, something like $A_1\dots A_n \cong X_1\dots X_n$; he did not use indices, though.

In these terms the equation $AB \cong AY$ determines the sphere of radius $AB$ and center $A$; the equation $AY \cong BY \cong CY$ determines a straight line perpendicular to the plane $ABC$.
Though Leibniz did consider pairs of points, these pairs did not in any way
correspond to vectors: only the lengths of segments counted, but not their directions
and the pairs AB and BA were not distinguished.
These works of Leibniz remained unpublished for more than 100 years after his death. They were published in 1833, and a prize was assigned for the development of these ideas. In 1845 Möbius informed Grassmann about this prize, and in a year Grassmann presented his paper and collected the prize. Grassmann's book was published, but nobody got interested in it.
An important step in moulding the notion of a vector space was the geometric representation of complex numbers. Calculations with complex numbers
urgently required the justification of their usage and a sufficiently rigorous theory
of them. Already in 17th century John Wallis tried to represent the complex
numbers geometrically, but he failed. During 17991831 six mathematicians independently published papers containing a geometric interpretation of the complex
numbers. Of these, the most influential on mathematicians thought was the paper
by Gauss published in 1831. Gauss himself did not consider a geometric interpretation (which appealed to the Euclidean plane) as sufficiently convincing justification
of the existence of complex numbers because, at that time, he already came to the
development of nonEuclidean geometry.
The decisive step in the creation of the notion of an n-dimensional space was
simultaneously made by two mathematicians Hamilton and Grassmann. Their
approaches were distinct in principle. Also distinct was the impact of their works
on the development of mathematics. The works of Grassmann contained deep
ideas with great influence on the development of algebra, algebraic geometry, and
mathematical physics of the second half of our century. But his books were difficult
to understand and the recognition of the importance of his ideas was far from
immediate.
The development of linear algebra took mainly the road indicated by Hamilton.
Sir William Rowan Hamilton (18051865)
The Irish mathematician and astronomer Sir William Rowan Hamilton, member
of many an academy, was born in 1805 in Dublin. Since the age of three years old
Typeset by AMS-TEX
SOLUTIONS
45
he was raised by his uncle, a minister. By age 13 he had learned 13 languages and
when 16 he read Laplaces Mechanique Celeste.
In 1823, Hamilton entered Trinity College in Dublin and when he graduated
he was offered professorship in astronomy at the University of Dublin and he also
became the Royal astronomer of Ireland. Hamilton gained much publicity for his
theoretical prediction of two previously unknown phenomena in optics that soon
afterwards were confirmed experimentally. In 1837 he became the President of the
Irish Academy of Sciences and in the same year he published his papers in which
complex numbers were introduced as pairs of real numbers.
This discovery was not valued much at first. All mathematicians except, perhaps,
Gauss and Bolyai were quite satisfied with the geometric interpretation of complex
numbers. Only when nonEuclidean geometry was sufficiently wide-spread did the
mathematicians become interested in the interpretation of complex numbers as
pairs of real ones.
Hamilton soon realized the possibilities offered by his discovery. In 1841 he
started to consider sets {a1 , . . . , an }, where the ai are real numbers. This is precisely the idea on which the most common approach to the notion of a linear
space is based. Hamilton was most involved in the study of triples of real numbers: he wanted to get a three-dimensional analogue of complex numbers. His
excitement was transferred to his children. As Hamilton used to recollect, when
he would join them for breakfast they would cry: Well, Papa, can you multiply
triplets? Whereto I was always obliged to reply, with a sad shake of the head: No,
I can only add and subtract them .
These frenzied studies were fruitful. On October 16, 1843, during a walk, Hamilton almost visualized the symbols i, j, k and the relations i2 = j 2 = k 2 = ijk = 1.
The elements of the algebra with unit generated by i, j, k are called quaternions.
For the last 25 years of his life Hamilton worked exclusively with quaternions and
their applications in geometry, mechanics and astronomy. He abandoned his brilliant study in physics and studied, for example, how to raise a quaternion to a
quaternion power. He published two books and more than 100 papers on quaternions. Working with quaternions, Hamilton gave the definitions of inner and vector
products of vectors in three-dimensional space.
Hermann G
unther Grassmann (18091877)
The public side of Hermann Grassmanns life was far from being as brilliant as
the life of Hamilton.
To the end of his life he was a gymnasium teacher in his native town Stettin.
Several times he tried to get a university position but in vain. Hamilton, having
read a book by Grassmann, called him the greatest German genius. Concerning
the same book, 30 years after its publication the publisher wrote to Grassmann:
Your book Die Ausdehnungslehre has been out of print for some time. Since your
work hardly sold at all, roughly 600 copies were used in 1864 as waste paper and
the remaining few odd copies have now been sold out, with the exception of the
one copy in our library.
Grassmann himself thought that his next book would enjoy even lesser success.
Grassmanns ideas began to spread only towards the end of his life. By that time he
lost his contacts with mathematicians and his interest in geometry. The last years
of his life Grassmann was mainly working with Sanscrit. He made a translation of
46
LINEAR SPACES
Rig-Veda (more than 1,000 pages) and made a dictionary for it (about 2,000 pages).
For this he was elected a member of the American Orientalists Society. In modern
studies of Rig-Veda, Grassmanns works is often cited. In 1955, the third edition
of Grassmanns dictionary to Rig-Veda was issued.
Grassmann can be described as a self-taught person. Although he did graduate
from the Berlin University, he only studied philology and theology there. His father
was a teacher of mathematics in Stettin, but Grassmann read his books only as
a student at the University; Grassmann said later that many of his ideas were
borrowed from these books and that he only developed them further.
In 1832 Grassmann actually arrived at the vector form of the laws of mechanics;
this considerably simplified various calculations. He noticed the commutativity and
associativity of the addition of vectors and explicitly distinguished these properties.
Later on, Grassmann expressed his theory in a quite general form for arbitrary
systems with certain properties. This over-generality considerably hindered the
understanding of his books; almost nobody could yet understand the importance
of commutativity, associativity and the distributivity in algebra.
Grassmann defined the geometric product of two vectors as the parallelogram
spanned by these vectors. He considered parallelograms of equal size parallel to
one plane and of equal orientation equivalent. Later on, by analogy, he introduced
the geometric product of r vectors in n-dimensional space. He considered this
product as a geometric object whose coordinates are minors of order r of an r n
matrix consisting of coordinates of given vectors.
In Grassmanns works, the notion of a linear space with all its attributes was
actually constructed. He gave a definition of a subspace and of linear dependence
of vectors.
In 1840s, mathematicians were unprepared to come to grips with Grassmanns
ideas. Grassmann sent his first book to Gauss. In reply he got a notice in which
Gauss thanked him and wrote to the effect that he himself had studied similar
things about half a century before and recently published something on this topic.
Answering Grassmanns request to write a review of his book, Mobius informed
Grassmann that being unable to understand the philosophical part of the book
he could not read it completely. Later on, Mobius said that he knew only one
mathematician who had read through the entirety of Grassmanns book. (This
mathematician was Bretschneider.)
Having won the prize for developing Leibnizs ideas, Grassmann addressed the
Minister of Culture with a request for a university position and his papers were
sent to Kummer for a review. In the review, it was written that the papers lacked
clarity. Grassmanns request was turned down.
In the 1860s and 1870s various mathematicians came, by their own ways, to ideas
similar to Grassmanns ideas. His works got high appreciation by Cremona, Hankel,
Clebsh and Klein, but Grassmann himself was not interested in mathematics any
more.
5. The dual space. The orthogonal complement
Warning. While reading this section the reader should keep in mind that here,
as well as throughout the whole book, we consider finite dimensional spaces only.
For infinite dimensional spaces the majority of the statements of this section are
false.
47
5.1. To a linear space V over a field K we can assign a linear space V whose
elements are linear functions on V , i.e., the maps f : V K such that
f (1 v1 + 2 v2 ) = 1 f (v1 ) + 2 f (v2 ) for any 1 , 2 K and v1 , v2 V.
The space V is called the dual to V .
hk , Aej i =
aij hk , i i = akj .
apk hp , ej i = ajk .
5.3. Let {e_α} and {ε_α} be two bases such that ε_j = Σ_i a_{ij} e_i and ε*_p = Σ_q b_{qp} e*_q.
Then
δ_{pj} = ε*_p(ε_j) = Σ_i a_{ij} ε*_p(e_i) = Σ_{i,q} a_{ij} b_{qp} δ_{qi} = Σ_i a_{ij} b_{ip},  i.e., AB^T = I.
The maps f, g : V → V* constructed from the bases {e_α} and {ε_α} coincide if
f(ε_j) = g(ε_j) for all j, i.e., Σ_i a_{ij} e*_i = Σ_i b_{ij} e*_i and, therefore, A = B = (A^T)^{-1}.
² As is customary nowadays, we will, by abuse of language, briefly write {e_α} to denote the
complete set {e_i : i ∈ I} of vectors of a basis, and hope this will not cause a misunderstanding.
LINEAR SPACES
5.4. Consider the system of linear equations

(1)   f_1(x) = b_1, …, f_m(x) = b_m.
We may assume that the covectors f_1, …, f_k are linearly independent and
f_i = Σ_{j=1}^{k} λ_{ij} f_j for i > k. If x_0 is a solution of (1) then
f_i(x_0) = Σ_{j=1}^{k} λ_{ij} f_j(x_0) for i > k, i.e.,

(2)   b_i = Σ_{j=1}^{k} λ_{ij} b_j for i > k.

Let us prove that if conditions (2) are verified then the system (1) is consistent.
Let us complement the set of covectors f_1, …, f_k to a basis and consider the dual
basis e_1, …, e_n. For a solution we can take x_0 = b_1 e_1 + ⋯ + b_k e_k. The general
solution of the system (1) is of the form x_0 + t_1 e_{k+1} + ⋯ + t_{n−k} e_n, where t_1, …, t_{n−k}
are arbitrary numbers.
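In coordinates, conditions (2) amount to a rank test: appending the column b to the matrix of the system must not create a new row relation. A minimal numerical sketch (the 3×3 matrix is a made-up example, not from the text):

```python
import numpy as np

# Rows f1, f2 are independent; f3 = f1 + f2, so by (2) the system
# is consistent only when b3 = b1 + b2.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 3.0]])   # f3 = f1 + f2

def consistent(A, b):
    # (1) is solvable iff appending b does not raise the rank
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.column_stack([A, b]))

assert consistent(A, np.array([1.0, 2.0, 3.0]))      # b3 = b1 + b2
assert not consistent(A, np.array([1.0, 2.0, 4.0]))  # b3 != b1 + b2
```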
5.4.1. Theorem. If the system (1) is consistent, then it has a solution x =
(x_1, …, x_n), where x_i = Σ_{j=1}^{k} c_{ij} b_j and the numbers c_{ij} do not depend on the b_j.
To prove it, it suffices to consider the coordinates of the vector x0 = b1 e1 + +
bk ek with respect to the initial basis.
5.4.2. Theorem. If f_i(x) = Σ_{j=1}^{n} a_{ij} x_j, where a_{ij} ∈ Q, and the covectors
f_1, …, f_m constitute a basis (in particular it follows that m = n), then the system
(1) has a solution x_i = Σ_{j=1}^{n} c_{ij} b_j, where the numbers c_{ij} are rational and do not
depend on the b_j; this solution is unique.
Proof. Since Ax = b, where A = ‖a_{ij}‖, we have x = A^{-1}b. If the elements of A
are rational numbers, then the elements of A^{-1} are also rational. □
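The proof of 5.4.2 can be checked mechanically: exact Gaussian elimination over Q never leaves the rationals. A sketch using Python's fractions module (the function name and the 2×2 system are illustrative, not from the text):

```python
from fractions import Fraction

def solve_exact(A, b):
    """Gaussian elimination over Q: rational A, b give a rational solution."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

x = solve_exact([[2, 1], [1, 3]], [5, 10])
assert x == [Fraction(1), Fraction(3)]  # exact rational solution
```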
The results of 5.4.1 and 5.4.2 have a somewhat unexpected application.
5.4.3. Theorem. If a rectangle with sides a and b is arbitrarily cut into squares
with sides x_1, …, x_n then x_i/a ∈ Q and x_i/b ∈ Q for all i.
Proof. Figure 1 illustrates the following system of equations:

(3)   x_1 + x_2 = a,  x_3 + x_2 = a,  x_4 + x_2 = a,
      x_4 + x_5 + x_6 = a,  x_6 + x_7 = a,
      x_1 + x_3 + x_4 + x_7 = b,  x_2 + x_5 + x_7 = b,  x_2 + x_6 = b.
Figure 1
A similar system of equations can be written for any other partition of a rectangle
into squares. Notice also that if the system corresponding to a partition has another
solution consisting of positive numbers, then to this solution a partition of the
rectangle into squares can also be assigned; and for any partition we have the
equality of areas x_1² + ⋯ + x_n² = ab.
First, suppose that system (3) has a unique solution. Then
x_i = α_i a + β_i b, where α_i, β_i ∈ Q.
Substituting these values into all equations of system (3) we get identities of the
form p_j a + q_j b = 0, where p_j, q_j ∈ Q. If p_j = q_j = 0 for all j, then system (3)
is consistent for all a and b. Therefore, for any sufficiently small variation of the
numbers a and b, system (3) has a positive solution x_i = α_i a + β_i b; therefore, there
exists the corresponding partition of the rectangle. Hence, for all a and b from
certain intervals we have
(Σ α_i²)a² + 2(Σ α_i β_i)ab + (Σ β_i²)b² = ab.
Thus, Σ α_i² = Σ β_i² = 0 and, therefore, α_i = β_i = 0 for all i. We have got a contradiction;
hence, in one of the identities p_j a + q_j b = 0 one of the numbers p_j and q_j is
nonzero. Thus, b = ra, where r ∈ Q, and x_i = (α_i + rβ_i)a, where α_i + rβ_i ∈ Q.
Now, let us prove that the dimension of the space of solutions of system (3)
cannot be greater than zero. The solutions of (3) are of the form
x_i = α_i a + β_i b + γ_{1i} t_1 + ⋯ + γ_{ki} t_k,
where t_1, …, t_k can take arbitrary values. Therefore, the identity

(4)   Σ_i (α_i a + β_i b + γ_{1i} t_1 + ⋯ + γ_{ki} t_k)² = ab

should be true for all t_1, …, t_k from certain intervals. The left-hand side of (4) is
a quadratic function of t_1, …, t_k. This function is of the form (Σ_i γ_{pi}²)t_p² + …, and,
therefore, it cannot be a constant for all small changes of the numbers t_1, …, t_k. □
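To illustrate Theorem 5.4.3 on system (3): eliminating by hand, the printed equations force b = (9/7)a, and for a = 7, b = 9 they are satisfied by the square sides below. The numbers are derived from the equations as printed (the figure itself is not reproduced here), so treat them as an assumed reading:

```python
from fractions import Fraction as F

# An exact solution of system (3) for a = 7, b = 9 (a hypothetical instance).
a, b = F(7), F(9)
x1, x2, x3, x4, x5, x6, x7 = map(F, [2, 5, 2, 2, 1, 4, 3])

# All eight equations of system (3):
assert x1 + x2 == a and x3 + x2 == a and x4 + x2 == a
assert x4 + x5 + x6 == a and x6 + x7 == a
assert x1 + x3 + x4 + x7 == b and x2 + x5 + x7 == b and x2 + x6 == b

# Equality of areas and rationality of the ratios x_i/a:
xs = [x1, x2, x3, x4, x5, x6, x7]
assert sum(x * x for x in xs) == a * b          # x_1^2 + ... + x_n^2 = ab
assert all(isinstance(x / a, F) for x in xs)    # each x_i/a is rational
```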
5.5. As we have already noted, there is no canonical isomorphism between V
and V*. There is, however, a canonical one-to-one correspondence between the set
e*_{k+1}, …, e*_n does not depend on the choice of a basis in V, and depends only on
the subspace W itself. On the contrary, the linear span of the vectors e*_1, …, e*_k does
depend on the choice of the basis e_1, …, e_n; it can be any k-dimensional subspace of
V* whose intersection with W^⊥ is 0. Indeed, let W_1 be a k-dimensional subspace of
V* with W_1 ∩ W^⊥ = 0. Then (W_1)^⊥ is an (n − k)-dimensional subspace of V whose
intersection with W is 0. Let e_{k+1}, …, e_n be a basis of (W_1)^⊥. Let us complement
it with the help of a basis of W to a basis e_1, …, e_n. Then e*_1, …, e*_k is a basis of
W_1.
Theorem. If A : V → V is a linear operator and AW ⊂ W, then A*W^⊥ ⊂ W^⊥.
6.2. The kernel and the image of A and of the adjoint operator A* are related
as follows.
6.2.1. Theorem. Ker A* = (Im A)^⊥ and Im A* = (Ker A)^⊥.
Proof. The equality A*f = 0 means that f(Ax) = (A*f)(x) = 0 for any x ∈ V,
i.e., f ∈ (Im A)^⊥. Therefore, Ker A* = (Im A)^⊥, and since (A*)* = A, also Ker A =
(Im A*)^⊥. Hence, (Ker A)^⊥ = ((Im A*)^⊥)^⊥ = Im A*. □
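In coordinates with the standard inner product, A* is represented by A^T, and Theorem 6.2.1 says that every vector annihilated by A^T is orthogonal to the column space of A. A numerical sketch with an assumed random matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A[:, 4] = A[:, 0] + A[:, 1]           # force a nontrivial kernel and cokernel

# A basis of Ker A^T from the SVD of A^T
U, s, Vt = np.linalg.svd(A.T)
null_AT = Vt[np.sum(s > 1e-10):]      # rows spanning Ker A^T

# Every vector of Ker A^T is orthogonal to every column of A (i.e. to Im A)
assert np.allclose(null_AT @ A, 0, atol=1e-8)
```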
6.2.2. Theorem (The Fredholm alternative). Let A : V → V be a linear operator;
consider the equations
(1) Ax = y for x, y ∈ V;
(2) A*f = g for f, g ∈ V*;
(3) Ax = 0;
(4) A*f = 0.
Then either equations (1) and (2) are solvable for any right-hand side, and in this
case the solution is unique, or equations (3) and (4) have the same number of
linearly independent solutions x_1, …, x_k and f_1, …, f_k, and in this case the equation (1) (resp. (2)) is solvable if and only if f_1(y) = ⋯ = f_k(y) = 0 (resp.
g(x_1) = ⋯ = g(x_k) = 0).
Proof. Let us show that the Fredholm alternative is essentially a reformulation
of Theorem 6.2.1. Solvability of equations (1) and (2) for any right-hand sides
means that Im A = V and Im A* = V*, i.e., (Ker A*)^⊥ = V and (Ker A)^⊥ = V*
and, therefore, Ker A* = 0 and Ker A = 0. These identities are equivalent since
rank A = rank A*.
If Ker A ≠ 0 then dim Ker A = dim Ker A*, and y ∈ Im A if and only if y ∈
(Ker A*)^⊥, i.e., f_1(y) = ⋯ = f_k(y) = 0. Similarly, g ∈ Im A* if and only if
g(x_1) = ⋯ = g(x_k) = 0. □
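The Fredholm alternative gives a practical solvability test: Ax = y is solvable exactly when y is annihilated by every solution of A^T f = 0. A sketch on an assumed rank-one matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # rank 1, so Ker A^T is one-dimensional
f = np.array([2.0, -1.0])           # A^T f = 0: a basis covector of Ker A^T
assert np.allclose(A.T @ f, 0)

def solvable(A, y):
    return np.linalg.matrix_rank(np.column_stack([A, y])) == np.linalg.matrix_rank(A)

y_good = np.array([1.0, 2.0])       # f(y) = 0  -> solvable
y_bad = np.array([1.0, 0.0])        # f(y) != 0 -> unsolvable
assert f @ y_good == 0 and solvable(A, y_good)
assert f @ y_bad != 0 and not solvable(A, y_bad)
```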
6.3. The image of a linear map A is connected with the solvability of the linear
equation

(1)   Ax = b.

6.3.1. Theorem (Kronecker–Capelli). Equation (1) is solvable if and only if
rank A = rank(A, b), where (A, b) is the matrix obtained from A by appending the column b.
A similar criterion holds for the matrix equation

(2)   C = AXB.
6.3.2. Theorem. Let a = rank A. Then there exist invertible matrices L and
R such that LAR = I_a, where I_a is the unit matrix of order a enlarged with the
help of zeros to make its size the same as that of A.
Proof. Let us consider the map A : Vⁿ → Vᵐ corresponding to the matrix
A with respect to bases e_1, …, e_n and ε_1, …, ε_m in the spaces Vⁿ and Vᵐ,
respectively. Let y_{a+1}, …, y_n be a basis of Ker A and let vectors y_1, …, y_a complement this basis to a basis of Vⁿ. Define a map R : Vⁿ → Vⁿ by setting R(e_i) = y_i.
Then AR(e_i) = Ay_i for i ≤ a and AR(e_i) = 0 for i > a. The vectors x_1 = Ay_1,
…, x_a = Ay_a form a basis of Im A. Let us complement them by vectors x_{a+1}, …,
x_m to a basis of Vᵐ. Define a map L : Vᵐ → Vᵐ by the formula Lx_i = ε_i. Then

LAR(e_i) = ε_i for 1 ≤ i ≤ a;  LAR(e_i) = 0 for i > a.

Therefore, the matrices of the operators L and R with respect to the bases ε and
e, respectively, are the required ones. □
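The proof of Theorem 6.3.2 is constructive; the sketch below follows it, using the SVD only as one convenient way to produce bases of Ker A and of a complement (an implementation choice, not part of the theorem):

```python
import numpy as np

def lar_decomposition(A):
    """Invertible L, R with L @ A @ R = I_a (the construction of 6.3.2)."""
    m, n = A.shape
    U, s, Vt = np.linalg.svd(A)
    a = int(np.sum(s > 1e-10))                # a = rank A
    # columns y_1..y_a complement a basis y_{a+1}..y_n of Ker A
    R = np.column_stack([Vt[:a].T, Vt[a:].T])
    X = A @ R[:, :a]                          # x_i = A y_i span Im A
    Xfull = np.column_stack([X, U[:, a:]])    # extend to a basis of V^m
    L = np.linalg.inv(Xfull)                  # L x_i = e_i
    return L, R

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])              # rank 1
L, R = lar_decomposition(A)
Ia = np.zeros_like(A); Ia[0, 0] = 1.0
assert np.allclose(L @ A @ R, Ia, atol=1e-8)
```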
6.3.3. Theorem. Equation (2) is solvable if and only if one of the following
equivalent conditions holds:
a) there exist matrices Y and Z such that C = AY and C = ZB;
b) rank A = rank(A, C) and rank B = rank \binom{B}{C}, where the matrix
(A, C) is formed from the columns of the matrices A and C, and the matrix \binom{B}{C} is formed
from the rows of the matrices B and C.
Proof. The equivalence of a) and b) is proved along the same lines as Theorem 6.3.1. It is also clear that if C = AXB then we can set Y = XB and Z = AX.
Now, suppose that C = AY and C = ZB. Making use of Theorem 6.3.2, we can
rewrite (2) in the form

D = I_a W I_b, where D = L_A C R_B and W = R_A^{-1} X L_B^{-1}.

The conditions C = AY and C = ZB take the form D = I_a (R_A^{-1} Y R_B) and D =
(L_A Z L_B^{-1}) I_b, respectively. The first identity implies that the last n − a rows of D
are zero, and the second identity implies that the last m − b columns of D are zero.
Therefore, for W we can take the matrix D. □
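When the conditions of Theorem 6.3.3 hold, one explicit solution of C = AXB is X = A⁺CB⁺ with the Moore–Penrose pseudoinverse: from C = AY we get AA⁺C = C, and from C = ZB we get CB⁺B = C, so A(A⁺CB⁺)B = C. A sketch with assumed random data:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 2))
B = rng.standard_normal((3, 5))
X0 = rng.standard_normal((2, 3))
C = A @ X0 @ B                      # solvable by construction

# X = A^+ C B^+ solves C = A X B whenever a solution exists
X = np.linalg.pinv(A) @ C @ np.linalg.pinv(B)
assert np.allclose(A @ X @ B, C, atol=1e-8)
```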
dim Ker A^{n+1} = dim Ker A + Σ_{k=1}^{n} dim(Im A^k ∩ Ker A)
and
dim Im A = dim Im A^{n+1} + Σ_{k=1}^{n} dim(Im A^k ∩ Ker A).
7.1. To a linear map f : V → W and bases e_1, …, e_n and ε_1, …, ε_m of V and W
there corresponds a matrix A = ‖a_{ij}‖ such that f(e_j) = Σ_i a_{ij} ε_i.
Let x be a column (x_1, …, x_n)^T, and let e and ε be the rows (e_1, …, e_n) and
(ε_1, …, ε_m). Then f(ex) = εAx. In what follows a map and the corresponding
matrix will be often denoted by the same letter.
How does the matrix of a map vary under a change of bases? Let e′ = eP and
ε′ = εQ be other bases. Then
f(e′x) = f(ePx) = εAPx = ε′Q^{-1}APx,
i.e., A′ = Q^{-1}AP
is the matrix of f with respect to e′ and ε′. The most important case is that when
V = W and P = Q, in which case
A′ = P^{-1}AP.
Theorem. For a linear operator A the polynomial
|λI − A| = λⁿ + a_{n−1}λ^{n−1} + ⋯ + a_0
does not depend on the choice of a basis.
Proof. |λI − P^{-1}AP| = |P^{-1}(λI − A)P| = |P^{-1}| · |λI − A| · |P| = |λI − A|. □
The polynomial
p(λ) = |λI − A| = λⁿ + a_{n−1}λ^{n−1} + ⋯ + a_0
is called the characteristic polynomial of the operator A; its roots are called the
eigenvalues of A. Clearly, |A| = (−1)ⁿ a_0 and tr A = −a_{n−1} are invariants of A.
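The invariance of the characteristic polynomial is easy to confirm numerically (the random matrices are assumed test data):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
P = rng.standard_normal((4, 4))     # generically invertible
Ap = np.linalg.inv(P) @ A @ P

# |lambda I - A| has the same coefficients as |lambda I - P^{-1} A P|
assert np.allclose(np.poly(A), np.poly(Ap), atol=1e-6)
# In particular det A = (-1)^n a_0 and tr A = -a_{n-1} are invariant:
assert np.isclose(np.trace(A), np.trace(Ap))
assert np.isclose(np.linalg.det(A), np.linalg.det(Ap))
```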
7.2. The majority of general statements on bases are quite obvious. There are,
however, several not so transparent theorems on the possibility of obtaining a basis by
sorting vectors of two systems of linearly independent vectors. Here is one such
theorem.
Theorem ([Green, 1973]). Let x_1, …, x_n and y_1, …, y_n be two bases, 1 ≤ k ≤
n. Then k of the vectors y_1, …, y_n can be swapped with the vectors x_1, …, x_k so
that we again get two bases.
Proof. Take the vectors y_1, …, y_n for a basis of V. For any set of n vectors z_1,
…, z_n from V consider the determinant M(z_1, …, z_n) of the matrix whose rows
are composed of the coordinates of the vectors z_1, …, z_n with respect to the basis
y_1, …, y_n. The vectors z_1, …, z_n constitute a basis if and only if M(z_1, …, z_n) ≠
0. We can express the formula of the expansion of M(x_1, …, x_n) with respect to
the first k rows (Laplace's theorem) in the form

(1)   M(x_1, …, x_n) = Σ_σ ± M_σ(x_1, …, x_k) · M̄_σ(x_{k+1}, …, x_n),

where σ runs over the k-element sets of columns, M_σ is the minor formed by the
first k rows and the columns σ, and M̄_σ is the complementary minor.
Therefore,

(2)   φ_i(g(λ)) = Σ_{k=0}^{n−1} α_k(λ) φ_i(f_k(λ)).

If ∆(λ) ≠ 0 then the system (2) of linear equations for the α_k(λ) can be solved with the
help of Cramer's rule. Therefore, α_k(λ) is a rational function for all λ ∈ C \ Λ,
where Λ is a (finite) set of roots of ∆(λ).
The identity (1) can be expressed in the form p_λ(T)f_0(λ) = 0, where
p_λ(T) = Tⁿ − α_{n−1}(λ)T^{n−1} − ⋯ − α_0(λ)I.
Let λ_1(λ), …, λ_n(λ) be the roots of p_λ. Then
(T − λ_1(λ)I) ⋯ (T − λ_n(λ)I)f_0(λ) = 0.
If λ ∉ Λ, then the vectors f_0(λ), …, f_{n−1}(λ) are linearly independent; in other
words, h(T)f_0(λ) ≠ 0 for any nonzero polynomial h of degree ≤ n − 1. Hence,
w = (T − λ_2(λ)I) ⋯ (T − λ_n(λ)I)f_0(λ) ≠ 0
and (T − λ_1(λ)I)w = 0, i.e., λ_1(λ) is an eigenvalue of T. The proof of the fact that
λ_2(λ), …, λ_n(λ) are eigenvalues of T is similar. Thus, |λ_i(λ)| ≤ ‖T‖_s (cf. 35.1).
The rational functions α_0(λ), …, α_{n−1}(λ) are symmetric functions of the functions λ_1(λ), …, λ_n(λ); the latter are uniformly bounded on C \ Λ and, therefore,
so are the former. Hence, the functions α_0(λ), …,
α_{n−1}(λ) are bounded on C; by Liouville's theorem they are constants: α_i(λ) = α_i.
Let p(T) = Tⁿ − α_{n−1}T^{n−1} − ⋯ − α_0 I. Then p(T)f_0(λ) = 0 for λ ∈ C \ Λ;
hence, p(T)f_0(λ) = 0 for all λ. In particular, p(T)f_0 = 0. Hence, p = p_0 and
p_0(T) = 0. □
Problems
7.1. In Vⁿ there are given vectors e_1, …, e_m. Prove that if m ≥ n + 2 then
there exist numbers α_1, …, α_m, not all of them equal to zero, such that Σ α_i e_i = 0
and Σ α_i = 0.
7.2. A convex linear combination of vectors v_1, …, v_m is an arbitrary vector
x = t_1 v_1 + ⋯ + t_m v_m, where t_i ≥ 0 and Σ t_i = 1.
Prove that in a real space of dimension n any convex linear combination of m
vectors is also a convex linear combination of no more than n + 1 of the given
vectors.
7.3. Prove that if |a_{ii}| > Σ_{k≠i} |a_{ik}| for i = 1, …, n, then A = ‖a_{ij}‖_1^n is an
invertible matrix.
7.4. a) Given vectors e_1, …, e_{n+1} in an n-dimensional Euclidean space, such
that (e_i, e_j) < 0 for i ≠ j, prove that any n of these vectors form a basis.
b) Prove that if e_1, …, e_m are vectors in Rⁿ such that (e_i, e_j) < 0 for i ≠ j,
then m ≤ n + 1.
= (A_1 … A_k) \begin{pmatrix} x_{11} & \dots & x_{1n} \\ \vdots & & \vdots \\ x_{k1} & \dots & x_{kn} \end{pmatrix}.
8.3. Let M_{n,m} be the space of matrices of size n × m. In this space we can
indicate a subspace of dimension nr the rank of whose elements does not exceed r:
for this it suffices to take the matrices whose last n − r rows consist of zeros.
Theorem ([Flanders, 1962]). Let r ≤ m ≤ n, let U ⊂ M_{n,m} be a linear subspace,
and let the maximal rank of elements of U be equal to r. Then dim U ≤ nr.
Proof. Complementing, if necessary, the matrices by zeros, let us assume that
all matrices are of size n × n. In U, select a matrix A of rank r. The transformation
X ↦ PXQ, where P and Q are invertible matrices, sends A to \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix} (see
Theorem 6.3.2). We now perform the same transformation over all matrices of U
and express them in the corresponding block form.

8.3.1. Lemma. If B ∈ U then B = \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & 0 \end{pmatrix}, where B_{21}B_{12} = 0.
Proof. Let B = \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix} ∈ U, where the matrix B_{21} consists of rows
u_1, …, u_{n−r} and the matrix B_{12} consists of columns v_1, …, v_{n−r}. Any minor of
order r + 1 of the matrix tA + B vanishes and, therefore,

∆(t) = \begin{vmatrix} tI_r + B_{11} & v_j \\ u_i & b_{ij} \end{vmatrix} = 0.

The coefficient of t^r is equal to b_{ij} and, therefore, b_{ij} = 0. Hence (see Theorem 3.1.3),
∆(t) = u_i adj(tI_r + B_{11}) v_j.
Since adj(tI_r + B_{11}) = t^{r−1}I_r + …, the coefficient of t^{r−1} of the polynomial
∆(t) is equal to u_i v_j. Hence, u_i v_j = 0 and, therefore, B_{21}B_{12} = 0. □
8.3.2. Lemma. If B, C ∈ U, then B_{21}C_{12} + C_{21}B_{12} = 0.
Proof. Applying Lemma 8.3.1 to the matrix B + C ∈ U we get (B_{21} +
C_{21})(B_{12} + C_{12}) = 0, i.e., B_{21}C_{12} + C_{21}B_{12} = 0. □
We now turn to the proof of Theorem 8.3. Let us consider the map f : U →
M_{r,n} that sends a matrix of U to its first r rows. The kernel of f consists of the matrices of
the form \begin{pmatrix} 0 & 0 \\ B_{21} & 0 \end{pmatrix}, and by Lemma 8.3.2, B_{21}C_{12} = 0 for all matrices C ∈ U.
Further, consider the map g : Ker f → M_{r,n} given by the formula g(B) = B_{21}.
This map is a monomorphism (see 5.6) and, therefore, the space g(Ker f) ⊂ M_{r,n}
Problems
8.1. Let a_{ij} = x_i + y_j. Prove that rank ‖a_{ij}‖_1^n ≤ 2.
8.2. Let A be a square matrix such that rank A = 1. Prove that |A + I| =
(tr A) + 1.
8.3. Prove that rank(A*A) = rank A.
8.4. Let A be an invertible matrix. Prove that if rank \begin{pmatrix} A & B \\ C & D \end{pmatrix} = rank A then
D = CA^{-1}B.
8.5. Let the sizes of matrices A1 and A2 be equal, and let V1 and V2 be the
spaces spanned by the rows of A1 and A2 , respectively; let W1 and W2 be the
spaces spanned by the columns of A1 and A2 , respectively. Prove that the following
conditions are equivalent:
1) rank(A1 + A2 ) = rank A1 + rank A2 ;
2) V1 V2 = 0;
3) W1 W2 = 0.
8.6. Prove that if A and B are matrices of the same size and B T A = 0 then
rank(A + B) = rank A + rank B.
8.7. Let A and B be square matrices of odd order. Prove that if AB = 0 then
at least one of the matrices A + AT and B + B T is not invertible.
8.8 (Generalized Ptolemy theorem). Let X_1 … X_n be a convex polygon inscribed
in a circle. Consider the skew-symmetric matrix A = ‖a_{ij}‖_1^n, where a_{ij} = X_iX_j
for i > j. Prove that rank A = 2.
9. Subspaces. The Gram-Schmidt orthogonalization process
9.1. The dimension of the intersection of two subspaces is related with the
dimension of the space spanned by them via the following relation.
Theorem. dim(V + W) + dim(V ∩ W) = dim V + dim W.
Proof. Let e_1, …, e_r be a basis of V ∩ W; it can be complemented to a basis
e_1, …, e_r, v_1, …, v_{n−r} of V and to a basis e_1, …, e_r, w_1, …, w_{m−r} of W, where
n = dim V and m = dim W. Then
e_1, …, e_r, v_1, …, v_{n−r}, w_1, …, w_{m−r} is a basis of V + W. Therefore,
dim(V + W) + dim(V ∩ W) = (r + (n − r) + (m − r)) + r = n + m = dim V + dim W. □
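The dimension formula can be illustrated numerically: dim(V + W) is the rank of the juxtaposed bases, while dim(V ∩ W) can be computed independently as the nullity of [M_V | −M_W] (the two example subspaces below are assumptions):

```python
import numpy as np

# V = span of the columns of MV, W = span of the columns of MW
MV = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [0.0, 0.0]])          # the xy-plane
MW = np.array([[1.0, 0.0],
               [1.0, 0.0],
               [0.0, 1.0]])          # span{(1,1,0), e3}

dim_V = np.linalg.matrix_rank(MV)                       # 2
dim_W = np.linalg.matrix_rank(MW)                       # 2
dim_sum = np.linalg.matrix_rank(np.hstack([MV, MW]))    # dim(V + W)
# dim(V ∩ W): solutions of MV x = MW y, i.e. the nullity of [MV | -MW]
dim_cap = MV.shape[1] + MW.shape[1] - np.linalg.matrix_rank(np.hstack([MV, -MW]))

assert dim_sum + dim_cap == dim_V + dim_W    # 3 + 1 == 2 + 2
```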
9.2. Let V be a space over R. An inner product in V is a map V × V → R
which to a pair of vectors u, v ∈ V assigns a number (u, v) ∈ R and has the following
properties:
1) (u, v) = (v, u);
2) (λu + μv, w) = λ(u, w) + μ(v, w);
3) (u, u) > 0 for any u ≠ 0; the value |u| = √(u, u) is called the length of u.
A basis e_1, …, e_n of V is called orthonormal (respectively, orthogonal) if
(e_i, e_j) = δ_{ij} (respectively, (e_i, e_j) = 0 for i ≠ j).
The matrix of the passage from an orthonormal basis to another orthonormal
basis is called an orthogonal matrix. The columns of such a matrix A constitute an
orthonormal system of vectors and, therefore,
A^T A = I; hence, A^T = A^{-1} and AA^T = I.
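The orthogonalization process named in this section's title can be sketched as follows (assuming linearly independent input vectors):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        w = v - sum((v @ e) * e for e in basis)   # subtract projections
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

E = gram_schmidt([np.array([1.0, 1.0, 0.0]),
                  np.array([1.0, 0.0, 1.0]),
                  np.array([0.0, 1.0, 1.0])])
# The rows form an orthonormal basis: E E^T = I, so E^T = E^{-1}
assert np.allclose(E @ E.T, np.eye(3), atol=1e-10)
```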
Figure 2
If w and w′ are the orthogonal projections of a unit vector v on W and W^⊥,
respectively, then cos ∠(v, w) = |w| and cos ∠(v, w′) = |w′|, see Figure 2, and,
therefore,
cos ∠(v, W) = sin ∠(v, W^⊥).
Let e_1, …, e_n be an orthonormal basis and v = x_1 e_1 + ⋯ + x_n e_n a unit vector.
Then x_i = cos α_i, where α_i is the angle between v and e_i. Hence, Σ_{i=1}^{n} cos²α_i = 1
and
Σ_{i=1}^{n} sin²α_i = Σ_{i=1}^{n} (1 − cos²α_i) = n − 1.
(1)   Σ_{k=1}^{m} y_{ki}² = d_i²/d²  for i = 1, …, n.

Now, suppose that m d_i² ≤ d_1² + ⋯ + d_n² for i = 1, …, n, and construct an
orthogonal matrix ‖y_{ki}‖_1^n with property (1), where d² = (d_1² + ⋯ + d_n²)/m. We
can now construct the subspace W in an obvious way.
Let us prove by induction on n that if 0 ≤ α_i ≤ 1 for i = 1, …, n and α_1 + ⋯ +
α_n = m, then there exists an orthogonal matrix ‖y_{ki}‖_1^n such that y_{1i}² + ⋯ + y_{mi}² = α_i.
For n = 1 the statement is obvious. Suppose the statement holds for n − 1 and let us
prove it for n. Consider two cases:
a) m ≤ n/2. We can assume that α_1 ≥ ⋯ ≥ α_n. Then α_{n−1} + α_n ≤ 2m/n ≤ 1
and, therefore, there exists an orthogonal matrix A = ‖a_{ki}‖_1^{n−1} such that a_{1i}² +
⋯ + a_{mi}² = α_i for i = 1, …, n − 2 and a_{1,n−1}² + ⋯ + a_{m,n−1}² = α_{n−1} + α_n. Then
the matrix

‖y_{ki}‖_1^n = \begin{pmatrix}
a_{11} & \dots & a_{1,n−2} & β_1 a_{1,n−1} & β_2 a_{1,n−1} \\
\vdots & & \vdots & \vdots & \vdots \\
a_{n−1,1} & \dots & a_{n−1,n−2} & β_1 a_{n−1,n−1} & β_2 a_{n−1,n−1} \\
0 & \dots & 0 & β_2 & −β_1
\end{pmatrix},

where β_1 = √(α_{n−1}/(α_{n−1} + α_n)) and β_2 = √(α_n/(α_{n−1} + α_n)), is orthogonal with respect to its
columns; besides,

Σ_{k=1}^{m} y_{ki}² = α_i for i = 1, …, n − 2,
y_{1,n−1}² + ⋯ + y_{m,n−1}² = β_1²(α_{n−1} + α_n) = α_{n−1},
y_{1n}² + ⋯ + y_{mn}² = α_n.

b) Let m > n/2. Then n − m < n/2 and, therefore, there exists an orthogonal
matrix ‖y_{ki}‖_1^n such that y_{m+1,i}² + ⋯ + y_{n,i}² = 1 − α_i for i = 1, …, n; hence,
y_{1i}² + ⋯ + y_{mi}² = α_i.
9.6.1. Theorem. Suppose a set of k-dimensional subspaces in a space V is
given so that the intersection of any two of the subspaces is of dimension k − 1.
Then either all these subspaces have a common (k − 1)-dimensional subspace or all
of them are contained in the same (k + 1)-dimensional subspace.
Proof. Let V_{ij}^{k−1} = V_i^k ∩ V_j^k and V_{ijl} = V_i^k ∩ V_j^k ∩ V_l^k. First, let us prove that
if V_{123} ≠ V_{12}^{k−1} then V_3^k ⊂ V_1^k + V_2^k. Indeed, if V_{123} ≠ V_{12}^{k−1} then V_{12}^{k−1} and V_{23}^{k−1}
are distinct subspaces of V_2^k and the subspace V_{123} = V_{12} ∩ V_{23} is of dimension
k − 2. In V_{123}, select a basis and complement it by vectors e_{13} and e_{23} to bases of
V_{13} and V_{23}, respectively. Then V_3 = Span(e_{13}, e_{23}, V_{123}), where e_{13} ∈ V_1 and e_{23} ∈ V_2;
hence, V_3 ⊂ V_1 + V_2.
Suppose the subspaces V_1^k, V_2^k and V_3^k have no common (k − 1)-dimensional
subspace, i.e., the subspaces V_{12}^{k−1} and V_{23}^{k−1} do not coincide. The space V_i could
not be contained in the subspace spanned by V_1, V_2 and V_3 only if V_{12i} = V_{12} and
V_{23i} = V_{23}. But then dim V_i ≥ dim(V_{12} + V_{23}) = k + 1, which is impossible. □
If we consider the orthogonal complements to the given subspaces we get the
theorem dual to Theorem 9.6.1.
9.6.2. Theorem. Let a set of m-dimensional subspaces in a space V be given
so that any two of them are contained in an (m + 1)-dimensional subspace. Then
either all of them belong to an (m + 1)-dimensional subspace or all of them have a
common (m − 1)-dimensional subspace.
Problems
9.1. In an n-dimensional space V, there are given m-dimensional subspaces U
and W such that u ⊥ W for some u ∈ U \ 0. Prove that w ⊥ U for some w ∈ W \ 0.
9.2. In an n-dimensional Euclidean space two bases x1 , . . . , xn and y1 , . . . , yn are
given so that (xi , xj ) = (yi , yj ) for all i, j. Prove that there exists an orthogonal
operator U which sends xi to yi .
10. Complexification and realification. Unitary spaces
10.1. The complexification of a linear space V over R is the set of pairs (a, b),
where a, b ∈ V, with the following structure of a linear space over C:
(a, b) + (a_1, b_1) = (a + a_1, b + b_1),
(x + iy)(a, b) = (xa − yb, xb + ya).
Such pairs of vectors can be expressed in the form a + ib. The complexification of
V is denoted by V C .
To an operator A : V V there corresponds an operator AC : V C V C given
by the formula AC (a+ib) = Aa+iAb. The operator AC is called the complexification
of A.
10.2. A linear space V over C is also a linear space over R. The space over R
obtained is called a realification of V . We will denote it by VR .
A linear map A : V W over C can be considered as a linear map AR : VR
WR over R. The map AR is called the realification of the operator A.
If e_1, …, e_n is a basis of V over C then e_1, …, e_n, ie_1, …, ie_n is a basis of V_R. It
is easy to verify that if A = B + iC is the matrix of a linear map A : V → W with
respect to bases e_1, …, e_n and ε_1, …, ε_m, where the matrices B and C are real, then
the matrix of the linear map A_R with respect to the bases e_1, …, e_n, ie_1, …, ie_n
and ε_1, …, ε_m, iε_1, …, iε_m is of the form \begin{pmatrix} B & −C \\ C & B \end{pmatrix}.
Theorem. If A : V → V is a linear map over C then det A_R = |det A|².
Proof.

\begin{pmatrix} I & −iI \\ 0 & I \end{pmatrix} \begin{pmatrix} B & −C \\ C & B \end{pmatrix} \begin{pmatrix} I & iI \\ 0 & I \end{pmatrix} = \begin{pmatrix} B − iC & 0 \\ C & B + iC \end{pmatrix},

and, therefore, det A_R = det(B − iC) · det(B + iC) = |det A|². □
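The identity det A_R = |det A|² is easy to confirm numerically (B and C below are assumed random real matrices):

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))
A = B + 1j * C                                   # a complex operator
AR = np.block([[B, -C], [C, B]])                 # its realification

assert np.isclose(np.linalg.det(AR), abs(np.linalg.det(A)) ** 2)
```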
A linear operator A is called unitary if (Ax, Ay) = (x, y), i.e., a unitary operator
preserves the Hermitian product. If an operator A is unitary then
(x, y) = (Ax, Ay) = (x, A*Ay).
Therefore, A*A = I = AA*, i.e., the rows and the columns of the matrix of A
constitute orthonormal systems of vectors.
A linear operator A is called Hermitian (resp. skew-Hermitian) if A* = A (resp.
A* = −A). Clearly, a linear operator is Hermitian if and only if its matrix A with
respect to an orthonormal basis is Hermitian, i.e., Ā^T = A; and in this case its
matrix is Hermitian with respect to any orthonormal basis.
Hermitian matrices are, as a rule, the analogues of real symmetric matrices in the
complex case. Sometimes complex symmetric or skew-symmetric matrices (that
is, matrices satisfying the condition A^T = A or A^T = −A, respectively) are also
considered.
10.3.1. Theorem. Let A be a complex operator such that (Ax, x) = 0 for all
x. Then A = 0.
Proof. Let us write the equation (Ax, x) = 0 twice: for x = u + v and for x = u + iv.
Taking into account that (Av, v) = (Au, u) = 0, we get (Av, u) + (Au, v) = 0 and
i(Av, u) − i(Au, v) = 0. Therefore, (Au, v) = 0 for all u, v ∈ V. □
Remark. For real operators the identity (Ax, x) ≡ 0 means that A is a skew-symmetric operator (cf. Theorem 21.1.2).
10.3.2. Theorem. Let A be a complex operator such that (Ax, x) ∈ R for any
x. Then A is a Hermitian operator.
Proof. Since (Ax, x) = \overline{(Ax, x)} = (x, Ax), then
((A − A*)x, x) = (Ax, x) − (A*x, x) = (Ax, x) − (x, Ax) = 0.
By Theorem 10.3.1, A − A* = 0. □
10.3.3. Theorem. Any complex operator is uniquely representable in the form
A = B + iC, where B and C are Hermitian operators.
Proof. If A = B + iC, where B and C are Hermitian operators, then A* =
B* − iC* = B − iC and, therefore, 2B = A + A* and 2iC = A − A*. It is easy to
verify that the operators (A + A*)/2 and (A − A*)/2i are Hermitian. □
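Theorem 10.3.3 in matrix form: B = (A + A*)/2 and C = (A − A*)/2i. A sketch with an assumed random complex matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

B = (A + A.conj().T) / 2          # Hermitian part
C = (A - A.conj().T) / (2j)       # also Hermitian

assert np.allclose(B, B.conj().T) and np.allclose(C, C.conj().T)
assert np.allclose(A, B + 1j * C)
```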
SOLUTIONS
λ(y_1). If the vectors y and y_1 are linearly independent then the vectors B^T y^T and
B^T y_1^T are also linearly independent and, therefore, the equalities
λ(y + y_1)(B^T y^T + B^T y_1^T) = B(y^T + y_1^T) = λ(y)B^T y^T + λ(y_1)B^T y_1^T
imply λ(y) = λ(y_1). Thus, x*(y) = λB(x, y), and B(x, y) = λB(y, x) = λ²B(x, y)
and, therefore, λ = ±1.
6.1. By Theorem 6.1,
dim(Im A^k ∩ Ker A) = dim Ker A^{k+1} − dim Ker A^k for any k.
Therefore,
Σ_{k=1}^{n} dim(Im A^k ∩ Ker A) = dim Ker A^{n+1} − dim Ker A.
Contradiction.
7.4. a) Suppose that the vectors e_1, …, e_k are linearly dependent for some k < n + 1.
We may assume that this set of vectors is minimal, i.e., α_1 e_1 + ⋯ + α_k e_k = 0, where
all the numbers α_i are nonzero. Then
0 = (e_{n+1}, Σ α_i e_i) = Σ α_i (e_{n+1}, e_i), where (e_{n+1}, e_i) < 0.
Therefore, among the numbers α_i there are both positive and negative ones. On
the other hand, if
α_1 e_1 + ⋯ + α_p e_p = α′_{p+1} e_{p+1} + ⋯ + α′_k e_k,
where all the numbers α_i, α′_j are positive, then taking the inner product of this equality
with the vector in its right-hand side we get a negative number on the left-hand side
and the inner product of a nonzero vector with itself, i.e., a nonnegative number, on
the right-hand side.
b) Suppose that vectors e_1, …, e_{n+2} in Rⁿ are such that (e_i, e_j) < 0 for i ≠ j.
On the one hand, if λ_1 e_1 + ⋯ + λ_{n+2} e_{n+2} = 0 then all the numbers λ_i are of the
same sign (cf. the solution to heading a)). On the other hand, we can select the numbers
λ_1, …, λ_{n+2} so that Σ λ_i = 0 (see Problem 7.1). Contradiction.
8.1. Let
X = \begin{pmatrix} x_1 & 1 \\ \vdots & \vdots \\ x_n & 1 \end{pmatrix},  Y = \begin{pmatrix} 1 & \dots & 1 \\ y_1 & \dots & y_n \end{pmatrix}.
Then ‖a_{ij}‖_1^n = XY.
8.2. Let e_1 be a vector that generates Im A. Let us complement it to a basis
e_1, …, e_n. The matrix of A with respect to this basis is of the form
A = \begin{pmatrix} a_1 & \dots & a_n \\ 0 & \dots & 0 \\ \vdots & & \vdots \\ 0 & \dots & 0 \end{pmatrix}.
Therefore, tr A = a_1 and |A + I| = 1 + a_1.
8.3. It suffices to show that Ker A* ∩ Im A = 0. If A*v = 0 and v = Aw, then
(v, v) = (Aw, v) = (w, A*v) = 0 and, therefore, v = 0.
8.4. The rows of the matrix (C, D) are linear combinations of the rows of the
matrix (A, B) and, therefore, (C, D) = X(A, B) = (XA, XB), i.e., D = XB =
(CA1 )B.
8.5. Let ri = rank Ai and r = rank(A1 + A2 ). Then dim Vi = dim Wi = ri
and dim(V1 + V2 ) = dim(W1 + W2 ) = r. The equality r1 + r2 = r means that
dim(V1 + V2 ) = dim V1 + dim V2 , i.e., V1 V2 = 0. Similarly, W1 W2 = 0.
8.6. The equality B T A = 0 means that the columns of the matrices A and B are
pair-wise orthogonal. Therefore, the space spanned by the columns of A has only
zero intersection with the space spanned by the columns of B. It remains to make
use of the result of Problem 8.5.
8.7. Suppose A and B are matrices of order 2m + 1. By Sylvester's inequality,
rank A + rank B ≤ rank AB + 2m + 1 = 2m + 1.
Figure 3
Only the factor a_{j2} is negative in (1) and, therefore, (1) is equivalent to Ptolemy's
theorem for the quadrilateral X_1X_2X_iX_j.
9.1. Let U′ be the orthogonal complement of u in U and U_1 = (U′)^⊥. Since
dim U_1 + dim W = (n − (m − 1)) + m = n + 1,
then dim(U_1 ∩ W) ≥ 1. If a nonzero w ∈ W ∩ U_1, then w ⊥ U′ and w ⊥ u (since
u ⊥ W); therefore, w ⊥ U.
9.2. Let us apply the orthogonalization process with the subsequent normalization to vectors x1 , . . . , xn . As a result we get an orthonormal basis e1 , . . . , en . The
vectors x1 , . . . , xn are expressed in terms of e1 , . . . , en and the coefficients only
depend on the inner products (x_i, x_j). Similarly, for the vectors y_1, …, y_n we get
an orthonormal basis ε_1, …, ε_n. The map that sends e_i to ε_i (i = 1, …, n) is the
required one.
10.1. det(λI − A_R) = |det(λI − A)|².
10.2. Let a = a_1 + ia_2 and b = b_1 + ib_2, where a_i, b_i ∈ R. The matrix of the given map
with respect to the basis 1, i is equal to
\begin{pmatrix} a_1 + b_1 & −a_2 + b_2 \\ a_2 + b_2 & a_1 − b_1 \end{pmatrix},
and its determinant is equal to |a|² − |b|².
10.3. Let p = [n/2]. The complex subspace spanned by the vectors e1 + ie2 ,
e3 + ie4 , . . . , e2p1 + ie2p possesses the required property.
CHAPTER III
Therefore,
tr(PAP^{-1}) = tr(P^{-1}PA) = tr A,
i.e., the trace of the matrix of a linear operator does not depend on the choice of a
basis.
The equality tr ABC = tr ACB is not always true. For instance, take
A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, B = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} and C = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix};
then ABC = 0 and ACB = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}.
For the trace of an operator in a Euclidean space we have the following useful
formula.
Theorem. Let e_1, …, e_n be an orthonormal basis. Then
tr A = Σ_{i=1}^{n} (Ae_i, e_i).
Remark. The trace of an operator is invariant but the above definition of the
trace makes use of a basis and, therefore, is not invariant. One can, however, give
an invariant definition of the trace of an operator (see 27.2).
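The trace formula holds for any orthonormal basis, e.g. one produced by a QR factorization (the matrices below are assumed random test data):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4))
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # columns: an orthonormal basis

# tr A = sum of (A e_i, e_i) over the orthonormal basis e_1, ..., e_n
assert np.isclose(np.trace(A), sum(Q[:, i] @ A @ Q[:, i] for i in range(4)))
```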
11.2. A nonzero vector v ∈ V is called an eigenvector of the linear operator
A : V → V if Av = λv, and this number λ is called an eigenvalue of A. Fix λ and
consider the equation Av = λv, i.e., (A − λI)v = 0. This equation has a nonzero
solution v if and only if |A − λI| = 0. Therefore, the eigenvalues of A are the roots of
the polynomial p(λ) = |λI − A|.
The polynomial p(λ) is called the characteristic polynomial of A. This polynomial depends only on the operator itself and does not depend on the choice of the
basis (see 7.1).
Theorem. If Ae_1 = λ_1 e_1, …, Ae_k = λ_k e_k and the numbers λ_1, …, λ_k are
distinct, then e_1, …, e_k are linearly independent.
Proof. Assume the contrary. Selecting a minimal linearly dependent set of
vectors, we can assume that e_k = α_1 e_1 + ⋯ + α_{k−1} e_{k−1}, where α_1 ⋯ α_{k−1} ≠ 0
and the vectors e_1, …, e_{k−1} are linearly independent. Then Ae_k = α_1 λ_1 e_1 + ⋯ +
α_{k−1} λ_{k−1} e_{k−1} and Ae_k = λ_k e_k = α_1 λ_k e_1 + ⋯ + α_{k−1} λ_k e_{k−1}. Hence, λ_1 = λ_k.
Contradiction. □
\begin{pmatrix} \cos φ & −\sin φ \\ \sin φ & \cos φ \end{pmatrix}.
Proof. If ±1 is an eigenvalue of A, we can make use of the same arguments as
for the complex case; therefore, let us assume that the vectors x and Ax are
not parallel for all x. The function φ(x) = ∠(x, Ax), the angle between x and
Ax, is continuous on a compact set, the unit sphere.
Let φ_0 = ∠(x_0, Ax_0) be the minimum of φ(x) and e the vector parallel to the
bisector of the angle between x_0 and Ax_0. Then
φ_0 ≤ ∠(e, Ae) ≤ ∠(e, Ax_0) + ∠(Ax_0, Ae) = φ_0/2 + φ_0/2 = φ_0
and, therefore, Ae belongs to the plane Span(x_0, e). This plane is invariant with
respect to A since Ax_0, Ae ∈ Span(x_0, e). An orthogonal transformation of a plane
is either a rotation or a symmetry through a straight line; the eigenvalues of a
symmetry, however, are equal to ±1 and, therefore, the matrix of the restriction of
A to Span(x_0, e) is of the form
\begin{pmatrix} \cos φ & −\sin φ \\ \sin φ & \cos φ \end{pmatrix}, where sin φ ≠ 0. □
11.4. The eigenvalues of the tridiagonal matrix

J = \begin{pmatrix}
a_1 & b_1 & 0 & \dots & 0 & 0 \\
c_1 & a_2 & b_2 & \dots & 0 & 0 \\
0 & c_2 & a_3 & \dots & 0 & 0 \\
\vdots & & & \ddots & & \vdots \\
0 & 0 & 0 & \dots & a_{n−1} & b_{n−1} \\
0 & 0 & 0 & \dots & c_{n−1} & a_n
\end{pmatrix}, where b_i c_i > 0,
have interesting properties. They are real and of multiplicity one. For J = ‖a_{ij}‖_1^n,
consider the sequence of polynomials
D_k(λ) = |λδ_{ij} − a_{ij}|_1^k,  D_0(λ) = 1.
These polynomials satisfy the recurrence relations
(1)   D_k(λ) = (λ − a_k)D_{k−1}(λ) − b_{k−1}c_{k−1}D_{k−2}(λ)
(cf. 1.6) and, therefore, the characteristic polynomial D_n(λ) depends not on the
numbers b_k, c_k themselves, but on their products. By replacing in J the elements
b_k and c_k by √(b_k c_k) we get a symmetric matrix J_1 with the same characteristic
polynomial. Therefore, the eigenvalues of J are real.
A symmetric matrix has a basis of eigenvectors and, therefore, it remains to
demonstrate that to every eigenvalue of J_1 there corresponds no more than one
eigenvector (x_1, …, x_n). This is also true even for J, i.e., without the assumption
that b_k = c_k. Since
(λ − a_1)x_1 − b_1 x_2 = 0,
−c_1 x_1 + (λ − a_2)x_2 − b_2 x_3 = 0,
...............
−c_{n−2} x_{n−2} + (λ − a_{n−1})x_{n−1} − b_{n−1} x_n = 0,
−c_{n−1} x_{n−1} + (λ − a_n)x_n = 0,
it follows that the change
y_1 = x_1, y_2 = b_1 x_2, …, y_k = b_1 ⋯ b_{k−1} x_k
yields
y_2 = (λ − a_1)y_1,
y_3 = (λ − a_2)y_2 − c_1 b_1 y_1,
..................
y_n = (λ − a_{n−1})y_{n−1} − c_{n−2} b_{n−2} y_{n−2}.
These relations for the y_k coincide with relations (1) for the D_k and, therefore, if y_1 = c =
cD_0(λ), then y_{k+1} = cD_k(λ). Thus the eigenvector (x_1, …, x_n) is uniquely determined
up to proportionality.
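The symmetrization argument of 11.4 can be checked numerically: J and the symmetric J_1 obtained by replacing b_k, c_k with √(b_k c_k) share a (real) spectrum whenever all b_k c_k > 0 (the particular entries below are assumptions):

```python
import numpy as np

a = [1.0, 2.0, 3.0, 4.0]
b = [2.0, 1.0, 3.0]        # superdiagonal
c = [0.5, 4.0, 3.0]        # subdiagonal; b_k * c_k > 0

J = np.diag(a) + np.diag(b, 1) + np.diag(c, -1)
s = np.sqrt(np.multiply(b, c))
J1 = np.diag(a) + np.diag(s, 1) + np.diag(s, -1)   # symmetric, same char poly

ev = np.sort(np.linalg.eigvals(J))
assert np.allclose(ev.imag, 0, atol=1e-8)                        # real spectrum
assert np.allclose(ev.real, np.sort(np.linalg.eigvalsh(J1)), atol=1e-8)
```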
11.5. Let us give two examples of how to calculate eigenvalues and eigenvectors.
First, we observe that if λ is an eigenvalue of a matrix A and f an arbitrary
polynomial, then f(λ) is an eigenvalue of the matrix f(A). This follows from the
fact that f(λ)I − f(A) is divisible by λI − A.
a) Consider the matrix

P = \begin{pmatrix}
0 & 0 & 0 & \dots & 0 & 1 \\
1 & 0 & 0 & \dots & 0 & 0 \\
0 & 1 & 0 & \dots & 0 & 0 \\
\vdots & & \ddots & & & \vdots \\
0 & 0 & 0 & \dots & 1 & 0
\end{pmatrix}.
A = \begin{pmatrix}
0 & 0 & \dots & 0 & p_1 \\
1 & 0 & \dots & 0 & p_2 \\
0 & 1 & \dots & 0 & p_3 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \dots & 1 & p_n
\end{pmatrix}.
Σ_{i,j} a_{ij} x_j = Σ_j x_j (Σ_i a_{ij}) = Σ_j x_j,
since Σ_i a_{ij} = 1. Thus, λ Σ_j x_j = Σ_j x_j, where Σ_j x_j ≠ 0. Therefore, λ = 1.
74
11.7.2. Theorem. If the sum of the absolute values of the elements of every
column of a square matrix A does not exceed 1, then the absolute values of all its
eigenvalues do not exceed 1.
Proof. Let (x_1, …, x_n) be an eigenvector corresponding to an eigenvalue λ.
Then
|λ||x_i| = |Σ_j a_{ij} x_j| ≤ Σ_j |a_{ij}||x_j|,  i = 1, …, n.
Adding up these inequalities we get
|λ| Σ_i |x_i| ≤ Σ_{i,j} |a_{ij}||x_j| = Σ_j |x_j| (Σ_i |a_{ij}|) ≤ Σ_j |x_j|,
since Σ_i |a_{ij}| ≤ 1. Dividing both sides of this inequality by the nonzero number
Σ_j |x_j| we get |λ| ≤ 1. □
Remark. Theorem 11.7.2 remains valid also when certain of the columns of A
are zero ones.
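Theorem 11.7.2 numerically: scaling each column of an assumed random matrix by its absolute column sum makes every column sum of absolute values equal to 1, and all eigenvalues then lie in the unit disk:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((5, 5))
A /= np.abs(A).sum(axis=0)          # every column's absolute sum is now 1

# Theorem 11.7.2: all eigenvalues satisfy |lambda| <= 1
assert np.max(np.abs(np.linalg.eigvals(A))) <= 1 + 1e-10
```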
11.7.3. Theorem. Let A = ‖a_{ij}‖_1^n and S_j = Σ_{i=1}^{n} |a_{ij}|; then Σ_{j=1}^{n} S_j^{-1}|a_{jj}| ≤
rank A, where the summands corresponding to zero values of S_j are replaced by
zeros.
Proof. Multiplying the columns of A by nonzero numbers, we can always make
the numbers S_j for the new matrix equal to either 0 or 1 and, besides, a_{jj} ≥ 0.
The rank of the matrix is not affected by these transformations. Applying Theorem 11.7.2 to the new matrix we get
Σ |a_{jj}| = Σ a_{jj} = tr A = Σ λ_i ≤ Σ |λ_i| ≤ rank A. □
Problems
11.1. a) Are there real matrices A and B such that AB − BA = I?
b) Prove that if AB − BA = A then |A| = 0.
11.2. Find the eigenvalues and the eigenvectors of the matrix A = ||a_{ij}||_1^n, where
a_{ij} = λ_i/λ_j.
11.3. Prove that any square matrix A is the sum of two invertible matrices.
11.4. Prove that the eigenvalues of a matrix depend continuously on its elements.
More precisely, let A = ||a_{ij}||_1^n be a given matrix. For any ε > 0 there exists δ > 0
such that if |a_{ij} − b_{ij}| < δ and λ is an eigenvalue of A, then there exists an eigenvalue
μ of B = ||b_{ij}||_1^n such that |λ − μ| < ε.
... Σ_{i=1}^n |A_i|, where A_i is the matrix obtained from A by striking out the ith row
and the ith column.
11.8. Let λ_1, ..., λ_n be the eigenvalues of a matrix A. Prove that the eigenvalues
of adj A are equal to Π_{i≠1} λ_i, ..., Π_{i≠n} λ_i.
If among the solutions of the matrix equation
(1)  XA = BX
there is an invertible complex matrix P, then among the solutions there is also an
invertible real matrix Q. The solutions over C of the linear equation (1) form a
linear space W over C with a basis C1 , . . . , Cn . The matrix Cj can be represented in
the form Cj = Xj + iYj , where Xj and Yj are real matrices. Since A and B are real
matrices, Cj A = BCj implies Xj A = BXj and Yj A = BYj . Hence, Xj , Yj W
for all j and W is spanned over C by the matrices X1 , . . . , Xn , Y1 , . . . , Yn and
therefore, we can select in W a basis D1 , . . . , Dn consisting of real matrices.
Let P (t1 , . . . , tn ) = |t1 D1 + + tn Dn |. The polynomial P (t1 , . . . , tn ) is not
identically equal to zero over C by the hypothesis and, therefore, it is not identically
equal to zero over R either, i.e., the matrix equation (1) has a nondegenerate real
solution t1 D1 + + tn Dn .
12.2. A Jordan block of size r × r is a matrix of the form

    J_r(λ) = ( λ 1 0 ... 0 0 )
             ( 0 λ 1 ... 0 0 )
             ( .............. )
             ( 0 0 0 ... λ 1 )
             ( 0 0 0 ... 0 λ )
A Jordan matrix is a block diagonal matrix with Jordan blocks Jri (i ) on the
diagonal.
A Jordan basis for an operator A : V V is a basis of the space V in which the
matrix of A is a Jordan matrix.
Theorem (Jordan). For any linear operator A : V V over C there exists a
Jordan basis and the Jordan matrix of A is uniquely determined up to a permutation
of its Jordan blocks.
Proof (Following [Valiaho, 1986]). First, let us prove the existence of a Jordan
basis. The proof will be carried out by induction on n = dim V .
For n = 1 the statement is obvious. Let λ be an eigenvalue of A. Consider the
noninvertible operator B = A − λI. A Jordan basis for B is also a Jordan basis for
A = B + λI. The sequence Im B^0 ⊇ Im B^1 ⊇ Im B^2 ⊇ ... stabilizes and, therefore,
there exists a positive integer p such that Im B^{p+1} = Im B^p ≠ Im B^{p-1}. Then
Im B^p ∩ Ker B = 0 and Im B^{p-1} ∩ Ker B ≠ 0. Hence, B(Im B^p) = Im B^p.
Figure 4
Let S_i = Im B^{i-1} ∩ Ker B. Then Ker B = S_1 ⊇ S_2 ⊇ ... ⊇ S_p ≠ 0 and S_{p+1} = 0.
Figure 4 might help to follow the course of the proof. In S_p, select a basis x_i^1
(i = 1, ..., n_p). Since x_i^1 ∈ Im B^{p-1}, then x_i^1 = B^{p-1} x_i^p for a vector x_i^p. Consider
the vectors x_i^k = B^{p-k} x_i^p (k = 1, ..., p). Let us complement the set of vectors x_i^1 to
a basis of S_{p-1} by vectors y_j^1. Now, find a vector y_j^{p-1} such that y_j^1 = B^{p-2} y_j^{p-1} and
consider the vectors y_j^l = B^{p-l-1} y_j^{p-1} (l = 1, ..., p−1). Further, let us complement
the set of vectors x_i^1 and y_j^1 to a basis of S_{p-2} by vectors z_k^1, etc. The cardinality
of the set of all chosen vectors x_i^k, y_j^l, ..., b_t^1 is equal to Σ_{i=1}^p dim S_i since every
x_i^1 contributes the summand p, every y_j^1 contributes p − 1, etc. Since

    dim(Im B^{i-1} ∩ Ker B) = dim Ker B^i − dim Ker B^{i-1}

(see 6.1), then Σ_{i=1}^p dim S_i = dim Ker B^p.
Let us complement the chosen vectors by a basis of Im B^p and prove that we
have obtained a basis of V. The number of these vectors indicates that it suffices
to demonstrate their linear independence. Suppose that
(1)  f + Σ_i α_i x_i^p + Σ_i β_i x_i^{p-1} + ... + Σ_j γ_j y_j^{p-1} + ... + Σ_t δ_t b_t^1 = 0,
The formula holds since (λI)N = N(λI). The only nonzero elements of N^m are the
1s in the positions (1, m + 1), (2, m + 2), ..., (r − m, r), where r is the order of
N. If m ≥ r then N^m = 0.
12.4. Jordan bases are guaranteed to exist only over an algebraically closed field;
over R a Jordan basis does not always exist. However, over R there is also a Jordan form
which is a realification of the Jordan form over C. Let us explain how it looks.
First, observe that the part of a Jordan basis corresponding to real eigenvalues of
A is constructed over R along the same lines as over C. Therefore, only the case of
nonreal eigenvalues is of interest.
Let AC be the complexification of a real operator A (cf. 10.1).
12.4.1. Theorem. There is a one-to-one correspondence between the Jordan
blocks of A^C corresponding to the eigenvalues λ and λ̄.

Proof. Let B = P + iQ, where P and Q are real operators. If x and y are
real vectors then the equations (P + iQ)(x + iy) = 0 and (P − iQ)(x − iy) =
0 are equivalent, i.e., the equations Bz = 0 and B̄z̄ = 0 are equivalent. Since
(A − λ̄I)^n is the conjugate of (A − λI)^n, the map z → z̄ determines a one-to-one correspondence
between Ker(A − λI)^n and Ker(A − λ̄I)^n. The dimensions of these spaces determine
the number and the sizes of the Jordan blocks.
Let J_n^R(λ) be the 2n × 2n matrix obtained from the Jordan block J_n(λ) by
replacing each of its elements a + ib by the 2 × 2 matrix

    (  a  b )
    ( -b  a )
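The substitution a + ib → ((a, b), (−b, a)) behind this realification respects products, which can be checked on sample complex numbers (the values are illustrative):

```python
# Sketch: z -> [[Re z, Im z], [-Im z, Re z]] turns complex multiplication into
# matrix multiplication, which is what legitimizes the block substitution.
def to_mat(z):
    return [[z.real, z.imag], [-z.imag, z.real]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

z, w = 1 + 2j, 3 - 1j
lhs = matmul(to_mat(z), to_mat(w))   # product of the two real 2x2 images
rhs = to_mat(z * w)                  # image of the complex product
```
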
12.4.2. Theorem. For an operator A over R there exists a basis with respect
to which its matrix is of block diagonal form with blocks J_{m_1}(t_1), ..., J_{m_k}(t_k) for
real eigenvalues t_i and blocks J_{n_1}^R(λ_1), ..., J_{n_s}^R(λ_s) for nonreal eigenvalues λ_i and
λ̄_i.

Proof. If λ is an eigenvalue of A then by Theorem 12.4.1 λ̄ is also an eigenvalue
of A, and to every Jordan block J_n(λ) of A there corresponds the Jordan block J_n(λ̄).
Besides, if e_1, ..., e_n is the Jordan basis for J_n(λ) then ē_1, ..., ē_n is the Jordan basis
for J_n(λ̄). Therefore, the real vectors x_1, y_1, ..., x_n, y_n, where e_k = x_k + iy_k, are
linearly independent. In the basis x_1, y_1, ..., x_n, y_n the matrix of the restriction of
A to Span(x_1, y_1, ..., x_n, y_n) is of the form J_n^R(λ).
12.5. The Jordan decomposition shows that any linear operator A over C can
be represented in the form A = As + An , where As is a semisimple (diagonalizable)
operator and An is a nilpotent operator such that As An = An As .
12.5.1. Theorem. The operators As and An are uniquely defined; moreover,
As = S(A) and An = N (A), where S and N are certain polynomials.
Proof. First, consider one Jordan block A = λI + N_k of size k × k. Let
S(t) = Σ_{i=1}^m s_i t^i. Then

    S(A) = Σ_{i=1}^m s_i Σ_{j=0}^i C(i, j) λ^{i-j} N_k^j = Σ_p (S^{(p)}(λ)/p!) N_k^p,

where S^{(p)} is the pth derivative of S. Therefore, we have to select a polynomial S
so that S(λ) = λ and S^{(1)}(λ) = ... = S^{(k-1)}(λ) = 0, where k is the order of the
Jordan block. If λ_1, ..., λ_n are distinct eigenvalues of A and k_1, ..., k_n are the sizes
of the maximal Jordan blocks corresponding to them, then S should take the value λ_i
at λ_i and have at λ_i zero derivatives from order 1 to order k_i − 1 inclusive. Such
a polynomial can always be constructed (see Appendix 3). It is also clear that if
A_s = S(A) then A_n = A − S(A), i.e., N(A) = A − S(A).
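Theorem 12.5.1 can be traced on a single 2 × 2 Jordan block; the interpolating polynomial S(t) = −t² + 2t used below (satisfying S(1) = 1 and S′(1) = 0) is one illustrative choice among many.

```python
# Sketch of 12.5.1 on the Jordan block A = I + N (lambda = 1, k = 2).
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

A = [[1, 1], [0, 1]]
A2 = matmul(A, A)
# S(t) = -t^2 + 2t, so S(A) = -A^2 + 2A should be the semisimple part:
S_of_A = [[-A2[i][j] + 2 * A[i][j] for j in range(2)] for i in range(2)]
# N(A) = A - S(A) should be the nilpotent part:
N_of_A = [[A[i][j] - S_of_A[i][j] for j in range(2)] for i in range(2)]
nilpotent_sq = matmul(N_of_A, N_of_A)
```
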
Now, let us prove the uniqueness of the decomposition. Let A_s + A_n = A = A'_s + A'_n,
where A_s A_n = A_n A_s and A'_s A'_n = A'_n A'_s. If AX = XA then S(A)X = XS(A)
and N(A)X = XN(A). Therefore, A_s A'_s = A'_s A_s and A_n A'_n = A'_n A_n. The operator B = A'_s − A_s = A_n − A'_n is a difference of commuting diagonalizable operators
and, therefore, is diagonalizable itself, cf. Problem 39.6 b). On the other hand,
the operator B is the difference of commuting nilpotent operators and, therefore, is
nilpotent itself, cf. Problem 39.6 a). A diagonalizable nilpotent operator is equal
to zero.
The additive Jordan decomposition A = As + An enables us to get for an invertible operator A a multiplicative Jordan decomposition A = As Au , where Au is a
unipotent operator, i.e., the sum of the identity operator and a nilpotent one.
Theorem ([Farahat, Lederman, 1958]). The characteristic polynomial of a matrix A of order n coincides with its minimal polynomial if and only if for any vector
(x_1, ..., x_n) there exist columns P and Q of length n such that x_k = Q^T A^{k-1} P.

Proof. First, suppose that the degree of the minimal polynomial of A is equal
to n. Then there exists a column P such that the columns P, AP, ..., A^{n-1}P
are linearly independent, i.e., the matrix K formed by these columns is invertible.
Any vector X = (x_1, ..., x_n) can be represented in the form X = (XK^{-1})K =
(Q^T P, ..., Q^T A^{n-1} P), where Q^T = XK^{-1}.

Now, suppose that for any vector (x_1, ..., x_n) there exist columns P and Q such
that x_k = Q^T A^{k-1} P. Then there exist columns P_1, ..., P_n, Q_1, ..., Q_n such that the
matrix

    B = ( Q_1^T P_1  ...  Q_1^T A^{n-1} P_1 )
        ( .................................. )
        ( Q_n^T P_n  ...  Q_n^T A^{n-1} P_n )

is invertible. The matrices I, A, ..., A^{n-1} are linearly independent because otherwise the columns of B would be linearly dependent.
13.4. The Cayley-Hamilton theorem has several generalizations. We will confine
ourselves to one of them.

13.4.1. Theorem ([Greenberg, 1984]). Let p_A(t) be the characteristic polynomial of a matrix A, and let a matrix X commute with A. Then p_A(X) = M(X − A),
where M is a matrix that commutes with A and X.

Proof. Since B adj B = |B| I (see 2.4),

    p_A(λ) I = [adj(λI − A)](λI − A) = (Σ_{k=0}^{n-1} A_k λ^k)(λI − A) = Σ_{k=0}^n A'_k λ^k.

All matrices A'_k are diagonal, since so is p_A(λ)I. Hence, p_A(X) = Σ_{k=0}^n X^k A'_k. If X
commutes with A and with all the A_k, then p_A(X) = (Σ_{k=0}^{n-1} A_k X^k)(X − A). But the matrices
A_k can be expressed as polynomials of A (see Problem 2.11) and, therefore, if X
commutes with A then X commutes with all the A_k.
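The Cayley-Hamilton theorem itself, which this section generalizes, is easy to verify numerically for a 2 × 2 matrix (an illustrative choice):

```python
# Sketch: for a 2x2 matrix, p_A(t) = t^2 - (tr A)t + det A, and p_A(A) = 0.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

A = [[1, 2], [3, 4]]
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
A2 = matmul(A, A)
pA = [[A2[i][j] - tr * A[i][j] + det * (1 if i == j else 0) for j in range(2)] for i in range(2)]
```
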
Problems
13.1. Let A be a matrix of order n and let

    f_1(A) = A − (tr A)I,   f_{k+1}(A) = f_k(A)A − (1/(k+1)) tr(f_k(A)A) I.

Prove that f_n(A) = 0.
...

    ( 0 0 ... 0 -a_0     )
    ( 1 0 ... 0 -a_1     )
    ( 0 1 ... 0 -a_2     )
    ( .................. )
    ( 0 0 ... 1 -a_{n-1} )
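Assuming the (partly garbled) Problem 13.1 asks, as in the Leverrier-Faddeev scheme, to show that f_n(A) = 0, the recurrence can be traced for an illustrative 2 × 2 matrix with exact arithmetic:

```python
# Sketch of the recurrence f_1(A) = A - (tr A)I,
# f_{k+1}(A) = f_k(A)A - tr(f_k(A)A)/(k+1) I; the final value f_n(A)
# should vanish by Cayley-Hamilton. The matrix is an illustrative choice.
from fractions import Fraction

n = 2
A = [[Fraction(1), Fraction(2)], [Fraction(3), Fraction(4)]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def trace(X):
    return sum(X[i][i] for i in range(n))

def sub_scalar(X, s):
    return [[X[i][j] - (s if i == j else 0) for j in range(n)] for i in range(n)]

f = sub_scalar(A, trace(A))                     # f_1(A)
for k in range(1, n):
    fA = matmul(f, A)
    f = sub_scalar(fA, trace(fA) / (k + 1))     # f_{k+1}(A)

f_n_is_zero = all(f[i][j] == 0 for i in range(n) for j in range(n))
```
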
14.2. Let us prove that the characteristic polynomial of the cyclic block

    A = ( 0 0 0 ... 0 0 -a_0     )
        ( 1 0 0 ... 0 0 -a_1     )
        ( 0 1 0 ... 0 0 -a_2     )
        ( ....................... )
        ( 0 0 0 ... 1 0 -a_{n-2} )
        ( 0 0 0 ... 0 1 -a_{n-1} )

is equal to λ^n + Σ_{k=0}^{n-1} a_k λ^k. Indeed, since Ae_1 = e_2, ..., Ae_{n-1} = e_n, and
Ae_n = −Σ_{k=0}^{n-1} a_k e_{k+1}, it follows that

    (A^n + Σ_{k=0}^{n-1} a_k A^k) e_1 = 0.

Taking into account that e_i = A^{i-1} e_1 we see that λ^n + Σ_{k=0}^{n-1} a_k λ^k is an annihilating
polynomial of A. It remains to notice that the vectors e_1, Ae_1, ..., A^{n-1} e_1 are
linearly independent and, therefore, the degree of the minimal polynomial of A is
no less than n.

As a by-product we have proved that the characteristic polynomial of a cyclic
block coincides with its minimal polynomial.
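That the cyclic block is annihilated by λ^n + Σ a_k λ^k can be confirmed numerically; n = 2 with the illustrative coefficients a_0 = 6, a_1 = 5:

```python
# Sketch: the cyclic (companion) block of p(t) = t^2 + a1*t + a0 satisfies
# p(A) = 0. Signs follow the convention that the last column holds -a0, -a1.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

a0, a1 = 6, 5
A = [[0, -a0], [1, -a1]]
A2 = matmul(A, A)
pA = [[A2[i][j] + a1 * A[i][j] + a0 * (1 if i == j else 0) for j in range(2)] for i in range(2)]
```
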
Problems
14.1. The matrix of an operator A is block diagonal and consists of two cyclic
blocks with relatively prime characteristic polynomials, p and q. Prove that it is
possible to select a basis so that the matrix becomes one cyclic block.
14.2. Let A be a Jordan block, i.e., there exists a basis e_1, ..., e_n such that
Ae_1 = λe_1 and Ae_k = e_{k-1} + λe_k for k = 2, ..., n. Prove that there exists a vector
v such that the vectors v, Av, ..., A^{n-1}v constitute a basis (then the matrix of
the operator A with respect to the basis v, Av, ..., A^{n-1}v is a cyclic block).
14.3. For a cyclic block A indicate a symmetric matrix S such that A = SAT S 1 .
15. How to reduce the diagonal to a convenient form

15.1. The transformation A → XAX^{-1} preserves the trace and, therefore, the
diagonal elements of the matrix XAX^{-1} cannot be made completely arbitrary. We
can, however, reduce the diagonal of A to a sometimes more convenient form;
for example, a matrix A ≠ λI is similar to a matrix whose diagonal elements are
(0, ..., 0, tr A); any matrix is similar to a matrix all diagonal elements of which are
equal.

Theorem ([Gibson, 1975]). Let A ≠ λI. Then A is similar to a matrix with the
diagonal (0, ..., 0, tr A).
For a matrix of order 2 the characteristic polynomial fails to coincide with the
minimal one only for matrices of the form λI. Now let A be a matrix of order 3
such that A ≠ λI and the characteristic polynomial of A does not coincide with its
minimal polynomial. Then the minimal polynomial of A is of the form (x − λ)(x − μ),
whereas the characteristic polynomial is (x − λ)^2 (x − μ), and the case λ = μ is not
excluded. Therefore, the matrix A is similar to the matrix

    C = ( 0 a 0 )
        ( 1 b 0 )
        ( 0 0 λ )

and the characteristic polynomial of ( 0 a ; 1 b ) is divisible by x − λ, i.e., λ^2 − bλ − a = 0.
If b = λ = 0, then the theorem holds.
If b = λ ≠ 0, then b^2 − b^2 − a = 0, i.e., a = 0; in this case conjugation by an
explicit invertible matrix shows that A is similar to a matrix with the diagonal
(0, 0, 2b) = (0, 0, tr A).
Let, finally, b ≠ λ. Then for the matrix D = diag(b, λ) the theorem is true and,
therefore, there exists a matrix P such that the diagonal of PDP^{-1} is (0, b + λ).
The diagonal of

    ( 1 0 ) C ( 1 0      )
    ( 0 P )   ( 0 P^{-1} )

is then (0, 0, b + λ) = (0, 0, tr A), as required.

Now let A be a matrix of order m + 1 ≥ 4; write it in block form with a matrix A_1
of order m in the upper left corner. Since A ≠ λI, we can assume that A_1 ≠ λI
(otherwise we perform a permutation of rows and columns, cf. Problem 12.2). By
the inductive hypothesis there exists a matrix P such that the diagonal of PA_1P^{-1}
is of the form (0, 0, ..., 0, μ) and, therefore, the diagonal of the matrix

    X = ( P 0 ) A ( P^{-1} 0 )
        ( 0 1 )   ( 0      1 )

is of the form (0, ..., 0, μ, ν). Applying the case of small order to the lower right
2 × 2 corner C_1 of X, we find a matrix Q such that the diagonal of

    ( I 0 ) X ( I 0      )
    ( 0 Q )   ( 0 Q^{-1} )

is of the required form.
Remark. The proof holds for a field of any characteristic.
Consider a unitary matrix U = ( u  -v ; v̄  ū ), where u = cos θ and v = e^{iφ} sin θ.
In the (1,1) position of the matrix

    U* ( a_1 b ; c a_2 ) U

there stands a_1 cos^2 θ + a_2 sin^2 θ + (b e^{-iφ} + c e^{iφ}) cos θ sin θ.
When φ varies from 0 to 2π the points b e^{-iφ} + c e^{iφ} form an ellipse (or an interval)
centered at 0 ∈ C. Indeed, the points e^{iφ} belong to the unit circle and the map z →
bz̄ + cz determines a (possibly singular) R-linear transformation of C. Therefore,
the number

    p = (b e^{-iφ} + c e^{iφ})/(a_1 − a_2)

is real for a certain φ. Hence, t = cos^2 θ + p sin θ cos θ is also real and

    a_1 cos^2 θ + a_2 sin^2 θ + (b e^{-iφ} + c e^{iφ}) cos θ sin θ = t a_1 + (1 − t) a_2.

As θ varies from 0 to π/2, the variable t varies from 1 to 0. In particular, t takes
the value 1/2. In this case both diagonal elements of the transformed matrix are
equal to (a_{11} + a_{22})/2.
Let us treat matrices of size n × n, where n ≥ 3, as follows. Select a pair of
diagonal elements the absolute value of whose difference is maximal (there could be
several such pairs). With the help of a permutation matrix this pair can be placed in
the positions (1,1) and (2,2) thanks to Problem 12.2. For the matrix A' = ||a_{ij}||_1^2
there exists a unitary matrix U such that the diagonal elements of UA'U^{-1} are
equal to (a_{11} + a_{22})/2. It is also clear that the transformation A → U_1AU_1^{-1}, where
U_1 is the unitary matrix diag(U, I), preserves the diagonal elements a_{33}, ..., a_{nn}.
Thus, we have managed to replace the two farthest apart diagonal elements a_{11} and
a_{22} by their arithmetic mean. In this way we do not increase the maximal distance
between the diagonal elements, nor do we create new pairs the distance between which is equal to
|a_{11} − a_{22}|, since

    |x − (a_{11} + a_{22})/2| ≤ |x − a_{11}|/2 + |x − a_{22}|/2.

After a finite number of such steps we get rid of all pairs of diagonal elements the
distance between which is equal to |a_{11} − a_{22}|.

Remark. If A is a real matrix, then we can assume that u = cos θ and v = sin θ.
The number p is real in such a case. Therefore, if A is real then U can be considered
to be an orthogonal matrix.
... the diagonal elements of

    ( 1 0 ) ( a_{11} 0 ) ( 1 0      )   ( a_{11} 0        )
    ( 0 U ) ( 0      C ) ( 0 U^{-1} ) = ( 0      UCU^{-1} )

are nonzero.
Now, suppose that a matrix A is not diagonal. We can assume that a_{12} = 1 and
the matrix C obtained from A by crossing out the first row and the first column
is a nonzero matrix. Let U be a matrix such that all diagonal elements of UCU^{-1}
are nonzero. Consider the matrix

    D = ( 1 0 ) A ( 1 0      ) = ( a_{11} *        )
        ( 0 U )   ( 0 U^{-1} )   ( *      UCU^{-1} )

The only zero diagonal element of D could be a_{11}. If a_{11} = 0, then for the corner

    ( 0 *      )
    ( * d_{22} )

select a matrix V such that the diagonal elements of V ( 0 * ; * d_{22} ) V^{-1} are nonzero.
Then the diagonal elements of

    ( V 0 ) D ( V^{-1} 0 )
    ( 0 I )   ( 0      I )

are also nonzero.
Problem
15.1. Prove that for any nonzero square matrix A there exists a matrix X such
that the matrices X and A + X have no common eigenvalues.
16. The polar decomposition

16.1. Any complex number z can be represented in the form z = |z|e^{iφ}. An
analogue of such a representation is the polar decomposition of a matrix, A = SU,
where S is an Hermitian and U is a unitary matrix.

Theorem. Any square matrix A over R (or C) can be represented in the form
A = SU, where S is a symmetric (Hermitian) nonnegative definite matrix and U is
an orthogonal (unitary) matrix. If A is invertible such a representation is unique.

Proof. If A = SU, where S is an Hermitian nonnegative definite matrix and U
is a unitary matrix, then AA* = SUU*S* = S^2. To find S, let us do the following.
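A hand-checkable instance of the polar decomposition; the matrix is an illustrative choice for which AA^T is already diagonal, so its nonnegative square root S is immediate.

```python
# Sketch: polar decomposition A = SU for a real 2x2 matrix.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

A = [[0, -2], [1, 0]]
AAt = matmul(A, transpose(A))        # equals diag(4, 1)
S = [[2, 0], [0, 1]]                 # S = sqrt(A A^T), symmetric nonnegative
S_inv = [[0.5, 0], [0, 1]]
U = matmul(S_inv, A)                 # U = S^{-1} A should be orthogonal

UUt = matmul(U, transpose(U))        # orthogonality check
SU = matmul(S, U)                    # should recover A
```
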
Problems
16.1. Prove that any linear transformation of Rn is the composition of an orthogonal transformation and a dilation along perpendicular directions (with distinct
coefficients).
16.2. Let A : Rn Rn be a contraction operator, i.e., |Ax| |x|. The space Rn
can be considered as a subspace of R2n . Prove that A is the restriction to Rn of
the composition of an orthogonal transformation of R2n and the projection on Rn .
17. Factorizations of matrices
17.1. The Schur decomposition.
Theorem (Schur). Any square matrix A over C can be represented in the form
A = UTU*, where U is a unitary and T a triangular matrix; moreover, A is normal
if and only if T is a diagonal matrix.

Proof. Let us prove the first statement by induction on the order of A. Let x be an eigenvector of
A, i.e., Ax = λx. We may assume that |x| = 1. Let W be a unitary matrix whose
first column is made of the coordinates of x (to construct such a matrix it suffices
to complement x to an orthonormal basis). Then

    W*AW = ( λ  *   )
           ( 0  A_1 )

and it remains to apply the inductive hypothesis to A_1. Further, if A = UTU*
then A is normal if and only if T is normal. Let

    T = ( t_{11} t_{12} ... t_{1n} )
        ( 0      t_{22} ... t_{2n} )
        ( ........................ )
        ( 0      0      ... t_{nn} )

Then (TT*)_{11} = |t_{11}|^2 + |t_{12}|^2 + ... + |t_{1n}|^2 and (T*T)_{11} = |t_{11}|^2. Therefore, the
identity TT* = T*T implies that t_{12} = ... = t_{1n} = 0.
Now, strike out the first row and the first column in T and repeat the arguments.
17.2. The Lanczos decomposition.

Theorem ([Lanczos, 1958]). Any real m × n matrix A of rank p > 0 can be
represented in the form A = XΛY^T, where X and Y are matrices of size m × p
and n × p with orthonormal columns and Λ is a diagonal matrix of size p × p.

Proof (Following [Schwert, 1960]). The rank of A^T A is equal to the rank
of A; see Problem 8.3. Let U be an orthogonal matrix such that U^T A^T AU =
diag(λ_1, ..., λ_p, 0, ..., 0), where λ_i > 0. Further, let y_1, ..., y_p be the first p columns
of U and Y the matrix formed by these columns. The columns x_i = λ_i^{-1} Ay_i, where ...

Remark. Since AU = (XΛ, 0), then U^T A^T = ( ΛX^T ; 0 ). Multiplying this
equality by U, we get A^T = YΛX^T. Hence, A^T X = YΛX^T X = YΛ, since
X^T X = I_p. Therefore, (X^T A)(A^T X) = (ΛY^T)(YΛ) = Λ^2, since Y^T Y = I_p. Thus,
the columns of X are eigenvectors of AA^T.
    ( 0 0 E )
    ( 0 E 0 )
    ( E 0 0 )

... in the real case (i.e., λ = a + bi, b ≠ 0)

    Λ = (  a  b )        E = ( 0 1 )
        ( -b  a ),           ( 1 0 ).

For a Jordan block of an arbitrary size a similar decomposition also holds.
Problems
17.1 (The Gauss factorization). All minors |a_{ij}|_1^p, p = 1, ..., n, of a matrix A of
order n are nonzero. Prove that A can be represented in the form A = T_1T_2, where
T_1 is a lower triangular and T_2 an upper triangular matrix.
17.2 (The Gram factorization). Prove that an invertible matrix X can be represented in the form X = UT, where U is an orthogonal matrix and T is an upper
triangular matrix.
17.3 ([Ramakrishnan, 1972]). Let B = diag(1, ε, ..., ε^{n-1}), where ε = exp(2πi/n),
and let C = ||c_{ij}||_1^n, where c_{ij} = δ_{i,j-1} (here j − 1 is considered modulo n). Prove
that any n × n matrix M over C is uniquely representable in the form M =
Σ_{k,l=0}^{n-1} a_{kl} B^k C^l.
17.4. Prove that any skew-symmetric matrix A can be represented in the form
A = S_1S_2 − S_2S_1, where S_1 and S_2 are symmetric matrices.
18. The Smith normal form. Elementary factors of matrices
18.1. Let A be a matrix whose elements are integers or polynomials (we may
assume that the elements of A belong to a commutative ring in which the notion
of the greatest common divisor is defined). Further, let fk (A) be the greatest
common divisor of minors of order k of A. The formula for determinant expansion
with respect to a row indicates that f_k is divisible by f_{k-1}.
The formula A^{-1} = (adj A)/det A shows that the elements of A^{-1} are integers
(resp. polynomials) if det A = ±1 (resp. det A is a nonzero constant). The other way
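The integrality claim can be illustrated for a 2 × 2 integer matrix with determinant 1 (an illustrative choice):

```python
# Sketch: A^{-1} = (adj A)/det A is an integer matrix when det A = 1.
A = [[2, 1], [1, 1]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]          # equals 1 here
adj = [[A[1][1], -A[0][1]], [-A[1][0], A[0][0]]]     # adjugate of a 2x2 matrix
A_inv = [[adj[i][j] // det for j in range(2)] for i in range(2)]

# Check A * A_inv = I:
prod = [[sum(A[i][k] * A_inv[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
```
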
SOLUTIONS
    Σ_{i=1}^n |λI − A_i| = Σ_{i=1}^n Σ_{k=0}^{n-1} ...
    B = ( 0 α )
        ( β 0 ).

The eigenvalues of B are equal to ±√(αβ). If αβ = 0 and B is
diagonalizable, then B = 0. Therefore, the matrix B is diagonalizable if and only
if both numbers α and β are simultaneously equal or not equal to zero.
Thus, the matrix A is diagonalizable if and only if both numbers x_i and
x_{n-i+1} are simultaneously equal or not equal to 0 for all i.
11.11. a) Suppose the columns x_1, ..., x_m correspond to real eigenvalues λ_1,
..., λ_m. Let X = (x_1, ..., x_m) and D = diag(λ_1, ..., λ_m). Then AX = XD and,
since D is a real matrix, AXX* = XDX* = X(XD)* = X(AX)* = XX*A*.
If the vectors x_1, ..., x_m are linearly independent, then rank XX* = rank X = m
(see Problem 8.3) and, therefore, for S we can take XX*.
Now, suppose that AS = SA* and S is a nonnegative definite matrix of rank m.
Then S = PNP*, where P is invertible and N = ( I_m 0 ; 0 0 ). The equation
AS = SA* turns into BN = NB*, where B = P^{-1}AP; writing B = ( B_{11} B_{12} ; B_{21} B_{22} )
in the same block form we get B_{21} = 0 and B_{11} = B_{11}*, i.e.,

    B = ( B_{11} B_{12} )
        ( 0      B_{22} ),

where B_{11} is an Hermitian matrix of order m. The matrix B_{11} has m linearly independent eigenvectors
z_1, ..., z_m with real eigenvalues. Since AP = PB and P is an invertible matrix,
the vectors P(z_i; 0), i = 1, ..., m, are linearly independent and are eigenvectors
of A corresponding to real eigenvalues.

b) The proof is largely similar to that of a): in our case AXX*A* = AX(AX)* =
XD(XD)* = XDD*X* = XX*.
If ASA* = S and S = PNP*, then P^{-1}AP N (P^{-1}AP)* = N, i.e.,

    ( B_{11}B_{11}*  B_{11}B_{21}* )   ( I_m 0 )
    ( B_{21}B_{11}*  B_{21}B_{21}* ) = ( 0   0 ),

hence B_{21} = 0 and

    B = ( B_{11} B_{12} )
        ( 0      B_{22} ),

where B_{11} is unitary.
12.1. Let A be a Jordan block of order k. It is easy to verify that in this case
S_k A = A^T S_k, where S_k = ||δ_{i,k+1-j}||_1^k is an invertible matrix. If A is the direct
sum of Jordan blocks, then we can take the direct sum of the matrices S_k.

12.2. The matrix P corresponds to the permutation σ and, therefore, P^{-1} =
||q_{ij}||_1^n, where q_{ij} = δ_{σ(i)j}. Let P^{-1}AP = ||b_{ij}||_1^n. Then

    b_{ij} = Σ_{s,t} δ_{σ(i)s} a_{st} δ_{tσ(j)} = a_{σ(i)σ(j)}.
12.3. Let λ_1, ..., λ_m be distinct eigenvalues of A and p_i the multiplicity of the
eigenvalue λ_i. Then tr(A^k) = p_1λ_1^k + ... + p_mλ_m^k. Therefore,

    |b_{ij}|_0^{m-1} = p_1 ... p_m Π_{i>j} (λ_i − λ_j)^2 ≠ 0.

To compute |b_{ij}|_0^m we can, for example, replace p_mλ_m^k with λ_{m+1}^k + (p_m − 1)λ_m^k in the
expression for tr(A^k).
12.4. If A' = P^{-1}AP, then (A' + εI)^{-1}A' = P^{-1}(A + εI)^{-1}AP and, therefore,
it suffices to consider the case when A is a Jordan block. If A is invertible, then
lim_{ε→0} (A + εI)^{-1} = A^{-1}. Let A = 0·I + N = N be a Jordan block with zero
eigenvalue. Then

    (N + εI)^{-1}N = ε^{-1}(I − ε^{-1}N + ε^{-2}N^2 − ...)N = ε^{-1}N − ε^{-2}N^2 + ...

and the limit as ε → 0 exists only if N = 0.
Thus, the limit indicated exists if and only if the matrix A does not have nonzero
blocks with zero eigenvalues. This condition is equivalent to rank A = rank A^2.
13.1. Let (λ_1, ..., λ_n) be the diagonal of the Jordan normal form of A and
σ_k = σ_k(λ_1, ..., λ_n). Then |λI − A| = Σ_{k=0}^n (−1)^k σ_k λ^{n-k}. Therefore, it suffices to
demonstrate that f_m(A) = Σ_{k=0}^m (−1)^k A^{m-k} σ_k for all m. For m = 1 this equation
coincides with the definition of f_1. Suppose the statement is proved for m; let us
prove it for m + 1. Clearly,

    f_{m+1}(A) = Σ_{k=0}^m (−1)^k A^{m-k+1} σ_k − (1/(m+1)) tr( Σ_{k=0}^m (−1)^k A^{m-k+1} σ_k ) I.

Since

    tr( Σ_{k=0}^m (−1)^k A^{m-k+1} σ_k ) = Σ_{k=0}^m (−1)^k s_{m-k+1} σ_k,

where s_p = λ_1^p + ... + λ_n^p, and Σ_{k=0}^m (−1)^k s_{m-k+1} σ_k = (−1)^m (m+1) σ_{m+1}
(see 4.1), it follows that f_{m+1}(A) = Σ_{k=0}^{m+1} (−1)^k A^{m+1-k} σ_k.
13.2. According to the solution of Problem 13.1 the coefficients of the characteristic polynomial of X are functions of tr X, ..., tr X^n and, therefore, the
characteristic polynomials of A and B coincide.
13.3. Let f(λ) be an arbitrary polynomial, g(λ) = λ^n f(1/λ), and B = A^{-1}. If
0 = g(B) = B^n f(A), then f(A) = 0. Therefore, the minimal polynomial of B
    p ( A  I ) = ( p(A)  p'(A) )
      ( 0  A )   ( 0     p(A)  )

If q(x) = Π_i (x − λ_i)^{n_i} is the minimal polynomial of A and p is an annihilating polynomial of ( A I ; 0 A ), then p and p' are divisible by q; among all such polynomials
p the polynomial Π_i (x − λ_i)^{n_i + 1} is of the minimal degree.
14.1. The minimal polynomial of a cyclic block coincides with the characteristic
polynomial. The minimal polynomial of A annihilates the given cyclic blocks since
it is divisible by both p and q. Since p and q are relatively prime, the minimal
polynomial of A is equal to pq. Therefore, there exists a vector in V whose minimal
polynomial is equal to pq.
14.2. First, let us prove that A^k e_n = e_{n-k} + ε, where ε ∈ Span(e_n, ..., e_{n-k+1}).
Indeed, Ae_n = e_{n-1} + λe_n, and if the statement holds for k, then
A^{k+1} e_n = e_{n-k-1} + λe_{n-k} + Aε, where λe_{n-k}, Aε ∈ Span(e_n, ..., e_{n-k}).
Therefore, expressing the coordinates of the vectors e_n, Ae_n, ..., A^{n-1}e_n with
respect to the basis e_n, e_{n-1}, ..., e_1 we get the matrix

    ( 1 * * ... * )
    ( 0 1 * ... * )
    ( ............ )
    ( 0 0 0 ... 1 )

This matrix is invertible and, therefore, the vectors e_n, Ae_n, ..., A^{n-1}e_n form a
basis.
Remark. It is possible to prove that for v we can take any vector x_1e_1 + ... +
x_ne_n with x_n ≠ 0.
14.3. Let A be the cyclic block with 1s on the subdiagonal and last column
(−a_0, −a_1, ..., −a_{n-1}), and let S be the symmetric (Hankel) matrix

    S = ( a_1     a_2 ... a_{n-1} 1 )
        ( a_2     a_3 ... 1       0 )
        ( .......................... )
        ( a_{n-1} 1   ... 0       0 )
        ( 1       0   ... 0       0 ).

A direct computation shows that AS is symmetric; hence AS = (AS)^T = SA^T,
and since S is invertible (its determinant is ±1), A = SA^T S^{-1}.
15.1. By Theorem 15.3 there exists a matrix P such that the diagonal elements
of B = P^{-1}AP are nonzero. Consider the matrix Z whose diagonal elements are
all equal to 1, whose elements above the main diagonal are zeros, and whose elements under the
diagonal are opposite to the elements in the corresponding places of B. The
eigenvalues of the lower triangular matrix Z are equal to 1 and the eigenvalues of
the upper triangular matrix B + Z are equal to 1 + b_{ii} ≠ 1. Therefore, for X we
can take PZP^{-1}.
16.1. The operator A can be represented in the form A = SU , where U is
an orthogonal operator and S is a positive definite symmetric operator. For a
symmetric operator there exists an orthogonal basis of eigenvectors, i.e., it is a
dilation along perpendicular directions.
16.2. If A = SU is the polar decomposition of A then for S there exists an
orthonormal eigenbasis e_1, ..., e_n and all the eigenvalues do not exceed 1. Therefore,
Se_i = (cos φ_i)e_i. Complement the basis e_1, ..., e_n to a basis e_1, ..., e_n, ε_1, ...,
ε_n of R^{2n} and consider an orthogonal operator S_1 which in every plane Span(e_i, ε_i)
acts as the rotation through the angle φ_i. In the basis e_1, ..., e_n, ε_1, ..., ε_n the
matrix of S_1 is of the form

    ( S -D )
    ( D  S ),

where D = diag(sin φ_1, ..., sin φ_n). Since

    ( I 0 ) ( S -D ) ( U 0 )   ( SU -D )
    ( 0 0 ) ( D  S ) ( 0 I ) = ( 0   0 ),

it follows that S_1 ( U 0 ; 0 I ) is the required orthogonal transformation of R^{2n}: its
restriction to R^n followed by the projection to R^n equals SU = A.
17.1. Let a_{pq} = λ be the only nonzero off-diagonal element of X_{pq}(λ) and let the
diagonal elements of X_{pq}(λ) be equal to 1. Then X_{pq}(λ)A is obtained from A by
adding to the pth row the qth row multiplied by λ. By the hypothesis, a_{11} ≠ 0 and,
therefore, subtracting from the kth row the 1st row multiplied by a_{k1}/a_{11} we get a
matrix with a_{21} = ... = a_{n1} = 0. The hypothesis then implies that a_{22} ≠ 0. Therefore,
we can subtract from the kth row (k ≥ 3) the 2nd row multiplied by a_{k2}/a_{22} and
get a matrix with a_{32} = ... = a_{n2} = 0, etc.
Therefore, by multiplying A from the left by matrices X_{pq}, where p > q,
we can get an upper triangular matrix T_2. Since p > q, the matrices X_{pq}
are lower triangular and their product T is also a lower triangular matrix. The
equality TA = T_2 implies A = T^{-1}T_2. It remains to observe that T_1 = T^{-1} is a
lower triangular matrix (see Problem 2.6); the diagonal elements of T_1 are all equal
to 1.
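The elimination in the solution above can be traced for an illustrative 2 × 2 matrix with nonzero leading minors:

```python
# Sketch of the Gauss factorization A = T1 T2: one row operation produces the
# upper triangular T2, and the operation collects into a unit lower triangular T1.
from fractions import Fraction

A = [[Fraction(2), Fraction(1)], [Fraction(4), Fraction(3)]]
m = A[1][0] / A[0][0]                                        # multiplier a21/a11
T2 = [A[0][:], [A[1][j] - m * A[0][j] for j in range(2)]]    # upper triangular
T1 = [[Fraction(1), Fraction(0)], [m, Fraction(1)]]          # unit lower triangular

prod = [[sum(T1[i][k] * T2[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
```
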
17.2. Let x1 , . . . , xn be the columns of X. By 9.2 there exists an orthonormal
set of vectors y1 , . . . , yn such that yi Span(x1 , . . . , xi ) for i = 1, . . . , n. Then
the matrix U whose columns are y1 , . . . , yn is orthogonal and U = XT1 , where T1
is an upper triangular matrix. Therefore, X = U T , where T = T11 is an upper
triangular matrix.
17.3. For every entry of the matrix M only one of the matrices I, C, C^2, ...,
C^{n-1} has a nonzero entry in the same position and, therefore, M is uniquely representable in
the form M = D_0 + D_1C + ... + D_{n-1}C^{n-1}, where the D_l are diagonal matrices.
For example,

    ( a b )   ( a 0 )   ( b 0 )              ( 0 1 )
    ( c d ) = ( 0 d ) + ( 0 c ) C,  where C = ( 1 0 ).
The diagonal matrices I, B, B^2, ..., B^{n-1} are linearly independent since their
diagonals constitute a Vandermonde determinant. Therefore, any matrix D_l is
uniquely representable in the form

    D_l = Σ_{k=0}^{n-1} a_{kl} B^k.
17.4. The matrix A/2 can be represented in the form A/2 = S_1S_2, where S_1 and
S_2 are symmetric matrices (see 17.3). Therefore, A = (A − A^T)/2 = S_1S_2 − S_2S_1.
18.1. Let A be either a Jordan or a cyclic block of order n. In both cases the
matrix A − xI has a triangular submatrix of order n − 1 with 1s on the main
diagonal. Therefore, f_1 = ... = f_{n-1} = 1 and f_n = p_A(x) is the characteristic
polynomial of A. Hence, g_1 = ... = g_{n-1} = 1 and g_n = p_A(x).
18.2. The cyclic normal form of A is of a block diagonal form with the diagonal
being formed by cyclic blocks corresponding to polynomials p1 , p2 , . . . , pk , where
p1 is the minimal polynomial of A and pi is divisible by pi+1 . Invariant factors of
these cyclic blocks are p_1, ..., p_k (Problem 18.1) and, therefore, their Smith normal
forms are of the shape diag(1, ..., 1, p_i). Hence, the Smith normal form of A is of
the shape diag(1, ..., 1, p_k, ..., p_2, p_1). Therefore, f_{n-1} = p_2p_3...p_k.
19.2.1. Theorem (Sylvester's criterion). Let A = ||a_{ij}||_1^n be an Hermitian
matrix. Then A is positive definite if and only if all the minors |a_{ij}|_1^k, k = 1, ..., n,
are positive.

Proof. Let the matrix A be positive definite. Then the matrix ||a_{ij}||_1^k corresponds to the restriction of the positive definite Hermitian form x*Ax to a subspace
and, therefore, |a_{ij}|_1^k > 0. Now, let us prove by induction on n that if A = ||a_{ij}||_1^n
is an Hermitian matrix and |a_{ij}|_1^k > 0 for k = 1, ..., n, then A is positive definite.
For n = 1 this statement is obvious. It remains to prove that if A' = ||a_{ij}||_1^{n-1} is a
positive definite matrix and |a_{ij}|_1^n > 0, then the eigenvalues of the Hermitian matrix
A = ||a_{ij}||_1^n are all positive. There exists an orthonormal basis e_1, ..., e_n with respect to which x*Ax is of the form λ_1|y_1|^2 + ... + λ_n|y_n|^2, where λ_1 ≤ λ_2 ≤ ... ≤ λ_n.
If y ∈ Span(e_1, e_2) then y*Ay ≤ λ_2|y|^2. On the other hand, if a nonzero vector
y belongs to the (n − 1)-dimensional subspace on which the Hermitian form corresponding to A' is defined, then y*Ay > 0. This (n − 1)-dimensional subspace and the
two-dimensional subspace Span(e_1, e_2) belong to the same n-dimensional space and,
therefore, they have a common nonzero vector y. It follows that λ_2|y|^2 ≥ y*Ay > 0,
i.e., λ_2 > 0; hence, λ_i > 0 for i ≥ 2. Besides, λ_1 ... λ_n = |a_{ij}|_1^n > 0 and, therefore,
λ_1 > 0.
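A numerical sketch of the "if" direction of Sylvester's criterion for a real symmetric 2 × 2 matrix (an illustrative choice); positivity of the form is sampled on a grid rather than proved:

```python
# Sketch: both leading minors of the symmetric matrix A are positive, so
# x^T A x should be positive for every nonzero x (checked on sample vectors).
A = [[2, 1], [1, 3]]
minor1 = A[0][0]
minor2 = A[0][0] * A[1][1] - A[0][1] * A[1][0]

samples = [(x1 / 4.0, x2 / 4.0) for x1 in range(-8, 9) for x2 in range(-8, 9)
           if (x1, x2) != (0, 0)]
quad_vals = [A[0][0] * x * x + 2 * A[0][1] * x * y + A[1][1] * y * y
             for x, y in samples]
all_positive = all(v > 0 for v in quad_vals)
```
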
19.2.2. Theorem (Sylvester's law of inertia). ... reduced by a unitary transformation to the form

    (1)  λ_1|x_1|^2 + ... + λ_n|x_n|^2,

... (a_{12}x_2 + ... + a_{1n}x_n)/a_{11} ...
    λ_1 = max_x (x*Ax),
    .....................
    λ_n = min_W max_{x ∈ W} (x*Ax);

hence,

    λ_k ≤ min_{y_1, ..., y_{k-1}} max_{x ⊥ y_1, ..., y_{k-1}} (x*Ax).
    ∫ e^{−x*Ax} dx = (√π)^n |A|^{−1/2},
... corresponding to the matrices

    ( 1 0 )       ( 0 1 )
    ( 0 0 )  and  ( 1 0 ).

Let P = ( a b ; c d ) be an arbitrary invertible matrix. Then

    P* ( 1 0 ) P = ( |a|^2  āb    )
       ( 0 0 )     ( ab̄    |b|^2 )

and

    P* ( 0 1 ) P = ( āc + ac̄   ād + c̄b  )
       ( 1 0 )     ( b̄c + ad̄   b̄d + bd̄ ).

It remains to verify that the equalities āb = 0 and ād + c̄b = 0
cannot hold simultaneously. If āb = 0 and P is invertible, then either a = 0 and
b ≠ 0, or b = 0 and a ≠ 0. In the first case 0 = ād + c̄b = c̄b and, therefore, c = 0; in
the second case ād = 0 and, therefore, d = 0. In either case we get a noninvertible
matrix P.
20.2. Simultaneous diagonalization. If A and B are Hermitian matrices
and one of them is invertible, the following criterion for simultaneous reduction of
the forms x*Ax and x*Bx to diagonal form is known.
20.2.1. Theorem. Hermitian forms x*Ax and x*Bx, where A is an invertible Hermitian matrix, are simultaneously reducible to diagonal form if and only if the matrix A^{-1}B is diagonalizable and all its eigenvalues are real.
Proof. First, suppose that A = P*D_1P and B = P*D_2P, where D_1 and D_2 are diagonal matrices. Then A^{-1}B = P^{-1}D_1^{-1}D_2P is a diagonalizable matrix. It is also clear that the matrices D_1 and D_2 are real since y*D_iy ∈ R for any column y = Px.
Now, suppose that A^{-1}B = PDP^{-1}, where D = diag(λ_1, ..., λ_n) and λ_i ∈ R. Then BP = APD and, therefore, P*BP = (P*AP)D. Applying a permutation matrix if necessary we may assume that D = diag(Λ_1, ..., Λ_k) is a block diagonal matrix, where Λ_i = λ_iI and the numbers λ_i are distinct. Let us represent the matrices P*BP = (B_ij)_1^k and P*AP = (A_ij)_1^k in the same block form. Since both matrices are Hermitian, B_ij = A_ijλ_j and B_ij = B_ji* = (A_jiλ_i)* = λ_iA_ij; hence (λ_j − λ_i)A_ij = 0, i.e., A_ij = 0 for i ≠ j. Thus, P*AP = diag(A_1, ..., A_k) and P*BP = diag(λ_1A_1, ..., λ_kA_k).
Every matrix A_i can be represented in the form A_i = U_iD_iU_i*, where U_i is a unitary matrix and D_i a diagonal matrix. Let U = diag(U_1, ..., U_k) and T = PU. Then T*AT = diag(D_1, ..., D_k) and T*BT = diag(λ_1D_1, ..., λ_kD_k).
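The criterion of Theorem 20.2.1 can be checked numerically. In the sketch below (numpy assumed; the matrices are an arbitrary illustration, not from the text) A and B are built as P*D_iP, so A^{-1}B must be diagonalizable with real eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
D1 = np.diag([1.0, 2.0, -1.0])
D2 = np.diag([3.0, -2.0, 5.0])
A = P.conj().T @ D1 @ P                   # invertible Hermitian
B = P.conj().T @ D2 @ P                   # Hermitian

M = np.linalg.inv(A) @ B                  # equals P^{-1} D1^{-1} D2 P
eigvals = np.linalg.eigvals(M)
real_eigs = bool(np.allclose(eigvals.imag, 0, atol=1e-8))
print(real_eigs)
```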
Certain sufficient conditions are also known for the simultaneous diagonalizability of a pair of Hermitian forms when both forms are singular.
20.2.2. Theorem ([Newcomb, 1961]). If Hermitian matrices A and B are nonpositive or nonnegative definite, then there exists an invertible matrix T such that T*AT and T*BT are diagonal.
Proof. Let rank A = a, rank B = b and a ≥ b. There exists an invertible matrix T_1 such that T_1*AT_1 = A_0 = (I_a 0; 0 0). Set B_1 = T_1*BT_1 and single out the last row and column of B_1 in the block form (C c; c* γ). Then

(I x; 0 ᾱ)(C c; c* γ)(I 0; x* α) = (C + cx* + xc* + γxx*  α(c + γx); ᾱ(c + γx)*  |α|^2γ).

If γ ≠ 0, then setting α = 1/√γ and x = −(1/γ)c we get a matrix whose off-diagonal elements in the last row and column are zero. These transformations
preserve A_0; let us prove that these transformations reduce B_1 to the form

B_0 = (B_a 0 0; 0 I_k 0; 0 0 0),

where B_a is a matrix of size a × a and k = b − rank B_a. Take a permutation matrix P such that the transformation B_1 ↦ P*B_1P affects only the last n − a rows and columns of B_1 and puts the nonzero diagonal elements (among the last n − a diagonal elements) first. Then, with the help of the transformations indicated above, we start with the last nonzero element and, gradually shrinking the size of the considered matrix, eventually obtain a matrix of size a × a.
Let T_2 be an invertible matrix such that T_2*BT_2 = B_0 and T_2*AT_2 = A_0. There exists a unitary matrix U of order a such that U*B_aU is a diagonal matrix. Since U*I_aU = I_a, the matrix T = T_2U_1, where U_1 = (U 0; 0 I), is the required one.
20.2.3. Theorem ([Majindar, 1963]). Let A and B be Hermitian matrices and let there be no nonzero column x such that x*Ax = x*Bx = 0. Then there exists an invertible matrix T such that T*AT and T*BT are diagonal matrices.
Since any triangular Hermitian matrix is diagonal, Theorem 20.2.3 is a particular
case of the following statement.
20.2.4. Theorem. Let A and B be arbitrary complex square matrices such that there is no nonzero column x with x*Ax = x*Bx = 0. Then there exists an invertible matrix T such that T*AT and T*BT are triangular matrices.
Proof. If one of the matrices A and B, say B, is invertible, then p(λ) = |A + λB| is a nonconstant polynomial; if both matrices are noninvertible, then p(0) = |A| = 0. In either case there exists λ_0 such that |A + λ_0B| = 0. Take a nonzero column x_1 with (A + λ_0B)x_1 = 0; then Ax_1 and Bx_1 are proportional, and we can choose columns x_2, ..., x_n so that x_j*Ax_1 = x_j*Bx_1 = 0 for j ≥ 2. Let D be the matrix formed by the columns x_1, ..., x_n. Then

        ( x_1*Ax_1  x_1*Ax_2 ... x_1*Ax_n )                ( x_1*Bx_1  x_1*Bx_2 ... x_1*Bx_n )
D*AD =  (    0                            )   and  D*BD =  (    0                            )
        (    ⋮              A_1           )                (    ⋮              B_1           )
        (    0                            )                (    0                            )
Let us prove that D is invertible, i.e., that it is impossible to express the column x_1 linearly in terms of x_2, ..., x_n. Suppose, contrariwise, that x_1 = α_2x_2 + ... + α_nx_n. Then

x_1*Ax_1 = (α_2x_2 + ... + α_nx_n)*Ax_1 = 0.

Similarly, x_1*Bx_1 = 0; a contradiction. Hence, D is invertible.
Now, let us prove that the matrices A_1 and B_1 satisfy the hypothesis of the theorem. Suppose there exists a nonzero column y_1 = (α_2, ..., α_n)^T such that y_1*A_1y_1 = y_1*B_1y_1 = 0. As is easy to verify, A_1 = D_1*AD_1 and B_1 = D_1*BD_1, where D_1 is the matrix formed by the columns x_2, ..., x_n. Therefore, y*Ay = y*By = 0, where y = D_1y_1 = α_2x_2 + ... + α_nx_n ≠ 0, since the columns x_2, ..., x_n are linearly independent. Contradiction.
Problems
20.1. An Hermitian matrix A = (a_ij)_1^n is nonnegative definite and a_ii = 0 for some i. Prove that a_ij = a_ji = 0 for all j.
20.2 ([Albert, 1958]). Symmetric matrices A_i and B_i (i = 1, 2) are such that the characteristic polynomials of the matrices xA_1 + yA_2 and xB_1 + yB_2 are equal for all numbers x and y. Is there necessarily an orthogonal matrix U such that UA_iU^T = B_i for i = 1, 2?
21. Skew-symmetric matrices
A matrix A is said to be skew-symmetric if A^T = −A. In this section we consider real skew-symmetric matrices. Recall that the determinant of a skew-symmetric matrix of odd order vanishes since |A^T| = |A| and |−A| = (−1)^n|A|, where n is the order of the matrix.
21.1.1. Theorem. If A is a skew-symmetric matrix then A^2 is a symmetric nonpositive definite matrix.
Proof. We have (A^2)^T = (A^T)^2 = (−A)^2 = A^2 and x^TA^2x = −x^TA^TAx = −(Ax)^TAx ≤ 0.
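A quick numerical check of Theorem 21.1.1 (numpy assumed; the matrix is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(2)
G = rng.standard_normal((5, 5))
A = G - G.T                        # skew-symmetric: A^T = -A
A2 = A @ A

symmetric = bool(np.allclose(A2, A2.T))
nonpositive = bool(np.all(np.linalg.eigvalsh(A2) <= 1e-10))
print(symmetric, nonpositive)
```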
A matrix A is skew-symmetric if and only if x^TAx = 0 for all x. Indeed,

Σ_{i,j} a_ij x_i x_j = Σ_i a_ii x_i^2 + Σ_{i<j} (a_ij + a_ji) x_i x_j.

This quadratic form vanishes for all x if and only if all its coefficients are zero, i.e., a_ij + a_ji = 0.
21.2. A bilinear function B(x, y) = Σ_{i,j} a_ij x_i y_j is said to be skew-symmetric if B(x, y) = −B(y, x), i.e., if the matrix A = (a_ij) is skew-symmetric. A nonsingular skew-symmetric matrix A can be represented in the form

A = P^TJP, where J = diag((0 1; −1 0), ..., (0 1; −1 0)),
and the elements of P are rational functions of the a_ij. Taking into account that

(0 1; −1 0) = (1 0; 0 −1)(0 1; 1 0),

we can represent J as the product of matrices J_1 and J_2 with equal determinants. Therefore, A = (P^TJ_1)(J_2P) = FG, where the elements of F and G are rational functions of the elements of A and det F = det G.
Theorem. Let Λ_i = (0 λ_i; −λ_i 0). For a skew-symmetric operator A there exists an orthonormal basis with respect to which its matrix is of the form

diag(Λ_1, ..., Λ_k, 0, ..., 0).

Proof. The operator −A^2 is symmetric nonnegative definite. Let

V_λ = {v ∈ V | A^2v = −λ^2v}.

Then V = ⊕_λ V_λ and AV_λ ⊆ V_λ. If A^2v = 0 then (Av, Av) = −(A^2v, v) = 0, i.e., Av = 0. Therefore, it suffices to select an orthonormal basis in V_0 and in each V_λ with λ > 0.
For λ > 0 the restriction of A to V_λ has no real eigenvalues and the square of this restriction is equal to −λ^2I. Let x ∈ V_λ be a unit vector and y = λ^{-1}Ax. Then Ay = λ^{-1}A^2x = −λx,

(x, y) = λ^{-1}(x, Ax) = 0  and  (y, y) = λ^{-1}(Ax, y) = −λ^{-1}(x, Ay) = (x, x) = 1.
sends orthogonal matrices to skew-symmetric ones and the other way round. This map is called the Cayley transformation, and our expectations are largely true. Set

A# = (I − A)(I + A)^{-1}.
We can verify the identity (A# )# = A in a way similar to the proof of the identity
f (f (z)) = z; in the proof we should take into account that all matrices that we
encounter in the process of this transformation commute with each other.
Theorem. The Cayley transformation sends any skew-symmetric matrix to an orthogonal one and any orthogonal matrix A for which |A + I| ≠ 0 to a skew-symmetric one.
Proof. Since I − A and I + A commute, it does not matter from which side to divide and we can write the Cayley transformation as follows: A# = (I − A)/(I + A). If AA^T = I and |I + A| ≠ 0 then

(A#)^T = (I − A^T)/(I + A^T) = (I − A^{-1})/(I + A^{-1}) = (A − I)/(A + I) = −A#.

If A^T = −A then

(A#)^T = (I − A^T)/(I + A^T) = (I + A)/(I − A) = (A#)^{-1}.
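The theorem can be illustrated numerically; the sketch below (numpy assumed) applies the Cayley transformation to a random skew-symmetric matrix and checks orthogonality of the image and the identity (A#)# = A.

```python
import numpy as np

def cayley(A):
    """Cayley transformation A# = (I - A)(I + A)^{-1}."""
    I = np.eye(A.shape[0])
    return (I - A) @ np.linalg.inv(I + A)

rng = np.random.default_rng(3)
G = rng.standard_normal((4, 4))
A = G - G.T                                   # skew-symmetric, so I + A is invertible
Q = cayley(A)

orthogonal = bool(np.allclose(Q.T @ Q, np.eye(4)))
involution = bool(np.allclose(cayley(Q), A))  # (A#)# = A
print(orthogonal, involution)
```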
Let us prove by induction that for every matrix A of order n there exists a matrix J = diag(±1, ..., ±1) such that |A + J| ≠ 0. For n = 1 the statement is obvious. Let us express A in the block form A = (A_1 A_2; A_3 a), where A_1 is a matrix of order n − 1. By the inductive hypothesis there exists a matrix J_1 = diag(±1, ..., ±1) such that |A_1 + J_1| ≠ 0; then

|A_1 + J_1  A_2; A_3  a + 1| − |A_1 + J_1  A_2; A_3  a − 1| = 2|A_1 + J_1| ≠ 0

and, therefore, at least one of the determinants in the left-hand side is nonzero.
22.2. Prove that any unitary matrix of order 2 with determinant 1 is of the form

(u v; −v̄ ū), where |u|^2 + |v|^2 = 1.
22.3. The determinant of an orthogonal matrix A of order 3 is equal to 1.
a) Prove that (tr A)^2 − tr(A^2) = 2 tr A.
b) Prove that (Σ_i a_ii − 1)^2 + Σ_{i<j} (a_ij − a_ji)^2 = 4.
22.4. Let J be an invertible matrix. A matrix A is said to be J-orthogonal if A^TJA = J, i.e., A^T = JA^{-1}J^{-1}, and J-skew-symmetric if A^TJ = −JA, i.e., A^T = −JAJ^{-1}. Prove that the Cayley transformation sends J-orthogonal matrices into J-skew-symmetric ones and the other way around.
22.5 ([Djokovic, 1971]). Suppose the absolute values of all eigenvalues of an operator A are equal to 1 and |Ax| ≤ |x| for all x. Prove that A is a unitary operator.
22.6 ([Zassenhaus, 1961]). A unitary operator U sends some nonzero vector x to a vector Ux orthogonal to x. Prove that any arc of the unit circle that contains all eigenvalues of U is of length no less than π.
23. Normal matrices
A linear operator A over C is said to be normal if A*A = AA*; the matrix of a normal operator in an orthonormal basis is called a normal matrix. Clearly, a matrix A is normal if and only if A*A = AA*.
The following conditions are equivalent to A being a normal operator:
1) A = B + iC, where B and C are commuting Hermitian operators (cf. Theorem 10.3.4);
2) A = UΛU*, where U is a unitary and Λ a diagonal matrix, i.e., A has an orthonormal eigenbasis, cf. 17.1;
3) Σ_{i=1}^n |λ_i|^2 = Σ_{i,j=1}^n |a_ij|^2, where λ_1, ..., λ_n are the eigenvalues of A, cf. Theorem 34.1.1.
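Condition 3) can be checked numerically. In this sketch (numpy assumed) the normal matrix is built as a Hermitian matrix plus iI — an arbitrary illustrative choice, since these two summands commute.

```python
import numpy as np

rng = np.random.default_rng(4)
H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
N = H + H.conj().T + 1j * np.eye(4)        # Hermitian part plus i*I: a normal matrix
is_normal = bool(np.allclose(N @ N.conj().T, N.conj().T @ N))

eig_sum = np.sum(np.abs(np.linalg.eigvals(N)) ** 2)   # sum |lambda_i|^2
entry_sum = np.sum(np.abs(N) ** 2)                    # sum |a_ij|^2
condition3 = bool(np.isclose(eig_sum, entry_sum))
print(is_normal, condition3)
```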
Figure 5
Young tableau consisting of n cells, with n_i cells in the ith row; the first cells of all rows are situated in the first column, see Figure 5.
Clearly, nilpotent matrices are similar if and only if the same Young tableau
corresponds to them.
The dimension of Ker Am can be expressed in terms of the partition (n1 , . . . , nk ).
It is easy to check that
dim Ker A = k = Card{j | n_j ≥ 1},
dim Ker A^2 = dim Ker A + Card{j | n_j ≥ 2},
....................................
dim Ker A^m = dim Ker A^{m−1} + Card{j | n_j ≥ m}.
The partition (n'_1, ..., n'_l), where n'_i = Card{j | n_j ≥ i}, is called the dual of the partition (n_1, ..., n_k). The Young tableaux of dual partitions of a number n are obtained from each other by a transposition similar to the transposition of a matrix. If the partition (n_1, ..., n_k) corresponds to a nilpotent matrix A then dim Ker A^m = n'_1 + ... + n'_m.
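The relation dim Ker A^m = n'_1 + ... + n'_m can be checked directly on a nilpotent Jordan matrix. A sketch (numpy assumed; the partition (4, 2, 1) is an arbitrary example):

```python
import numpy as np

def jordan_nilpotent(sizes):
    """Block-diagonal nilpotent Jordan matrix with the given block sizes."""
    n = sum(sizes)
    A = np.zeros((n, n))
    pos = 0
    for s in sizes:
        for i in range(s - 1):
            A[pos + i, pos + i + 1] = 1.0
        pos += s
    return A

partition = (4, 2, 1)
dual = [sum(1 for nj in partition if nj >= i) for i in range(1, max(partition) + 1)]
A = jordan_nilpotent(partition)
n = A.shape[0]

for m in range(1, len(dual) + 1):
    dim_ker = n - np.linalg.matrix_rank(np.linalg.matrix_power(A, m))
    assert dim_ker == sum(dual[:m])       # dim Ker A^m = n'_1 + ... + n'_m
print(dual)
```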
Problems
24.1. Let A and B be two matrices of order n. Prove that if A + λB is a nilpotent matrix for n + 1 distinct values of λ, then A and B are nilpotent matrices.
24.2. Find matrices A and B such that λA + μB is nilpotent for any λ and μ but there exists no matrix P such that P^{-1}AP and P^{-1}BP are triangular matrices.
25. Projections. Idempotent matrices
25.1. An operator P : V V is called a projection (or idempotent) if P 2 = P .
25.1.1. Theorem. In a certain basis, the matrix of a projection P is of the
form diag(1, . . . , 1, 0, . . . , 0).
Proof. Any vector v V can be represented in the form v = P v + (v P v),
where P v Im P and v P v Ker P . Besides, if x Im P Ker P , then x = 0.
Indeed, in this case x = P y and P x = 0 and, therefore, 0 = P x = P 2 y = P y = x.
Hence, V = Im P Ker P . For a basis of V select the union of bases of Im P and
Ker P . In this basis the matrix of P is of the required form.
25.1.1.1. Corollary. There exists a one-to-one correspondence between projections and decompositions V = W1 W2 . To every such decomposition there
corresponds the projection P (w1 + w2 ) = w1 , where w1 W1 and w2 W2 , and to
every projection there corresponds a decomposition V = Im P Ker P .
The operator P can be called the projection onto W1 parallel to W2 .
25.1.1.2. Corollary. If P is a projection then rank P = tr P .
25.1.2. Theorem. If P is a projection, then I P is also a projection; besides,
Ker(I P ) = Im P and Im(I P ) = Ker P .
Proof. If P 2 = P then (I P )2 = I 2P + P 2 = I P . According to the
proof of Theorem 25.1.1 Ker P consists of vectors v P v, i.e., Ker P = Im(I P ).
Similarly, Ker(I P ) = Im P .
Corollary. If P is the projection onto W1 parallel to W2 , then I P is the
projection onto W2 parallel to W1 .
25.2. Let P be a projection and V = Im P ⊕ Ker P. If Im P ⊥ Ker P, then Pv is the orthogonal projection of v onto Im P; cf. 9.3.
25.2.1. Theorem. A projection P is Hermitian if and only if Im P ⊥ Ker P.
Proof. If P is Hermitian then Ker P = (Im P*)^⊥ = (Im P)^⊥. Now, suppose that P is a projection and Im P ⊥ Ker P. The vectors x − Px and y − Py belong to Ker P; therefore, (Px, y − Py) = 0 and (x − Px, Py) = 0, i.e., (Px, y) = (Px, Py) = (x, Py).
Remark. If a projection P is Hermitian, then (P x, y) = (P x, P y); in particular,
(P x, x) = |P x|2 .
25.2.2. Theorem. A projection P is Hermitian if and only if |Px| ≤ |x| for all x.
Proof. If the projection P is Hermitian, then x − Px ⊥ Px and, therefore, |x|^2 = |Px|^2 + |Px − x|^2 ≥ |Px|^2. Thus, it remains to prove that if |Px| ≤ |x| for all x, then Ker P ⊥ Im P.
Assume that v ∈ Im P is not perpendicular to Ker P and v_1 is the projection of v on Ker P. Then |v − v_1| < |v| and v = P(v − v_1); therefore, |v − v_1| < |P(v − v_1)|. Contradiction.
Hermitian projections P and Q are said to be orthogonal if Im P ⊥ Im Q, i.e., PQ = QP = 0.
25.2.3. Theorem. Let P_1, ..., P_n be Hermitian projections. The operator P = P_1 + ... + P_n is a projection if and only if P_iP_j = 0 for i ≠ j.
Proof. If P_iP_j = 0 for i ≠ j then

P^2 = (P_1 + ... + P_n)^2 = P_1^2 + ... + P_n^2 = P_1 + ... + P_n = P.

Now, suppose that P = P_1 + ... + P_n is a projection. This projection is Hermitian and, therefore, if x = P_ix then

|x|^2 = |P_ix|^2 ≤ |P_1x|^2 + ... + |P_nx|^2 = (P_1x, x) + ... + (P_nx, x) = (Px, x) = |Px|^2 ≤ |x|^2.

Hence, P_jx = 0 for j ≠ i, i.e., P_jP_i = 0.
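The basic properties of Hermitian projections proved above can be verified numerically (numpy assumed; the subspace is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(5)
W = rng.standard_normal((5, 2))              # columns span a 2-dimensional subspace
Q, _ = np.linalg.qr(W)                       # orthonormal basis of that subspace
P = Q @ Q.T                                  # Hermitian projection onto it

idempotent = bool(np.allclose(P @ P, P))                     # P^2 = P
hermitian = bool(np.allclose(P, P.T))                        # P* = P
trace_is_rank = bool(np.isclose(np.trace(P), np.linalg.matrix_rank(P)))
x = rng.standard_normal(5)
contraction = bool(np.linalg.norm(P @ x) <= np.linalg.norm(x) + 1e-12)
print(idempotent, hermitian, trace_is_rank, contraction)
```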
In the basis chosen, the matrix of A is of the form (I P_21; P_12 I). Consider the matrix

B = (I 0; −P_12 I)A = (I P_21; 0 I − P_12P_21).

As is easy to verify, |I − P_12P_21| = |B| = |A| > 0. Now, let us prove that the absolute value of each of the eigenvalues of I − P_12P_21 (i.e., of the restriction of B to V_2) does not exceed 1. Indeed, if x ∈ V_2 then

|Bx|^2 = (Bx, Bx) = (x − P_2P_1x, x − P_2P_1x) = |x|^2 − (P_2P_1x, x) − (x, P_2P_1x) + |P_2P_1x|^2.

Since

(1)  (P_2P_1x, x) = (P_1x, P_2x) = (P_1x, x) = |P_1x|^2

and |P_2P_1x| ≤ |P_1x|, it follows that |Bx|^2 ≤ |x|^2 − |P_1x|^2 ≤ |x|^2.
The absolute value of any eigenvalue of I − P_12P_21 does not exceed 1 and the determinant of this operator is positive; therefore, 0 < |I − P_12P_21| ≤ 1.
If |I − P_12P_21| = 1 then all eigenvalues of I − P_12P_21 are equal to 1 and, therefore, taking (1) into account we see that this operator is unitary; cf. Problem 22.1. Hence, |Bx| = |x| for any x ∈ V_2. Taking (1) into account once again, we get |P_1x| = 0, i.e., V_2 ⊥ V_1.
|H_1  H_1P_21; P_12  I| = |H_1  0; P_12  I| · |I  P_21; 0  I − P_12P_21| = |H_1| · |I − P_12P_21|.

It remains to make use of Lemma 25.4.1.1.
Proof of Theorem 25.4.1. As in the proof of Lemma 25.4.1.1, we can show that |A| > 0. The proof of the inequality |A| ≤ 1 will be carried out by induction on k. For k = 1 the statement is obvious. For k > 1 consider the space W = V_1 ⊕ ... ⊕ V_{k−1}. Let

Q_i = P_i|_W (i = 1, ..., k − 1),  H_1 = Q_1 + ... + Q_{k−1}.

By the inductive hypothesis |H_1| ≤ 1; besides, |H_1| > 0. Applying Lemma 25.4.1.2 to H = P_1 + ... + P_{k−1} we get 0 < |A| = |H + P_k| ≤ |H_1| ≤ 1.
If |A| = 1 then by Lemma 25.4.1.2 V_k ⊥ W. Besides, |H_1| = 1; hence, V_i ⊥ V_j for i, j ≤ k − 1.
25.4.2. Theorem ([Djokovic, 1971]). Let N_i be normal operators in V whose nonzero eigenvalues are equal to λ^(i)_1, ..., λ^(i)_{r_i}, and let r_1 + ... + r_k ≤ dim V. If the nonzero eigenvalues of N = N_1 + ... + N_k are equal to λ^(i)_j, where j = 1, ..., r_i and i = 1, ..., k, then N is a normal operator, Im N_i ⊥ Im N_j and N_iN_j = 0 for i ≠ j.
Proof. Let V_i = Im N_i. Since rank N = rank N_1 + ... + rank N_k, it follows that W = V_1 + ... + V_k is the direct sum of these subspaces. For a normal operator, Ker N_i = (Im N_i)^⊥ ⊇ W^⊥; hence, Ker N ⊇ W^⊥. It is also clear that dim Ker N = dim W^⊥. Therefore, without loss of generality we may confine ourselves to the subspace W and assume that r_1 + ... + r_k = dim V, i.e., det N ≠ 0.
Let M_i = N_i|_{V_i}. For a basis of V take the union of bases of the spaces V_i. Since N = Σ N_i = Σ N_iP_i = Σ M_iP_i, in this basis the matrix of N takes the form

(M_1P_11 ... M_1P_k1; ... ; M_kP_1k ... M_kP_kk) = (M_1 ... 0; ... ; 0 ... M_k)(P_11 ... P_k1; ... ; P_1k ... P_kk).
The condition on the eigenvalues of the operators N_i and N implies |N − λI| = Π_{i=1}^k |M_i − λI|. In particular, for λ = 0 we have |N| = Π_{i=1}^k |M_i|. Hence,

|P_11 ... P_k1; ... ; P_1k ... P_kk| = 1,  i.e.,  |P_1 + ... + P_k| = 1.
Applying Theorem 25.4.1 we see that V = V_1 ⊕ ... ⊕ V_k is the direct sum of orthogonal subspaces. Therefore, N is a normal operator, cf. 17.1, and N_iN_j = 0 for i ≠ j, since Im N_j ⊆ (Im N_i)^⊥ = Ker N_i.
Problems
25.1. Let P1 and P2 be projections. Prove that
a) P1 + P2 is a projection if and only if P1 P2 = P2 P1 = 0;
b) P1 P2 is a projection if and only if P1 P2 = P2 P1 = P2 .
25.2. Find all matrices of order 2 that are projections.
25.3 (The ergodic theorem). Let A be a unitary operator. Prove that

lim_{n→∞} (1/n) Σ_{i=0}^{n−1} A^i x = Px,

where P is the Hermitian projection onto Ker(A − I).
26. Involutions
i ni , where 0 = 1
i ni = n1 n
i in = n1
i i .
Solutions
19.1. Let S = UΛU*, where U is a unitary matrix and Λ = diag(λ_1, ..., λ_r, 0, ..., 0). Then S = S_1 + ... + S_r, where S_i = UΛ_iU* and Λ_i = diag(0, ..., λ_i, ..., 0).
19.2. We can represent A in the form UΛU^{-1}, where Λ = diag(λ_1, ..., λ_n), λ_i > 0. Therefore, adj A = U(adj Λ)U^{-1} and adj Λ = diag(λ_2 ... λ_n, ..., λ_1 ... λ_{n−1}).
19.3. Let λ_1, ..., λ_r be the nonzero eigenvalues of A. All of them are real and, therefore, (tr A)^2 = (λ_1 + ... + λ_r)^2 ≤ r(λ_1^2 + ... + λ_r^2) = r tr(A^2).
19.4. Let U be an orthogonal matrix such that U^{-1}AU = Λ = diag(λ_1, ..., λ_n) and |U| = 1. Set x = Uy. Then x^TAx = y^TΛy and dx_1 ... dx_n = dy_1 ... dy_n since the Jacobian of this transformation is equal to |U| = 1. Hence,

∫ e^{−x^TAx} dx = ∫ e^{−λ_1y_1^2 − ... − λ_ny_n^2} dy = Π_{i=1}^n ∫ e^{−λ_iy_i^2} dy_i = Π_{i=1}^n √(π/λ_i) = (√π)^n |A|^{−1/2}.
19.5. Let the columns i_1, ..., i_k of A be linearly independent, where k = rank A. Then every column of A can be expressed linearly in terms of them, i.e., there exists a k × n matrix X = (x_st) such that

( a_{1i_1} ... a_{1i_k} )   ( x_{11} ... x_{1n} )   ( a_{11} ... a_{1n} )
(    ⋮            ⋮     ) · (    ⋮          ⋮   ) = (    ⋮          ⋮   )
( a_{ni_1} ... a_{ni_k} )   ( x_{k1} ... x_{kn} )   ( a_{n1} ... a_{nn} )

In particular, for the rows i_1, ..., i_k of A we get the expression

( a_{i_1i_1} ... a_{i_1i_k} )   ( x_{11} ... x_{1n} )   ( a_{i_11} ... a_{i_1n} )
(    ⋮              ⋮       ) · (    ⋮          ⋮   ) = (    ⋮            ⋮     )
( a_{i_ki_1} ... a_{i_ki_k} )   ( x_{k1} ... x_{kn} )   ( a_{i_k1} ... a_{i_kn} )

Both for a symmetric matrix and for an Hermitian matrix the linear independence of the columns i_1, ..., i_k implies the linear independence of the rows i_1, ..., i_k and, therefore,

| a_{i_1i_1} ... a_{i_1i_k} |
|    ⋮              ⋮       | ≠ 0.
| a_{i_ki_1} ... a_{i_ki_k} |
19.6. The scalar product of the ith row of S by the jth column of S^{-1} vanishes for i ≠ j. Since all elements of S are positive, every column of S^{-1} contains a positive and a negative element; hence, the number of nonzero elements of S^{-1} is not less than 2n and the number of zero elements does not exceed n^2 − 2n.
An example of a matrix S^{-1} with precisely the needed number of zero elements is obtained from the matrix S with s_ij = 1 if min(i, j) is odd and s_ij = 2 if min(i, j) is even:

    ( 1 1 1 1 1 ... )            (  2 −1  0  ...  0 )
    ( 1 2 2 2 2 ... )            ( −1  0  1         )
S = ( 1 2 1 1 1 ... ),  S^{-1} = (  0  1  0  ⋱      ),
    ( 1 2 1 2 2 ... )            (        ⋱   0  ±1 )
    ( 1 2 1 2 1 ... )            (  0  ...   ±1   s )
    ( ⋮           ⋱ )

i.e., S^{-1} is the tridiagonal matrix whose diagonal is (2, 0, ..., 0, s) with s = (−1)^n and whose off-diagonal elements ±1 alternate in sign.
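The example can be verified numerically. The sketch below (numpy assumed) builds S from the pattern s_ij = 1 if min(i, j) is odd and 2 otherwise — an assumed reading of the garbled display — and counts the zero elements of S^{-1}.

```python
import numpy as np

n = 6
S = np.array([[1.0 if min(i, j) % 2 == 1 else 2.0
               for j in range(1, n + 1)] for i in range(1, n + 1)])
Sinv = np.linalg.inv(S)
Sinv[np.abs(Sinv) < 1e-9] = 0.0           # clean up floating-point noise

zero_count = int(np.sum(Sinv == 0.0))
print(zero_count, n * n - 2 * n)          # exactly n^2 - 2n zeros
```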
20.1. Let a_ii = 0 and a_ij ≠ 0. Take a column x such that x_i = ta_ij with t real, x_j = 1, the other elements being zero. Then x*Ax = a_jj + 2t|a_ij|^2. As t varies from −∞ to +∞ the quantity x*Ax takes both positive and negative values.
20.2. No, not necessarily. Let A_1 = B_1 = diag(0, 1, −1); let

A_2 = (0 √2 √2; √2 0 0; √2 0 0)  and  B_2 = (0 0 0; 0 0 2; 0 2 0).

The characteristic polynomials of xA_1 + yA_2 and xB_1 + yB_2 are both equal to −λ^3 + (x^2 + 4y^2)λ. Now, suppose that U is an orthogonal matrix such that UA_iU^T = B_i for i = 1, 2. Since UA_1U^T = A_1 and the diagonal elements of A_1 are distinct, U = diag(ε_1, ε_2, ε_3), where ε_i = ±1. Hence,

(0 0 0; 0 0 2; 0 2 0) = B_2 = UA_2U^T = (0 ±√2 ±√2; ±√2 0 0; ±√2 0 0).

Contradiction.
21.1. The nonzero eigenvalues of A are purely imaginary and, therefore, 1
cannot be its eigenvalue.
21.2. Since (−A)^{-1} = −A^{-1}, it follows that (A^{-1})^T = (A^T)^{-1} = (−A)^{-1} = −A^{-1}.
21.3. We will repeatedly make use of the fact that for a skew-symmetric matrix A of even order dim Ker A is an even number. (Indeed, the rank of a skew-symmetric matrix is an even number, see 21.2.) First, consider the case of the zero eigenvalue, i.e., let us prove that if dim Ker AB ≥ 1 then dim Ker AB ≥ 2. If |B| = 0, then dim Ker AB ≥ dim Ker B ≥ 2. If |B| ≠ 0, then Ker AB = B^{-1}Ker A; hence, dim Ker AB ≥ 2.
Now, suppose that dim Ker(AB − λI) ≥ 1 for λ ≠ 0. We will prove that dim Ker(AB − λI) ≥ 2. If (ABA − λA)u = 0, then (AB − λI)Au = 0, i.e., AU ⊆ Ker(AB − λI), where U = Ker(ABA − λA). Therefore, it suffices to prove that dim AU ≥ 2. Since Ker A ⊆ U, it follows that dim AU = dim U − dim Ker A. The matrix ABA is skew-symmetric; thus, the numbers dim U and dim Ker A are even; hence, dim AU is an even number.
It remains to verify that Ker A ≠ U. Suppose that (AB − λI)Ax = 0 implies Ax = 0. Then Im A ∩ Ker(AB − λI) = 0. On the other hand, if (AB − λI)x = 0 then x = A(λ^{-1}Bx) ∈ Im A, i.e., Ker(AB − λI) ⊆ Im A and dim Ker(AB − λI) ≥ 1. Contradiction.
22.1. The roots of p(λ) are such that if z is a root of it then z̄^{-1} = z/(zz̄) is also a root. Therefore, the polynomial q(λ) = λ^n p(λ^{-1}) has the same roots as p (with the same multiplicities). Besides, the constant term of p(λ) is equal to ±1 and, therefore, the leading coefficients of p(λ) and q(λ) can differ only in sign.
22.2. Let (a b; c d) be a unitary matrix with determinant 1. Then

(a b; c d)^{-1} = (d −b; −c a)  and  (a b; c d)^{-1} = (a b; c d)* = (ā c̄; b̄ d̄),

i.e., a = d̄ and b = −c̄. Besides, ad − bc = 1, i.e., |a|^2 + |b|^2 = 1.
22.3. a) A is a rotation through an angle θ and, therefore, tr A = 1 + 2cos θ and tr(A^2) = 1 + 2cos 2θ = 4cos^2 θ − 1; hence, (tr A)^2 − tr(A^2) = (1 + 2cos θ)^2 − (4cos^2 θ − 1) = 2 + 4cos θ = 2 tr A.
b) Clearly,

Σ_{i<j} (a_ij − a_ji)^2 = Σ_{i≠j} a_ij^2 − 2 Σ_{i<j} a_ij a_ji

and

tr(A^2) = Σ_i a_ii^2 + 2 Σ_{i<j} a_ij a_ji.

By a), tr(A^2) = (tr A)^2 − 2 tr A = (Σ_i a_ii − 1)^2 − 1. Hence,

Σ_{i<j} (a_ij − a_ji)^2 + (Σ_i a_ii − 1)^2 − 1 = Σ_{i≠j} a_ij^2 + Σ_i a_ii^2 = 3,

since the sum of the squares of all elements of an orthogonal matrix of order 3 is equal to 3.
22.4. Set A/B = AB^{-1}; then the cancellation rule takes the form (AB)/(CB) = A/C. If A^T = JA^{-1}J^{-1} then

(A#)^T = (I − A^T)/(I + A^T) = (I − JA^{-1}J^{-1})/(I + JA^{-1}J^{-1}) = (J(A − I)A^{-1}J^{-1})/(J(A + I)A^{-1}J^{-1}) = J((A − I)/(A + I))J^{-1} = −JA#J^{-1}.

If A^T = −JAJ^{-1} then

(A#)^T = (I − A^T)/(I + A^T) = (I + JAJ^{-1})/(I − JAJ^{-1}) = (J(I + A)J^{-1})/(J(I − A)J^{-1}) = J(A#)^{-1}J^{-1}.
Let t_i = |x_i|^2/|x|^2. Since t_i ≥ 0, Σ_i t_i = 1 and Σ_i t_iλ_i = 0, the origin belongs to the interior of the convex hull of λ_1, ..., λ_n.
The equality

(D 0; 0 0) = (D_+ 0; 0 0)(U_1 U_2; U_3 U_4) = (D_+U_1  D_+U_2; 0  0)

can hold only if U_1 = D_+^{-1}D = diag(e^{iφ_1}, ..., e^{iφ_k}) and U_2 = 0; since (U_1 U_2; U_3 U_4) is a unitary matrix, U_3 = 0 as well. Hence,

(U_1 0; 0 U_4)(D_+ 0; 0 0) = (D_+ 0; 0 0)(U_1 0; 0 U_4).
23.5. A matrix X is normal if and only if tr(X*X) = Σ |λ_i|^2, where λ_i are the eigenvalues of X; cf. 34.1. Besides, the eigenvalues of X = AB and Y = BA coincide; cf. 11.7. It remains to verify that tr(X*X) = tr(Y*Y). This is easy to do if we take into account that A*A = AA* and B*B = BB*.
24.1. The matrix (A + λB)^n can be represented in the form

(A + λB)^n = A^n + λC_1 + ... + λ^{n−1}C_{n−1} + λ^nB^n,

where the matrices C_1, ..., C_{n−1} do not depend on λ. Let a, c_1, ..., c_{n−1}, b be the elements of the matrices A^n, C_1, ..., C_{n−1}, B^n occupying the (i, j)th position. Then a + λc_1 + ... + λ^{n−1}c_{n−1} + λ^nb = 0 for n + 1 distinct values of λ. We have obtained a system of n + 1 equations for n + 1 unknowns a, c_1, ..., c_{n−1}, b. The determinant of this system is a Vandermonde determinant and, therefore, it is nonzero. Hence, the system obtained has only the zero solution. In particular, a = b = 0 and, therefore, A^n = B^n = 0.
24.2. Let

A = (0 1 0; 0 0 1; 0 0 0),  B = (0 0 0; 1 0 0; 0 −1 0)  and  C = λA + μB.

As is easy to verify, tr C = tr(C^2) = det C = 0 for all λ and μ; hence, C^3 = 0.
It is impossible to reduce A and B to triangular form simultaneously, since AB = diag(1, −1, 0) is not nilpotent.
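The counterexample can be verified numerically; one sign below is an assumed reading of the garbled display (numpy assumed):

```python
import numpy as np

A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
B = np.array([[0., 0., 0.],      # the -1 entry is an assumed reading of the display
              [1., 0., 0.],
              [0., -1., 0.]])

rng = np.random.default_rng(6)
all_nilpotent = True
for _ in range(20):
    lam, mu = rng.standard_normal(2)
    C = lam * A + mu * B
    if not np.allclose(np.linalg.matrix_power(C, 3), 0):
        all_nilpotent = False

ab_eigs = sorted(np.linalg.eigvals(A @ B).real)   # AB is not nilpotent
print(all_nilpotent, ab_eigs)
```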
25.3. Express x in the form x = x_1 + x_2, where x_1 = (I − A)y ∈ Im(I − A) and x_2 ∈ Ker(I − A*). Then

|(1/n) Σ_{i=0}^{n−1} A^i x_1| = (1/n)|y − A^n y| ≤ 2|y|/n → 0 as n → ∞.

Since x_2 ∈ Ker(I − A*), it follows that x_2 = A*x_2 = A^{-1}x_2, i.e., Ax_2 = x_2. Hence,

lim_{n→∞} (1/n) Σ_{i=0}^{n−1} A^i x_2 = lim_{n→∞} (1/n) Σ_{i=0}^{n−1} x_2 = x_2.
MULTILINEAR ALGEBRA
(q factors); the numbers T^{j_1...j_q}_{i_1...i_p} are called the coordinates of the tensor T in the basis e_1, ..., e_n.
Let us establish how the coordinates of a tensor change under the passage to another basis. Let ε_j = Ae_j = Σ_i a_ij e_i and ε*_j = Σ_i b_ij e*_i. It is easy to see that B = (A^T)^{-1}; cf. 5.3.
Introduce the notation a^i_j = a_ij and b^j_i = b_ij, and denote the tensor (1) by T·(e ⊗ e*) for brevity. Then T·(e ⊗ e*) = S·(ε ⊗ ε*), i.e.,

(2)  S^{j_1...j_q}_{i_1...i_p} = T^{l_1...l_q}_{k_1...k_p} a^{k_1}_{i_1} ... a^{k_p}_{i_p} b^{j_1}_{l_1} ... b^{j_q}_{l_q}

(here summation over repeated indices is assumed). Formula (2) relates the coordinates S of the tensor in the basis {ε_i} with the coordinates T in the basis {e_i}.
On tensors of type (1, 1) (which can be identified with linear operators) a convolution is defined; it sends f ⊗ v to f(v). The convolution maps an operator to its trace; cf. Theorem 27.2.2.
Let 1 ≤ i ≤ p and 1 ≤ j ≤ q. Consider the linear map T^q_p(V) → T^{q−1}_{p−1}(V):

f_1 ⊗ ... ⊗ f_p ⊗ v_1 ⊗ ... ⊗ v_q ↦ f_i(v_j) f' ⊗ v',

where f' and v' are the tensor products of f_1, ..., f_p and v_1, ..., v_q with f_i and v_j, respectively, omitted. This map is called the convolution of a tensor with respect to its ith lower index and jth upper index.
27.4. Linear maps A_i: V_i → W_i (i = 1, ..., k) induce a linear map

A_1 ⊗ ... ⊗ A_k: V_1 ⊗ ... ⊗ V_k → W_1 ⊗ ... ⊗ W_k,
e^1_i ⊗ ... ⊗ e^k_j ↦ A_1e^1_i ⊗ ... ⊗ A_ke^k_j.
As is easy to verify, this map sends v_1 ⊗ ... ⊗ v_k to A_1v_1 ⊗ ... ⊗ A_kv_k. The map A_1 ⊗ ... ⊗ A_k is called the tensor product of the operators A_1, ..., A_k.
If Ae_j = Σ_i a_ij ε_i and Be'_q = Σ_p b_pq ε'_p then A ⊗ B(e_j ⊗ e'_q) = Σ_{i,p} a_ij b_pq ε_i ⊗ ε'_p. Hence, by appropriately ordering the bases e_i ⊗ e'_q and ε_i ⊗ ε'_p we can express the matrix A ⊗ B in either of the forms

(a_11B ... a_1nB; ... ; a_m1B ... a_mnB)   or   (b_11A ... b_1lA; ... ; b_k1A ... b_klA).
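The first of the two block forms is exactly what numpy's kron computes; a small sketch (the matrices are arbitrary examples):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 5],
              [6, 7]])

K = np.kron(A, B)                          # blocks a_ij * B
manual = np.block([[A[0, 0] * B, A[0, 1] * B],
                   [A[1, 0] * B, A[1, 1] * B]])
same = bool(np.array_equal(K, manual))

# the other ordering of the basis gives np.kron(B, A), a permutation of K
print(same, K.shape)
```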
i
i
V k
V l V m
V n.
Proof. Clearly,

f(a_1, ..., a_{k−1}, x, a_k, ..., a_{p−1}) = Σ f_{i_1...i_p} ε_{i_1}(a_1) ... ε_{i_k}(x) ... ε_{i_p}(a_{p−1}) = Σ_s c_s ε_s(x).
Problems
27.1. Prove that v ⊗ w = v' ⊗ w' ≠ 0 if and only if v = λv' and w' = λw for some λ.
27.2. Let A_i: V_i → W_i (i = 1, 2) be linear maps. Prove that
a) Im(A_1 ⊗ A_2) = (Im A_1) ⊗ (Im A_2);
b) Im(A_1 ⊗ A_2) = (Im A_1 ⊗ W_2) ∩ (W_1 ⊗ Im A_2);
c) Ker(A_1 ⊗ A_2) = Ker A_1 ⊗ V_2 + V_1 ⊗ Ker A_2.
27.3. Let V_1, V_2 ⊆ V and W_1, W_2 ⊆ W. Prove that

(V_1 ⊗ W_1) ∩ (V_2 ⊗ W_2) = (V_1 ∩ V_2) ⊗ (W_1 ∩ W_2).

27.4. Let V be a Euclidean space, let V* be canonically identified with V, and let a ∈ V be a unit vector. Prove that the operator A = I − 2a ⊗ a* is the symmetry through the hyperplane a^⊥.
27.5. Let A(x, y) be a bilinear function on a Euclidean space such that A(x, y) = 0 whenever x ⊥ y. Prove that A(x, y) is proportional to the inner product (x, y).
28. Symmetric and skew-symmetric tensors
28.1. To every permutation σ ∈ S_q we can assign a linear operator

f_σ: T_0^q(V) → T_0^q(V),  v_1 ⊗ ... ⊗ v_q ↦ v_{σ(1)} ⊗ ... ⊗ v_{σ(q)}.

A tensor T ∈ T_0^q(V) is said to be symmetric (resp. skew-symmetric) if f_σ(T) = T (resp. f_σ(T) = (−1)^σ T) for any σ. The symmetric tensors constitute the subspace S^q(V) and the skew-symmetric tensors the subspace Λ^q(V) in T_0^q(V). Clearly, S^q(V) ∩ Λ^q(V) = 0 for q ≥ 2.
The operator S = (1/q!) Σ_σ f_σ is called the symmetrization and A = (1/q!) Σ_σ (−1)^σ f_σ the skew-symmetrization or alternation.
28.1.1. Theorem. S is the projection of T_0^q(V) onto S^q(V) and A is the projection of T_0^q(V) onto Λ^q(V).
Proof. Obviously, the symmetrization of any tensor is a symmetric tensor and
on symmetric tensors S is the identity operator.
Since

f_τ(AT) = (1/q!) Σ_σ (−1)^σ f_τ f_σ(T) = (−1)^τ (1/q!) Σ_{ρ=τσ} (−1)^ρ f_ρ(T) = (−1)^τ AT,

it follows that Im A ⊆ Λ^q(V). If T is skew-symmetric then

AT = (1/q!) Σ_σ (−1)^σ f_σ(T) = (1/q!) Σ_σ (−1)^σ (−1)^σ T = T.
We introduce the notation:

S(e_{i_1} ⊗ ... ⊗ e_{i_q}) = e_{i_1} · ... · e_{i_q}  and  A(e_{i_1} ⊗ ... ⊗ e_{i_q}) = e_{i_1} ∧ ... ∧ e_{i_q}.

For example, e_i · e_j = ½(e_i ⊗ e_j + e_j ⊗ e_i) and e_i ∧ e_j = ½(e_i ⊗ e_j − e_j ⊗ e_i). If e_1, ..., e_n is a basis of V, then the tensors e_{i_1} · ... · e_{i_q} span S^q(V) and the tensors e_{i_1} ∧ ... ∧ e_{i_q} span Λ^q(V). The tensor e_{i_1} · ... · e_{i_q} only depends on the number of times each e_i enters this product and, therefore, we can set e_{i_1} · ... · e_{i_q} = e_1^{k_1} · ... · e_n^{k_n}, where k_i is the multiplicity of occurrence of e_i in e_{i_1} · ... · e_{i_q}. The tensor e_{i_1} ∧ ... ∧ e_{i_q} changes sign under the permutation of any two factors e_{i_α} and e_{i_β} and, therefore, e_{i_1} ∧ ... ∧ e_{i_q} = 0 if e_{i_α} = e_{i_β}; hence, the tensors e_{i_1} ∧ ... ∧ e_{i_q}, where 1 ≤ i_1 < ... < i_q ≤ n, span the space Λ^q(V). In particular, Λ^q(V) = 0 for q > n.
it follows that

A(A(T_1) ⊗ T_2) = A((1/p!) Σ_{σ∈S_p} (−1)^σ x_{σ(1)} ⊗ ... ⊗ x_{σ(p)} ⊗ x_{p+1} ⊗ ... ⊗ x_{p+q})
= (1/(p!(p+q)!)) Σ_{σ∈S_p} Σ_{τ∈S_{p+q}} (−1)^σ (−1)^τ f_τ(x_{σ(1)} ⊗ ... ⊗ x_{p+q}) = A(T_1 ⊗ T_2).
Clearly,

x_{p+1} ⊗ ... ⊗ x_{p+q} ⊗ x_1 ⊗ ... ⊗ x_p = x_{σ(1)} ⊗ ... ⊗ x_{σ(p+q)},

where σ = (p+1, ..., p+q, 1, ..., p). To place 1 in the first position, ..., and p in the pth position of σ we have to perform pq transpositions. Hence, (−1)^σ = (−1)^{pq} and A(T_1 ⊗ T_2) = (−1)^{pq} A(T_2 ⊗ T_1).
In Λ(V), the kth power of ω, i.e., ω ∧ ... ∧ ω (k factors), is denoted by ω^k; in particular, ω^0 = 1.
28.3. A skew-symmetric function on V × ... × V is a multilinear function f(v_1, ..., v_q) such that f(v_{σ(1)}, ..., v_{σ(q)}) = (−1)^σ f(v_1, ..., v_q) for any permutation σ.
Theorem. The space Λ^q(V*) is canonically isomorphic to the space (Λ^qV)* and also to the space of skew-symmetric functions on V × ... × V.
Proof. As is easy to verify,

(f_1 ∧ ... ∧ f_q)(v_1, ..., v_q) = A(f_1 ⊗ ... ⊗ f_q)(v_1, ..., v_q) = (1/q!) Σ_σ (−1)^σ f_1(v_{σ(1)}) ... f_q(v_{σ(q)})
is a skew-symmetric function. If e_1, ..., e_n is a basis of V, then a skew-symmetric function f is given by its values f(e_{i_1}, ..., e_{i_q}), where 1 ≤ i_1 < ... < i_q ≤ n, and each such set of values corresponds to a skew-symmetric function. Therefore, the dimension of the space of skew-symmetric functions is equal to the dimension of Λ^q(V*); hence, these spaces are isomorphic.
Now, let us construct the canonical isomorphism Λ^q(V*) → (Λ^qV)*. The linear map V* ⊗ ... ⊗ V* → (V ⊗ ... ⊗ V)* which sends (f_1, ..., f_q) ∈ V* × ... × V* to the multilinear function f(v_1, ..., v_q) = f_1(v_1) ... f_q(v_q) is a canonical isomorphism. Consider the restriction of this map to Λ^q(V*). The element f_1 ∧ ... ∧ f_q = A(f_1 ⊗ ... ⊗ f_q) ∈ Λ^q(V*) turns into the multilinear function f(v_1, ..., v_q) = (1/q!) Σ_σ (−1)^σ f_1(v_{σ(1)}) ... f_q(v_{σ(q)}). The function f is skew-symmetric; therefore, we get a map Λ^q(V*) → (Λ^qV)*. Let us verify that this map is an isomorphism. To a multilinear function f on V × ... × V there corresponds, by 27.1, a linear function f̃ on V ⊗ ... ⊗ V.
Clearly,

f̃(A(v_1 ⊗ ... ⊗ v_q)) = (1/q!)^2 Σ_{σ,τ} (−1)^{στ} f_1(v_{στ(1)}) ... f_q(v_{στ(q)})
= (1/q!) Σ_σ (−1)^σ f_1(v_{σ(1)}) ... f_q(v_{σ(q)}) = (1/q!) |f_1(v_1) ... f_1(v_q); ... ; f_q(v_1) ... f_q(v_q)|.
Let e_1, ..., e_n and ε_1, ..., ε_n be dual bases of V and V*. The elements e_{i_1} ∧ ... ∧ e_{i_q} form a basis of Λ^qV. Consider the dual basis of (Λ^qV)*. The above implies that under the restriction considered the element ε_{i_1} ∧ ... ∧ ε_{i_q} turns into the basis element dual to e_{i_1} ∧ ... ∧ e_{i_q} with factor (q!)^{-1}.
Remark. As a byproduct we have proved that

f̃(A(v_1 ⊗ ... ⊗ v_q)) = (1/q!) f̃(v_1 ⊗ ... ⊗ v_q)  for f ∈ Λ^q(V*).
b) S^q(V ⊕ W) = ⊕_{i=0}^q (S^iV ⊗ S^{q−i}W).
Proof. Clearly, Λ^iV ⊆ T_0^i(V ⊕ W) and Λ^{q−i}W ⊆ T_0^{q−i}(V ⊕ W). Therefore, there exists a canonical embedding Λ^iV ⊗ Λ^{q−i}W → T_0^q(V ⊕ W). Let us project T_0^q(V ⊕ W) to Λ^q(V ⊕ W) with the help of alternation. As a result we get a canonical map

Λ^iV ⊗ Λ^{q−i}W → Λ^q(V ⊕ W)

that acts as follows:

(v_1 ∧ ... ∧ v_i) ⊗ (w_1 ∧ ... ∧ w_{q−i}) ↦ v_1 ∧ ... ∧ v_i ∧ w_1 ∧ ... ∧ w_{q−i}.

Selecting bases in V and W, it is easy to verify that the resulting map

⊕_{i=0}^q (Λ^iV ⊗ Λ^{q−i}W) → Λ^q(V ⊕ W)

is an isomorphism. For S^q(V ⊕ W) the proof is similar.
28.5.1. Theorem. Let Λ^qB(e_{j_1} ∧ ... ∧ e_{j_q}) = Σ_{1≤i_1<...<i_q≤n} b^{i_1...i_q}_{j_1...j_q} e_{i_1} ∧ ... ∧ e_{i_q}. Then b^{i_1...i_q}_{j_1...j_q} is equal to the minor B(i_1...i_q | j_1...j_q) of B.
Proof. Clearly,

Be_{j_1} ∧ ... ∧ Be_{j_q} = (Σ_{i_1} b_{i_1j_1}e_{i_1}) ∧ ... ∧ (Σ_{i_q} b_{i_qj_q}e_{i_q}) = Σ_{i_1,...,i_q} b_{i_1j_1} ... b_{i_qj_q} e_{i_1} ∧ ... ∧ e_{i_q},

and for every tuple with 1 ≤ i_1 < ... < i_q ≤ n the coefficient of e_{i_1} ∧ ... ∧ e_{i_q}, collected over all permutations of the indices, is exactly the minor B(i_1...i_q | j_1...j_q).
28.5.2. Theorem. det(Λ^qB) = (det B)^p and det(S^qB) = (det B)^r, where p = \binom{n−1}{q−1} and r = (q/n)\binom{n+q−1}{q}.

Corollary (Sylvester's identity). Since Λ^qB = C_q(B) is the qth compound matrix (see Corollary 28.5.1), det(C_q(B)) = (det B)^p, where p = \binom{n−1}{q−1}.
To a matrix B of order n we can assign a polynomial

Λ_B(t) = 1 + Σ_{q=1}^n tr(Λ^qB) t^q

and a series

S_B(t) = 1 + Σ_{q=1}^∞ tr(S^qB) t^q.
S_B(t) = (1 + tλ_1 + t^2λ_1^2 + ...) ... (1 + tλ_n + t^2λ_n^2 + ...).
Problems
28.1. A trilinear function f is symmetric with respect to the first two arguments
and skew-symmetric with respect to the last two arguments. Prove that f = 0.
28.2. Let f: R^m × R^m → R^n be a symmetric bilinear map such that f(x, x) ≠ 0 for x ≠ 0 and (f(x, x), f(y, y)) ≤ |f(x, y)|^2. Prove that m ≤ n.
28.3. Let ω = e_1 ∧ e_2 + e_3 ∧ e_4 + ... + e_{2n−1} ∧ e_{2n}, where e_1, ..., e_{2n} is a basis of a vector space. Prove that ω^n = n! e_1 ∧ ... ∧ e_{2n}.
28.4. Let A be a matrix of order n. Prove that det(A + I) = 1 + Σ_{q=1}^n tr(Λ^qA).
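Problem 28.4 can be checked numerically: tr(Λ^qA) is the sum of the principal minors of order q, i.e., the qth elementary symmetric function of the eigenvalues. A sketch (numpy assumed; the matrix is an arbitrary example):

```python
import itertools
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((4, 4))
n = A.shape[0]

total = 1.0
for q in range(1, n + 1):
    # tr(Lambda^q A) = sum of principal q x q minors
    for rows in itertools.combinations(range(n), q):
        total += np.linalg.det(A[np.ix_(rows, rows)])

identity_holds = bool(np.isclose(total, np.linalg.det(A + np.eye(n))))
print(identity_holds)
```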
28.5. Let d be the determinant of a system of linear equations
n
X
j=1
!
n
X
aij xj
apq xq = 0, (i, p = 1, . . . , n),
q=1
diag((0 1; −1 0), ..., (0 1; −1 0)) (see 21.2). Since det X = f(a_ij)/g(a_ij), where f and g are polynomials, it follows that det A = det(XJX^T) = (f/g)^2.
A = XJX^T,  where J = diag((0 1; −1 0), ..., (0 1; −1 0)).

Hence, f(A) = f(XJX^T) = (det X)f(J) and det A = (det X)^2 = (f(A)/f(J))^2.
Let us prove that

f(A) = n! Σ (−1)^σ a_{i_1i_2} ... a_{i_{2n−1}i_{2n}},

where σ is the permutation taking (1, 2, ..., 2n) to (i_1, i_2, ..., i_{2n}) and the summation runs over all partitions of {1, ..., 2n} into pairs {i_{2k−1}, i_{2k}}, where i_{2k−1} < i_{2k} (observe that the summation runs not over all permutations σ, but over partitions!). Let ω_ij = a_ij e_i ∧ e_j; then ω_ij ∧ ω_kl = ω_kl ∧ ω_ij and ω_ij ∧ ω_kl = 0 if some of the indices i, j, k, l coincide. Hence,

(Σ_{i<j} ω_ij)^n = Σ ω_{i_1i_2} ∧ ... ∧ ω_{i_{2n−1}i_{2n}} = Σ a_{i_1i_2} ... a_{i_{2n−1}i_{2n}} e_{i_1} ∧ ... ∧ e_{i_{2n}} = Σ (−1)^σ a_{i_1i_2} ... a_{i_{2n−1}i_{2n}} e_1 ∧ ... ∧ e_{2n},

and precisely n! summands have a_{i_1i_2} ... a_{i_{2n−1}i_{2n}} as the coefficient. Indeed, each of the n elements ω_{i_1i_2}, ..., ω_{i_{2n−1}i_{2n}} can be selected in any of the n factors of (Σ ω_ij)^n, and in each factor we select exactly one such element. In particular, f(J) = n!.
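The Pfaffian can be computed by the expansion along the first row (cf. Problem 29.1), and Pf(A)^2 = det A can then be checked numerically (numpy assumed; the function name is ours):

```python
import numpy as np

def pfaffian(A):
    """Pfaffian of a skew-symmetric matrix via expansion along the first row."""
    n2 = A.shape[0]
    if n2 == 0:
        return 1.0
    total = 0.0
    rest = list(range(1, n2))
    for idx, j in enumerate(rest):
        sign = (-1.0) ** idx              # alternating sign of the expansion
        sub = [k for k in rest if k != j]
        total += sign * A[0, j] * pfaffian(A[np.ix_(sub, sub)])
    return total

rng = np.random.default_rng(9)
G = rng.standard_normal((6, 6))
A = G - G.T                               # skew-symmetric of even order
pf = pfaffian(A)
pf_squared_is_det = bool(np.isclose(pf ** 2, np.linalg.det(A)))
print(pf_squared_is_det)
```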
29.2. Let 1 ≤ σ_1 < ... < σ_{2k} ≤ 2n. The set {σ_1, ..., σ_{2k}} can be complemented to the set {1, 2, ..., 2n} by the set {σ̄_1, ..., σ̄_{2(n−k)}}, where σ̄_1 < ... < σ̄_{2(n−k)}. As a result, to the set {σ_1, ..., σ_{2k}} we have assigned the permutation σ = (σ_1 ... σ_{2k} σ̄_1 ... σ̄_{2(n−k)}). It is easy to verify that (−1)^σ = (−1)^a, where

a = (σ_1 − 1) + (σ_2 − 2) + ... + (σ_{2k} − 2k).

The Pfaffian of a submatrix of the skew-symmetric matrix M = (m_ij)_1^{2n}, where m_ij = (−1)^{i+j+1} for i < j, possesses the following property.
29.2.1. Theorem. Let P_{σ_1...σ_{2k}} = Pf(M'), where M' = (m_{σ_iσ_j})_1^{2k}. Then P_{σ_1...σ_{2k}} = (−1)^σ, where σ = (σ_1 ... σ_{2k} σ̄_1 ... σ̄_{2(n−k)}) (see above).
Proof. Let us apply induction on k. Clearly, P1 2 = m1 2 = (1)1 +2 +1 .
The sign of the permutation corresponding to {1 , 2 } is equal to (1)a , where
a = (1 1) + (2 2) (1 + 2 + 1) mod 2.
Making use of the result of Problem 29.1 it is easy to verify that
P1 ...2k =
2k
X
i=2
2k
2k
X
X
(1)i (1)1 +2 +1 (1) (1)1 +i +1 = (1)
(1)i = (1) .
i=2
i=2
Let $\omega = \sum_{i<j}a_{ij}e_i\wedge e_j$ and $\mu = \sum_{i<j}m_{ij}e_i\wedge e_j$ be the elements of $\Lambda^2 V$ corresponding to $A$ and $M$, respectively. Since $\omega\wedge\mu = \mu\wedge\omega$, the Newton binomial formula holds:
$$\Lambda^n(\omega + \lambda^2\mu) = \sum_{k=0}^{n}\binom{n}{k}\lambda^{2k}(\Lambda^k\mu)\wedge(\Lambda^{n-k}\omega) = \sum_{k=0}^{n}\binom{n}{k}\lambda^{2k}\sum(k!\,P_{\sigma_1\dots\sigma_{2k}})\bigl((n-k)!\,\bar P_k\bigr)\, e_1\wedge\dots\wedge e_{2n} = n!\sum_{k=0}^{n}\lambda^{2k}P_k\, e_1\wedge\dots\wedge e_{2n},$$
i.e., $\operatorname{Pf}(A + \lambda^2 M) = \sum_{k=0}^{n}\lambda^{2k}P_k$.
Problems
29.1. Let $\operatorname{Pf}(A) = a_{pq}C_{pq} + f$, where $f$ does not depend on $a_{pq}$, and let $A_{pq}$ be the matrix obtained from $A$ by crossing out its $p$th and $q$th columns and rows. Prove that $C_{pq} = (-1)^{p+q+1}\operatorname{Pf}(A_{pq})$.
29.2. Let $X$ be a matrix of order $2n$ whose rows are the coordinates of vectors $x_1, \dots, x_{2n}$ and let $g_{ij} = \langle x_i, x_j\rangle$, where $\langle a, b\rangle = \sum_{k=1}^{n}(a_{2k-1}b_{2k} - a_{2k}b_{2k-1})$ for vectors $a = (a_1, \dots, a_{2n})$ and $b = (b_1, \dots, b_{2n})$. Prove that $\det X = \operatorname{Pf}(G)$, where $G = \|g_{ij}\|_1^{2n}$.
30. Decomposable skew-symmetric and symmetric tensors
30.1. A skew-symmetric tensor $\omega \in \Lambda^k(V)$ is said to be decomposable (or simple, or split) if it can be represented in the form $\omega = x_1\wedge\dots\wedge x_k$, where $x_i \in V$. A symmetric tensor $T \in S^k(V)$ is said to be decomposable (or simple, or split) if it can be represented in the form $T = S(x_1\otimes\dots\otimes x_k)$, where $x_i \in V$.
30.1.1. Theorem. If $x_1\wedge\dots\wedge x_k = y_1\wedge\dots\wedge y_k \ne 0$, then $\operatorname{Span}(x_1, \dots, x_k) = \operatorname{Span}(y_1, \dots, y_k)$.
Proof. Suppose, for instance, that $y_1 \notin \operatorname{Span}(x_1, \dots, x_k)$. Then the vectors $e_1 = x_1, \dots, e_k = x_k$ and $e_{k+1} = y_1$ can be complemented to a basis. Expanding the vectors $y_2, \dots, y_k$ with respect to this basis we get
$$e_1\wedge\dots\wedge e_k = e_{k+1}\wedge\Bigl(\sum a_{i_2\dots i_k}e_{i_2}\wedge\dots\wedge e_{i_k}\Bigr).$$
This equality contradicts the linear independence of the vectors $e_{i_1}\wedge\dots\wedge e_{i_k}$. $\square$
$$h_1(y_{\sigma(1)})\dots h_k(y_{\sigma(k)}) = 0,$$
$\{j_1, \dots, j_b\}$, then the map $i(u_j^*)$ sends $w_{i_1}\wedge\dots\wedge w_{i_{k-b}}\wedge u_{j_1}\wedge\dots\wedge u_{j_b}$ to
$$w_{i_1}\wedge\dots\wedge w_{i_{k-b}}\wedge u_{j_1'}\wedge\dots\wedge u_{j_{b-1}'};$$
Let $\sum_{\alpha=1}^{a}\omega_\alpha\wedge u_\alpha$ be the component of an element $\omega$ from the space which belongs to $\Lambda^{k-1}W\wedge U$. Then $i(u_\alpha^*)\bigl(\sum_\beta\omega_\beta\wedge u_\beta\bigr) = \omega_\alpha$ and, therefore, for all $f$ we have
$$0 = \Bigl\langle i(u_\alpha^*)\sum_\beta\omega_\beta\wedge u_\beta,\ f\Bigr\rangle = \Bigl\langle\sum_\beta\omega_\beta\wedge u_\beta,\ u_\alpha^*\wedge f\Bigr\rangle = \langle\omega_\alpha\wedge u_\alpha,\ u_\alpha^*\wedge f\rangle.$$
Corollary (Plücker relations). Let $\omega = \sum_{i_1<\dots<i_k}a_{i_1\dots i_k}e_{i_1}\wedge\dots\wedge e_{i_k}$ be a skew-symmetric tensor. It is decomposable if and only if
$$\Bigl(\sum_{i_1<\dots<i_k}a_{i_1\dots i_k}e_{i_1}\wedge\dots\wedge e_{i_k}\Bigr)\wedge\Bigl(\sum_j a_{j_1\dots j_{k-1}j}\, e_j\Bigr) = 0$$
for any $j_1 < \dots < j_{k-1}$. (To determine the coefficient $a_{j_1\dots j_{k-1}j}$ for $j_{k-1} > j$ we assume that $a_{\dots ij\dots} = -a_{\dots ji\dots}$.)
Proof. In our case
$$\omega^\perp = \{v \mid \langle\omega, f\wedge v^*\rangle = 0 \text{ for any } f \in \Lambda^{k-1}(V^*)\}.$$
Let $\varepsilon_1, \dots, \varepsilon_n$ be the basis dual to $e_1, \dots, e_n$; let $f = \varepsilon_{j_1}\wedge\dots\wedge\varepsilon_{j_{k-1}}$ and $v^* = \sum v_i\varepsilon_i$. Then
$$\langle\omega, f\wedge v^*\rangle = \Bigl\langle\sum_{i_1<\dots<i_k}a_{i_1\dots i_k}e_{i_1}\wedge\dots\wedge e_{i_k},\ \sum_j v_j\,\varepsilon_{j_1}\wedge\dots\wedge\varepsilon_{j_{k-1}}\wedge\varepsilon_j\Bigr\rangle = \frac{1}{n!}\sum_j a_{j_1\dots j_{k-1}j}v_j.$$
Therefore,
$$\omega^\perp = \Bigl\{v = \sum v_j e_j \ \Bigm|\ \sum_j a_{j_1\dots j_{k-1}j}v_j = 0 \text{ for any } j_1, \dots, j_{k-1}\Bigr\};$$
hence, $W = (\omega^\perp)^\perp = \{w = \sum_j a_{j_1\dots j_{k-1}j}e_j\}$. By Theorem 30.2.2, $\omega$ is decomposable if and only if $\omega\wedge w = 0$ for all $w \in W$. $\square$
For $k = 2$ the Plücker relations take the form
$$\Bigl(\sum_{i<j}a_{ij}e_i\wedge e_j\Bigr)\wedge\Bigl(\sum_q a_{pq}e_q\Bigr) = 0.$$
In this relation the coefficient of $e_i\wedge e_j\wedge e_q$ is equal to $a_{ij}a_{pq} - a_{iq}a_{pj} + a_{jq}a_{pi}$, and the relation
$$a_{ij}a_{pq} - a_{iq}a_{pj} + a_{jq}a_{pi} = 0$$
is nontrivial only if the numbers $i, j, p, q$ are distinct.
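For $k = 2$ the relation can also be checked numerically: if $\omega = x\wedge y$ is decomposable, then $a_{ij} = x_iy_j - x_jy_i$, and every quadruple of indices must satisfy the identity above. A small sketch in Python (the setup with random integer vectors is our own illustration, not part of the text):

```python
import random

# Coefficients of the decomposable 2-form x ^ y: a[i][j] = x_i*y_j - x_j*y_i.
n = 5
x = [random.randint(-9, 9) for _ in range(n)]
y = [random.randint(-9, 9) for _ in range(n)]
a = [[x[i] * y[j] - x[j] * y[i] for j in range(n)] for i in range(n)]

# Plucker relation for k = 2: a_ij*a_pq - a_iq*a_pj + a_jq*a_pi = 0
# (it is nontrivial only when i, j, p, q are distinct, but holds for all quadruples).
for i in range(n):
    for j in range(n):
        for p in range(n):
            for q in range(n):
                assert a[i][j] * a[p][q] - a[i][q] * a[p][j] + a[j][q] * a[p][i] == 0
```

Conversely, a 2-form whose coefficient array violates the identity for some quadruple cannot be decomposable.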
Problems
30.1. Let k V and e1 er 6= 0 for some ei V . Prove that =
1 e1 er if and only if ei = 0 for i = 1, . . . , r.
30.2. Let dim V = n and n1 V . Prove that is a decomposable skewsymmetric tensor.
Pn
30.3. Let e1 , . . . , e2n be linearly independent, = i=1 e2i1 e2i , and =
Span(). Find the dimension of W = ( ) .
30.4. Let tensors z1 = x1 xr and z2 = y1 yr be nonproportional;
X = Span(x1 , . . . , xr ) and Y = Span(y1 , . . . , yr ). Prove that Span(z1 , z2 ) consists
of decomposable skew-symmetric tensors if and only if dim(X Y ) = r 1.
30.5. Let W k V consist of decomposable skew-symmetric tensors. To every
= x1 xk W assign the subspace [] = Span(x1 , . . . , xk ) V . Prove that
either all subspaces [] have a common (k 1)-dimensional subspace or all of them
belong to one (k + 1)-dimensional subspace.
then $\operatorname{rank} T \le k$.
31.2. In the space of matrices of order $n$ select the basis $\{e_{ij}\}_{i,j=1}^{n}$ and let $\{\varepsilon_{ij}\}$ be the dual basis. Then $A = \sum_{i,j}a_{ij}e_{ij}$, $B = \sum_{i,j}b_{ij}e_{ij}$ and
$$AB = \sum_{i,j,k}\varepsilon_{ik}(A)\,\varepsilon_{kj}(B)\, e_{ij}.$$
Thus, the calculation of the product of two matrices of order $n$ reduces to the calculation of $n^3$ products $\varepsilon_{ik}(A)\varepsilon_{kj}(B)$ of linear functions. Is the number $n^3$ the least possible one?
It turns out that no, it is not. For example, for matrices of order 2 we can indicate 7 pairs of linear functions $f_p$ and $g_p$ and 7 matrices $E_p$ such that $AB = \sum_{p=1}^{7}f_p(A)g_p(B)E_p$. This decomposition was constructed in [Strassen, 1969]. The computation of the least number of such triples is equivalent to the computation of the rank of the tensor
$$\sum_{i,j,k}\varepsilon_{ik}\otimes\varepsilon_{kj}\otimes e_{ij} = \sum_p f_p\otimes g_p\otimes E_p.$$
Identify the space of vectors with the space of covectors and introduce, for brevity, the notation $a = e_{11}$, $b = e_{12}$, $c = e_{21}$ and $d = e_{22}$. It is easy to verify that for matrices of order 2
$$\sum_{i,j,k}\varepsilon_{ik}\otimes\varepsilon_{kj}\otimes e_{ij} = (a\otimes a + b\otimes c)\otimes a + (a\otimes b + b\otimes d)\otimes b + (c\otimes a + d\otimes c)\otimes c + (c\otimes b + d\otimes d)\otimes d.$$
Moreover, $\sum_{i,j,k}\varepsilon_{ik}\otimes\varepsilon_{kj}\otimes e_{ij} = \sum_{p=1}^{7}T_p$, where
$$\begin{aligned}
T_1 &= (a - d)\otimes(a - d)\otimes(a + d), & T_5 &= (c - d)\otimes a\otimes(c - d),\\
T_2 &= d\otimes(a + c)\otimes(a + c), & T_6 &= (b - d)\otimes(c + d)\otimes a,\\
T_3 &= (a - b)\otimes d\otimes(a - b), & T_7 &= (c - a)\otimes(a + b)\otimes d.\\
T_4 &= a\otimes(b + d)\otimes(b + d), &&
\end{aligned}$$
This decomposition leads to the following algorithm for computing the product of matrices $A = \begin{pmatrix}a_1&b_1\\ c_1&d_1\end{pmatrix}$ and $B = \begin{pmatrix}a_2&b_2\\ c_2&d_2\end{pmatrix}$. Let
$$\begin{aligned}
S_1 &= a_1 - d_1, & S_6 &= a_2 + c_2,\\
S_2 &= a_2 - d_2, & S_7 &= b_2 + d_2,\\
S_3 &= a_1 - b_1, & S_8 &= c_1 - d_1,\\
S_4 &= b_1 - d_1, & S_9 &= c_1 - a_1,\\
S_5 &= c_2 + d_2, & S_{10} &= a_2 + b_2;
\end{aligned}$$
$$P_1 = S_1S_2, \quad P_2 = S_3d_2, \quad P_3 = S_4S_5, \quad P_4 = d_1S_6, \quad P_5 = a_1S_7, \quad P_6 = S_8a_2, \quad P_7 = S_9S_{10};$$
$$S_{11} = P_1 + P_2, \quad S_{12} = S_{11} + P_3, \quad S_{13} = S_{12} + P_4, \quad S_{14} = P_5 - P_2,$$
$$S_{15} = P_4 + P_6, \quad S_{16} = P_1 + P_5, \quad S_{17} = S_{16} - P_6, \quad S_{18} = S_{17} + P_7.$$
Then $AB = \begin{pmatrix}S_{13}&S_{14}\\ S_{15}&S_{18}\end{pmatrix}$. Strassen's algorithm for computing $AB$ requires just 7 multiplications and 18 additions (or subtractions)$^4$.
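The scheme above can be transcribed directly into code; here is a sketch in Python (the function name is ours) computing a $2\times2$ product with the seven multiplications $P_1, \dots, P_7$ and checking it against the usual formula:

```python
def strassen_2x2(A, B):
    """Multiply 2x2 matrices with 7 multiplications (the scheme above)."""
    (a1, b1), (c1, d1) = A
    (a2, b2), (c2, d2) = B
    # the ten preliminary sums/differences S1, ..., S10
    S1, S2, S3, S4, S5 = a1 - d1, a2 - d2, a1 - b1, b1 - d1, c2 + d2
    S6, S7, S8, S9, S10 = a2 + c2, b2 + d2, c1 - d1, c1 - a1, a2 + b2
    # the seven multiplications
    P1, P2, P3, P4 = S1 * S2, S3 * d2, S4 * S5, d1 * S6
    P5, P6, P7 = a1 * S7, S8 * a2, S9 * S10
    # the concluding additions (S11, ..., S18 of the text, folded together)
    S13 = P1 + P2 + P3 + P4
    S14 = P5 - P2
    S15 = P4 + P6
    S18 = P1 + P5 - P6 + P7
    return ((S13, S14), (S15, S18))

A = ((1, 2), (3, 4))
B = ((5, 6), (7, 8))
assert strassen_2x2(A, B) == ((1*5 + 2*7, 1*6 + 2*8), (3*5 + 4*7, 3*6 + 4*8))
```

Applied recursively to block matrices, this scheme multiplies matrices of order $n$ in $O(n^{\log_2 7})$ arithmetic operations instead of $O(n^3)$.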
31.3. Let $V$ be a two-dimensional space with basis $\{e_1, e_2\}$. Consider the tensor
$$T = e_1\otimes e_1\otimes e_1 + e_1\otimes e_2\otimes e_2 + e_2\otimes e_1\otimes e_2.$$
31.3.1. Theorem. The rank of $T$ is equal to 3, but there exists a sequence of tensors of rank 2 which converges to $T$.
Proof. Let
$$T_\varepsilon = \frac{1}{\varepsilon}\bigl[e_1\otimes e_1\otimes(\varepsilon e_1 - e_2) + (e_1 + \varepsilon e_2)\otimes(e_1 + \varepsilon e_2)\otimes e_2\bigr].$$
Then $T_\varepsilon - T = \varepsilon\, e_2\otimes e_2\otimes e_2$ and, therefore, $\lim_{\varepsilon\to0}|T_\varepsilon - T| = 0$.
Suppose that
$$T = a\otimes b\otimes c + u\otimes v\otimes w = (\alpha_1e_1 + \alpha_2e_2)\otimes b\otimes c + (\beta_1e_1 + \beta_2e_2)\otimes v\otimes w = e_1\otimes(\alpha_1\, b\otimes c + \beta_1\, v\otimes w) + e_2\otimes(\alpha_2\, b\otimes c + \beta_2\, v\otimes w).$$
Then
$$e_1\otimes e_1 + e_2\otimes e_2 = \alpha_1\, b\otimes c + \beta_1\, v\otimes w \quad\text{and}\quad e_1\otimes e_2 = \alpha_2\, b\otimes c + \beta_2\, v\otimes w.$$
Hence, linearly independent tensors $b\otimes c$ and $v\otimes w$ of rank 1 belong to the space $\operatorname{Span}(e_1\otimes e_1 + e_2\otimes e_2,\ e_1\otimes e_2)$. The latter space can be identified with the space of matrices of the form $\begin{pmatrix}x&y\\0&x\end{pmatrix}$. But all such matrices of rank 1 are linearly dependent. Contradiction. $\square$
$^4$Strassen's algorithm is of importance nowadays since modern computers add (subtract) much faster than they multiply.
SOLUTIONS
eigenvalues of $A^m$ are equal to $\lambda_1^m, \dots, \lambda_n^m$ and, therefore, $\operatorname{tr}(A^m) = \operatorname{tr}(T(A)^m)$.
Both sides of this identity are polynomials in x of degree not exceeding m. Two
polynomials whose values are equal for all real x coincide and, therefore, their values
at x = i are also equal. Hence, tr(X m ) = tr(T (X)m ) for any X. It remains to
make use of the result of Problem 13.2.
27.3. Select a basis $\{v_\alpha\}$ in $V_1 \cap V_2$ and complement it to bases $\{v_\alpha, v_j^1\}$ and $\{v_\alpha, v_k^2\}$ of $V_1$ and $V_2$, respectively. The set $\{v_\alpha, v_j^1, v_k^2\}$ is a basis of $V_1 + V_2$. Similarly, construct a basis $\{w_\beta, w^1, w^2\}$ of $W_1 + W_2$. Then $\{v_\alpha\otimes w_\beta, v_\alpha\otimes w^1, v_j^1\otimes w_\beta, v_j^1\otimes w^1\}$ and $\{v_\alpha\otimes w_\beta, v_\alpha\otimes w^2, v_k^2\otimes w_\beta, v_k^2\otimes w^2\}$ are bases of $V_1\otimes W_1$ and $V_2\otimes W_2$, respectively, and the elements of these bases are also elements of a basis for $(V_1+V_2)\otimes(W_1+W_2)$, i.e., they are linearly independent. Hence, $\{v_\alpha\otimes w_\beta\}$ is a basis of $(V_1\otimes W_1) \cap (V_2\otimes W_2)$.
27.4. Clearly, $Ax = x - 2(a, x)a$, i.e., $Aa = -a$ and $Ax = x$ for $x \in a^\perp$.
27.5. Fix $a \ne 0$; then $A(a, x)$ is a linear function; hence, $A(a, x) = (b, x)$, where $b = B(a)$ for some linear map $B$. If $x \perp a$, then $A(a, x) = 0$, i.e., $(b, x) = 0$. Hence, $a \parallel b$ and, therefore, $B(a) = b = \lambda(a)a$. Since $A(u + v, x) = A(u, x) + A(v, x)$, it follows that
$$\lambda(u + v)(u + v) = \lambda(u)u + \lambda(v)v.$$
If the vectors $u$ and $v$ are linearly independent, then $\lambda(u) = \lambda(v) = \lambda$ and any other vector $w$ is linearly independent of one of the vectors $u$ or $v$; hence, $\lambda(w) = \lambda$. For a one-dimensional space the statement is obvious.
28.1. Let us successively change places of the first two arguments and of the last two arguments:
$$f(x, y, z) = f(y, x, z) = -f(y, z, x) = -f(z, y, x) = f(z, x, y) = f(x, z, y) = -f(x, y, z);$$
hence, $2f(x, y, z) = 0$.
28.2. Let us extend $f$ to a bilinear map $\mathbb{C}^m\times\mathbb{C}^m \to \mathbb{C}^n$. Consider the equation $f(z, z) = 0$, i.e., the system of quadratic equations
$$f_1(z, z) = 0,\ \dots,\ f_n(z, z) = 0.$$
Suppose $n < m$. Then this system has a nonzero solution $z = x + iy$. The first condition implies that $y \ne 0$. It is also clear that
$$0 = f(z, z) = f(x + iy, x + iy) = f(x, x) - f(y, y) + 2if(x, y).$$
Hence, $f(x, x) = f(y, y) \ne 0$ and $f(x, y) = 0$, so that $(f(x, x), f(y, y)) = |f(x, x)|^2 > 0 = |f(x, y)|^2$; this contradicts the second condition.
28.3. The elements $\omega_i = e_{2i-1}\wedge e_{2i}$ belong to $\Lambda^2(V)$; hence, $\omega_i\wedge\omega_j = \omega_j\wedge\omega_i$ and $\omega_i\wedge\omega_i = 0$. Thus,
$$\omega^n = \sum_{i_1, \dots, i_n}\omega_{i_1}\wedge\dots\wedge\omega_{i_n} = n!\,\omega_1\wedge\dots\wedge\omega_n = n!\, e_1\wedge\dots\wedge e_{2n}.$$
...tion is equal to $S^2(A)$. Besides, $\det S^2(A) = (\det A)^r$, where $r = \frac{2}{n}\binom{n+1}{2} = n + 1$ (see Theorem 28.5.3).
28.6. It is easy to verify that $\sigma_k = \operatorname{tr}(\Lambda^k A)$. If in a Jordan basis the diagonal of $A$ is of the form $(\lambda_1, \dots, \lambda_n)$, then $s_k = \lambda_1^k + \dots + \lambda_n^k$ and $\sigma_k = \sum_{i_1<\dots<i_k}\lambda_{i_1}\dots\lambda_{i_k}$. The required identity for the functions $s_k$ and $\sigma_k$ was proved in 4.1.
28.7. Let $e_j$ and $\varepsilon_j$, where $1 \le j \le m$, be dual bases. Let $v_i = \sum a_{ij}e_j$ and $f_i = \sum b_{ji}\varepsilon_j$. The quantity $n!\langle v_1\wedge\dots\wedge v_n,\ f_1\wedge\dots\wedge f_n\rangle$ can be computed in two ways. On the one hand, it is equal to
$$\begin{vmatrix} f_1(v_1) & \dots & f_1(v_n)\\ \vdots & \ddots & \vdots\\ f_n(v_1) & \dots & f_n(v_n)\end{vmatrix} = \begin{vmatrix} \sum_j a_{1j}b_{j1} & \dots & \sum_j a_{nj}b_{j1}\\ \vdots & \ddots & \vdots\\ \sum_j a_{1j}b_{jn} & \dots & \sum_j a_{nj}b_{jn}\end{vmatrix} = \det AB.$$
On the other hand, it is equal to
$$n!\,\Bigl\langle \sum_{k_1<\dots<k_n}\dots\ e_{k_1}\wedge\dots\wedge e_{k_n},\ \sum_{l_1,\dots,l_n} b_{l_11}\dots b_{l_nn}\,\varepsilon_{l_1}\wedge\dots\wedge\varepsilon_{l_n}\Bigr\rangle,$$
i.e., to the sum over $k_1 < \dots < k_n$ of the products of the corresponding order $n$ minors of $A$ and $B$.
29.1. Since $\operatorname{Pf}(A) = \sum(-1)^\sigma a_{i_1i_2}\dots a_{i_{2n-1}i_{2n}}$, where the sum runs over all partitions of $\{1, \dots, 2n\}$ into pairs $\{i_{2k-1}, i_{2k}\}$ with $i_{2k-1} < i_{2k}$, then
$$a_{i_1i_2}C_{i_1i_2} = a_{i_1i_2}\sum(-1)^\sigma a_{i_3i_4}\dots a_{i_{2n-1}i_{2n}}.$$
It remains to observe that the signs of the permutations
$$\sigma = \begin{pmatrix}1 & 2 & \dots & 2n\\ i_1 & i_2 & \dots & i_{2n}\end{pmatrix}\quad\text{and}\quad\begin{pmatrix}i_1 & i_2 & 1 & 2 & \dots & 2n\\ i_1 & i_2 & i_3 & i_4 & \dots & i_{2n}\end{pmatrix}$$
(in the second permutation the top row is $i_1, i_2$ followed by $1, 2, \dots, 2n$ with $i_1$ and $i_2$ omitted) differ by the factor $(-1)^{i_1+i_2+1}$.
29.2. Let $J = \operatorname{diag}\left(\begin{pmatrix}0&1\\-1&0\end{pmatrix}, \dots, \begin{pmatrix}0&1\\-1&0\end{pmatrix}\right)$. It is easy to verify that $G = XJX^T$. Hence, $\operatorname{Pf}(G) = \det X$.
30.1. Clearly, if $\omega = \omega_1\wedge e_1\wedge\dots\wedge e_r$, then $\omega\wedge e_i = 0$. Now, suppose that $\omega\wedge e_i = 0$ for $i = 1, \dots, r$ and $e_1\wedge\dots\wedge e_r \ne 0$. Let us complement the vectors $e_1, \dots, e_r$ to a basis $e_1, \dots, e_n$ of $V$. Then
$$\omega = \sum a_{i_1\dots i_k}e_{i_1}\wedge\dots\wedge e_{i_k}, \quad\text{where } i_1 < \dots < i_k.$$
Since the nonzero tensors $e_{i_1}\wedge\dots\wedge e_{i_k}\wedge e_i$ are linearly independent, the equality $\omega\wedge e_i = 0$ implies that $a_{i_1\dots i_k} = 0$ for $i \notin \{i_1, \dots, i_k\}$. It follows that $a_{i_1\dots i_k} \ne 0$ only if $\{1, \dots, r\} \subset \{i_1, \dots, i_k\}$ and, therefore,
$$\omega = \Bigl(\sum b_{i_1\dots i_{k-r}}e_{i_1}\wedge\dots\wedge e_{i_{k-r}}\Bigr)\wedge e_1\wedge\dots\wedge e_r.$$
$$u_1^i\wedge\dots\wedge u_p^i = \sum_j e_j\wedge\Bigl(\sum_i \alpha_{ij}\, u_2^i\wedge\dots\wedge u_p^i\Bigr).$$
Hence, $u_1^i = \sum_j \alpha_{ij}e_j$,
and $XB = \|x_{ij}\lambda_j\|_1^n$. Hence, $BX = \lambda_iX$ only if all rows of $X$ except the $i$th one are zero, and $XB = \lambda_jX$ only if all columns of $X$ except the $j$th one are zero. Let $E_{ij}$ be the matrix unit, i.e., $E_{ij} = \|a_{pq}\|_1^n$, where $a_{pq} = \delta_{pi}\delta_{qj}$. Then $Bg(E_{ij}) = \lambda_i g(E_{ij})$ and $g(E_{ij})B = \lambda_j g(E_{ij})$ and, therefore, $g(E_{ij}) = \lambda_{ij}E_{ij}$. As is easy to see, $E_{ij} = E_{i1}E_{1j}$. Hence, $\lambda_{ij} = \lambda_{i1}\lambda_{1j}$. Besides, $E_{ii}^2 = E_{ii}$; hence, $\lambda_{ii}^2 = \lambda_{ii}$ and, therefore, $\lambda_{i1}\lambda_{1i} = \lambda_{ii} = 1$, i.e., $\lambda_{1i} = \lambda_{i1}^{-1}$. It follows that $\lambda_{ij} = \lambda_{i1}\lambda_{j1}^{-1}$. Hence, $g(X) = A_2XA_2^{-1}$, where $A_2 = \operatorname{diag}(\lambda_{11}, \dots, \lambda_{n1})$, and, therefore, $f(X) = AXA^{-1}$, where $A = A_1A_2$.
CHAPTER VI

MATRIX INEQUALITIES

$$\sum_i \lambda_i^{-1}x_i^2 = (A^{-1}x, x)$$
33.2.1. Theorem. Let $A = \begin{pmatrix}A_1 & B\\ B^* & A_2\end{pmatrix} > 0$. Then $\det A \le \det A_1\cdot\det A_2$.
Proof. The matrices $A_1$ and $A_2$ are positive definite. It is easy to verify (see 3.1) that
$$\det A = \det A_1\det(A_2 - B^*A_1^{-1}B).$$
The matrix $B^*A_1^{-1}B$ is nonnegative definite; hence, $\det(A_2 - B^*A_1^{-1}B) \le \det A_2$ (see Problem 33.1). Thus, $\det A \le \det A_1\det A_2$ and the equality is only attained if $B^*A_1^{-1}B = 0$, i.e., $B = 0$. $\square$
33.2.1.1. Corollary (Hadamard's inequality). If a matrix $A = \|a_{ij}\|_1^n$ is positive definite, then $\det A \le a_{11}a_{22}\dots a_{nn}$ and the equality is only attained if $A$ is a diagonal matrix.
33.2.1.2. Corollary. If $X$ is an arbitrary matrix, then
$$|\det X|^2 \le \sum_i|x_{1i}|^2\cdots\sum_i|x_{ni}|^2.$$
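Hadamard's inequality is easy to test numerically. The following sketch is our own illustration (a positive definite matrix is built as $X^TX + I$, and the determinant is computed by a naive cofactor expansion):

```python
import random

random.seed(1)
n = 4
X = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
# A = X^T X + I is symmetric positive definite.
A = [[sum(X[k][i] * X[k][j] for k in range(n)) + (i == j)
      for j in range(n)] for i in range(n)]

def det(M):
    # Laplace expansion along the first row; fine for tiny matrices.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

diag_product = 1.0
for i in range(n):
    diag_product *= A[i][i]
assert det(A) <= diag_product + 1e-12   # det A <= a_11 * ... * a_nn
```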
33.2.2. Theorem. Let $A = \begin{pmatrix}A_1 & B\\ B^* & A_2\end{pmatrix}$ be a positive definite matrix, where $B$ is a square matrix. Then
$$|\det B|^2 \le \det A_1\cdot\det A_2.$$
Proof ([Everitt, 1958]). Since
$$T^*AT = \begin{pmatrix}A_1 & 0\\ 0 & A_2 - B^*A_1^{-1}B\end{pmatrix} > 0 \quad\text{for } T = \begin{pmatrix}I & -A_1^{-1}B\\ 0 & I\end{pmatrix},$$
$$P_1 > P_2^{a_2} > \dots > P_{n-1}^{a_{n-1}} > P_n, \quad\text{where } a_k = \binom{n-1}{k-1}^{-1}.$$
Proof ([Mirsky, 1957]). The required inequality can be rewritten in the form $P_k^{n-k} > P_{k+1}^{k}$ $(1 \le k \le n-1)$. For $n = 2$ the proof is obvious. For a diagonal matrix we have $P_k^{n-k} = P_{k+1}^{k}$. Suppose that $P_k^{n-k} \ge P_{k+1}^{k}$ $(1 \le k \le n-1)$ for some $n \ge 2$. Consider a matrix $A$ of order $n+1$. Let $A_r$ be the matrix obtained from $A$ by deleting the $r$th row and the $r$th column; let $P_{k,r}$ be the product of all principal $k$-minors of $A_r$. By the inductive hypothesis
$$(1)\qquad P_{k,r}^{n-k} \ge P_{k+1,r}^{k} \quad\text{for } 1 \le k \le n-1 \text{ and } 1 \le r \le n+1,$$
where at least one of the matrices $A_r$ is not a diagonal one and, therefore, at least one of the inequalities (1) is strict. Hence,
$$\prod_{r=1}^{n+1}P_{k,r}^{n-k} > \prod_{r=1}^{n+1}P_{k+1,r}^{k} \quad\text{for } 1 \le k \le n-1,$$
i.e., $P_k^{(n-k)(n+1-k)} > P_{k+1}^{(n-k)k}$. Extracting the $(n-k)$th root for $n \ne k$ we get the required conclusion.
For $n = k$ consider the matrix $\operatorname{adj} A = B = \|b_{ij}\|_1^{n+1}$. Since $A > 0$, it follows that $B > 0$ (see Problem 19.4). By Hadamard's inequality
$$b_{11}\dots b_{n+1,n+1} > \det B = (\det A)^n,$$
i.e., $P_n > P_{n+1}^{n}$. $\square$
33.3.1. Theorem. Let $\lambda_i$ be positive numbers with $\lambda_1 + \dots + \lambda_k = 1$ and let $A_i > 0$. Then
$$|\lambda_1A_1 + \dots + \lambda_kA_k| \ge |A_1|^{\lambda_1}\dots|A_k|^{\lambda_k}.$$
Proof ([Mirsky, 1955]). First, consider the case $k = 2$. Let $A, B > 0$. Then $A = P^*\Lambda P$ and $B = P^*P$, where $\Lambda = \operatorname{diag}(\mu_1, \dots, \mu_n)$. Hence,
$$|\alpha A + (1-\alpha)B| = |P^*P|\,|\alpha\Lambda + (1-\alpha)I| = |B|\prod_{i=1}^{n}(\alpha\mu_i + 1 - \alpha).$$
By the inequality between the arithmetic and geometric means, $\alpha\mu_i + 1 - \alpha \ge \mu_i^\alpha$; hence,
$$\prod_{i=1}^{n}(\alpha\mu_i + 1 - \alpha) \ge \prod_i\mu_i^\alpha = |\Lambda|^\alpha = |A|^\alpha|B|^{-\alpha},$$
i.e., $|\alpha A + (1-\alpha)B| \ge |A|^\alpha|B|^{1-\alpha}$.
The rest of the proof will be carried out by induction on $k$; we will assume that $k \ge 3$. Since
$$\lambda_1A_1 + \dots + \lambda_kA_k = (1 - \lambda_k)B + \lambda_kA_k,$$
where $B = \frac{\lambda_1}{1-\lambda_k}A_1 + \dots + \frac{\lambda_{k-1}}{1-\lambda_k}A_{k-1}$, it follows that
$$|\lambda_1A_1 + \dots + \lambda_kA_k| \ge \Bigl|\frac{\lambda_1}{1-\lambda_k}A_1 + \dots + \frac{\lambda_{k-1}}{1-\lambda_k}A_{k-1}\Bigr|^{1-\lambda_k}|A_k|^{\lambda_k}.$$
Since $\frac{\lambda_1}{1-\lambda_k} + \dots + \frac{\lambda_{k-1}}{1-\lambda_k} = 1$, it follows that
$$\Bigl|\frac{\lambda_1}{1-\lambda_k}A_1 + \dots + \frac{\lambda_{k-1}}{1-\lambda_k}A_{k-1}\Bigr| \ge |A_1|^{\lambda_1/(1-\lambda_k)}\dots|A_{k-1}|^{\lambda_{k-1}/(1-\lambda_k)}. \qquad\square$$
Remark. It is possible to verify that the equality takes place if and only if $A_1 = \dots = A_k$.
33.3.2. Theorem. Let $\lambda_i$ be arbitrary complex numbers and $A_i \ge 0$. Then
$$|\det(\lambda_1A_1 + \dots + \lambda_kA_k)| \le \det(|\lambda_1|A_1 + \dots + |\lambda_k|A_k).$$
Proof ([Frank, 1965]). Let $k = 2$; we can assume that $\lambda_1 = 1$ and $\lambda_2 = \lambda$. There exists a unitary matrix $U$ such that the matrix $UA_1U^{-1} = D$ is a diagonal one. Then $M = UA_2U^{-1} \ge 0$ and
$$\det(A_1 + \lambda A_2) = \det(D + \lambda M) = \sum_{p=0}^{n}\lambda^p\sum_{i_1<\dots<i_p}M\begin{pmatrix}i_1\dots i_p\\ i_1\dots i_p\end{pmatrix}d_{j_1}\dots d_{j_{n-p}},$$
where $\{j_1, \dots, j_{n-p}\}$ is the complement of $\{i_1, \dots, i_p\}$ in $\{1, \dots, n\}$. Since $D$ and $M$ are nonnegative definite, $M\begin{pmatrix}i_1\dots i_p\\ i_1\dots i_p\end{pmatrix} \ge 0$ and $d_j \ge 0$. Hence,
$$|\det(A_1 + \lambda A_2)| \le \sum_{p=0}^{n}|\lambda|^p\sum_{i_1<\dots<i_p}M\begin{pmatrix}i_1\dots i_p\\ i_1\dots i_p\end{pmatrix}d_{j_1}\dots d_{j_{n-p}} = \det(A_1 + |\lambda|A_2).$$
Now, let us prove the inductive step. Let us again assume that $\lambda_1 = 1$. Let $A = A_1$ and $A' = \lambda_2A_2 + \dots + \lambda_{k+1}A_{k+1}$. There exists a unitary matrix $U$ such that the matrix $UAU^{-1} = D$ is a diagonal one; the matrices $M_j = UA_jU^{-1}$ and $M = UA'U^{-1}$ are nonnegative definite. Hence,
$$|\det(A + A')| = |\det(D + M)| \le \sum_{p=0}^{n}\sum_{i_1<\dots<i_p}\Bigl|M\begin{pmatrix}i_1\dots i_p\\ i_1\dots i_p\end{pmatrix}\Bigr|\,d_{j_1}\dots d_{j_{n-p}}.$$
By the inductive hypothesis
$$\Bigl|M\begin{pmatrix}i_1\dots i_p\\ i_1\dots i_p\end{pmatrix}\Bigr| \le \det\Bigl(\sum_{j=2}^{k+1}|\lambda_j|M_j\Bigr)\begin{pmatrix}i_1\dots i_p\\ i_1\dots i_p\end{pmatrix}.$$
It remains to notice that
$$\sum_{p=0}^{n}\sum_{i_1<\dots<i_p}d_{j_1}\dots d_{j_{n-p}}\det\Bigl(\sum_{j=2}^{k+1}|\lambda_j|M_j\Bigr)\begin{pmatrix}i_1\dots i_p\\ i_1\dots i_p\end{pmatrix} = \det\Bigl(D + \sum_{j=2}^{k+1}|\lambda_j|M_j\Bigr) = \det\Bigl(\sum|\lambda_i|A_i\Bigr). \qquad\square$$
33.4. Theorem. Let $A$ and $B$ be positive definite real matrices and let $A_1$ and $B_1$ be the matrices obtained from $A$ and $B$, respectively, by deleting the first row and the first column. Then
$$\frac{|A+B|}{|A_1+B_1|} \ge \frac{|A|}{|A_1|} + \frac{|B|}{|B_1|}.$$
Proof ([Bellman, 1955]). If $A > 0$, then
$$(1)\qquad (x, Ax)(y, A^{-1}y) \ge (x, y)^2.$$
Hence,
$$\frac{1}{(y, A^{-1}y)} = \min_x\frac{(x, Ax)}{(x, y)^2}.$$
Besides,
$$(e_1, A^{-1}e_1) = \frac{e_1(\operatorname{adj}A)e_1^T}{|A|} = \frac{(\operatorname{adj}A)_{11}}{|A|} = \frac{|A_1|}{|A|}.$$
Take $y = e_1$ and set
$$g(x) = \frac{(x, Ax)}{(x, e_1)^2} \quad\text{and}\quad h(x) = \frac{(x, Bx)}{(x, e_1)^2}.$$
Problems
33.1. Let $A$ and $B$ be matrices of order $n$ ($n > 1$), where $A > 0$ and $B \ge 0$. Prove that $|A + B| \ge |A| + |B|$ and the equality is only attained for $B = 0$.
33.2. The matrices $A$ and $B$ are Hermitian and $A > 0$. Prove that $\det A \le |\det(A + iB)|$ and the equality is only attained when $B = 0$.
33.3. Let $A_k$ and $B_k$ be the upper left corner submatrices of order $k$ of positive definite matrices $A$ and $B$ such that $A > B$. Prove that $|A_k| > |B_k|$.
33.4. Let $A$ and $B$ be real symmetric matrices and $A \ge 0$. Prove that if $C = A + iB$ is not invertible, then $Cx = 0$ for some nonzero real vector $x$.
33.5. A real symmetric matrix $A$ is positive definite. Prove that
$$\det\begin{pmatrix}0 & x_1 & \dots & x_n\\ x_1 & & &\\ \vdots & & A &\\ x_n & & &\end{pmatrix} \le 0.$$
33.6. Let $A > 0$ and let $n$ be the order of $A$. Prove that $|A|^{1/n} = \min\frac1n\operatorname{tr}(AB)$, where the minimum is taken over all positive definite matrices $B$ with determinant 1.
34. Inequalities for eigenvalues
34.1.1. Theorem (Schur's inequality). Let $\lambda_1, \dots, \lambda_n$ be eigenvalues of $A = \|a_{ij}\|_1^n$. Then $\sum_{i=1}^{n}|\lambda_i|^2 \le \sum_{i,j=1}^{n}|a_{ij}|^2$ and the equality is attained if and only if $A$ is a normal matrix.
Proof. There exists a unitary matrix $U$ such that $T = U^*AU$ is an upper triangular matrix, and $T$ is a diagonal matrix if and only if $A$ is a normal matrix (cf. 17.1). Since $T^* = U^*A^*U$, then $TT^* = U^*AA^*U$ and, therefore, $\operatorname{tr}(TT^*) = \operatorname{tr}(AA^*)$. It remains to notice that
$$\operatorname{tr}(AA^*) = \sum_{i,j=1}^{n}|a_{ij}|^2 \quad\text{and}\quad \operatorname{tr}(TT^*) = \sum_{i=1}^{n}|\lambda_i|^2 + \sum_{i<j}|t_{ij}|^2. \qquad\square$$
For the Hermitian matrices $B = (A + A^*)/2$ and $C = (A - A^*)/2i$ we similarly get
$$\sum_{i,j=1}^{n}|b_{ij}|^2 = \operatorname{tr}(BB^*) = \operatorname{tr}\frac{(T+T^*)^2}{4} = \sum_{i=1}^{n}|\operatorname{Re}\lambda_i|^2 + \sum_{i<j}\frac{|t_{ij}|^2}{2}$$
and
$$\sum_{i,j=1}^{n}|c_{ij}|^2 = \sum_{i=1}^{n}|\operatorname{Im}\lambda_i|^2 + \sum_{i<j}\frac{|t_{ij}|^2}{2}.$$
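Schur's inequality lends itself to a quick numeric illustration (a sketch assuming NumPy is available; the random test matrix is our own choice):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

# Schur's inequality: sum |lambda_i|^2 <= sum |a_ij|^2.
eigenvalues = np.linalg.eigvals(A)
assert np.sum(np.abs(eigenvalues) ** 2) <= np.sum(np.abs(A) ** 2) + 1e-9

# For a normal matrix (here a Hermitian one) the inequality becomes an equality.
H = A + A.conj().T
assert np.isclose(np.sum(np.abs(np.linalg.eigvals(H)) ** 2), np.sum(np.abs(H) ** 2))
```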
34.2.2. Theorem. Let $A = \begin{pmatrix}B & C\\ C^* & D\end{pmatrix}$ be an Hermitian matrix. Let the eigenvalues of $A$ and $B$ form increasing sequences: $\alpha_1 \le \dots \le \alpha_n$, $\beta_1 \le \dots \le \beta_m$. Then
$$\alpha_i \le \beta_i \le \alpha_{i+n-m}.$$
Proof. For $A$ and $B$ take orthonormal eigenbases $\{a_i\}$ and $\{b_i\}$; we can assume that $A$ and $B$ act in the spaces $V$ and $U$, where $U \subset V$. Consider the subspaces $V_1 = \operatorname{Span}(a_i, \dots, a_n)$ and $V_2 = \operatorname{Span}(b_1, \dots, b_i)$. The subspace $V_1 \cap V_2$ contains a unit vector $x$. Clearly,
$$\alpha_i \le (x, Ax) = (x, Bx) \le \beta_i.$$
Applying this inequality to the matrix $-A$ we get $-\alpha_{n-i+1} \le -\beta_{m-i+1}$, i.e., $\beta_j \le \alpha_{j+n-m}$. $\square$
34.3. Theorem. Let $A$ and $B$ be Hermitian projections, i.e., $A^2 = A$ and $B^2 = B$. Then the eigenvalues of $AB$ are real and belong to the segment $[0, 1]$.
$$\lambda \le \frac{(x, Bx)}{(x, x)}.$$
For $B$ there exists an orthonormal basis such that $(x, Bx) = \beta_1|x_1|^2 + \dots + \beta_n|x_n|^2$, where either $\beta_i = 0$ or 1. Hence, $\lambda \le 1$.
$$\sum|u_{ij}|^2\alpha_i\beta_j \quad\text{and}\quad \sum|v_{ij}|^2\alpha_i\beta_j.$$
The matrices whose $(i, j)$th elements are $|u_{ij}|^2$ and $|v_{ij}|^2$ are doubly stochastic and, therefore,
$$\sum|u_{ij}|^2\alpha_i\beta_j \le \sum_i\alpha_i\beta_i \quad\text{and}\quad \sum|v_{ij}|^2\alpha_i\beta_j \le \sum_i\alpha_i\beta_i$$
(see Problem 38.1).
Problems
34.1 (Gershgorin discs). Prove that every eigenvalue of $\|a_{ij}\|_1^n$ belongs to one of the discs $|a_{kk} - z| \le \rho_k$, where $\rho_k = \sum_{j\ne k}|a_{kj}|$.
34.2. Prove that if $U$ is a unitary matrix and $S \ge 0$, then $|\operatorname{tr}(US)| \le \operatorname{tr}S$.
34.3. Prove that if $A$ and $B$ are nonnegative definite matrices, then $|\operatorname{tr}(AB)| \le \operatorname{tr}A\cdot\operatorname{tr}B$.
34.4. Matrices $A$ and $B$ are Hermitian. Prove that $\operatorname{tr}(AB)^2 \le \operatorname{tr}(A^2B^2)$.
34.5 ([Cullen, 1965]). Prove that $\lim_{k\to\infty}A^k = 0$ if and only if one of the following conditions holds:
a) the absolute values of all eigenvalues of $A$ are less than 1;
b) there exists a positive definite matrix $H$ such that $H - A^*HA > 0$.
Singular values
34.6. Prove that if all singular values of $A$ are equal, then $A = \sigma U$, where $U$ is a unitary matrix.
34.7. Prove that if the singular values of $A$ are equal to $\sigma_1, \dots, \sigma_n$, then the singular values of $\operatorname{adj}A$ are equal to $\prod_{i\ne1}\sigma_i, \dots, \prod_{i\ne n}\sigma_i$.
34.8. Let $\sigma_1, \dots, \sigma_n$ be the singular values of $A$. Prove that the eigenvalues of $\begin{pmatrix}0 & A\\ A^* & 0\end{pmatrix}$ are equal to $\sigma_1, \dots, \sigma_n, -\sigma_1, \dots, -\sigma_n$.
35. Inequalities for matrix norms
35.1. The operator (or spectral) norm of a matrix $A$ is $\|A\|_s = \sup_{|x|\ne0}\frac{|Ax|}{|x|}$. For a diagonal matrix $\Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$ we have
$$\sup_{|x|\ne0}\frac{|\Lambda x|^2}{|x|^2} = \sup\frac{\sum|\lambda_ix_i|^2}{\sum|x_i|^2} = \max_i|\lambda_i|^2.$$
Theorem. For any Hermitian matrix $S$ we have $\bigl\|A - \frac{A+A^*}{2}\bigr\| \le \|A - S\|$, where $\|\cdot\|$ is either the Euclidean or the operator norm.
Proof.
$$\Bigl\|A - \frac{A+A^*}{2}\Bigr\| = \Bigl\|\frac{A-S}{2} + \frac{S-A^*}{2}\Bigr\| \le \frac{\|A-S\|}{2} + \frac{\|S-A^*\|}{2} = \frac{\|A-S\|}{2} + \frac{\|(A-S)^*\|}{2} = \|A - S\|. \qquad\square$$
Theorem. If the matrix $X$ is singular, then $\|A - X\|_s \ge \|A^{-1}\|_s^{-1}$.
Proof. Take a vector $v$ such that $Xv = 0$ and $v \ne 0$. Then
$$\|A - X\|_s \ge \frac{|(A-X)v|}{|v|} = \frac{|Av|}{|v|} \ge \min_x\frac{|Ax|}{|x|} = \min_y\frac{|y|}{|A^{-1}y|} = \|A^{-1}\|_s^{-1}. \qquad\square$$
36.1. Let $A = \begin{pmatrix}A_{11} & A_{12}\\ A_{21} & A_{22}\end{pmatrix}$, where $|A_{11}| \ne 0$. Recall that Schur's complement of $A_{11}$ in $A$ is the matrix $(A|A_{11}) = A_{22} - A_{21}A_{11}^{-1}A_{12}$ (see 3.1).
36.1.1. Theorem. If $A > 0$, then $(A|A_{11}) > 0$.
Proof. Let $T = \begin{pmatrix}I & -A_{11}^{-1}B\\ 0 & I\end{pmatrix}$, where $B = A_{12} = A_{21}^*$. Then
$$T^*AT = \begin{pmatrix}A_{11} & 0\\ 0 & A_{22} - B^*A_{11}^{-1}B\end{pmatrix}$$
is a positive definite matrix; hence, $A_{22} - B^*A_{11}^{-1}B > 0$. $\square$
Remark. We can similarly prove that if $A \ge 0$ and $|A_{11}| \ne 0$, then $(A|A_{11}) \ge 0$.
36.1.2. Theorem ([Haynsworth, 1970]). If $H$ and $K$ are arbitrary positive definite matrices of order $n$ and $X$ and $Y$ are arbitrary matrices of size $n\times m$, then
$$X^*H^{-1}X + Y^*K^{-1}Y - (X+Y)^*(H+K)^{-1}(X+Y) \ge 0.$$
Proof. Clearly,
$$A = T^*\begin{pmatrix}H & 0\\ 0 & 0\end{pmatrix}T = \begin{pmatrix}H & X\\ X^* & X^*H^{-1}X\end{pmatrix} \ge 0, \quad\text{where } T = \begin{pmatrix}I_n & H^{-1}X\\ 0 & I_m\end{pmatrix}.$$
Similarly, $B = \begin{pmatrix}K & Y\\ Y^* & Y^*K^{-1}Y\end{pmatrix} \ge 0$. It remains to apply Theorem 36.1.1 (see the remark) to the Schur complement of $H + K$ in $A + B$. $\square$
36.1.3. Theorem ([Haynsworth, 1970]). Let $A, B \ge 0$ and $A_{11}, B_{11} > 0$. Then
$$(A + B|A_{11} + B_{11}) \ge (A|A_{11}) + (B|B_{11}).$$
Proof. By definition
$$(A + B|A_{11} + B_{11}) = (A_{22} + B_{22}) - (A_{21} + B_{21})(A_{11} + B_{11})^{-1}(A_{12} + B_{12}),$$
and by Theorem 36.1.2
$$A_{21}A_{11}^{-1}A_{12} + B_{21}B_{11}^{-1}B_{12} \ge (A_{21} + B_{21})(A_{11} + B_{11})^{-1}(A_{12} + B_{12}).$$
Hence,
$$(A + B|A_{11} + B_{11}) \ge (A_{22} + B_{22}) - (A_{21}A_{11}^{-1}A_{12} + B_{21}B_{11}^{-1}B_{12}) = (A|A_{11}) + (B|B_{11}). \qquad\square$$
We can apply the obtained results to the proof of the following statement.
36.1.4. Theorem ([Haynsworth, 1970]). Let $A_k$ and $B_k$ be the upper left corner submatrices of order $k$ in positive definite matrices $A$ and $B$ of order $n$, respectively. Then
$$|A + B| \ge |A|\Bigl(1 + \sum_{k=1}^{n-1}\frac{|B_k|}{|A_k|}\Bigr) + |B|\Bigl(1 + \sum_{k=1}^{n-1}\frac{|A_k|}{|B_k|}\Bigr).$$
Proof. First, observe that by Theorem 36.1.3 and Problem 33.1 we have
$$|(A + B|A_{11} + B_{11})| \ge |(A|A_{11}) + (B|B_{11})| \ge |(A|A_{11})| + |(B|B_{11})| = \frac{|A|}{|A_{11}|} + \frac{|B|}{|B_{11}|}.$$
For $n = 2$ we get
$$|A + B| = |A_1 + B_1|\,|(A + B|A_1 + B_1)| \ge (|A_1| + |B_1|)\Bigl(\frac{|A|}{|A_1|} + \frac{|B|}{|B_1|}\Bigr) = |A|\Bigl(1 + \frac{|B_1|}{|A_1|}\Bigr) + |B|\Bigl(1 + \frac{|A_1|}{|B_1|}\Bigr).$$
Now, suppose that the statement is proved for matrices of order $n - 1$ and let us prove it for matrices of order $n$. By the inductive hypothesis we have
$$|A_{n-1} + B_{n-1}| \ge |A_{n-1}|\Bigl(1 + \sum_{k=1}^{n-2}\frac{|B_k|}{|A_k|}\Bigr) + |B_{n-1}|\Bigl(1 + \sum_{k=1}^{n-2}\frac{|A_k|}{|B_k|}\Bigr).$$
Besides,
$$|(A + B|A_{n-1} + B_{n-1})| \ge \frac{|A|}{|A_{n-1}|} + \frac{|B|}{|B_{n-1}|}.$$
Therefore,
$$|A + B| \ge \Bigl[|A_{n-1}|\Bigl(1 + \sum_{k=1}^{n-2}\frac{|B_k|}{|A_k|}\Bigr) + |B_{n-1}|\Bigl(1 + \sum_{k=1}^{n-2}\frac{|A_k|}{|B_k|}\Bigr)\Bigr]\Bigl(\frac{|A|}{|A_{n-1}|} + \frac{|B|}{|B_{n-1}|}\Bigr) \ge |A|\Bigl(1 + \sum_{k=1}^{n-2}\frac{|B_k|}{|A_k|} + \frac{|B_{n-1}|}{|A_{n-1}|}\Bigr) + |B|\Bigl(1 + \sum_{k=1}^{n-2}\frac{|A_k|}{|B_k|} + \frac{|A_{n-1}|}{|B_{n-1}|}\Bigr). \qquad\square$$
36.2. If $A = \|a_{ij}\|_1^n$ and $B = \|b_{ij}\|_1^n$ are square matrices, then their Hadamard product is the matrix $C = \|c_{ij}\|_1^n$, where $c_{ij} = a_{ij}b_{ij}$. The Hadamard product is denoted by $A \circ B$.
36.2.1. Theorem (Schur). If $A, B > 0$, then $A \circ B > 0$.
Proof. Let $U = \|u_{ij}\|_1^n$ be a unitary matrix such that $A = U^*\Lambda U$, where $\Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$. Then $a_{ij} = \sum_p\bar u_{pi}\lambda_pu_{pj}$ and, therefore,
$$\sum_{i,j}a_{ij}b_{ij}x_i\bar x_j = \sum_p\lambda_p\sum_{i,j}b_{ij}y_{ip}\bar y_{jp},$$
where $y_{ip} = x_i\bar u_{pi}$. All the numbers $\lambda_p$ are positive and, therefore, it remains to prove that if not all numbers $x_i$ are zero, then not all numbers $y_{ip}$ are zero. For this it suffices to notice that
$$\sum_{i,p}|y_{ip}|^2 = \sum_{i,p}|x_i\bar u_{pi}|^2 = \sum_i|x_i|^2\Bigl(\sum_p|u_{pi}|^2\Bigr) = \sum_i|x_i|^2. \qquad\square$$
Let
$$A = \begin{pmatrix}a_{11} & A_{12}\\ A_{21} & A_{22}\end{pmatrix}, \quad B = \begin{pmatrix}b_{11} & B_{12}\\ B_{21} & B_{22}\end{pmatrix},$$
where $a_{11}$ and $b_{11}$ are numbers. Then
$$\det(A \circ B) = a_{11}b_{11}\det(A \circ B|a_{11}b_{11}),$$
and
$$(A \circ B|a_{11}b_{11}) = A_{22}\circ B_{22} - (A_{21}\circ B_{21})\,a_{11}^{-1}b_{11}^{-1}\,(A_{12}\circ B_{12}).$$
Problems
36.1. Prove that if $A$ and $B$ are positive definite matrices of order $n$ and $A \ge B$, then $|A + B| \ge |A| + n|B|$.
36.2 ([Djoković, 1964]). Prove that any positive definite matrix $A$ can be represented in the form $A = B \circ C$, where $B$ and $C$ are positive definite matrices.
36.3 ([Djoković, 1964]). Prove that if $A > 0$ and $B \ge 0$, then $\operatorname{rank}(A \circ B) \ge \operatorname{rank}B$.
37. Nonnegative matrices
37.1. A real matrix $A = \|a_{ij}\|_1^n$ is said to be positive (resp. nonnegative) if $a_{ij} > 0$ (resp. $a_{ij} \ge 0$).
In this section in order to denote positive matrices we write $A > 0$, and the expression $A > B$ means that $A - B > 0$. Observe that in all other sections the notation $A > 0$ means that $A$ is an Hermitian (or real symmetric) positive definite matrix.
A vector $x = (x_1, \dots, x_n)$ is called positive, and we write $x > 0$, if $x_i > 0$.
A matrix $A$ of order $n$ is called reducible if it is possible to divide the set $\{1, \dots, n\}$ into two nonempty subsets $I$ and $J$ such that $a_{ij} = 0$ for $i \in I$ and $j \in J$, and irreducible otherwise. In other words, $A$ is reducible if by a permutation of its rows and columns it can be reduced to the form
$$\begin{pmatrix}A_{11} & A_{12}\\ 0 & A_{22}\end{pmatrix},$$
where $A_{11}$ and $A_{22}$ are square matrices.
Theorem. If $A$ is a nonnegative irreducible matrix of order $n$, then $(I + A)^{n-1} > 0$.
Proof. For every nonzero nonnegative vector $y$ consider the vector $z = (I + A)y = y + Ay$. Suppose that not all coordinates of $y$ are positive. Renumbering the vectors of the basis, if necessary, we can assume that $y = \begin{pmatrix}u\\ 0\end{pmatrix}$, where $u > 0$. Then
$$Ay = \begin{pmatrix}A_{11} & A_{12}\\ A_{21} & A_{22}\end{pmatrix}\begin{pmatrix}u\\ 0\end{pmatrix} = \begin{pmatrix}A_{11}u\\ A_{21}u\end{pmatrix}.$$
Since $u > 0$, $A_{21} \ge 0$ and $A_{21} \ne 0$, we have $A_{21}u \ne 0$. Therefore, $z$ has at least one more positive coordinate than $y$. Hence, if $y \ge 0$ and $y \ne 0$, then $(I + A)^{n-1}y > 0$. Taking for $y$, first, $e_1$, then $e_2$, etc., $e_n$ we get the required statement. $\square$
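For an irreducible example one can take the cyclic shift matrix, which has many zero entries but a single cycle through all indices; a sketch assuming NumPy (the example matrix is our own choice):

```python
import numpy as np

# Cyclic shift matrix of order n: irreducible, yet very sparse.
n = 6
A = np.zeros((n, n), dtype=int)
for i in range(n):
    A[i, (i + 1) % n] = 1

I = np.eye(n, dtype=int)
M = np.linalg.matrix_power(I + A, n - 1)
assert np.all(M > 0)                                      # (I + A)^(n-1) > 0
# One fewer power is not enough for this matrix: the entry reaching
# the shift by n-1 positions is still zero.
assert not np.all(np.linalg.matrix_power(I + A, n - 2) > 0)
```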
37.2. Let $A$ be a nonnegative matrix of order $n$ and $x$ a nonnegative vector. Further, let
$$r_x = \min_i\Bigl\{\frac{1}{x_i}\sum_{j=1}^{n}a_{ij}x_j\Bigr\} = \sup\{\rho \ge 0 \mid Ax \ge \rho x\}$$
(the minimum being taken over the $i$ with $x_i \ne 0$), and let $r = \sup_{x\ge0}r_x$. It suffices to take the supremum over the compact set $P = \{x \ge 0 \mid |x| = 1\}$ and not over all $x \ge 0$. Therefore, there exists a nonzero nonnegative vector $z$ such that $Az \ge rz$, and there is no positive vector $w$ such that $Aw > rw$.
A nonnegative vector $z$ is called an extremal vector of $A$ if $Az \ge rz$.
37.2.1. Theorem. If $A$ is a nonnegative irreducible matrix, then $r > 0$ and an extremal vector of $A$ is its eigenvector.
$$\begin{pmatrix}
0 & A_{12} & 0 & \dots & 0\\
0 & 0 & A_{23} & \dots & 0\\
\vdots & \vdots & \ddots & \ddots & \vdots\\
0 & 0 & 0 & \dots & A_{k-1,k}\\
A_{k1} & 0 & 0 & \dots & 0
\end{pmatrix}$$
$\lambda_j = r\exp(\frac{2\pi ji}{k})$. Let $y_1$ be the eigenvector corresponding to the eigenvalue $\lambda_1 = r\exp(\frac{2\pi i}{k})$. Then $y_1^+ > 0$ and $y_1 = D_1y_1^+$ (see the proof of Theorem 37.2.2). There exists a permutation matrix $P$ such that
$$PD_1P^T = \operatorname{diag}(e^{i\varphi_1}I_1, \dots, e^{i\varphi_s}I_s),$$
where the numbers $e^{i\varphi_1}, \dots, e^{i\varphi_s}$ are distinct and $I_1, \dots, I_s$ are unit matrices. If instead of $y_1$ we take $e^{-i\varphi_1}y_1$, then we may assume that $\varphi_1 = 0$.
Let us divide the matrix $PAP^T$ into blocks $A_{pq}$ in accordance with the division of the matrix $PD_1P^T$. Since $A = \exp(\frac{2\pi i}{k})D_1AD_1^{-1}$, it follows that
$$PAP^T = \exp\Bigl(\frac{2\pi i}{k}\Bigr)(PD_1P^T)(PAP^T)(PD_1P^T)^{-1},$$
i.e.,
$$A_{pq} = \exp\Bigl[i\Bigl(\varphi_p - \varphi_q + \frac{2\pi}{k}\Bigr)\Bigr]A_{pq}.$$
Therefore, if $\frac{2\pi}{k} + \varphi_p \not\equiv \varphi_q \pmod{2\pi}$, then $A_{pq} = 0$. In particular, $s > 1$ since otherwise $A = 0$.
The numbers $\varphi_i$ are distinct and, therefore, for any $p$ there exists no more than one number $q$ such that $A_{pq} \ne 0$ (in which case $q \ne p$). The irreducibility of $A$ implies that at least one such $q$ exists. Therefore, there exists a map $p \mapsto q(p)$ such that $A_{p,q(p)} \ne 0$ and $\frac{2\pi}{k} + \varphi_p \equiv \varphi_{q(p)} \pmod{2\pi}$.
For $p = 1$ we get $\varphi_{q(1)} \equiv \frac{2\pi}{k} \pmod{2\pi}$. After permutations of rows and columns of $PAP^T$ we can assume that $q(1) = 2$. By repeating similar arguments we can get
$$q(j-1) = j, \quad \varphi_j = \frac{2\pi(j-1)}{k} \quad\text{for } 2 \le j \le \min(k, s).$$
Let us prove that $s = k$. First, suppose that $1 < s < k$. Then $\frac{2\pi}{k} + \varphi_s - \varphi_r \not\equiv 0 \pmod{2\pi}$ for $1 \le r \le s - 1$. Therefore, $A_{sr} = 0$ for $1 \le r \le s - 1$, i.e., $A$ is reducible. Now, suppose that $s > k$. Then $\varphi_i = \frac{2\pi(i-1)}{k}$ for $1 \le i \le k$. The numbers $\varphi_j$ are distinct for $1 \le j \le s$ and for any $i$, where $1 \le i \le k$, there exists $j$ ($1 \le j \le k$) such that $\frac{2\pi}{k} + \varphi_i \equiv \varphi_j \pmod{2\pi}$. Therefore, $\frac{2\pi}{k} + \varphi_i \not\equiv \varphi_r \pmod{2\pi}$ for $1 \le i \le k$ and $k < r \le s$, i.e., $A_{ir} = 0$ for such $i$ and $r$. In either case we get a contradiction; hence, $k = s$.
Now, it is clear that for the indicated choice of $P$ the matrix $PAP^T$ is of the required form.
Corollary. If A > 0, then the maximal positive eigenvalue of A is strictly
greater than the absolute value of any of its other eigenvalues.
37.4. A nonnegative matrix A is called primitive if it is irreducible and there is
only one eigenvalue whose absolute value is maximal.
37.4.1. Theorem. If $A$ is primitive, then $A^m > 0$ for some $m$.
Proof ([Marcus, Minc, 1975]). Dividing, if necessary, the elements of A by the
eigenvalue whose absolute value is maximal we can assume that A is an irreducible
matrix whose maximal eigenvalue is equal to 1, the absolute values of the other
eigenvalues being less than 1.
Let $S^{-1}AS = \begin{pmatrix}1 & 0\\ 0 & B\end{pmatrix}$ be the Jordan normal form of $A$. Since the absolute values of all eigenvalues of $B$ are less than 1, it follows that $\lim_{n\to\infty}B^n = 0$ (see Problem 34.5). Hence,
$$\lim_{n\to\infty}A^n = \lim_{n\to\infty}S\begin{pmatrix}1 & 0\\ 0 & B^n\end{pmatrix}S^{-1} = S\begin{pmatrix}1 & 0\\ 0 & 0\end{pmatrix}S^{-1} = xy^T > 0$$
and, therefore, $A^m > 0$ for some $m$. $\square$
$$\begin{pmatrix}
0 & a_{12} & 0 & \dots & 0\\
0 & 0 & a_{23} & \dots & 0\\
\vdots & \vdots & \ddots & \ddots & \vdots\\
0 & 0 & 0 & \dots & a_{n-1,n}\\
a_{n1} & 0 & 0 & \dots & 0
\end{pmatrix};$$
$$A = \begin{pmatrix}
0 & 1 & 0 & \dots & 0\\
0 & 0 & 1 & \dots & 0\\
\vdots & \vdots & \ddots & \ddots & \vdots\\
0 & 0 & 0 & \dots & 1\\
1 & 1 & 0 & \dots & 0
\end{pmatrix}$$
of order $n$, where $n \ge 3$. To this matrix we can assign the operator that acts as follows:
$$Ae_1 = e_n, \quad Ae_2 = e_1 + e_n, \quad Ae_3 = e_2, \quad \dots, \quad Ae_n = e_{n-1}.$$
Let $B = A^{n-1}$. It is easy to verify that
$$Be_1 = e_2, \quad Be_2 = e_2 + e_3, \quad Be_3 = e_3 + e_4, \quad \dots, \quad Be_n = e_n + e_1.$$
Therefore, the matrix $B^{n-1}$ has just one zero element, situated in the $(1,1)$th position, and the matrix $AB^{n-1} = A^{n^2-2n+2}$ is positive.
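This example (due to Wielandt) is easy to check numerically; a sketch assuming NumPy (the helper name is ours):

```python
import numpy as np

def wielandt(n):
    # The matrix A above: a cyclic shift with one extra 1 in the last row.
    A = np.zeros((n, n), dtype=int)
    for i in range(n - 1):
        A[i, i + 1] = 1
    A[n - 1, 0] = A[n - 1, 1] = 1
    return A

n = 5
A = wielandt(n)
m = n * n - 2 * n + 2
assert np.all(np.linalg.matrix_power(A, m) > 0)          # A^(n^2-2n+2) > 0
assert not np.all(np.linalg.matrix_power(A, m - 1) > 0)  # the exponent is sharp
```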
Problems
37.1. Prove that if $A \ge 0$ and $A^k > 0$, then $A^{k+1} > 0$.
37.2. Prove that a nonnegative eigenvector of an irreducible nonnegative matrix is positive.
37.3. Let $A = \begin{pmatrix}B & C\\ D & E\end{pmatrix}$ be a nonnegative irreducible matrix and $B$ a square submatrix. Prove that if $\alpha$ and $\beta$ are the maximal eigenvalues of $A$ and $B$, then $\beta < \alpha$.
37.4. Prove that if $A$ is a nonnegative irreducible matrix, then its maximal eigenvalue is a simple root of its characteristic polynomial.
37.5. Prove that if $A$ is a nonnegative irreducible matrix and $a_{11} > 0$, then $A$ is primitive.
37.6 ([Šidák, 1964]). A matrix $A$ is primitive. Can the number of positive elements of $A$ be greater than that of $A^2$?
38. Doubly stochastic matrices
38.1. A nonnegative matrix $A = \|a_{ij}\|_1^n$ is called doubly stochastic if $\sum_{i=1}^{n}a_{ik} = 1$ and $\sum_{j=1}^{n}a_{kj} = 1$ for all $k$.
38.1.1. Theorem. The product of doubly stochastic matrices is a doubly stochastic matrix.
Proof. Let $A$ and $B$ be doubly stochastic matrices and $C = AB$. Then
$$\sum_{i=1}^{n}c_{ij} = \sum_{i=1}^{n}\sum_{p=1}^{n}a_{ip}b_{pj} = \sum_{p=1}^{n}b_{pj}\sum_{i=1}^{n}a_{ip} = \sum_{p=1}^{n}b_{pj} = 1.$$
Similarly, $\sum_{j=1}^{n}c_{ij} = 1$. $\square$
38.1.2. Theorem. If $A = \|a_{ij}\|_1^n$ is a unitary matrix, then the matrix $B = \|b_{ij}\|_1^n$, where $b_{ij} = |a_{ij}|^2$, is doubly stochastic.
Proof. It suffices to notice that $\sum_{i=1}^{n}|a_{ij}|^2 = \sum_{j=1}^{n}|a_{ij}|^2 = 1$. $\square$
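Theorem 38.1.2 can be illustrated with a random unitary matrix obtained from a QR decomposition; a sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(3)
# A unitary matrix from the QR decomposition of a random complex matrix.
Z = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
U, _ = np.linalg.qr(Z)

B = np.abs(U) ** 2            # b_ij = |u_ij|^2
assert np.allclose(B.sum(axis=0), 1)   # all column sums equal 1
assert np.allclose(B.sum(axis=1), 1)   # all row sums equal 1
```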
38.2.1. Theorem (Birkhoff). The set of all doubly stochastic matrices of order $n$ is a convex polyhedron with the permutation matrices as its vertices.
Let $i_1, \dots, i_k$ be the numbers of some of the rows of $A$ and $j_1, \dots, j_l$ the numbers of some of its columns. The matrix $\|a_{ij}\|$, where $i \in \{i_1, \dots, i_k\}$ and $j \in \{j_1, \dots, j_l\}$, is called a submatrix of $A$. By a snake in $A$ we will mean the set of elements $a_{1\sigma(1)}, \dots, a_{n\sigma(n)}$, where $\sigma$ is a permutation. In the proof of Birkhoff's theorem we will need the following statement.
38.2.2. Theorem (Frobenius–König). Each snake in a matrix $A$ of order $n$ contains a zero element if and only if $A$ contains a zero submatrix of size $s\times t$, where $s + t = n + 1$.
Proof. First, suppose that at the intersection of rows $i_1, \dots, i_s$ and columns $j_1, \dots, j_t$ there stand zeros and $s + t = n + 1$. Then at least one of the $s$ numbers $\sigma(i_1), \dots, \sigma(i_s)$ belongs to $\{j_1, \dots, j_t\}$ and, therefore, the corresponding element of the snake is equal to 0.
Now, suppose that every snake in $A$ contains 0 and let us prove that then $A$ contains a zero submatrix of size $s\times t$, where $s + t = n + 1$. The proof will be carried out by induction on $n$. For $n = 1$ the statement is obvious.
Now, suppose that the statement is true for matrices of order $n - 1$ and consider a nonzero matrix of order $n$. In it, take a zero element and delete the row and the column which contain it. In the resulting matrix of order $n - 1$ every snake contains a zero element and, therefore, it has a zero submatrix of size $s_1\times t_1$, where $s_1 + t_1 = n$. Hence, the initial matrix $A$ can be reduced by permutations of rows and columns to the block form plotted on Figure 6 a).
Figure 6
Suppose that the matrix $X$ has a snake without zero elements. Every snake in the matrix $Z$ can be complemented by this snake to a snake in $A$. Hence, every snake in $Z$ does contain 0. As a result we see that either all snakes of $X$ or all snakes of $Z$ contain 0. Let, for definiteness' sake, all snakes of $X$ contain 0. Then
MATRIX INEQUALITIES
$$\|A - B\|_e = \|W^*(V\Lambda_aV^* - W\Lambda_bW^*)W\|_e = \|U\Lambda_aU^* - \Lambda_b\|_e,$$
where $U = W^*V$. Besides,
$$\|U\Lambda_aU^* - \Lambda_b\|_e^2 = \operatorname{tr}\bigl[(U\Lambda_aU^* - \Lambda_b)(U\Lambda_aU^* - \Lambda_b)^*\bigr] = \operatorname{tr}(\Lambda_a\Lambda_a^* + \Lambda_b\Lambda_b^*) - 2\operatorname{Re}\operatorname{tr}(U\Lambda_aU^*\Lambda_b^*) = \sum_{i=1}^n\bigl(|\alpha_i|^2 + |\beta_i|^2\bigr) - 2\sum_{i,j=1}^n |u_{ij}|^2\operatorname{Re}(\alpha_j\bar\beta_i).$$
Since the matrix $\|c_{ij}\|_1^n$, where $c_{ij} = |u_{ij}|^2$, is doubly stochastic,
$$\|A - B\|_e^2 \le \sum_{i=1}^n\bigl(|\alpha_i|^2 + |\beta_i|^2\bigr) - 2\min_C\sum_{i,j=1}^n c_{ij}\operatorname{Re}(\alpha_j\bar\beta_i),$$
where the minimum is taken over all doubly stochastic matrices $C$. For fixed sets of numbers $\alpha_i$, $\beta_j$ we have to find the minimum of a linear function on a convex polyhedron whose vertices are permutation matrices. This minimum is attained at one of the vertices, i.e., for a matrix $c_{ij} = \delta_{j\sigma(i)}$. In this case
$$2\sum_{i,j=1}^n c_{ij}\operatorname{Re}(\alpha_j\bar\beta_i) = 2\sum_{i=1}^n \operatorname{Re}(\alpha_{\sigma(i)}\bar\beta_i).$$
Hence,
$$\|A - B\|_e^2 \le \sum_{i=1}^n\bigl(|\alpha_{\sigma(i)}|^2 + |\beta_i|^2 - 2\operatorname{Re}(\alpha_{\sigma(i)}\bar\beta_i)\bigr) = \sum_{i=1}^n |\alpha_{\sigma(i)} - \beta_i|^2.$$
1
acts by the matrix
. If 0 < < 1, then the matrix S1 of this
1
Now, fix a vector $u$ with positive coordinates and consider the function $g(S) = f(Su)$ defined on the set of doubly stochastic matrices. If $0 \le \lambda \le 1$, then
$$g(\lambda S + (1 - \lambda)T) = f(\lambda Su + (1 - \lambda)Tu) \le \lambda f(Su) + (1 - \lambda)f(Tu) = \lambda g(S) + (1 - \lambda)g(T),$$
i.e., $g$ is a convex function. A convex function defined on a convex polyhedron takes its maximal value at one of the polyhedron's vertices. Therefore, $g(S) \le g(P)$, where $P$ is a permutation matrix (see Theorem 38.2.1). As a result we get
$$f(x) = f(Sy) = g(S) \le g(P) = f(y_{\sigma(1)}, \dots, y_{\sigma(n)}).$$
It remains to notice that
$$f(x) = \exp(s\ln x_1) + \dots + \exp(s\ln x_k) = x_1^s + \dots + x_k^s$$
and
$$f(y_{\sigma(1)}, \dots, y_{\sigma(n)}) = y_{\sigma(1)}^s + \dots + y_{\sigma(k)}^s \ge x_1^s + \dots + x_k^s.$$
Problems
38.1 ([Mirsky, 1975]). Let $A = \|a_{ij}\|_1^n$ be a doubly stochastic matrix; let $x_1 \ge \dots \ge x_n \ge 0$ and $y_1 \ge \dots \ge y_n \ge 0$. Prove that $\sum_{r,s} a_{rs}x_ry_s \le \sum_r x_ry_r$.
38.2 ([Bellman, Hoffman, 1954]). Let $\lambda_1, \dots, \lambda_n$ be the eigenvalues of an Hermitian matrix $H$. Prove that the point with coordinates $(h_{11}, \dots, h_{nn})$ belongs to the convex hull of the points whose coordinates are obtained from $\lambda_1, \dots, \lambda_n$ under all possible permutations.
Solutions
33.1. Theorem 20.1 shows that there exists a matrix $P$ such that $P^*AP = I$ and $P^*BP = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$, where $\lambda_i \ge 0$. Therefore, $|A + B| = d^2\prod(1 + \lambda_i)$, $|A| = d^2$ and $|B| = d^2\prod\lambda_i$, where $d = |\det P^{-1}|$. It is also clear that
$$\prod(1 + \lambda_i) = 1 + (\lambda_1 + \dots + \lambda_n) + \dots + \prod\lambda_i \ge 1 + \prod\lambda_i.$$
The inequality is strict if $\lambda_1 + \dots + \lambda_n > 0$, i.e., if at least one of the numbers $\lambda_1, \dots, \lambda_n$ is nonzero.
33.2. As in the preceding problem, $\det(A + iB) = d^2\prod(\lambda_k + i\mu_k)$ and $\det A = d^2\prod\lambda_k$, where $\lambda_k > 0$ and $\mu_k \in \mathbb R$. Since $|\lambda_k + i\mu_k|^2 = |\lambda_k|^2 + |\mu_k|^2$, we have $|\lambda_k + i\mu_k| \ge |\lambda_k|$ and the inequality is strict if $\mu_k \ne 0$.
33.3. Since $A - B = C > 0$, we have $A_k = B_k + C_k$, where $A_k, B_k, C_k > 0$. Therefore, $|A_k| > |B_k| + |C_k|$ (cf. Problem 33.1).
33.4. Let $x + iy$ be a nonzero eigenvector of $C$ corresponding to the zero eigenvalue. Then
$$(A + iB)(x + iy) = (Ax - By) + i(Bx + Ay) = 0,$$
i.e., $Ax = By$ and $Ay = -Bx$. Therefore,
$$0 \le (Ax, x) = (By, x) = (y, Bx) = -(y, Ay) \le 0,$$
SOLUTIONS
34.1. Let $\lambda$ be an eigenvalue of the given matrix. Then the system $\sum_j a_{ij}x_j = \lambda x_i$ ($i = 1, \dots, n$) has a nonzero solution $(x_1, \dots, x_n)$. Among the numbers $x_1, \dots, x_n$ select the one with the greatest absolute value; let this be $x_k$. Since
$$a_{kk}x_k - \lambda x_k = -\sum_{j\ne k} a_{kj}x_j,$$
we have
$$|a_{kk}x_k - \lambda x_k| \le \sum_{j\ne k} |a_{kj}x_j| \le \rho_k|x_k|,$$
i.e., $|a_{kk} - \lambda| \le \rho_k$.
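Solution 34.1 is the Gershgorin disc argument; a quick numerical check (NumPy assumed, with $\rho_k$ taken to be the off-diagonal absolute row sum):

```python
import numpy as np

# Every eigenvalue lambda of A satisfies |a_kk - lambda| <= rho_k for some k,
# where rho_k is the sum of |a_kj| over j != k (Gershgorin discs).
rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n))
rho = np.abs(A).sum(axis=1) - np.abs(np.diag(A))   # off-diagonal row sums
for lam in np.linalg.eigvals(A):
    assert any(abs(A[k, k] - lam) <= rho[k] + 1e-9 for k in range(n))
```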
34.2. Let $S = VDV^*$, where $D = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$ and $V$ is a unitary matrix. Then
$$\operatorname{tr}(US) = \operatorname{tr}(UVDV^*) = \operatorname{tr}(V^*UVD).$$
Let $V^*UV = W = \|w_{ij}\|_1^n$; then $\operatorname{tr}(US) = \sum w_{ii}\lambda_i$. Since $W$ is a unitary matrix, it follows that $|w_{ii}| \le 1$ and, therefore,
$$\Bigl|\sum w_{ii}\lambda_i\Bigr| \le \sum|\lambda_i| = \sum\lambda_i = \operatorname{tr} S.$$
If $S > 0$, i.e., $\lambda_i \ne 0$ for all $i$, then $\operatorname{tr} S = \operatorname{tr}(US)$ if and only if $w_{ii} = 1$, i.e., $W = I$ and, therefore, $U = I$. The equality $\operatorname{tr} S = |\operatorname{tr}(US)|$ for a positive definite matrix $S$ can only be satisfied if $w_{ii} = e^{i\varphi}$, i.e., $U = e^{i\varphi}I$.
34.3. Let $\alpha_1 \ge \dots \ge \alpha_n \ge 0$ and $\beta_1 \ge \dots \ge \beta_n \ge 0$ be the eigenvalues of $A$ and $B$. For nonnegative definite matrices the eigenvalues coincide with the singular values and, therefore,
$$|\operatorname{tr}(AB)| \le \sum\alpha_i\beta_i \le \Bigl(\sum\alpha_i\Bigr)\Bigl(\sum\beta_i\Bigr) = \operatorname{tr} A\operatorname{tr} B.$$
$$\binom{k}{0}\lambda^kI + \binom{k}{1}\lambda^{k-1}N + \dots + \binom{k}{n}\lambda^{k-n}N^n,$$
since $N^{n+1} = 0$. Each summand tends to zero, since $\binom{k}{p} = \frac{k(k-1)\cdots(k-p+1)}{p!} \le k^p$ and $\lim_{k\to\infty} k^p\lambda^k = 0$.
34.8. It suffices to notice that
$$\begin{vmatrix} \lambda I & A \\ A^* & \lambda I \end{vmatrix} = |\lambda^2I - A^*A|$$
(cf. 3.1).
35.1. Suppose that $Ax = \lambda x$, $x \ne 0$. Then $A^{-1}x = \lambda^{-1}x$; therefore,
$$\max_y\frac{|Ay|}{|y|} \ge \frac{|Ax|}{|x|} = |\lambda| \quad\text{and}\quad \max_y\frac{|A^{-1}y|}{|y|} \ge \frac{|A^{-1}x|}{|x|} = |\lambda|^{-1},$$
i.e., $\min_y\dfrac{|Ay|}{|y|} = \Bigl(\max_y\dfrac{|A^{-1}y|}{|y|}\Bigr)^{-1} \le |\lambda|$.
$$\max_x\frac{|ABx|}{|x|} = \frac{|ABx_0|}{|x_0|} = \frac{|ABx_0|}{|Bx_0|}\cdot\frac{|Bx_0|}{|x_0|} \le \max_y\frac{|Ay|}{|y|}\cdot\max_y\frac{|By|}{|y|} = \|A\|_s\,\|B\|_s.$$
To prove the inequality $\|AB\|_e \le \|A\|_e\|B\|_e$ it suffices to make use of the inequality
$$\Bigl|\sum_{k=1}^n a_{ik}b_{kj}\Bigr|^2 \le \Bigl(\sum_{k=1}^n |a_{ik}|^2\Bigr)\Bigl(\sum_{k=1}^n |b_{kj}|^2\Bigr).$$
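A random spot-check of this submultiplicativity of the Euclidean (Frobenius) norm, assuming NumPy:

```python
import numpy as np

# ||AB||_e <= ||A||_e * ||B||_e, a consequence of the Cauchy-Schwarz step above.
rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
fro = lambda M: np.linalg.norm(M, 'fro')
assert fro(A @ B) <= fro(A) * fro(B) + 1e-12
```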
$$\|\operatorname{adj} A\|_e^2 = \prod_{i\ne 1}\sigma_i^2 + \dots + \prod_{i\ne n}\sigma_i^2 \le n\Bigl(\frac{\sigma_1^2 + \dots + \sigma_n^2}{n}\Bigr)^{n-1} = n^{2-n}\,\|A\|_e^{2(n-1)}.$$
Both parts of this inequality depend continuously on the elements of $A$ and, therefore, the inequality holds for noninvertible matrices as well. The inequality turns into equality if $\sigma_1 = \dots = \sigma_n$, i.e., if $A$ is proportional to a unitary matrix (see Problem 34.6).
36.1. By Theorem 36.1.4,
$$|A + B| \ge |A|\Bigl(1 + \sum_{k=1}^{n-1}\frac{|B_k|}{|A_k|}\Bigr) + |B|\Bigl(1 + \sum_{k=1}^{n-1}\frac{|A_k|}{|B_k|}\Bigr).$$
Besides, $\dfrac{|A_k|}{|B_k|}$
$$Ax = \begin{pmatrix} B & C \\ D & E \end{pmatrix}\begin{pmatrix} y \\ 0 \end{pmatrix} = \begin{pmatrix} By \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ Dy \end{pmatrix} = \lambda x + z,$$
where $z = \begin{pmatrix} 0 \\ Dy \end{pmatrix} \ge 0$. The equality $Ax = \lambda x$ cannot hold, since the eigenvector of an indecomposable matrix is positive (cf. Problem 37.2). Besides,
$$\lambda \le \sup\{t \ge 0 \mid Ax - tx \ge 0\} \le r,$$
and if $\lambda = r$, then $x$ is an extremal vector (cf. Theorem 37.2.1); therefore, $Ax = \lambda x$. The contradiction obtained means that $\lambda < r$.
37.4. Let $f(\lambda) = |\lambda I - A|$. It is easy to verify that $f'(\lambda) = \sum_{i=1}^n |\lambda I - A_i|$, where $A_i$ is the matrix obtained from $A$ by crossing out the $i$th row and the $i$th column (see Problem 11.7). If $r$ and $r_i$ are the greatest eigenvalues of $A$ and $A_i$, respectively, then $r > r_i$ (see Problem 37.3). Therefore, all the numbers $|rI - A_i|$ are positive. Hence, $f'(r) \ne 0$.
37.5. Suppose that $A$ is not primitive. Then for a certain permutation matrix $P$ the matrix $PAP^T$ is of the form indicated in the hypothesis of Theorem 37.3. On the other hand, the diagonal elements of $PAP^T$ are obtained from the diagonal elements of $A$ under a permutation. Contradiction.
37.6. Yes, it can. For instance, consider a nonnegative matrix $A$ corresponding to the directed graph
$$1 \to (1, 2),\ 2 \to (3, 4, 5),\ 3 \to (6, 7, 8),\ 4 \to (6, 7, 8),\ 5 \to (6, 7, 8),\ 6 \to (9),\ 7 \to (9),\ 8 \to (9),\ 9 \to (1).$$
It is easy to verify that the matrix $A$ is indecomposable and, since $a_{11} > 0$, it is primitive (cf. Problem 37.5). The directed graph
$$1 \to (1, 2, 3, 4, 5),\ 2 \to (6, 7, 8),\ 3 \to (9),\ 4 \to (9),\ 5 \to (9),\ 6 \to (1),\ 7 \to (1),\ 8 \to (1),\ 9 \to (1, 2)$$
corresponds to $A^2$. The first graph has 18 edges, whereas the second one has 16 edges.
38.1. There exist nonnegative numbers $\xi_i$ and $\eta_i$ such that $x_r = \xi_r + \dots + \xi_n$ and $y_r = \eta_r + \dots + \eta_n$. Therefore,
$$\sum_r x_ry_r - \sum_{r,s} a_{rs}x_ry_s = \sum_{r,s}(\delta_{rs} - a_{rs})x_ry_s = \sum_{r,s}(\delta_{rs} - a_{rs})\sum_{i\ge r}\xi_i\sum_{j\ge s}\eta_j = \sum_{i,j}\xi_i\eta_j\sum_{r\le i}\sum_{s\le j}(\delta_{rs} - a_{rs}).$$
It suffices to verify that $\sum_{r\le i}\sum_{s\le j}(\delta_{rs} - a_{rs}) \ge 0$. If $i \le j$, then $\sum_{r\le i}\sum_{s\le j}\delta_{rs} = \sum_{r\le i}\sum_{s=1}^n \delta_{rs}$ and, therefore,
$$\sum_{r\le i}\sum_{s\le j}(\delta_{rs} - a_{rs}) \ge \sum_{r\le i}\sum_{s=1}^n(\delta_{rs} - a_{rs}) = 0.$$
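Problem 38.1 can be spot-checked numerically by sampling a doubly stochastic matrix as a convex combination of permutation matrices (a sketch, not the book's method; NumPy assumed):

```python
import numpy as np

# For doubly stochastic A and nonincreasing nonnegative x, y:
#   sum_{r,s} a_rs x_r y_s <= sum_r x_r y_r.
rng = np.random.default_rng(4)
n = 5
perms = [np.eye(n)[rng.permutation(n)] for _ in range(8)]
w = rng.random(8); w /= w.sum()
A = sum(wi * P for wi, P in zip(w, perms))   # doubly stochastic by construction

x = np.sort(rng.random(n))[::-1]   # x1 >= ... >= xn >= 0
y = np.sort(rng.random(n))[::-1]
assert x @ A @ y <= x @ y + 1e-12
```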
CHAPTER VII
$X = \|X_{ij}\|_1^k$. The equation $AX = XA$ is then equivalent to the system of equations
$$J_iX_{ij} = X_{ij}J_j.$$
It is not difficult to verify that if the eigenvalues of the matrices $J_i$ and $J_j$ are distinct then the equation $J_iX_{ij} = X_{ij}J_j$ has only the zero solution and, if $J_i$ and $J_j$ are Jordan blocks of order $m$ and $n$, respectively, corresponding to the same eigenvalue, then any solution of the equation $J_iX_{ij} = X_{ij}J_j$ is of the form $(\,Y\;\;0\,)$ or $\begin{pmatrix} Y \\ 0 \end{pmatrix}$, where
$$Y = \begin{pmatrix} y_1 & y_2 & \dots & y_k \\ 0 & y_1 & \dots & y_{k-1} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & y_1 \end{pmatrix}$$
and $k = \min(m, n)$. The dimension of the space of such matrices $Y$ is equal to $k$. Thus, we have obtained the following statement.
Thus, we have obtained the following statement.
39.1.1. Theorem. Let Jordan blocks of size $a_1(\lambda), \dots, a_r(\lambda)$ correspond to an eigenvalue $\lambda$ of a matrix $A$. Then the dimension of the space of solutions of the equation $AX = XA$ is equal to
$$\sum_\lambda\sum_{i,j}\min(a_i(\lambda), a_j(\lambda)).$$
Clearly,
$$\sum_\lambda\sum_{i,j}\min(a_i(\lambda), a_j(\lambda)) \ge \sum_\lambda\sum_i a_i(\lambda) = n,$$
with equality if and only if the Jordan blocks of A correspond to distinct eigenvalues,
i.e., the characteristic polynomial coincides with the minimal polynomial.
b) ⟹ c) If the characteristic polynomial of $A$ coincides with the minimal polynomial, then the dimension of $\operatorname{Span}(I, A, \dots, A^{n-1})$ is equal to $n$ and, therefore, it coincides with the space of solutions of the equation $AX = XA$, i.e., any matrix commuting with $A$ is a polynomial in $A$.
c) ⟹ a) If every matrix commuting with $A$ is a polynomial in $A$, then, thanks to the Cayley-Hamilton theorem, the space of solutions of the equation $AX = XA$ is contained in the space $\operatorname{Span}(I, A, \dots, A^{k-1})$, where $k \le n$ is the degree of the minimal polynomial. On the other hand, the dimension $m$ of this space satisfies $m \le k \le n$ and $m \ge n$; therefore, $m = n$.
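Theorem 39.1.1 is easy to test numerically by computing the nullity of the linear map $X \mapsto AX - XA$ via Kronecker products; a NumPy sketch for a nilpotent $A$ with Jordan blocks of sizes 2 and 1 (the expected dimension is $\min(2,2) + \min(2,1) + \min(1,2) + \min(1,1) = 5$):

```python
import numpy as np

def jordan(lam, k):
    """Jordan block of order k with eigenvalue lam."""
    return lam * np.eye(k) + np.diag(np.ones(k - 1), 1)

blocks = [jordan(0.0, 2), jordan(0.0, 1)]
n = sum(b.shape[0] for b in blocks)
A = np.zeros((n, n)); i = 0
for b in blocks:
    k = b.shape[0]; A[i:i+k, i:i+k] = b; i += k

# vec(AX - XA) = (kron(I, A) - kron(A^T, I)) vec(X); nullity = commutant dim.
L = np.kron(np.eye(n), A) - np.kron(A.T, np.eye(n))
dim = n * n - np.linalg.matrix_rank(L)
assert dim == 5
```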
39.2.1. Theorem. Commuting operators A and B in a space V over C have
a common eigenvector.
Proof. Let $\lambda$ be an eigenvalue of $A$ and $W \subset V$ the subspace of all eigenvectors of $A$ corresponding to $\lambda$. Then $BW \subset W$. Indeed, if $Aw = \lambda w$ then $A(Bw) = BAw = \lambda(Bw)$. The restriction of $B$ to $W$ has an eigenvector $w_0$ and this vector is also an eigenvector of $A$ (corresponding to the eigenvalue $\lambda$).
39.2.2. Theorem. Commuting diagonalizable operators A and B in a space V
over C have a common eigenbasis.
Proof. For every eigenvalue $\lambda$ of $A$ consider the subspace $V_\lambda$ consisting of all eigenvectors of $A$ corresponding to the eigenvalue $\lambda$. Then $V = \oplus_\lambda V_\lambda$ and $BV_\lambda \subset V_\lambda$. The restriction of the diagonalizable operator $B$ to $V_\lambda$ is a diagonalizable operator. Indeed, the minimal polynomial of the restriction of $B$ to $V_\lambda$ is a divisor of the minimal polynomial of $B$, and the minimal polynomial of $B$ has no multiple roots. For every eigenvalue $\mu$ of the restriction of $B$ to $V_\lambda$ consider the subspace $V_{\lambda,\mu}$ consisting of all eigenvectors of the restriction of $B$ to $V_\lambda$ corresponding to the eigenvalue $\mu$. Then $V_\lambda = \oplus_\mu V_{\lambda,\mu}$ and $V = \oplus_{\lambda,\mu} V_{\lambda,\mu}$. By selecting an arbitrary basis in every subspace $V_{\lambda,\mu}$, we finally obtain a common eigenbasis of $A$ and $B$.
We can similarly construct a common eigenbasis for any finite family of pairwise commuting diagonalizable operators.
39.3. Theorem. Suppose the matrices A and B are such that any matrix commuting with A commutes also with B. Then B = g(A), where g is a polynomial.
Proof. It is possible to consider the matrices $A$ and $B$ as linear operators in a certain space $V$. For an operator $A$ there exists a cyclic decomposition $V = V_1 \oplus \dots \oplus V_k$ with the following property (see 14.1): $AV_i \subset V_i$ and the restriction $A_i$ of $A$ to $V_i$ is a cyclic block; the characteristic polynomial of $A_i$ is equal to $p_i$, where $p_i$ is divisible by $p_{i+1}$ and $p_1$ is the minimal polynomial of $A$.
Let the vector $e_i$ span $V_i$, i.e., $V_i = \operatorname{Span}(e_i, Ae_i, A^2e_i, \dots)$, and let $P_i : V \to V_i$ be the projection. Since $AV_i \subset V_i$, we have $AP_iv = P_iAv$ and, therefore, $P_iB = BP_i$. Hence, $Be_i = BP_ie_i = P_iBe_i \in V_i$, i.e., $Be_i = g_i(A)e_i$, where $g_i$ is a polynomial. Any vector $v_i \in V_i$ is of the form $f(A)e_i$, where $f$ is a polynomial. Therefore, $Bv_i = g_i(A)v_i$. Let us prove that $g_i(A)v_i = g_1(A)v_i$, i.e., we can take $g_1$ for the required polynomial $g$.
Let us consider the operator $X_i : V \to V$ that sends a vector $f(A)e_i$ to $(fn_i)(A)e_1$, where $n_i = p_1p_i^{-1}$, and that sends every vector $v_j \in V_j$, where $j \ne i$, into itself.
First, let us verify that the operator $X_i$ is well defined. Let $f(A)e_i = 0$, i.e., let $f$ be divisible by $p_i$. Then $n_if$ is divisible by $n_ip_i = p_1$ and, therefore, $(fn_i)(A)e_1 = 0$.
It is easy to check that $X_iA = AX_i$ and, therefore, $X_iB = BX_i$. On the other hand, $X_iBe_i = (n_ig_i)(A)e_1$ and $BX_ie_i = (n_ig_1)(A)e_1$; hence, $n_i(A)[g_i(A) - g_1(A)]e_1 = 0$. It follows that the polynomial $n_i(g_i - g_1)$ is divisible by $p_1 = n_ip_i$, i.e., $g_i - g_1$ is divisible by $p_i$ and, therefore, $g_i(A)v_i = g_1(A)v_i$ for any $v_i \in V_i$.
Problems
39.1. Let $A = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$, where the numbers $\lambda_i$ are distinct, and let a matrix $X$ commute with $A$.
a) Prove that $X$ is a diagonal matrix.
b) Let, besides, the numbers $\lambda_i$ be nonzero and let $X$ commute with $NA$, where $N = \|\delta_{i+1,j}\|_1^n$. Prove that $X = \lambda I$.
39.2. Prove that if $X$ commutes with all matrices then $X = \lambda I$.
39.3. Find all matrices commuting with E, where E is the matrix all elements
of which are equal to 1.
39.4. Let $P_\sigma$ be the matrix corresponding to a permutation $\sigma$. Prove that if $AP_\sigma = P_\sigma A$ for all $\sigma$ then $A = \lambda I + \mu E$, where $E$ is the matrix all elements of which are equal to 1.
39.5. Prove that for any complex matrix A there exists a matrix B such that
AB = BA and the characteristic polynomial of B coincides with the minimal
polynomial.
39.6. a) Let A and B be commuting nilpotent matrices. Prove that A + B is a
nilpotent matrix.
b) Let A and B be commuting diagonalizable matrices. Prove that A + B is
diagonalizable.
39.7. In a space of dimension $n$, there are given (distinct) pairwise commuting involutions $A_1, \dots, A_m$. Prove that $m \le 2^n$.
39.8. Diagonalizable operators A1 , . . . , An commute with each other. Prove
that all these operators can be polynomially expressed in terms of a diagonalizable
operator.
39.9. In the space of matrices of order $2m$, indicate a subspace of dimension $m^2 + 1$ consisting of matrices commuting with each other.
40. Commutators
40.1. Let A and B be square matrices of the same order. The matrix
[A, B] = AB BA
is called the commutator of the matrices A and B. The equality [A, B] = 0 means
that A and B commute.
It is easy to verify that tr[A, B] = 0 for any A and B; cf. 11.1.
It is subject to an easy direct verification that the following Jacobi identity holds:
[A, [B, C]] + [B, [C, A]] + [C, [A, B]] = 0.
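Both facts, $\operatorname{tr}[A, B] = 0$ and the Jacobi identity, are immediate to verify numerically; a small NumPy sketch:

```python
import numpy as np

# tr[A,B] = 0 and the Jacobi identity for random matrices.
rng = np.random.default_rng(5)
A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))
comm = lambda X, Y: X @ Y - Y @ X

assert abs(np.trace(comm(A, B))) < 1e-10
J = comm(A, comm(B, C)) + comm(B, comm(C, A)) + comm(C, comm(A, B))
assert np.allclose(J, 0)
```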
An algebra (not necessarily a matrix algebra) is called a Lie algebra if the multiplication in it (usually called the bracket and denoted by $[\,\cdot\,,\,\cdot\,]$) is skew-commutative, i.e., $[A, B] = -[B, A]$, and satisfies the Jacobi identity. The map $\operatorname{ad}_A : M_{n,n} \to M_{n,n}$ determined by the formula $\operatorname{ad}_A(X) = [A, X]$ is a linear operator in the space of matrices. The map which to every matrix $A$ assigns the operator $\operatorname{ad}_A$ is called the adjoint representation of $M_{n,n}$. The adjoint representation has important applications in the theory of Lie algebras.
The following properties of $\operatorname{ad}_A$ are easy to verify:
1) $\operatorname{ad}_{[A,B]} = \operatorname{ad}_A\operatorname{ad}_B - \operatorname{ad}_B\operatorname{ad}_A$ (this equality is equivalent to the Jacobi identity);
2) the operator $D = \operatorname{ad}_A$ is a derivation of the matrix algebra, i.e., $D(XY) = XD(Y) + (DX)Y$;
3) $D^n(XY) = \sum_{k=0}^n \binom{n}{k}(D^kX)(D^{n-k}Y)$;
4) $D(X^n) = \sum_{k=0}^{n-1} X^k(DX)X^{n-1-k}$.
40.2. If A = [X, Y ], then tr A = 0. It turns out that the converse is also true:
if tr A = 0 then there exist matrices X and Y such that A = [X, Y ]. Moreover, we
can impose various restrictions on the matrices X and Y .
40.2.1. Theorem ([Fregus, 1966]). Let $\operatorname{tr} A = 0$; then there exist matrices $X$ and $Y$ such that $X$ is an Hermitian matrix, $\operatorname{tr} Y = 0$, and $A = [X, Y]$.
Proof. There exists a unitary matrix $U$ such that all the diagonal elements of $U^*AU = B = \|b_{ij}\|_1^n$ are zeros (see 15.2). Consider a matrix $D = \operatorname{diag}(d_1, \dots, d_n)$, where $d_1, \dots, d_n$ are arbitrary distinct real numbers. Let $Y_1 = \|y_{ij}\|_1^n$, where $y_{ii} = 0$ and $y_{ij} = \dfrac{b_{ij}}{d_i - d_j}$ for $i \ne j$. Then
$$DY_1 - Y_1D = \|(d_i - d_j)y_{ij}\|_1^n = \|b_{ij}\|_1^n = U^*AU.$$
Therefore,
$$A = UDY_1U^* - UY_1DU^* = XY - YX,$$
where $X = UDU^*$ and $Y = UY_1U^*$.
n
DC CD = (i j )cij 1 = B.
It remains to set X = P 1 DP and Y = P 1 CP .
40.3. Theorem ([Smiley, 1961]). Suppose the matrices $A$ and $B$ are such that for a certain integer $s > 0$ the identity $\operatorname{ad}_A^sX = 0$ implies $\operatorname{ad}_X^sB = 0$. Then $B$ can be expressed as a polynomial of $A$.
Proof. The case $s = 1$ was considered in Section 39.3; therefore, in what follows we will assume that $s \ge 2$. Observe that for $s \ge 2$ the identity $\operatorname{ad}_A^sX = 0$ does not necessarily imply $\operatorname{ad}_X^sA = 0$.
We may assume that $A = \operatorname{diag}(J_1, \dots, J_t)$, where $J_i$ is a Jordan block. Let $X = \operatorname{diag}(1, \dots, n)$. It is easy to verify that $\operatorname{ad}_A^2X = 0$ (see Problem 40.1); therefore, $\operatorname{ad}_A^sX = 0$ and $\operatorname{ad}_X^sB = 0$. The matrix $X$ is diagonalizable and, therefore, $\operatorname{ad}_XB = 0$ (see Problem 40.6). Hence, $B$ is a diagonal matrix (see Problem 39.1 a)). In accordance with the block notation $A = \operatorname{diag}(J_1, \dots, J_t)$ let us express the matrices $B$ and $X$ in the form $B = \operatorname{diag}(B_1, \dots, B_t)$ and $X = \operatorname{diag}(X_1, \dots, X_t)$. Let
$$Y = \operatorname{diag}((J_1 - \lambda_1I)X_1, \dots, (J_t - \lambda_tI)X_t),$$
where $\lambda_i$ is the eigenvalue of the Jordan block $J_i$. Then $\operatorname{ad}_A^2Y = 0$ (see Problem 40.1). Hence, $\operatorname{ad}_A^2(X + Y) = 0$ and, therefore, $\operatorname{ad}_{X+Y}^sB = 0$. The matrix $X + Y$ is diagonalizable, since its eigenvalues are equal to $1, \dots, n$. Hence, $\operatorname{ad}_{X+Y}B = 0$ and, therefore, $\operatorname{ad}_YB = 0$.
Let us prove that if the eigenvalues of Ji and Ji+1 are equal, then bi = bi+1 .
Consider the matrix
$$U = \begin{pmatrix} 0 & \dots & 0 & 1 \\ 0 & \dots & 0 & 0 \\ \vdots & & \vdots & \vdots \\ 0 & \dots & 0 & 0 \end{pmatrix}$$
of order equal to the sum of the orders of $J_i$ and $J_{i+1}$. In accordance with the block expression $A = \operatorname{diag}(J_1, \dots, J_t)$ introduce the matrix $Z = \operatorname{diag}(0, U, 0)$. It is easy to verify that $ZA = AZ = \lambda Z$, where $\lambda$ is the common eigenvalue of $J_i$ and $J_{i+1}$. Hence,
$$\operatorname{ad}_AZ = 0, \quad \operatorname{ad}_A^2(X + Z) = \operatorname{ad}_A^2X = 0, \quad \operatorname{ad}_A^s(X + Z) = 0,$$
and $\operatorname{ad}_{X+Z}^sB = 0$. Since the eigenvalues of $X + Z$ are equal to $1, \dots, n$, it follows that $X + Z$ is diagonalizable and, therefore, $\operatorname{ad}_{X+Z}B = 0$. Since $[X, B] = 0$, then $[Z, B] = [X + Z, B] = 0$, i.e., $b_i = b_{i+1}$.
We can assume that $A = \operatorname{diag}(M_1, \dots, M_q)$, where $M_i$ is the union of Jordan blocks with equal eigenvalues. Then $B = \operatorname{diag}(B_1', \dots, B_q')$, where $B_i' = b_i'I$. The identity $[W, A] = 0$ implies that $W = \operatorname{diag}(W_1, \dots, W_q)$ (see 39.1) and, therefore, $[W, B] = 0$. Thus, the case $s \ge 2$ reduces to the case $s = 1$.
40.4. Matrices $A_1, \dots, A_m$ are said to be simultaneously triangularizable if there exists a matrix $P$ such that all the matrices $P^{-1}A_iP$ are upper triangular.
Theorem ([Drazin, Dungey, Gruenberg, 1951]). Matrices $A_1, \dots, A_m$ are simultaneously triangularizable if and only if the matrix $p(A_1, \dots, A_m)[A_i, A_j]$ is nilpotent for every polynomial $p(x_1, \dots, x_m)$ in noncommuting indeterminates.
Proof. If the matrices $A_1, \dots, A_m$ are simultaneously triangularizable then the matrices $P^{-1}[A_i, A_j]P$ and $P^{-1}p(A_1, \dots, A_m)P$ are upper triangular and all
diagonal elements of the first matrix are zeros. Hence, the product of these matrices
is a nilpotent matrix, i.e., the matrix p(A1 , . . . , Am )[Ai , Aj ] is nilpotent.
Now, suppose that every matrix of the form p(A1 , . . . , Am )[Ai , Aj ] is nilpotent;
let us prove that then the matrices A1 , . . . , Am are simultaneously triangularizable.
First, let us prove that for every nonzero vector u there exists a polynomial
h(x1 , . . . , xm ) such that h(A1 , . . . , Am )u is a nonzero common eigenvector of the
matrices A1 , . . . , Am .
Proof by induction on $m$. For $m = 1$ there exists a number $k$ such that the vectors $u, A_1u, \dots, A_1^{k-1}u$ are linearly independent and $A_1^ku = a_{k-1}A_1^{k-1}u + \dots + a_0u$. Let $g(x) = x^k - a_{k-1}x^{k-1} - \dots - a_0$ and $g_0(x) = \dfrac{g(x)}{x - x_0}$, where $x_0$ is a root of the polynomial $g$. Then $g_0(A_1)u \ne 0$ and $(A_1 - x_0I)g_0(A_1)u = g(A_1)u = 0$, i.e., $g_0(A_1)u$ is an eigenvector of $A_1$.
Suppose that our statement holds for any $m - 1$ matrices $A_1, \dots, A_{m-1}$. For a given nonzero vector $u$ a certain nonzero vector $v_1 = h(A_1, \dots, A_{m-1})u$ is a common eigenvector of the matrices $A_1, \dots, A_{m-1}$. The following two cases are possible.
1) $[A_i, A_m]f(A_m)v_1 = 0$ for all $i$ and any polynomial $f$. For $f = 1$ we get $A_iA_mv_1 = A_mA_iv_1$; hence, $A_iA_m^kv_1 = A_m^kA_iv_1$, i.e., $A_ig(A_m)v_1 = g(A_m)A_iv_1$ for any $g$. For the matrix $A_m$ there exists a polynomial $g_1$ such that $g_1(A_m)v_1$ is an eigenvector of this matrix. Since $A_ig_1(A_m)v_1 = g_1(A_m)A_iv_1$ and $v_1$ is an eigenvector of $A_1, \dots, A_{m-1}$, then $g_1(A_m)v_1 = g_1(A_m)h(A_1, \dots, A_{m-1})u$ is a common eigenvector of $A_1, \dots, A_m$.
2) $[A_i, A_m]f_1(A_m)v_1 \ne 0$ for a certain $f_1$ and a certain $i$. The vector $C_1f_1(A_m)v_1$, where $C_1 = [A_i, A_m]$, is nonzero and, therefore, the matrices $A_1, \dots, A_{m-1}$ have a common eigenvector $v_2 = g_1(A_1, \dots, A_{m-1})C_1f_1(A_m)v_1$. We can apply the same argument to the vector $v_2$, etc. As a result we get a sequence $v_1, v_2, v_3, \dots$, where $v_k$ is an eigenvector of the matrices $A_1, \dots, A_{m-1}$ and where
$$v_{k+1} = g_k(A_1, \dots, A_{m-1})C_kf_k(A_m)v_k.$$
This sequence terminates with a vector vp if [Ai , Am ]f (Am )vp = 0 for all i and all
polynomials f .
For $A_m$ there exists a polynomial $g_p(x)$ such that $g_p(A_m)v_p$ is an eigenvector of $A_m$. As in case 1), we see that this vector is an eigenvector of $A_1, \dots, A_m$ and
$$g_p(A_m)v_p = g_p(A_m)g(A_1, \dots, A_m)h(A_1, \dots, A_{m-1})u.$$
It remains to show that the sequence $v_1, v_2, \dots$ terminates. Suppose that this is not so. Then there exist numbers $\lambda_1, \dots, \lambda_{n+1}$ not all equal to zero for which $\lambda_1v_1 + \dots + \lambda_{n+1}v_{n+1} = 0$ and, therefore, there exists a number $j$ such that $\lambda_j \ne 0$ and
$$\lambda_jv_j = \lambda_{j+1}v_{j+1} + \dots + \lambda_{n+1}v_{n+1}.$$
Clearly,
$$v_{j+1} = g_j(A_1, \dots, A_{m-1})C_jf_j(A_m)v_j, \quad v_{j+2} = u_{j+1}(A_1, \dots, A_m)C_jf_j(A_m)v_j,$$
etc. Hence,
$$\lambda_jv_j = u(A_1, \dots, A_m)C_jf_j(A_m)v_j$$
and, therefore,
$$f_j(A_m)u(A_1, \dots, A_m)C_jf_j(A_m)v_j = \lambda_jf_j(A_m)v_j.$$
It follows that the nonzero vector $f_j(A_m)v_j$ is an eigenvector of the operator $f_j(A_m)u(A_1, \dots, A_m)C_j$ corresponding to the nonzero eigenvalue $\lambda_j$. But by hypothesis this operator is nilpotent and, therefore, it has no nonzero eigenvalues. Contradiction.
We turn directly to the proof of the theorem by induction on $n$. For $n = 1$ the statement is obvious. As we have already demonstrated, the operators $A_1, \dots, A_m$ have a common eigenvector $y$ corresponding to certain eigenvalues $\lambda_1, \dots, \lambda_m$. We can assume that $|y| = 1$, i.e., $y^*y = 1$. There exists a unitary matrix $Q$ whose first column is $y$. Clearly,
$$Q^*A_iQ = Q^*(\lambda_iy \;\; \dots) = \begin{pmatrix} \lambda_i & * \\ 0 & A_i' \end{pmatrix}$$
and the matrices $A_1', \dots, A_m'$ of order $n - 1$ satisfy the condition of the theorem. By the inductive hypothesis there exists a unitary matrix $P_1$ of order $n - 1$ such that the matrices $P_1^*A_i'P_1$ are upper triangular. Then $P = Q\begin{pmatrix} 1 & 0 \\ 0 & P_1 \end{pmatrix}$ is the desired matrix. (It even turned out to be unitary.)
40.5. Theorem. Let $A$ and $B$ be operators in a vector space $V$ over $\mathbb C$ and let $\operatorname{rank}[A, B] \le 1$. Then $A$ and $B$ are simultaneously triangularizable.
Proof. It suffices to prove that the operators $A$ and $B$ have a common eigenvector $v \in V$. Indeed, then the operators $A$ and $B$ induce operators $A_1$ and $B_1$ in the space $V_1 = V/\operatorname{Span}(v)$ and $\operatorname{rank}[A_1, B_1] \le 1$. It follows that $A_1$ and $B_1$ have a common eigenvector in $V_1$, etc. Besides, we can assume that $\operatorname{Ker} A \ne 0$ (otherwise we can replace $A$ by $A - \lambda I$).
The proof will be carried out by induction on n = dim V . If n = 1, then the
statement is obvious. Let C = [A, B]. In the proof of the inductive step we will
consider two cases.
1) $\operatorname{Ker} A \subset \operatorname{Ker} C$. In this case $B(\operatorname{Ker} A) \subset \operatorname{Ker} A$, since if $Ax = 0$, then $Cx = 0$ and $ABx = BAx + Cx = 0$. Therefore, we can consider the restriction of $B$ to $\operatorname{Ker} A \ne 0$ and select in $\operatorname{Ker} A$ an eigenvector $v$ of $B$; the vector $v$ is then also an eigenvector of $A$.
2) $\operatorname{Ker} A \not\subset \operatorname{Ker} C$, i.e., $Ax = 0$ and $Cx \ne 0$ for a vector $x$. Since $\operatorname{rank} C = 1$, then $\operatorname{Im} C = \operatorname{Span}(y)$, where $y = Cx$. Besides,
$$y = Cx = ABx - BAx = ABx \in \operatorname{Im} A.$$
It follows that $B(\operatorname{Im} A) \subset \operatorname{Im} A$. Indeed, $BAz = ABz - Cz$, where $ABz \in \operatorname{Im} A$ and $Cz \in \operatorname{Im} C \subset \operatorname{Im} A$. We have $\operatorname{Ker} A \ne 0$; hence, $\dim\operatorname{Im} A < n$. Let $A'$ and $B'$ be the restrictions of $A$ and $B$ to $\operatorname{Im} A$. Then $\operatorname{rank}[A', B'] \le 1$ and, therefore, by the inductive hypothesis the operators $A'$ and $B'$ have a common eigenvector.
Problems
40.1. Let $J = N + \lambda I$ be a Jordan block of order $n$, $A = \operatorname{diag}(1, 2, \dots, n)$ and $B = NA$. Prove that $\operatorname{ad}_J^2A = \operatorname{ad}_J^2B = 0$.
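A numerical check of this problem's claim (NumPy assumed; the value $\lambda = 2$ below is an arbitrary choice):

```python
import numpy as np

# For the Jordan block J = N + lam*I, A = diag(1,...,n) and B = NA,
# both ad_J^2 A and ad_J^2 B vanish.
n, lam = 5, 2.0
N = np.diag(np.ones(n - 1), 1)       # nilpotent shift
J = N + lam * np.eye(n)
A = np.diag(np.arange(1.0, n + 1))
B = N @ A
ad = lambda X, Y: X @ Y - Y @ X

assert np.allclose(ad(J, ad(J, A)), 0)
assert np.allclose(ad(J, ad(J, B)), 0)
```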
$$ij = -ji = k, \quad jk = -kj = i, \quad ki = -ik = j.$$
Proof. The function $B(q, r) = \frac12(q\bar r + r\bar q)$ is symmetric and bilinear. Therefore, it suffices to verify that $B(q, r) = (q, r)$ for basis elements. It is easy to see that $B(1, i) = 0$, $B(i, i) = 1$ and $B(i, j) = 0$; the remaining equalities are checked similarly.
Corollary. The element $\dfrac{\bar q}{|q|^2}$ is a two-sided inverse for $q$.
Indeed, $q\bar q = |q|^2 = \bar qq$.
41.2.2. Theorem. $|qr| = |q| \cdot |r|$.
Proof. Clearly,
$$|qr|^2 = qr\,\overline{qr} = qr\bar r\bar q = q|r|^2\bar q = |q|^2|r|^2.$$
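The quaternion identities of this section can be verified with a small multiplication routine; a sketch assuming NumPy, with quaternions stored as 4-vectors $(w, x, y, z)$:

```python
import numpy as np

def qmul(q, r):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

i = np.array([0., 1., 0., 0.])
j = np.array([0., 0., 1., 0.])
k = np.array([0., 0., 0., 1.])
assert np.allclose(qmul(i, j), k) and np.allclose(qmul(j, i), -k)

rng = np.random.default_rng(6)
q, r = rng.standard_normal(4), rng.standard_normal(4)
norm = np.linalg.norm
assert np.isclose(norm(qmul(q, r)), norm(q) * norm(r))   # |qr| = |q||r|
```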
$$P(xi + yj + zk) = \begin{pmatrix} 0 & -x & -y & -z \\ x & 0 & -z & y \\ y & z & 0 & -x \\ z & -y & x & 0 \end{pmatrix} \in so(4, \mathbb R).$$
Similarly, the map $Q(q) : u \mapsto uq$ belongs to $so(4, \mathbb R)$. It is easy to verify that the maps $q \mapsto P(q)$ and $q \mapsto Q(q)$ are Lie algebra homomorphisms, i.e.,
$$P(qr - rq) = P(q)P(r) - P(r)P(q) \quad\text{and}\quad Q(qr - rq) = Q(q)Q(r) - Q(r)Q(q).$$
Therefore, the map
$$so(3, \mathbb R) \oplus so(3, \mathbb R) \to so(4, \mathbb R), \quad (q, r) \mapsto P(q) + Q(r),$$
is a Lie algebra homomorphism. Since the dimensions of these algebras coincide, it suffices to verify that this map is a monomorphism. The identity $P(q) + Q(r) = 0$ means that $qx + xr = 0$ for all $x$. For $x = 1$ we get $q = -r$ and, therefore, $qx - xq = 0$ for all $x$. Hence, $q$ is a real quaternion; on the other hand, by definition, $q$ is a purely imaginary quaternion and, therefore, $q = r = 0$.
41.4. Let us consider the algebra of quaternions $\mathbb H$ as a space over $\mathbb R$. In $\mathbb H \otimes \mathbb H$, we can introduce an algebra structure by setting
$$(x_1 \otimes x_2)(y_1 \otimes y_2) = x_1y_1 \otimes x_2y_2.$$
Let us identify $\mathbb R^4$ with $\mathbb H$. It is easy to check that the map $w : \mathbb H \otimes \mathbb H \to M_4(\mathbb R)$ given by the formula $[w(x_1 \otimes x_2)]x = x_1x\bar x_2$ is an algebra homomorphism, i.e., $w(uv) = w(u)w(v)$.
Theorem. The map $w : \mathbb H \otimes \mathbb H \to M_4(\mathbb R)$ is an algebra isomorphism.
Proof. The dimensions of $\mathbb H \otimes \mathbb H$ and $M_4(\mathbb R)$ are equal. Still, unlike the case considered in 41.3, the calculation of the kernel of $w$ is not as easy as the calculation of the kernel of the map $(q, r) \mapsto P(q) + Q(r)$, since the space $\mathbb H \otimes \mathbb H$ contains not only elements of the form $x \otimes y$. Instead, it is better to prove that the image of $w$ coincides with $M_4(\mathbb R)$. The matrices
$$e = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad \varepsilon = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \quad a = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad b = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$$
Table 1. Values of $x \otimes y$ for $x, y \in \{1, i, j, k\}$: each entry of the table is the $4 \times 4$ matrix $w(x \otimes y)$, written in terms of the matrices $e$, $\varepsilon$, $a$ and $b$.
Figure 7
The product of two elements belonging to one line or one circle is the third element that belongs to the same line or circle, and the sign is determined by the orientation; for example, $ie = f$, $if = -e$.
Let $\xi = a + be$, where $a$ and $b$ are quaternions. The conjugation in $\mathbb O$ is given by the formula $\overline{(a, b)} = (\bar a, -b)$, i.e., $\overline{a + be} = \bar a - be$. Clearly,
$$\xi\bar\xi = (a, b)\overline{(a, b)} = (a, b)(\bar a, -b) = (a\bar a + \bar bb, ba - ba) = a\bar a + \bar bb,$$
i.e., $\xi\bar\xi$ is the sum of squares of the coordinates of $\xi$. Therefore, $|\xi| = \sqrt{\xi\bar\xi} = \sqrt{\bar\xi\xi}$ is the length of $\xi$.
Theorem. $|\xi\eta| = |\xi| \cdot |\eta|$.
Proof. For quaternions a similar theorem is proved quite simply, cf. 41.2. In our case the lack of associativity is a handicap. Let $\xi = a + be$ and $\eta = u + ve$, where $a, b, u, v$ are quaternions. Then
$$|\xi\eta|^2 = (au - \bar vb)(\bar u\bar a - \bar bv) + (b\bar u + va)(u\bar b + \bar a\bar v).$$
Let us express the quaternion $v$ in the form $v = \lambda + v_1$, where $\lambda$ is a real number and $\bar v_1 = -v_1$. Then
$$|\xi\eta|^2 = (au - \lambda b + v_1b)(\bar u\bar a - \lambda\bar b - \bar bv_1) + (b\bar u + \lambda a + v_1a)(u\bar b + \lambda\bar a - \bar av_1).$$
Besides,
$$|\xi|^2|\eta|^2 = (a\bar a + \bar bb)(u\bar u + \lambda^2 - v_1v_1).$$
Since $u\bar u$ and $\bar bb$ are real numbers, $au\bar u\bar a = a\bar au\bar u$ and $\bar bbv_1 = v_1\bar bb$. Making use of similar equalities we get
$$|\xi\eta|^2 - |\xi|^2|\eta|^2 = \lambda(-\bar bu\bar a - a\bar ub + \bar bu\bar a + a\bar ub) + v_1(\bar bu\bar a + a\bar ub) - (a\bar ub + \bar bu\bar a)v_1 = 0$$
because $\bar bu\bar a + a\bar ub$ is a real number.
$$x \times y = \frac12(\bar x\bar y - \bar y\bar x) = \frac12(xy - yx).$$
It is possible to verify that the inner product $(x, y)$ of octonions $x$ and $y$ is equal to $\frac12(x\bar y + y\bar x)$ and for purely imaginary octonions we get $(x, y) = -\frac12(xy + yx)$.
Theorem. The vector product of purely imaginary octonions possesses the following properties:
a) $x \times y \perp x$ and $x \times y \perp y$;
Since $x(yx) = (xy)x$ (see Problem 41.8 b)), we see that (1) is equivalent to $x(xy) = (yx)x$. By Problem 41.8 a) we have $x(xy) = (xx)y$ and $(yx)x = y(xx)$. It remains to notice that $xx = -\bar xx = -(x, x)$ is a real number.
b) We have to prove that
$$(xy - yx)\overline{(xy - yx)} = 4|x|^2|y|^2 - (xy + yx)\overline{(xy + yx)},$$
i.e.,
$$2|x|^2|y|^2 = (xy)(yx) + (yx)(xy).$$
Let $a = xy$. Then $\bar a = yx$ and
$$2|x|^2|y|^2 = 2(a, a) = a\bar a + \bar aa = (xy)(yx) + (yx)(xy).$$
41.7. The remaining part of this section will be devoted to the solution of the
following
Problem (Hurwitz-Radon). What is the maximal number of orthogonal operators $A_1, \dots, A_m$ in $\mathbb R^n$ satisfying the relations $A_i^2 = -I$ and $A_iA_j + A_jA_i = 0$ for $i \ne j$?
This problem might look quite artificial. There are, however, many important
problems in one way or another related to quaternions or octonions that reduce to
this problem. (Observe that the operators of multiplication by i, j, . . . , h satisfy the
required relations.)
We will first formulate the answer and then tell which problems reduce to our
problem.
Theorem (Hurwitz-Radon). Let us express an integer $n$ in the form $n = (2a + 1)2^b$, where $b = c + 4d$ and $0 \le c \le 3$. Let $\rho(n) = 2^c + 8d$; then the maximal number of required operators in $\mathbb R^n$ is equal to $\rho(n) - 1$.
41.7.1. The product of quadratic forms. Let $a = x_1 + ix_2$ and $b = y_1 + iy_2$. Then the identity $|a|^2|b|^2 = |ab|^2$ can be rewritten in the form
$$(x_1^2 + x_2^2)(y_1^2 + y_2^2) = z_1^2 + z_2^2,$$
where $z_1 = x_1y_1 - x_2y_2$ and $z_2 = x_1y_2 + x_2y_1$. Similar identities can be written for quaternions and octonions.
Theorem. Let $m$ and $n$ be fixed natural numbers; let $z_1(x, y), \dots, z_n(x, y)$ be real bilinear functions of $x = (x_1, \dots, x_m)$ and $y = (y_1, \dots, y_n)$. Then the identity
$$(x_1^2 + \dots + x_m^2)(y_1^2 + \dots + y_n^2) = z_1^2 + \dots + z_n^2$$
$$z_i^2 = \sum_j b_{ij}^2(x)y_j^2 + 2\sum_{j<k} b_{ij}(x)b_{ik}(x)y_jy_k.$$
Therefore, $\sum_i b_{ij}^2 = x_1^2 + \dots + x_m^2$ and $\sum_i b_{ij}(x)b_{ik}(x) = 0$ for $j < k$. Let $B(x) = \|b_{ij}(x)\|_1^n$. Then $B^T(x)B(x) = (x_1^2 + \dots + x_m^2)I$. The matrix $B(x)$ can be expressed in the form $B(x) = x_1B_1 + \dots + x_mB_m$. Hence,
$$B^T(x)B(x) = x_1^2B_1^TB_1 + \dots + x_m^2B_m^TB_m + \sum_{i<j}(B_i^TB_j + B_j^TB_i)x_ix_j;$$
therefore, $B_i^TB_i = I$ and $B_i^TB_j + B_j^TB_i = 0$. The operators $B_i$ are orthogonal and $B_i^{-1}B_j = -B_j^{-1}B_i$ for $i \ne j$.
Let us consider the orthogonal operators $A_1, \dots, A_{m-1}$, where $A_i = B_m^{-1}B_i$. Then $B_m^{-1}B_i = -B_i^{-1}B_m$ and, therefore, $A_i^{-1} = -A_i$, i.e., $A_i^2 = -I$. Besides, $B_i^{-1}B_j = -B_j^{-1}B_i$ for $i \ne j$; hence,
$$A_iA_j = B_m^{-1}B_iB_m^{-1}B_j = -B_i^{-1}B_mB_m^{-1}B_j = -B_i^{-1}B_j = B_j^{-1}B_i = -A_jA_i.$$
It is also easy to verify that if the orthogonal operators $A_1, \dots, A_{m-1}$ are such that $A_i^2 = -I$ and $A_iA_j + A_jA_i = 0$, then the operators $B_1 = A_1, \dots, B_{m-1} = A_{m-1}$, $B_m = I$ possess the required properties. To complete the proof of Theorem 41.7.1 it remains to make use of Theorem 41.7.
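For $n = 4$ the operators of left multiplication by $i$, $j$, $k$ on $\mathbb H = \mathbb R^4$ realize the bound $\rho(4) - 1 = 3$; a NumPy verification sketch:

```python
import numpy as np

def left_mult(q):
    """Matrix of x -> qx on H = R^4 in the basis 1, i, j, k."""
    w, x, y, z = q
    return np.array([
        [w, -x, -y, -z],
        [x,  w, -z,  y],
        [y,  z,  w, -x],
        [z, -y,  x,  w],
    ])

I4 = np.eye(4)
ops = [left_mult(e) for e in ([0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1])]
for A in ops:
    assert np.allclose(A @ A.T, I4)   # orthogonal
    assert np.allclose(A @ A, -I4)    # A^2 = -I
for a in range(3):
    for b in range(a + 1, 3):
        assert np.allclose(ops[a] @ ops[b] + ops[b] @ ops[a], 0)  # anticommute
```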
41.7.2. Normed algebras.
Theorem. Let a real algebra $A$ be endowed with the Euclidean space structure so that $|xy| = |x| \cdot |y|$ for any $x, y \in A$. Then the dimension of $A$ is equal to 1, 2, 4 or 8.
Proof. Let $e_1, \dots, e_n$ be an orthonormal basis of $A$. Then
$$(x_1e_1 + \dots + x_ne_n)(y_1e_1 + \dots + y_ne_n) = z_1e_1 + \dots + z_ne_n,$$
where $z_1, \dots, z_n$ are bilinear functions in $x$ and $y$. The equality $|z|^2 = |x|^2|y|^2$ implies that
$$(x_1^2 + \dots + x_n^2)(y_1^2 + \dots + y_n^2) = z_1^2 + \dots + z_n^2.$$
It remains to make use of Theorem 41.7.1 and notice that $\rho(n) = n$ if and only if $n = 1, 2, 4$ or 8.
41.7.3. The vector product.
Theorem ([Massey, 1983]). Let a bilinear operation $f(v, w) = v \times w \in \mathbb R^n$ be defined in $\mathbb R^n$, where $n \ge 3$; let $f$ be such that $v \times w$ is perpendicular to $v$ and $w$ and $|v \times w|^2 = |v|^2|w|^2 - (v, w)^2$. Then $n = 3$ or 7.
The product determined by the above operation $f$ is called the vector product of vectors.
41.8. Now, we turn to the proof of Theorem 41.7. Consider the algebra $C_m$ over $\mathbb R$ with generators $e_1, \dots, e_m$ and relations $e_i^2 = -1$ and $e_ie_j + e_je_i = 0$ for $i \ne j$. To every set of orthogonal matrices $A_1, \dots, A_m$ satisfying $A_i^2 = -I$ and $A_iA_j + A_jA_i = 0$ for $i \ne j$ there corresponds a representation (see 42.1) of $C_m$ that maps the elements $e_1, \dots, e_m$ to the orthogonal matrices $A_1, \dots, A_m$. In order to study the structure of $C_m$, we introduce an auxiliary algebra $C_m'$ with generators $\varepsilon_1, \dots, \varepsilon_m$ and relations $\varepsilon_i^2 = 1$ and $\varepsilon_i\varepsilon_j + \varepsilon_j\varepsilon_i = 0$ for $i \ne j$.
The algebras $C_m$ and $C_m'$ are called Clifford algebras.
41.8.1. Lemma. $C_1 \cong \mathbb C$, $C_2 \cong \mathbb H$, $C_1' \cong \mathbb R \oplus \mathbb R$ and $C_2' \cong M_2(\mathbb R)$.
Proof. The isomorphisms are explicitly given as follows:
$C_1 \to \mathbb C$: $1 \mapsto 1$, $e_1 \mapsto i$;
$C_2 \to \mathbb H$: $1 \mapsto 1$, $e_1 \mapsto i$, $e_2 \mapsto j$;
$C_1' \to \mathbb R \oplus \mathbb R$: $1 \mapsto (1, 1)$, $\varepsilon_1 \mapsto (1, -1)$;
$C_2' \to M_2(\mathbb R)$: $1 \mapsto \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$, $\varepsilon_1 \mapsto \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$, $\varepsilon_2 \mapsto \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$.
Corollary. $\mathbb C \otimes \mathbb H \cong M_2(\mathbb C)$.
Indeed, the complexifications of $C_2$ and $C_2'$ are isomorphic.
41.8.2. Lemma. $C_{k+2} \cong C_k' \otimes C_2$ and $C_{k+2}' \cong C_k \otimes C_2'$.
Since
$$C_2' \otimes C_2 \cong \mathbb H \otimes M_2(\mathbb R) \cong M_2(\mathbb H),$$
we have $C_{k+4} \cong C_k \otimes M_2(\mathbb H)$. Similarly, $C_{k+4}' \cong C_k' \otimes M_2(\mathbb H)$.
41.8.4. Lemma. $C_{k+8} \cong C_k \otimes M_{16}(\mathbb R)$.
For example, $C_6 \cong C_2 \otimes M_2(\mathbb H) \cong M_2(\mathbb H \otimes \mathbb H) \cong M_8(\mathbb R)$, etc. The results of the calculations are given in Table 2.

Table 2
k     | 1     | 2     | 3           | 4      | 5                  | 6      | 7                  | 8
C_k   | C     | H     | H ⊕ H       | M2(H)  | M4(C)              | M8(R)  | M8(R) ⊕ M8(R)      | M16(R)
C'_k  | R ⊕ R | M2(R) | M2(C)       | M2(H)  | M2(H) ⊕ M2(H)      | M4(H)  | M8(C)              | M16(R)
Lemma 41.8.4 makes it possible now to calculate $C_k$ for any $k$. The algebras $C_1, \dots, C_8$ have natural representations in the spaces $\mathbb C$, $\mathbb H$, $\mathbb H$, $\mathbb H^2$, $\mathbb C^4$, $\mathbb R^8$, $\mathbb R^8$ and $\mathbb R^{16}$, whose dimensions over $\mathbb R$ are equal to 2, 4, 4, 8, 8, 8, 8 and 16. Besides, under the passage from $C_k$ to $C_{k+8}$ the dimension of the space of the natural representation is multiplied by 16. The simplest case-by-case check indicates that for $n = 2^k$ the largest $m$ for which $C_m$ has a natural representation in $\mathbb R^n$ is equal to $\rho(n) - 1$.
Now, let us show that under these natural representations of Cm in Rn the
elements e1 , . . . , em turn into orthogonal matrices if we chose an appropriate basis
in Rn . First, let us consider the algebra H = R4 . Let us assign to an element a H
the map x 7 ax of the space H into itself. If we select basis 1, i, j, k in the space
H = R4 , then to elements 1, i, j, k the correspondence indicated assigns orthogonal
matrices. We may proceed similarly in case of the algebra C = R2 .
We have shown how to select bases in C = R2 and H = R4 in order for the
elements ei and j of the algebras C1 , C2 , C10 and C20 were represented by orthogonal
matrices. Lemmas 41.8.2-4 show that the elements ei and j of the algebras Cm
0
and Cm
are represented by matrices obtained consequtevely with the help of the
Kronecker product, and the initial matrices are orthogonal. It is clear that the
Kronecker product of two orthogonal matrices is an orthogonal matrix (cf. 27.4).
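The last fact is easy to confirm numerically. The following Python sketch (an illustration, not from the book) forms the Kronecker product of two rotation matrices and verifies that the result is orthogonal:

```python
import math

# The Kronecker product of two orthogonal matrices is orthogonal,
# illustrated on two 2x2 rotation matrices.

def kron(A, B):
    n, m, p, q = len(A), len(A[0]), len(B), len(B[0])
    return [[A[i][j] * B[k][l] for j in range(m) for l in range(q)]
            for i in range(n) for k in range(p)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def rot(t):
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

K = kron(rot(0.3), rot(1.1))              # 4x4 matrix
KtK = mat_mul(transpose(K), K)
ok = all(abs(KtK[i][j] - (1.0 if i == j else 0.0)) < 1e-12
         for i in range(4) for j in range(4))
assert ok                                  # K^T K = I, so K is orthogonal
```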
Let f: C_m → M_n(R) be a representation of C_m under which the elements
e_1, ..., e_m turn into orthogonal matrices. Then f(1·e_i) = f(1)f(e_i) and the matrix
f(e_i) is invertible. Hence, f(1) = f(1·e_i)f(e_i)^{−1} = I is the unit matrix. The
algebra C_m is either of the form M_p(F) or of the form M_p(F) ⊕ M_p(F), where
F = R, C or H. Therefore, if f is a representation of C_m such that f(1) = I, then
f is completely reducible and its irreducible components are isomorphic to F^p (see
42.1); so its dimension is divisible by p. Therefore, for any n the largest m for
which C_m has a representation in R^n such that f(1) = I is equal to ρ(n) − 1.
Problems
41.1. Prove that the real part of the product of quaternions x1 i + y1 j + z1 k
and x2 i + y2 j + z2 k is equal to the inner product of the vectors (x1 , y1 , z1 ) and
(x2 , y2 , z2 ) taken with the minus sign, and that the imaginary part is equal to their
vector product.
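Problem 41.1 can be checked directly by multiplying out two purely imaginary quaternions. The following Python sketch (an illustration, not part of the book) implements the Hamilton product and verifies both claims:

```python
# For purely imaginary quaternions v = x1 i + y1 j + z1 k, w = x2 i + y2 j + z2 k:
# Re(vw) = -(v, w) and Im(vw) = v x w.

def quat_mul(p, q):
    a, b, c, d = p              # p = a + b i + c j + d k
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

x1, y1, z1 = 1.0, 2.0, 3.0
x2, y2, z2 = -1.0, 0.5, 2.0
prod = quat_mul((0.0, x1, y1, z1), (0.0, x2, y2, z2))

inner = x1*x2 + y1*y2 + z1*z2
cross = (y1*z2 - z1*y2, z1*x2 - x1*z2, x1*y2 - y1*x2)
assert prod[0] == -inner        # real part is minus the inner product
assert prod[1:] == cross        # imaginary part is the vector product
```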
hand, A_a ⊂ Ker h. Therefore, Ker h = F_i, i.e., h is the zero map. Hence, either h
is an isomorphism or the zero map.
This proof remains valid for the algebra of matrices over H, i.e., when V and
W are spaces over H. Note that if A = Mat(V^n), where V^n is a space over H, and
f: A → Mat(W^m) is a representation such that f(I_n) = I_m, then W^m necessarily
has the structure of a vector space over H. Indeed, the multiplication of elements
of W^m by i, j, k is determined by the operators f(iI_n), f(jI_n), f(kI_n).
In section 41 we have made use of not only Theorem 42.1.1 but also of the
following statement.
42.1.2. Theorem. Let A = Mat(V^n) ⊕ Mat(V^n) and f: A → Mat(W^m) a
representation such that f(I_n) = I_m. Then W^m = W_1 ⊕ ... ⊕ W_k, where the W_i
are invariant subspaces isomorphic to V^n.
Proof. Let F_i be the set of matrices defined in the proof of Theorem 42.1.1.
The space A can be represented as the direct sum of its subspaces F_i1 = F_i ⊕ 0 and
F_i2 = 0 ⊕ F_i. Similarly to the proof of Theorem 42.1.1 we see that the space W can
be represented as the direct sum of certain nonzero subspaces F_ik e_j each of which
is invariant and isomorphic to V^n.
43. The resultant
43.1. Consider polynomials f(x) = Σ_{i=0}^{n} a_i x^{n−i} and g(x) = Σ_{i=0}^{m} b_i x^{m−i},
where a_0 ≠ 0 and b_0 ≠ 0. Over an algebraically closed field, f and g have a
common divisor if and only if they have a common root. If the field is not algebraically
closed, then the common divisor can happen to be a polynomial without
roots.
The presence of a common divisor for f and g is equivalent to the fact that there
exist polynomials p and q such that fq = gp, where deg p ≤ n − 1 and deg q ≤ m − 1.
Let q = u_0 x^{m−1} + ... + u_{m−1} and p = v_0 x^{n−1} + ... + v_{n−1}. The equality fq = gp
can be expressed in the form of a system of equations

a_0 u_0 = b_0 v_0
a_1 u_0 + a_0 u_1 = b_1 v_0 + b_0 v_1
a_2 u_0 + a_1 u_1 + a_0 u_2 = b_2 v_0 + b_1 v_1 + b_0 v_2
......

The polynomials f and g have a common root if and only if this system of
equations has a nonzero solution (u_0, u_1, ..., v_0, v_1, ...). If, for example, m = 3
and n = 2, then the determinant of this system is of the form

| a_0  0    0    b_0  0   |
| a_1  a_0  0    b_1  b_0 |
| a_2  a_1  a_0  b_2  b_1 |  = |S(f, g)|.
| 0    a_2  a_1  b_3  b_2 |
| 0    0    a_2  0    b_3 |
The matrix S(f, g) is called Sylvester's matrix of the polynomials f and g. The determinant of S(f, g) is called the resultant of f and g and is denoted by R(f, g).
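The vanishing of the resultant can be tested computationally. The following Python sketch (an illustration, not from the book; the helper names are ad hoc) builds Sylvester's matrix from coefficient lists and evaluates its determinant exactly:

```python
from fractions import Fraction

# det S(f, g) = 0 exactly when f and g have a common root.
# f is given by coefficients a = [a0, ..., an], g by b = [b0, ..., bm].

def sylvester(a, b):
    n, m = len(a) - 1, len(b) - 1
    S = [[Fraction(0)] * (n + m) for _ in range(n + m)]
    for i in range(m):                    # m shifted rows of f's coefficients
        for j, c in enumerate(a):
            S[i][i + j] = Fraction(c)
    for i in range(n):                    # n shifted rows of g's coefficients
        for j, c in enumerate(b):
            S[m + i][i + j] = Fraction(c)
    return S

def det(M):
    M = [row[:] for row in M]
    n, sign, d = len(M), 1, Fraction(1)
    for col in range(n):                  # Gaussian elimination over Q
        piv = next((r for r in range(col, n) if M[r][col] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != col:
            M[col], M[piv] = M[piv], M[col]
            sign = -sign
        d *= M[col][col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= factor * M[col][c]
    return sign * d

# f = (x-1)(x-2) and g = (x-1)(x-3) share the root 1:
r_common = det(sylvester([1, -3, 2], [1, -4, 3]))
# f = (x-1)(x-2) and g = (x-3)(x-4) have no common root:
r_coprime = det(sylvester([1, -3, 2], [1, -7, 12]))
assert r_common == 0
assert r_coprime != 0
```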
R(f, g) = a_0^m ∏_{i=1}^{n} g(x_i) = a_0^m ∏_{i=1}^{n} (b_0 x_i^m + b_1 x_i^{m−1} + ... + b_m),

where x_1, ..., x_n are the roots of f.
43.3. Bezout's matrix. The size of Sylvester's matrix is rather large and, therefore, computing the resultant with its help is inconvenient. There are many
ways to diminish the order of the matrix used to compute the resultant. For example, we can replace the polynomial g by the remainder of its division by f (see
Problem 43.1).
There are other ways to diminish the order of the matrix used for the computations.
Suppose that m = n.
Let us express Sylvester's matrix in the form (A1 A2; B1 B2), where the A_i, B_i are
square matrices of order n. It is easy to verify that

              | c_0  c_1  ...  c_{n−1} |
A1 B1 = B1 A1 = | 0    c_0  ...  c_{n−2} |,  where c_k = Σ_{i=0}^{k} a_i b_{k−i};
              | ...            ...     |
              | 0    0    ...  c_0     |

hence,

|  I    0  | | A1  A2 |   | A1       A2        |
| −B1   A1 | | B1  B2 | = |  0   A1 B2 − B1 A2 |.

For example, for n = 4,

            | c_04  c_14        c_24        c_34 |
A1 B2 − B1 A2 = | c_03  c_04 + c_13  c_14 + c_23  c_24 |,  where c_pq = a_p b_q − a_q b_p.
            | c_02  c_03 + c_12  c_04 + c_13  c_14 |
            | c_01  c_02        c_03        c_04 |

Let J = antidiag(1, ..., 1), i.e., J = |a_ij|_1^n, where a_ij = 1 for i + j = n + 1 and
a_ij = 0 otherwise. Then the matrix Z = |w_ij|_1^n J, where |w_ij|_1^n = A1 B2 − B1 A2,
is symmetric. It is called the Bezoutian or Bezout's
matrix of f and g.
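The identities above can be verified numerically. The following Python sketch (an illustration, not from the book; it assumes the block splitting of Sylvester's matrix described in the text, with A1 the upper-triangular Toeplitz block) checks that A1 B1 = B1 A1 and that (A1 B2 − B1 A2) J is symmetric for a random quartic pair:

```python
# Blocks of Sylvester's matrix for m = n = 4 and the symmetry of the Bezoutian.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_sub(A, B):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

n = 4
a = [3, 1, 4, 1, 5]       # coefficients a0..a4 of f
b = [2, 7, 1, 8, 2]       # coefficients b0..b4 of g

A1 = [[a[j - i] if 0 <= j - i <= n else 0 for j in range(n)] for i in range(n)]
A2 = [[a[n + j - i] if 0 <= n + j - i <= n else 0 for j in range(n)] for i in range(n)]
B1 = [[b[j - i] if 0 <= j - i <= n else 0 for j in range(n)] for i in range(n)]
B2 = [[b[n + j - i] if 0 <= n + j - i <= n else 0 for j in range(n)] for i in range(n)]

commute = (mat_mul(A1, B1) == mat_mul(B1, A1))   # triangular Toeplitz blocks commute
W = mat_sub(mat_mul(A1, B2), mat_mul(B1, A2))
Z = [row[::-1] for row in W]                     # Z = W * J, J = antidiag(1, ..., 1)
symmetric = all(Z[i][j] == Z[j][i] for i in range(n) for j in range(n))
assert commute
assert symmetric
```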
43.4. Barnett's matrix. Let us describe one more way to diminish the order of
the matrix used to compute the resultant ([Barnett, 1971]). For simplicity, let us assume
that a_0 = 1, i.e., f(x) = x^n + a_1 x^{n−1} + ... + a_n and g(x) = b_0 x^m + b_1 x^{m−1} + ... + b_m.
To f and g assign Barnett's matrix R = g(A), where

    |  0      1        0       ...  0    |
    |  0      0        1       ...  0    |
A = |  ...                     ...  ...  |.
    |  0      0        0       ...  1    |
    | −a_n   −a_{n−1}  −a_{n−2} ... −a_1 |
r_1 = (b_m, b_{m−1}, ..., b_1, b_0, 0, ..., 0)  for m < n;
r_1 = (d_n, ..., d_1)                          for m = n,
Proof. By Theorem 43.2, R(f, f′) = a_0^{n−1} ∏_i f′(x_i).
It is easy to verify that if x_i is a root of f, then f′(x_i) = a_0 ∏_{j≠i} (x_i − x_j).
Therefore,

R(f, f′) = a_0^{2n−1} ∏_{j≠i} (x_i − x_j) = (−1)^{n(n−1)/2} a_0^{2n−1} ∏_{i<j} (x_i − x_j)^2.
           | a_{n−1,0}  ...  a_{n−1,n−1} |
R(f, g) = a_0^m |  ...             ...        |.
           | a_{00}     ...  a_{0,n−1}   |
43.3. The characteristic polynomials of matrices A and B of size n×n and m×m
are equal to f and g, respectively. Prove that the resultant of the polynomials f
and g is equal to the determinant of the operator X ↦ AX − XB in the space of
matrices of size n × m.
43.4. Let α_1, ..., α_n be the roots of a polynomial f(x) = Σ_{i=0}^{n} a_i x^{n−i} and
s_k = α_1^k + ... + α_n^k. Prove that D(f) = a_0^{2n−2} det S, where

    | s_0      s_1  ...  s_{n−1}  |
S = | s_1      s_2  ...  s_n      |.
    | ...                ...      |
    | s_{n−1}  s_n  ...  s_{2n−2} |
There is no standard notation for the generalized inverse of a matrix A. Many authors followed
R. Penrose, who denoted it by A+, which is confusing: it might be mistaken for the Hermitian
conjugate. In the original manuscript of this book Penrose's notation was used. I suggest a more
dynamic and noncontroversial notation approved by the author. Translator.
it follows that

|Ax − b|^2 = |Ax − AA1 b|^2 + |b − AA1 b|^2 ≥ |b − AA1 b|^2

and equality is attained if and only if Ax = AA1 b. If Ax = AA1 b, then

|x|^2 = |A1 b + (I − A1 A)x|^2 = |A1 b|^2 + |x − A1 Ax|^2 ≥ |A1 b|^2

and equality is attained if and only if

x = A1 Ax = A1 AA1 b = A1 b.

Remark. The equality Ax = AA1 b is equivalent to the equality A*Ax =
A*b. Indeed, if Ax = AA1 b, then A*b = A*(A1)*A*b = A*AA1 b = A*Ax,
and if A*Ax = A*b, then
(1) AXB = C

has a solution if and only if AA1 CB1 B = C. The solutions of (1) are of the
form

X = A1 CB1 + Y − A1 AY BB1, where Y is an arbitrary matrix.

Proof. If AXB = C, then

C = AXB = AA1 (AXB)B1 B = AA1 CB1 B.

Conversely, if C = AA1 CB1 B, then X_0 = A1 CB1 is a particular
solution of the equation AXB = C.
It remains to demonstrate that the general solution of the equation AXB = 0 is of
the form X = Y − A1 AY BB1. Clearly, A(Y − A1 AY BB1)B = 0. On
the other hand, if AXB = 0, then X = Y − A1 AY BB1, where Y = X.
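The solvability criterion and the particular solution X_0 can be illustrated numerically. In the Python sketch below (an illustration, not from the book) A1 and B1 stand for the Moore-Penrose inverses of A and B, written down by hand for two simple rank-one matrices:

```python
# AXB = C is solvable iff A*A1*C*B1*B = C, and X0 = A1*C*B1 is then a solution.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def prod(*Ms):
    out = Ms[0]
    for M in Ms[1:]:
        out = mat_mul(out, M)
    return out

A  = [[1, 0], [0, 0]]
A1 = [[1, 0], [0, 0]]          # generalized (Moore-Penrose) inverse of A
B  = [[0, 0], [0, 2]]
B1 = [[0, 0], [0, 0.5]]        # generalized (Moore-Penrose) inverse of B

C_good = [[0, 5], [0, 0]]      # lies in the range of X -> A X B
assert prod(A, A1, C_good, B1, B) == C_good   # criterion holds
X0 = prod(A1, C_good, B1)
assert prod(A, X0, B) == C_good               # X0 solves A X B = C

C_bad = [[0, 0], [0, 1]]       # criterion fails: no solution exists
assert prod(A, A1, C_bad, B1, B) != C_bad
```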
Remark. The notion of the generalized inverse matrix appeared independently in
the papers [Moore, 1935] and [Penrose, 1955]. The equivalence of Moore's and
Penrose's definitions was demonstrated in the paper [Rado, 1956].
Proof (following [Flanders, Wimmer, 1977]). a) Let K = (P Q; R S), where
P ∈ M_{m,m} and S ∈ M_{n,n}. First, suppose that the matrices from the theorem are
similar. For i = 0, 1 consider the maps φ_i: M_{m+n,m+n} → M_{m+n,m+n} given by the formulas

φ_0(K) = (A 0; 0 B)K − K(A 0; 0 B) = (AP − PA, AQ − QB; BR − RA, BS − SB),
φ_1(K) = (A C; 0 B)K − K(A 0; 0 B) = (AP + CR − PA, AQ + CS − QB; BR − RA, BS − SB).

The equations FK = KF and (GFG^{−1})K′ = K′F have isomorphic spaces of solutions; this isomorphism is given by the formula K = G^{−1}K′. Hence, dim Ker φ_0 =
dim Ker φ_1. If K ∈ Ker φ_i, then BR = RA and BS = SB. Therefore, we can
consider the space

V = {(R, S) ∈ M_{n,m+n} | BR = RA, BS = SB}

and determine the projection μ_i: Ker φ_i → V, where μ_i(K) = (R, S). It is easy
to verify that

Ker μ_i = {(P Q; 0 0) | AP = PA, AQ = QB}.

For μ_0 this is obvious and for μ_1 it follows from the fact that CR = 0 and CS = 0
since R = 0 and S = 0.
Let us prove that Im μ_0 = Im μ_1. If (R, S) ∈ V, then (0 0; R S) ∈ Ker φ_0. Hence,
Im μ_0 = V and, therefore, Im μ_1 ⊂ Im μ_0. On the other hand,

dim Im μ_0 + dim Ker μ_0 = dim Ker φ_0 = dim Ker φ_1 = dim Im μ_1 + dim Ker μ_1.

The matrix (I 0; 0 I) belongs to Ker φ_0 and, therefore, (0, I) ∈ Im μ_0 = Im μ_1.
Hence, there is a matrix of the form (P Q; 0 I) in Ker φ_1. Thus, AQ + CS − QB = 0,
where S = I. Therefore, X = −Q is a solution of the equation AX − XB = C.
Conversely, if X is a solution of this equation, then

(A 0; 0 B)(I X; 0 I) = (A AX; 0 B) = (A C + XB; 0 B) = (I X; 0 I)(A C; 0 B)

and, therefore,

(I −X; 0 I)(A 0; 0 B)(I X; 0 I) = (A C; 0 B).
b) First, suppose that the indicated matrices are of the same rank. For i = 0, 1
consider the maps ψ_i: M_{m+n,2(m+n)} → M_{m+n,m+n} given by the formulas

ψ_0(U, W) = (A 0; 0 B)U − W(A 0; 0 B) = (AU11 − W11 A, AU12 − W12 B; BU21 − W21 A, BU22 − W22 B),
ψ_1(U, W) = (A C; 0 B)U − W(A 0; 0 B),

where

U = (U11 U12; U21 U22) and W = (W11 W12; W21 W22).

The spaces of solutions of the equations FU = WF and (GFG^{−1})U′ = W′F are isomorphic, and this isomorphism is given by the formulas U = G^{−1}U′ and W = G^{−1}W′.
Hence, dim Ker ψ_0 = dim Ker ψ_1.
Consider the space

Z = {(U21, U22, W21, W22) | BU21 = W21 A, BU22 = W22 B}

and define a map μ_i: Ker ψ_i → Z, where μ_i(U, W) = (U21, U22, W21, W22). Then
Im μ_1 ⊂ Im μ_0 = Z and Ker μ_1 = Ker μ_0. Therefore, Im μ_1 = Im μ_0. The pair
(U, W), where U = W = (I 0; 0 I), belongs to Ker ψ_0. Hence, Ker ψ_1 also contains
an element for which U22 = I. For this element the equality AU12 + CU22 = W12 B
is equivalent to the equality AU12 − W12 B = −C.
Conversely, if a solution X, Y of the given equation exists, then

(I −Y; 0 I)(A 0; 0 B)(I X; 0 I) = (A AX − YB; 0 B) = (A C; 0 B).
Problems
44.1. Prove that if C = AX = Y B, then there exists a matrix Z such that
C = AZB.
44.2. Prove that any solution of the system of matrix equations AX = 0, XB = 0
is of the form X = (I − A1 A)Y (I − BB1), where Y is an arbitrary matrix.
44.3. Prove that the system of equations AX = C, XB = D has a solution if and
only if each of the equations AX = C and XB = D has a solution and AD = CB.
45. Hankel matrices and rational functions
Consider a proper rational function

R(z) = (a_1 z^{m−1} + ... + a_m) / (b_0 z^m + b_1 z^{m−1} + ... + b_m),

where b_0 ≠ 0. Expanding R(z) in a series R(z) = s_0 z^{−1} + s_1 z^{−2} + ..., we get

(1)  b_0 s_0 = a_1,
     b_0 s_1 + b_1 s_0 = a_2,
     b_0 s_2 + b_1 s_1 + b_2 s_0 = a_3,
     ..................
     b_0 s_{m−1} + ... + b_{m−1} s_0 = a_m
(2)  s_q = λ_1 s_{q−1} + ... + λ_m s_{q−m},

    | s_0  s_1  s_2  ... |
S = | s_1  s_2  s_3  ... |.
    | s_2  s_3  s_4  ... |
    | ...  ...  ...      |
A matrix of such a form is called a Hankel matrix. Relation (2) means that the
(m + 1)-th row of S is a linear combination of the first m rows (with coefficients
λ_1, ..., λ_m). If we delete the first element of each of these rows, we see that the
(m + 2)-th row of S is a linear combination of the m rows preceding it and, therefore,
a linear combination of the first m rows. Continuing these arguments, we deduce
that any row of the matrix S is expressed in terms of its first m rows, i.e., rank S ≤
m.
Thus, if the series

(3) R(z) = s_0 z^{−1} + s_1 z^{−2} + s_2 z^{−3} + ...

corresponds to a rational function R(z), then the Hankel matrix S constructed from
s_0, s_1, ... is of finite rank.
Now, suppose that the Hankel matrix S is of finite rank m. Let us construct from
S a series (3). Let us prove that this series corresponds to a rational function. The
first m + 1 rows of S are linearly dependent and, therefore, there exists a number
h ≤ m such that the (h + 1)-st row can be expressed linearly in terms of the first h
rows. As has been demonstrated, in this case all rows of S are expressed in terms
of the first h rows. Hence, h = m. Thus, the numbers s_i are connected by relation
(2) for all q ≥ m. The coefficients λ_i in this relation enable us to determine the
numbers b_0 = 1, b_1 = −λ_1, ..., b_m = −λ_m. Next, with the help of relation (1) we can
determine the numbers a_1, ..., a_m. For the numbers a_i and b_j determined in this
way we have

s_0/z + s_1/z^2 + ... = (a_1 z^{m−1} + ... + a_m) / (b_0 z^m + ... + b_m),

i.e., R(z) is a rational function.
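The finite-rank criterion is easy to see on a concrete example. For R(z) = 1/(z − 2) the series coefficients are s_k = 2^k, and the Hankel matrix has rank 1. The following Python sketch (an illustration, not from the book) checks this:

```python
# For R(z) = 1/(z - 2) = z^{-1} + 2 z^{-2} + 4 z^{-3} + ... we have s_k = 2^k.
# Every row of the Hankel matrix S is twice the previous one, so rank S = 1.

s = [2 ** k for k in range(12)]
N = 6
S = [[s[i + j] for j in range(N)] for i in range(N)]

rank_one = all(S[i] == [2 * x for x in S[i - 1]] for i in range(1, N))
assert rank_one        # S has finite rank, as the theorem predicts
```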
Remark. Matrices of finite size of the form

| s_0      s_1      ...  s_n     |
| s_1      s_2      ...  s_{n+1} |
| ...               ...  ...     |
| s_n      s_{n+1}  ...  s_{2n}  |
are also sometimes referred to as Hankel matrices. Let J = antidiag(1, ..., 1), i.e.,
J = |a_ij|_0^n, where a_ij = 1 for i + j = n and a_ij = 0 otherwise. If H is a Hankel matrix, then the
matrix JH is called a Toeplitz matrix; it is of the form

| a_0      a_{−1}   a_{−2}   ...  a_{−n}   |
| a_1      a_0      a_{−1}   ...  a_{−n+1} |
| a_2      a_1      a_0      ...  a_{−n+2} |
| ...                        ...  ...      |
| a_n      a_{n−1}  a_{n−2}  ...  a_0      |
Consider the series

Σ_{k=0}^{∞} A^k / k!.

Let us prove that this series converges. If A and B are square matrices of order
n and |a_ij| ≤ a, |b_ij| ≤ b, then the absolute value of each element of AB does
not exceed nab. Hence, the absolute value of the elements of A^k does not exceed
n^{k−1} a^k = (na)^k / n and, since (1/n) Σ_{k=0}^{∞} (na)^k / k! = (1/n) e^{na}, the series Σ_{k=0}^{∞} A^k / k! converges
to a matrix denoted by e^A = exp A; this matrix is called the exponent of A.
If A_1 = P^{−1}AP, then A_1^k = P^{−1}A^k P. Therefore, exp(P^{−1}AP) = P^{−1}(exp A)P.
Hence, the computation of the exponent of an arbitrary matrix reduces to the
computation of the exponent of its Jordan blocks.
Let J = λI + N be a Jordan block of order n. Then

J^k = (λI + N)^k = Σ_{m=0}^{k} C(k, m) λ^{k−m} N^m.

Hence,

exp(tJ) = Σ_{k=0}^{∞} t^k J^k / k! = Σ_{k,m} t^k C(k, m) λ^{k−m} N^m / k!
        = Σ_{m=0}^{n−1} Σ_{k=m}^{∞} ((λt)^{k−m} / (k − m)!) (t^m / m!) N^m = Σ_{m=0}^{n−1} (t^m / m!) e^{λt} N^m,

since N^m = 0 for m ≥ n.
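The closed form for exp(tJ) can be confirmed by summing the exponential series directly. The following Python sketch (an illustration, not from the book) does this for a 3×3 Jordan block:

```python
import math

# For a 3x3 Jordan block J = lam*I + N, the text gives
#   exp(tJ) = e^{lam t} (I + t N + t^2 N^2 / 2),   N^3 = 0.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

lam, t = 0.7, 1.3
J = [[lam, 1, 0], [0, lam, 1], [0, 0, lam]]

series = [[float(i == j) for j in range(3)] for i in range(3)]   # partial sum, starts at I
term = [row[:] for row in series]                                 # current term (tJ)^k / k!
for k in range(1, 60):
    term = [[t * x / k for x in row] for row in mat_mul(term, J)]
    series = [[x + y for x, y in zip(r1, r2)] for r1, r2 in zip(series, term)]

e = math.exp(lam * t)
closed = [[e, e * t, e * t * t / 2],
          [0.0, e, e * t],
          [0.0, 0.0, e]]

max_err = max(abs(series[i][j] - closed[i][j]) for i in range(3) for j in range(3))
assert max_err < 1e-9
```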
By reducing a matrix A to the Jordan normal form we get the following statement.
46.1.1. Theorem. If the minimal polynomial of A is equal to

(x − λ_1)^{n_1} ... (x − λ_k)^{n_k},

then the elements of e^{At} are of the form p_1(t)e^{λ_1 t} + ... + p_k(t)e^{λ_k t}, where p_i(t) is
a polynomial of degree not greater than n_i − 1.
46.2. Consider a family of matrices X(t) = |x_ij(t)|_1^n whose elements are differentiable functions of t. Let Ẋ(t) = dX(t)/dt be the element-wise derivative of the
matrix-valued function X(t).
46.2.1. Theorem. (XY)˙ = ẊY + XẎ.
Proof. If Z = XY, then z_ij = Σ_k x_ik y_kj; hence, ż_ij = Σ_k ẋ_ik y_kj + Σ_k x_ik ẏ_kj and,
therefore, Ż = ẊY + XẎ.
46.2.2. Theorem. a) (X^{−1})˙ = −X^{−1}ẊX^{−1}.
b) tr(X^{−1}Ẋ) = −tr((X^{−1})˙X).
Proof. a) On the one hand, (X^{−1}X)˙ = İ = 0. On the other hand, (X^{−1}X)˙ =
(X^{−1})˙X + X^{−1}Ẋ. Therefore, (X^{−1})˙X = −X^{−1}Ẋ and (X^{−1})˙ = −X^{−1}ẊX^{−1}.
b) Since tr(X^{−1}X) = n, it follows that

0 = (tr(X^{−1}X))˙ = tr((X^{−1})˙X) + tr(X^{−1}Ẋ).
d(e^{At})/dt = Σ_{k=0}^{∞} d/dt ((tA)^k / k!) = Σ_{k=1}^{∞} k t^{k−1} A^k / k! = A Σ_{k=1}^{∞} (tA)^{k−1} / (k − 1)! = A e^{At}.
Proof. By Problem 46.6 a), (det X)˙ = (det X) tr(ẊX^{−1}). In our case ẊX^{−1} =
A(t). Therefore, the function y(t) = det X(t) satisfies the condition (ln y)˙ = ẏ/y =
tr A(t). Therefore, y(t) = c exp(∫_0^t tr A(s) ds), where c = y(0) = det X(0).
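In the constant-coefficient case this formula reduces to det e^{tA} = exp(t tr A), which is easy to test numerically. The following Python sketch (an illustration, not from the book) computes X(t) = e^{tA} by its series and compares determinants:

```python
import math

# For X(t) = e^{tA} we have X' = A X, X(0) = I, and the formula above gives
#   det X(t) = exp(t * tr A).

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A, t = [[0.4, 1.2], [-0.3, 0.5]], 0.8

X = [[1.0, 0.0], [0.0, 1.0]]
term = [row[:] for row in X]
for k in range(1, 60):                    # X = exp(tA) via the exponential series
    term = [[t * x / k for x in row] for row in mat_mul(term, A)]
    X = [[x + y for x, y in zip(r1, r2)] for r1, r2 in zip(X, term)]

det_X = X[0][0] * X[1][1] - X[0][1] * X[1][0]
err = abs(det_X - math.exp(t * (A[0][0] + A[1][1])))
assert err < 1e-9
```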
Problems
46.1. Let A = (0 t; −t 0). Compute e^A.
46.2. a) Prove that if [A, B] = 0, then eA+B = eA eB .
b) Prove that if e(A+B)t = eAt eBt for all t, then [A, B] = 0.
46.3. Prove that for any unitary matrix U there exists an Hermitian matrix H
such that U = eiH .
46.4. a) Prove that if a real matrix X is skew-symmetric, then eX is orthogonal.
b) Prove that any orthogonal matrix U with determinant 1 can be represented
in the form eX , where X is a real skew-symmetric matrix.
46.5. a) Let A be a real matrix. Prove that det eA = 1 if and only if tr A = 0.
b) Let B be a real matrix and det B = 1. Is there a real matrix A such that
B = eA ?
46.6. a) Prove that

(det A)˙ = tr(Ȧ adj A^T) = (det A) tr(ȦA^{−1}).

b) Let A be an n×n matrix. Prove that tr(A(adj A^T)˙) = (n − 1) tr(Ȧ adj A^T).
46.7 ([Aitken, 1953]). Consider a map F: M_{n,n} → M_{n,n}. Let ∇F(X) =
|∂φ/∂x_ji|_1^n, where φ(X) = tr F(X). Prove that if F(X) = X^m, where m is
an integer, then ∇F(X) = mX^{m−1}.
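The statement of Problem 46.7 can be spot-checked by finite differences. The following Python sketch (an illustration, not from the book) verifies d tr(X^3)/d x_ji = 3(X^2)_ij for a 2×2 matrix:

```python
# Finite-difference check of  d tr(X^3) / d x_ji = 3 (X^2)_ij  for m = 3.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def tr_cube(X):
    return sum(mat_mul(mat_mul(X, X), X)[i][i] for i in range(len(X)))

X = [[0.3, 1.1], [-0.7, 0.4]]
X2 = mat_mul(X, X)
h = 1e-6
max_err = 0.0
for j in range(2):
    for i in range(2):
        Y = [row[:] for row in X]
        Y[j][i] += h                       # perturb the entry x_ji
        numeric = (tr_cube(Y) - tr_cube(X)) / h
        max_err = max(max_err, abs(numeric - 3 * X2[i][j]))
assert max_err < 1e-4
```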
47. Lax pairs and integrable systems
47.1. Consider a system of differential equations

ẋ(t) = f(x(t)),  where f = (f_1, ..., f_n).
(BLB^{−1})˙ = ḂLB^{−1} + BL̇B^{−1} + BL(B^{−1})˙
            = −BALB^{−1} + B(AL − LA)B^{−1} + BLB^{−1}(BA)B^{−1} = 0.
Therefore, the Jordan normal form of L does not depend on t; hence, its eigenvalues
are constants.
Representation of systems of differential equations in the Lax form is an important method for finding first integrals of Hamiltonian systems of differential
equations.
For example, the Euler equations Ṁ = M × ω, which describe the motion of a
solid body with a fixed point, are easy to express in the Lax form. For this we
should take

    |  0    −M_3   M_2 |        |  0    −ω_3   ω_2 |
L = |  M_3   0    −M_1 |,   A = |  ω_3   0    −ω_1 |.
    | −M_2   M_1   0   |        | −ω_2   ω_1   0   |

The first integral of this equation is tr L^2 = −2(M_1^2 + M_2^2 + M_3^2).
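That tr L^2 is a first integral follows from the cyclic property of the trace: if L̇ = [A, L], then (tr L^2)˙ = 2 tr([A, L]L) = 0. The following Python sketch (an illustration, not from the book) checks this identity numerically on the skew-symmetric matrices above:

```python
# If L' = [A, L], then (tr L^2)' = 2 tr([A, L] L) = 0, since tr(ALL) = tr(LAL).

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

M1, M2, M3 = 1.0, 2.0, 3.0
w1, w2, w3 = 0.5, -1.5, 2.5
L = [[0, -M3, M2], [M3, 0, -M1], [-M2, M1, 0]]
A = [[0, -w3, w2], [w3, 0, -w1], [-w2, w1, 0]]

commutator = [[x - y for x, y in zip(r1, r2)]
              for r1, r2 in zip(mat_mul(A, L), mat_mul(L, A))]
dtrL2 = 2 * trace(mat_mul(commutator, L))   # d/dt of tr L^2 along the flow
assert abs(dtrL2) < 1e-9
```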
47.2. A more instructive example is that of the Toda lattice:

ẍ_i = −∂U/∂x_i,  where U = exp(x_1 − x_2) + ... + exp(x_{n−1} − x_n).
This system of equations can be expressed in the Lax form with the following L
and A:

    | b_1  a_1  0    ...  0       |        |  0     a_1   0    ...  0       |
    | a_1  b_2  a_2  ...  0       |        | −a_1   0     a_2  ...  0       |
L = | 0    a_2  b_3  ...  0       |,   A = |  0    −a_2   0    ...  0       |,
    | ...            ...  a_{n−1} |        | ...               ...  a_{n−1} |
    | 0    ...  a_{n−1}   b_n     |        |  0    ... −a_{n−1}     0       |

where 2a_k = exp(½(x_k − x_{k+1})) and 2b_k = −ẋ_k. Indeed, the equation L̇ = [A, L] is
equivalent to the system of equations

ḃ_1 = 2a_1^2, ḃ_2 = 2(a_2^2 − a_1^2), ..., ḃ_n = −2a_{n−1}^2,
ȧ_1 = a_1(b_2 − b_1), ..., ȧ_{n−1} = a_{n−1}(b_n − b_{n−1}).

The equation

ȧ_k = a_k(b_{k+1} − b_k) = a_k (ẋ_k − ẋ_{k+1})/2

implies that ln a_k = ½(x_k − x_{k+1}) + c_k, i.e., a_k = d_k exp(½(x_k − x_{k+1})). Therefore,
the equation ḃ_k = 2(a_k^2 − a_{k−1}^2) is equivalent to the equation

−ẍ_k/2 = 2(d_k^2 exp(x_k − x_{k+1}) − d_{k−1}^2 exp(x_{k−1} − x_k)).

If d_1 = ... = d_{n−1} = ½, we get the required equations.
47.3. The motion of a multidimensional solid body with the inertia matrix J is
described by the equation

(1) Ṁ = [M, Ω], where M = JΩ + ΩJ.

This equation is equivalent to the family of Lax equations with parameter λ:

(M + λJ^2)˙ = [M + λJ^2, Ω + λJ].
The system of equations

ȧ_i = a_i (Σ_{k=1}^{p−1} a_{i+k} − Σ_{k=1}^{p−1} a_{i−k}),

where p ≥ 2 and a_{i+n} = a_i, can also be expressed in the form of a family of Lax
equations depending on a parameter. Such a representation is given in the book
[Bogoyavlenskii, 1991]. Let M = |m_ij|_1^n and A = |a_ij|_1^n, where in every matrix
only n elements m_{i,i+1} = 1 and a_{i,i+1−p} = a_i are nonzero (the indices are considered
modulo n). Consider the equation

(2) (A + λM)˙ = [A + λM, B + λM^p].

If B = Σ_{j=0}^{p−1} M^{p−1−j} A M^j, then [M, B] + [A, M^p] = 0 and, therefore, equation
(2) is equivalent to the equation Ȧ = [A, B]. It is easy to verify that b_ij =
a_{i+p−1,j} + ... + a_{i,j−p+1}. Therefore, b_ij = 0 for i ≠ j and b_i = b_ii = Σ_{k=0}^{p−1} a_{i+k}.
The equation Ȧ = [A, B] is equivalent to the system of equations

ȧ_ij = a_ij(b_i − b_j), where a_ij ≠ 0 only for j = i + 1 − p.
For j = i + 1 − p we get

ȧ_i = a_i (Σ_{k=0}^{p−1} a_{i+k} − Σ_{k=0}^{p−1} a_{j+k}) = a_i (Σ_{k=1}^{p−1} a_{i+k} − Σ_{k=1}^{p−1} a_{i−k}).

The system of equations

ȧ_i = a_i (∏_{k=1}^{p−1} a_{i+k} − ∏_{k=1}^{p−1} a_{i−k})

can be treated similarly with the help of the matrix

    | a_1  0    0    ...  0   |
    | 1    a_2  0    ...  0   |
A = | 0    1    a_3  ...  0   |.
    | ...            ...  ... |
    | 0    ...  0    1    a_n |
Proof. Let us seek A in the form A = (B P; Q^T b), where P and Q are arbitrary
columns of length n − 1 and b is an arbitrary number. Clearly,

det(xI_n − A) = (x − b) det(xI_{n−1} − B) − Q^T adj(xI_{n−1} − B) P

(see Theorem 3.1.3). Let us prove that adj(xI_{n−1} − B) = Σ_{r=0}^{n−2} u_r(x) B^r, where
the polynomials u_0, ..., u_{n−2} form a basis in the space of polynomials of degree not
exceeding n − 2. Let

g(x) = det(xI_{n−1} − B) = x^{n−1} + t_1 x^{n−2} + ...

and φ(x, λ) = (g(x) − g(λ))/(x − λ). Then

(xI_{n−1} − B) φ(x, B) = g(x)I_{n−1} − g(B) = g(x)I_{n−1},

since g(B) = 0 by the Cayley-Hamilton theorem. Therefore,

φ(x, B) = g(x)(xI_{n−1} − B)^{−1} = adj(xI_{n−1} − B).

Besides, since (x^k − λ^k)/(x − λ) = Σ_{s=0}^{k−1} x^{k−1−s} λ^s, we have

φ(x, λ) = Σ_{r=0}^{n−2} t_{n−r−2} Σ_{s=0}^{r} x^{r−s} λ^s = Σ_{s=0}^{n−2} λ^s u_s(x), where u_s(x) = Σ_{r=s}^{n−2} t_{n−r−2} x^{r−s}.

Hence,

Q^T adj(xI_{n−1} − B) P = Σ_{s=0}^{n−2} u_s(x) Q^T B^s P and det(xI_n − A) = (x − b)g(x) − Σ_{s=0}^{n−2} u_s(x) Q^T B^s P,
48.2. Theorem ([Friedland, 1972]). Given all off-diagonal elements in a complex matrix A, it is possible to select diagonal elements x_1, ..., x_n so that the eigenvalues of A are given complex numbers; there are finitely many sets {x_1, ..., x_n}
satisfying this condition.
Proof. Clearly,

det(A + λI) = (x_1 + λ) ... (x_n + λ) + Σ_{k≤n−2} c_{i_1...i_k} (x_{i_1} + λ) ... (x_{i_k} + λ)
            = Σ_k λ^{n−k} σ_k(x_1, ..., x_n) + Σ_{k≤n−2} λ^{n−k} p_k(x_1, ..., x_n),
d_1 ≤ ... ≤ d_n, d_1 + ... + d_k ≥ λ_1 + ... + λ_k,

the matrix

P = (λ_2 − λ_1)^{−1/2} | √(λ_2 − d_1)   −√(d_1 − λ_1) |
                       | √(d_1 − λ_1)    √(λ_2 − d_1) |

is the desired one.
Now, suppose that the statement holds for some n ≥ 2 and consider the sets of
n + 1 numbers. Since λ_1 ≤ d_1 ≤ d_{n+1} ≤ λ_{n+1}, there exists a number j > 1 such
that λ_{j−1} ≤ d_1 ≤ λ_j. Let P_1 be a permutation matrix such that

P_1^T Λ P_1 = diag(λ_1, λ_j, λ_2, ..., λ̂_j, ..., λ_{n+1})

(the hat indicates that λ_j is omitted). It is easy to verify that

λ_1 ≤ min(d_1, λ_1 + λ_j − d_1) ≤ max(d_1, λ_1 + λ_j − d_1) ≤ λ_j.

Therefore, there exists an orthogonal 2×2 matrix Q such that on the diagonal
of the matrix Q^T diag(λ_1, λ_j) Q there stand the numbers d_1 and λ_1 + λ_j − d_1.
Consider the matrix P_2 = (Q 0; 0 I_{n−1}). Clearly,

P_2^T (P_1^T Λ P_1) P_2 = (d_1 b^T; b Λ_1),

where Λ_1 = diag(λ_1 + λ_j − d_1, λ_2, ..., λ̂_j, ..., λ_{n+1}).
The diagonal elements of Λ_1 arranged in increasing order and the numbers
d_2, ..., d_{n+1} satisfy the conditions of the theorem. Indeed,

(1) d_2 + ... + d_k ≥ (k − 1)d_1 ≥ λ_2 + ... + λ_k

for k = 2, ..., j − 1 and

(2) d_2 + ... + d_k = d_1 + ... + d_k − d_1 ≥ λ_1 + ... + λ_k − d_1
    = (λ_1 + λ_j − d_1) + λ_2 + ... + λ_{j−1} + λ_{j+1} + ... + λ_k

for k = j, ..., n + 1. In both cases (1), (2) the right-hand sides of the inequalities,
i.e., λ_2 + ... + λ_k and (λ_1 + λ_j − d_1) + λ_2 + ... + λ_{j−1} + λ_{j+1} + ... + λ_k, are
not less than the sum of the k − 1 minimal diagonal elements of Λ_1. Therefore, there
exists an orthogonal matrix Q_1 such that the diagonal of Q_1^T Λ_1 Q_1 is occupied by
the numbers d_2, ..., d_{n+1}. Let P_3 = (1 0; 0 Q_1); then P = P_1 P_2 P_3 is the desired
matrix.
SOLUTIONS
n
39.1. a) Clearly, AX = i xij 1 and XA = j xij 1 ; therefore, i xij = j xij .
Hence, xij = 0 for i 6= j.
b) By heading a) X = diag(x1 , . . . , xn ). As is easy to verify (N AX)i,i+1 =
i+1 xi+1 and (XN A)i,i+1 = i+1 xi . Hence, xi = xi+1 for i = 1, 2, . . . , n 1.
39.2. It suffices to make use of the result of Problem 39.1.
39.3. Let p_1, ..., p_n be the sums of the elements of the rows of the matrix X and
q_1, ..., q_n the sums of the elements of its columns. Then

     | q_1  ...  q_n |         | p_1  ...  p_1 |
EX = | ...       ... |,   XE = | ...       ... |.
     | q_1  ...  q_n |         | p_n  ...  p_n |

Therefore, EX = XE if and only if

q_1 = ... = q_n = p_1 = ... = p_n.
39.4. The equality AP = PA can be rewritten in the form A = P^{−1}AP. If
P^{−1}AP = |b_ij|_1^n, then b_ij = a_{σ(i)σ(j)}. For any numbers p and q there exists a
permutation σ such that p = σ(q). Therefore, a_qq = b_qq = a_{σ(q)σ(q)} = a_pp, i.e., all
diagonal elements of A are equal. If i ≠ j and p ≠ q, then there exists a permutation
σ such that i = σ(p) and j = σ(q). Hence, a_pq = b_pq = a_{σ(p)σ(q)} = a_ij, i.e., all
off-diagonal elements of A are equal. It follows that

A = αI + β(E − I) = (α − β)I + βE.
39.5. We may assume that A = diag(A_1, ..., A_k), where A_i is a Jordan block.
Let λ_1, ..., λ_k be distinct numbers and B_i the Jordan block corresponding to the
eigenvalue λ_i and of the same size as A_i. Then for B we can take the matrix
diag(B_1, ..., B_k).
39.6. a) For commuting matrices A and B we have

(A + B)^n = Σ_k C(n, k) A^k B^{n−k}.

Let A^m = B^m = 0. If n = 2m − 1, then either k ≥ m or n − k ≥ m; hence,
(A + B)^n = 0.
b) By Theorem 39.2.2 the operators A and B have a common eigenbasis; this
basis is the eigenbasis for the operator A + B.
39.7. Involutions are diagonalizable operators whose diagonal form has ±1 on
the diagonal (see 26.1). Therefore, there exists a basis in which all matrices A_i are
of the form diag(±1, ..., ±1). There are 2^n such matrices.
39.8. Let us decompose the space V into the direct sum of invariant subspaces
V_i such that every operator A_j has on every subspace V_i only one eigenvalue λ_ij.
Consider the diagonal operator D whose restriction to V_i is of the form λ_i I, where all
the numbers λ_i are distinct. For every j there exists an interpolation polynomial f_j
such that f_j(λ_i) = λ_ij for all i (see Appendix 3). Clearly, f_j(D) = A_j.
I A
39.9. It is easy to verify that all matrices of the form
, where A is an
0 I
arbitrary matrix of order m, commute.
(1) ad_A^n(B) = Σ_{i=0}^{n} (−1)^{n−i} C(n, i) A^i B A^{n−i}.

Indeed,

ad_A^{n+1}(B) = Σ_{i=1}^{n+1} (−1)^{n−i+1} C(n, i−1) A^i B A^{n−i+1} + Σ_{i=0}^{n} (−1)^{n−i+1} C(n, i) A^i B A^{n−i+1}
            = Σ_{i=0}^{n+1} (−1)^{n+1−i} C(n+1, i) A^i B A^{n+1−i}.

D^{n+1}(B^n) = D[D^n(B^n)] = n! D[(DB)^n] = n! Σ_{i=0}^{n−1} (DB)^i (D^2 B)(DB)^{n−1−i}.

Clearly,

D^{n+1}(B^{n+1}) = D^{n+1}(B · B^n) = Σ_{i=0}^{n+1} C(n+1, i) (D^i B)(D^{n+1−i}(B^n)).
40.5. First, let us prove the required statement for n = 1. For m = 1 the
statement is clear. It is also obvious that if the statement holds for some m, then

[A^{m+1}, B] = A(A^m B − BA^m) + (AB − BA)A^m
             = mA[A, B]A^{m−1} + [A, B]A^m = (m + 1)[A, B]A^m.

Now, let m > n > 0. Multiplying the equality [A^n, B] = n[A, B]A^{n−1} by mA^{m−n}
from the right we get

m[A^n, B]A^{m−n} = mn[A, B]A^{m−1} = n[A^m, B].
40.6. To the operator ad_A in the space Hom(V, V) there corresponds the operator
L = I ⊗ A − A^T ⊗ I in the space V ⊗ V; cf. 27.5. If A is diagonal with respect to
a basis e_1, ..., e_n, then L is diagonal with respect to the basis e_i ⊗ e_j. Therefore,
Ker L^n = Ker L.
40.7. a) If tr Z = 0, then Z = [X, Y] (see 40.2); hence,

tr(AZ) = tr(AXY) − tr(AYX) = 0.

Therefore, A = λI; cf. Problem 5.1.
b) For any linear function f on the space of matrices there exists a matrix A
such that f(X) = tr(AX). Now, since f(XY) = f(YX), it follows that tr(AXY) =
tr(AYX) and, therefore, A = λI.
41.1. The product of the indicated quaternions is equal to

−(x_1x_2 + y_1y_2 + z_1z_2) + (y_1z_2 − z_1y_2)i + (z_1x_2 − z_2x_1)j + (x_1y_2 − x_2y_1)k.
41.2. Let q = a + v, where a is the real part of the quaternion and v is its
imaginary part. Then

(a + v)^2 = a^2 + 2av + v^2.

By Theorem 41.2.1, v^2 = −v̄v = −|v|^2 ≤ 0. Therefore, the quaternion a^2 + 2av + v^2
is real if and only if av is a real quaternion, i.e., a = 0 or v = 0.
41.3. It follows from the solution of Problem 41.2 that q^2 = −1 if and only if
q = xi + yj + zk, where x^2 + y^2 + z^2 = 1.
41.4. Let the quaternion q = a + v, where a is the real part of q, commute with
any purely imaginary quaternion w. Then (a + v)w = w(a + v) and aw = wa;
hence, vw = wv. Since (vw)‾ = w̄v̄ = wv = vw, we see that vw is a real quaternion. It
remains to notice that if v ≠ 0 and w is not proportional to v, then vw ∉ R.
41.5. Let B = W_1 + W_2 j, where W_1 and W_2 are complex matrices. Then

AB = Z_1W_1 + Z_2 jW_1 + Z_1W_2 j + Z_2 jW_2 j = (Z_1W_1 − Z_2 W̄_2) + (Z_1W_2 + Z_2 W̄_1)j

and

A_c B_c = |  Z_1W_1 − Z_2W̄_2     Z_1W_2 + Z_2W̄_1  |.
          | −Z̄_2W_1 − Z̄_1W̄_2   −Z̄_2W_2 + Z̄_1W̄_1 |
where c is the column (x^{m−1}f(x), ..., f(x), x^{n−1}g(x), ..., g(x))^T. Clearly, if k ≤
n − 1, then x^k g(x) = Σ_i λ_i x^i f(x) + r_k(x), where the λ_i are certain numbers and deg r_k ≤ m − 1.
It follows that by adding linear combinations of the first m elements to the last n
elements of the column c we can reduce this column to the form

    | a_0        ...             |        | a_{n−1,0}  ...  a_{n−1,n−1} |
A = |      ...                   |,   B = | ...             ...         |.
    | 0          ...  a_0        |        | a_{00}     ...  a_{0,n−1}   |
    | 1  α_1  ...  α_1^{n−1} |
V = | ...           ...      |.
    | 1  α_n  ...  α_n^{n−1} |

Hence, det S = (det V)^2 = ∏_{i<j} (α_i − α_j)^2.
44.1. The equations AX = C and YB = C are solvable; therefore, AA1 C = C
and CB1 B = C; see 45.2. It follows that

C = AA1 C = AA1 CB1 B = AZB, where Z = A1 CB1.
44.2. If X is a matrix of size m×n and rank X = r, then X = PQ, where P and
Q are matrices of size m×r and r×n, respectively; cf. 8.2. The spaces spanned
by the columns of the matrices X and P coincide and, therefore, the equation AX = 0
implies AP = 0, which means that P = (I − A1 A)Y_1; cf. 44.2. Similarly, the
equality XB = 0 implies that Q = Y_2(I − BB1). Hence,

X = PQ = (I − A1 A)Y (I − BB1), where Y = Y_1 Y_2.

It is also clear that if X = (I − A1 A)Y (I − BB1), then AX = 0 and XB = 0.
44.3. If AX = C and XB = D, then AD = AXB = CB. Now, suppose
that AD = CB and each of the equations AX = C and XB = D is solvable. In
this case AA1 C = C and DB1 B = D. Therefore, A(A1 C + DB1 −
A1 ADB1) = C and (A1 C + DB1 − A1 ADB1)B = D, i.e.,
X_0 = A1 C + DB1 − A1 ADB1 is the solution of the system of equations
considered.
46.1. Let J = (0 1; −1 0). Then A^2 = −t^2 I, A^3 = −t^3 J, A^4 = t^4 I, A^5 = t^5 J,
etc. Therefore,

e^A = (1 − t^2/2! + t^4/4! − ...)I + (t − t^3/3! + t^5/5! − ...)J = |  cos t   sin t |.
                                                                   | −sin t   cos t |
46.2. a) Newton's binomial formula holds for commuting matrices and,
therefore,

e^{A+B} = Σ_{n=0}^{∞} (A + B)^n / n! = Σ_{n=0}^{∞} Σ_{k=0}^{n} C(n, k) A^k B^{n−k} / n!
        = Σ_{k=0}^{∞} Σ_{n=k}^{∞} (A^k / k!)(B^{n−k} / (n − k)!) = e^A e^B.

b) Since

e^{(A+B)t} = I + (A + B)t + (A^2 + AB + BA + B^2) t^2/2 + ...

and

e^{At} e^{Bt} = I + (A + B)t + (A^2 + 2AB + B^2) t^2/2 + ...,

it follows that

A^2 + AB + BA + B^2 = A^2 + 2AB + B^2, i.e., [A, B] = 0.
|  cos φ   sin φ |
| −sin φ   cos φ |
the eigenvalues of e^A are equal either to e^λ and e^μ or to e^{λ+iμ} and e^{λ−iμ}, where in either
case λ and μ are real numbers. It follows that

B = | −2    0   |
    |  0   −1/2 |

is not the exponent of a real matrix.
46.6. a) Let A_ij be the cofactor of a_ij. Then tr(Ȧ adj A^T) = Σ_{i,j} ȧ_ij A_ij.
Since det A = a_ij A_ij + ..., where the ellipsis stands for the terms that do not
contain a_ij, it follows that

(det A)˙ = ȧ_ij A_ij + a_ij Ȧ_ij + ... = ȧ_ij A_ij + ...,

where the ellipsis stands for the terms that do not contain ȧ_ij. Hence, (det A)˙ =
Σ_{i,j} ȧ_ij A_ij.
b) Since A adj A^T = (det A)I, then tr(A adj A^T) = n det A and, therefore,

n(det A)˙ = tr(Ȧ adj A^T) + tr(A(adj A^T)˙).

It remains to make use of the result of heading a).
46.7. First, suppose that m > 0. Then

(X^m)_ij = Σ_{a,b,...,p,q} x_ia x_ab ... x_pq x_qj,  tr X^m = Σ_{a,b,...,p,q,r} x_ra x_ab ... x_pq x_qr.

Therefore,

∂(tr X^m)/∂x_ji = Σ_{b,...,p,q} x_ib ... x_pq x_qj + ... + Σ_{a,b,...,p} x_ia x_ab ... x_pj = m(X^{m−1})_ij.

Now, suppose that m < 0. Let X^{−1} = |y_ij|_1^n. Then y_ij = X_ji Δ^{−1}, where X_ji is
the cofactor of x_ji in X and Δ = det X. By Jacobi's Theorem (Theorem 2.5.2) we
have

| X_{i_1j_1}  X_{i_1j_2} |             | x_{i_3j_3}  ...  x_{i_3j_n} |                ( i_1 ... i_n )
| X_{i_2j_1}  X_{i_2j_2} | = (−1)^σ Δ  | ...              ...        |,  where  σ =  ( j_1 ... j_n ).
                                       | x_{i_nj_3}  ...  x_{i_nj_n} |

Hence,

X_{i_1j_1} X_{i_2j_2} − X_{i_1j_2} X_{i_2j_1} = Δ ∂X_{i_1j_1}/∂x_{i_2j_2}.

It follows that

∂y_αβ/∂x_ji = (∂X_βα/∂x_ji) Δ^{−1} − X_βα X_ji Δ^{−2} = −X_βi X_jα Δ^{−2},

i.e., ∂y_αβ/∂x_ji = −y_αj y_iβ. Since

(X^m)_ij = Σ_{a,b,...,q} y_ia y_ab ... y_qj   (the number of factors is |m|),

it follows that

∂(tr X^m)/∂x_ji = |m| Σ_{α,β} (X^{m+1})_βα ∂y_αβ/∂x_ji = −|m| Σ_{α,β} (X^{m+1})_βα y_αj y_iβ = m(X^{m−1})_ij.
APPENDIX
f(x + 1) = ((x + 1)^p − 1) / ((x + 1) − 1) = x^{p−1} + C(p, 1) x^{p−2} + ... + C(p, p−1).
A.3. Theorem. Suppose the numbers

y_1, y_1^{(1)}, ..., y_1^{(k_1−1)}, ..., y_n, y_n^{(1)}, ..., y_n^{(k_n−1)}

are given, and let m = Σ_{j=1}^{n} k_j − 1. Then there exists a polynomial H_m(x) of
degree not greater than m such that H_m^{(i)}(x_j) = y_j^{(i)}.
Let ω_n(x) = (x − x_1) ... (x − x_n). Take an arbitrary polynomial H_{m−n} of degree not
greater than m − n and assign to it the polynomial H_m(x) = L_n(x) + ω_n(x)H_{m−n}(x).
It is clear that H_m(x_j) = y_j for any polynomial H_{m−n}. Besides,

H′_m(x) = L′_n(x) + ω′_n(x)H_{m−n}(x) + ω_n(x)H′_{m−n}(x),

i.e., H′_m(x_j) = L′_n(x_j) + ω′_n(x_j)H_{m−n}(x_j). Since ω′_n(x_j) ≠ 0, at the points where
the values of H′_m(x_j) are given, we may determine the corresponding values of
H_{m−n}(x_j). Further,

H″_m(x_j) = L″_n(x_j) + ω″_n(x_j)H_{m−n}(x_j) + 2ω′_n(x_j)H′_{m−n}(x_j).
Therefore, at the points where the values of H″_m(x_j) are given we can determine the
corresponding values of H′_{m−n}(x_j), etc. Thus, our problem reduces to the construction of a polynomial H_{m−n}(x) of degree not greater than m − n for which
H^{(i)}_{m−n}(x_j) = z_j^{(i)} for i = 0, ..., k_j − 2 (if k_j = 1, then there are no restrictions on the
values of H_{m−n} and its derivatives at x_j). It is also clear that m − n = Σ(k_j − 1) − 1.
After k − 1 similar operations it remains to construct Lagrange's interpolation
polynomial.
$$A = \Bigl\{ \sum z_{i_1 \ldots i_n} \alpha_1^{i_1} \cdots \alpha_n^{i_n} \,\Big|\, z_{i_1 \ldots i_n} \in \mathbb{C} \Bigr\} = \mathbb{C}[\alpha_1, \ldots, \alpha_n].$$
Further, let $A_0 = \mathbb{C}$ and $A_s = \mathbb{C}[\alpha_1, \ldots, \alpha_s]$. Then $A_{s+1} = \{\sum a_i \alpha_{s+1}^i \mid a_i \in A_s\} = A_s[\alpha_{s+1}]$. Let us prove by induction on $s$ that there exists a ring homomorphism $f: A_s \to \mathbb{C}$ (which sends 1 to 1). For $s = 0$ the statement is obvious. Now, let us show how to construct a homomorphism $g: A_{s+1} \to \mathbb{C}$ from the homomorphism $f: A_s \to \mathbb{C}$. For this let us consider two cases.
a) The element $x = \alpha_{s+1}$ is transcendental over $A_s$. Then for any $\lambda \in \mathbb{C}$ there is determined a homomorphism $g$ such that $g(a_n x^n + \cdots + a_0) = f(a_n)\lambda^n + \cdots + f(a_0)$. Setting $\lambda = 0$ we get a homomorphism $g$ such that $g(1) = 1$.
b) The element $x = \alpha_{s+1}$ is algebraic over $A_s$, i.e., $b_m x^m + b_{m-1}x^{m-1} + \cdots + b_0 = 0$ for certain $b_i \in A_s$. Then for all $\lambda \in \mathbb{C}$ such that $f(b_m)\lambda^m + \cdots + f(b_0) = 0$ there is determined a homomorphism $g(\sum a_k x^k) = \sum f(a_k)\lambda^k$ which sends 1 to 1.
As a result we get a homomorphism $h: A \to \mathbb{C}$ such that $h(1) = 1$. It is also clear that $h^{-1}(0)$ is an ideal and there are no nontrivial ideals in the field $A$. Hence, $h$ is a monomorphism. Since $A_0 = \mathbb{C} \subset A$ and the restriction of $h$ to $A_0$ is the identity map, $h$ is an isomorphism.
Thus, we may assume that $\alpha_i \in \mathbb{C}$. The projection $p$ maps the polynomial $f_i(x_1, \ldots, x_n) \in K$ to $f_i(\alpha_1, \ldots, \alpha_n) \in \mathbb{C}$. Since $f_1, \ldots, f_r \in I$, then $p(f_i) = 0 \in \mathbb{C}$. Therefore, $f_i(\alpha_1, \ldots, \alpha_n) = 0$. Contradiction.
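The evaluation map of case b) can be made concrete in a toy setting (all names and data here are illustrative assumptions, not from the text): take the coefficients already complex, $f$ the identity, and $\lambda = \sqrt{2}$ a root of the relation $x^2 - 2 = 0$.

```python
import math

# Case b) of the proof: g(sum a_k x^k) = sum f(a_k) * lam**k, where lam is a
# root of the algebraic relation; here f is the identity and lam = sqrt(2).
lam = math.sqrt(2.0)
g = lambda coeffs: sum(a * lam ** k for k, a in enumerate(coeffs))

p = [1.0, 2.0]                # 1 + 2x
q = [3.0, 0.0, 1.0]           # 3 + x^2
pq = [3.0, 6.0, 1.0, 2.0]     # (1 + 2x)(3 + x^2) = 3 + 6x + x^2 + 2x^3
assert abs(g(pq) - g(p) * g(q)) < 1e-9     # g respects multiplication
assert abs(g([-2.0, 0.0, 1.0])) < 1e-9     # the relation x^2 - 2 maps to 0
```

Sending the defining relation to 0 is exactly what makes $g$ well defined on $A_s[x]$ modulo that relation.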
A.5. Theorem. Polynomials $f_i(x_1, \ldots, x_n) = x_i^{m_i} + P_i(x_1, \ldots, x_n)$, where $i = 1, \ldots, n$, are such that $\deg P_i < m_i$; let $I(f_1, \ldots, f_n)$ be the ideal generated by $f_1, \ldots, f_n$.
a) Let $P(x_1, \ldots, x_n)$ be a nonzero polynomial of the form $\sum a_{i_1 \ldots i_n} x_1^{i_1} \cdots x_n^{i_n}$, where $i_k < m_k$ for all $k = 1, \ldots, n$. Then $P \notin I(f_1, \ldots, f_n)$.
b) The system of equations $x_i^{m_i} + P_i(x_1, \ldots, x_n) = 0$ ($i = 1, \ldots, n$) is always solvable over $\mathbb{C}$ and the number of solutions is finite.
Proof. Substituting the polynomial $(f_i - P_i)^{t_i} x_i^{q_i}$ instead of $x_i^{m_i t_i + q_i}$, where $0 \le t_i$ and $0 \le q_i < m_i$, we see that any polynomial $Q(x_1, \ldots, x_n)$ can be represented in the form
$$Q(x_1, \ldots, x_n) = Q^*(x_1, \ldots, x_n, f_1, \ldots, f_n) = \sum a_{js} x_1^{j_1} \cdots x_n^{j_n} f_1^{s_1} \cdots f_n^{s_n},$$
where $j_k < m_k$. Substituting $x_i^{m_i} + P_i(x_1, \ldots, x_n)$ instead of $f_i$ in any nonzero polynomial $Q^*(x_1, \ldots, x_n, f_1, \ldots, f_n)$ we get a nonzero polynomial $\widetilde{Q}(x_1, \ldots, x_n)$. Among the terms of the polynomial $Q^*$, let us select the one for which the sum $(s_1 m_1 + j_1) + \cdots + (s_n m_n + j_n) = m$ is maximal. Clearly, $\deg \widetilde{Q} \le m$. Let us compute the coefficient of the monomial $x_1^{s_1 m_1 + j_1} \cdots x_n^{s_n m_n + j_n}$ in $\widetilde{Q}$. Since the sum
$$(s_1 m_1 + j_1) + \cdots + (s_n m_n + j_n)$$
is maximal, this monomial can only come from the monomial $x_1^{j_1} \cdots x_n^{j_n} f_1^{s_1} \cdots f_n^{s_n}$. Therefore, the coefficients of these two monomials are equal and $\deg \widetilde{Q} = m$.
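For $n = 1$ the representation $Q = \sum a_{js} x^j f^s$ is just division with remainder by $f$. A small sketch (the concrete $f(x) = x^2 + x$, i.e. $m = 2$ and $P = x$, is an arbitrary choice for illustration):

```python
def polydiv(num, den):
    """Euclidean division of coefficient lists (lowest degree first)."""
    num = num[:]
    quot = [0.0] * max(len(num) - len(den) + 1, 1)
    for i in range(len(num) - len(den), -1, -1):
        c = num[i + len(den) - 1] / den[-1]
        quot[i] = c
        for j, d in enumerate(den):
            num[i + j] -= c * d
    return quot, num[:len(den) - 1]

f = [0.0, 1.0, 1.0]                     # f(x) = x + x^2  (m = 2, P = x)
Q = [0.0, 0.0, 0.0, 0.0, 0.0, 1.0]      # Q(x) = x^5
q, r = polydiv(Q, f)                    # Q = q*f + r with deg r < m

ev = lambda p, x: sum(c * x ** k for k, c in enumerate(p))
for x in (0.3, 1.7, -2.0):
    assert abs(ev(Q, x) - (ev(q, x) * ev(f, x) + ev(r, x))) < 1e-6
```

Iterating the division on the quotient yields the full expansion in powers of $f$ with "digit" coefficients of degree below $m$.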
REFERENCES
221
Recommended literature
Bellman R., Introduction to Matrix Analysis, McGraw-Hill, New York, 1960.
Crowe M. J., A History of Vector Analysis, Notre Dame, London, 1967.
Gantmakher F. R., The Theory of Matrices, I, II, Chelsea, New York, 1959.
Gelfand I. M., Lectures on Linear Algebra, Interscience Tracts in Pure and Applied Math., New
York, 1961.
Greub W. H., Linear Algebra, Springer-Verlag, Berlin, 1967.
Greub W. H., Multilinear Algebra, Springer-Verlag, Berlin, 1967.
Halmos P. R., Finite-Dimensional Vector Spaces, Van Nostrand, Princeton, 1958.
Horn R. A., Johnson Ch. R., Matrix Analysis, Cambridge University Press, Cambridge, 1986.
Kostrikin A. I., Manin Yu. I., Linear Algebra and Geometry, Gordon & Breach, N.Y., 1989.
Marcus M., Minc H., A Survey of Matrix Theory and Matrix Inequalities, Allyn and Bacon,
Boston, 1964.
Muir T., Metzler W. H., A Treatise on the History of Determinants, Dover, New York, 1960.
Postnikov M. M., Lectures on Geometry. 2nd Semester. Linear Algebra, Nauka, Moscow, 1986. (Russian)
Postnikov M. M., Lectures on Geometry. 5th Semester. Lie Groups and Lie Algebras, Mir, Moscow, 1986.
Shilov G., Theory of Linear Spaces, Prentice Hall Inc., 1961.
References
Adams J. F., Vector fields on spheres, Ann. Math. 75 (1962), 603–632.
Afriat S. N., On the latent vectors and characteristic values of products of pairs of symmetric idempotents, Quart. J. Math. 7 (1956), 76–78.
Aitken A. C., A note on trace-differentiation and the Ω-operator, Proc. Edinburgh Math. Soc. 10 (1953), 1–4.
Albert A. A., On the orthogonal equivalence of sets of real symmetric matrices, J. Math. and Mech. 7 (1958), 219–235.
Aupetit B., An improvement of Kaplansky's lemma on locally algebraic operators, Studia Math. 88 (1988), 275–278.
Barnett S., Matrices in control theory, Van Nostrand Reinhold, London, 1971.
Bellman R., Notes on matrix theory IV, Amer. Math. Monthly 62 (1955), 172–173.
Bellman R., Hoffman A., On a theorem of Ostrowski and Taussky, Arch. Math. 5 (1954), 123–127.
Berger M., Géométrie, vol. 4 (Formes quadratiques, quadriques et coniques), CEDIC/Nathan, Paris, 1977.
Bogoyavlenskii O. I., Solitons that flip over, Nauka, Moscow, 1991. (Russian)
Chan N. N., Kim-Hung Li, Diagonal elements and eigenvalues of a real symmetric matrix, J. Math. Anal. and Appl. 91 (1983), 562–566.
Cullen C. G., A note on convergent matrices, Amer. Math. Monthly 72 (1965), 1006–1007.
Djoković D. Ž., On the Hadamard product of matrices, Math. Z. 86 (1964), 395.
Djoković D. Ž., Product of two involutions, Arch. Math. 18 (1967), 582–584.
Djoković D. Ž., A determinantal inequality for projectors in a unitary space, Proc. Amer. Math. Soc. 27 (1971), 19–23.
Drazin M. A., Dungey J. W., Gruenberg K. W., Some theorems on commutative matrices, J. London Math. Soc. 26 (1951), 221–228.
Drazin M. A., Haynsworth E. V., Criteria for the reality of matrix eigenvalues, Math. Z. 78 (1962), 449–452.
Everitt W. N., A note on positive definite matrices, Proc. Glasgow Math. Assoc. 3 (1958), 173–175.
Farahat H. K., Ledermann W., Matrices with prescribed characteristic polynomials, Proc. Edinburgh Math. Soc. 11 (1958), 143–146.
Flanders H., On spaces of linear transformations with bounded rank, J. London Math. Soc. 37 (1962), 10–16.
Flanders H., Wimmer H. K., On matrix equations AX − XB = C and AX − YB = C, SIAM J. Appl. Math. 32 (1977), 707–710.
Franck P., Sur la meilleure approximation d'une matrice donnée par une matrice singulière, C. R. Ac. Sc. (Paris) 253 (1961), 1297–1298.
Frank W. M., A bound on determinants, Proc. Amer. Math. Soc. 16 (1965), 360–363.
Fregus G., A note on matrices with zero trace, Amer. Math. Monthly 73 (1966), 630–631.
Friedland Sh., Matrices with prescribed off-diagonal elements, Israel J. Math. 11 (1972), 184–189.
Gibson P. M., Matrix commutators over an algebraically closed field, Proc. Amer. Math. Soc. 52 (1975), 30–32.
Green C., A multiple exchange property for bases, Proc. Amer. Math. Soc. 39 (1973), 45–50.
Greenberg M. J., Note on the Cayley-Hamilton theorem, Amer. Math. Monthly 91 (1984), 193–195.
Grigoriev D. Yu., Algebraic complexity of computing a family of bilinear forms, J. Comp. Math. and Math. Phys. 19 (1979), 93–94. (Russian)
Haynsworth E. V., Applications of an inequality for the Schur complement, Proc. Amer. Math. Soc. 24 (1970), 512–516.
Hsu P. L., On symmetric, orthogonal and skew-symmetric matrices, Proc. Edinburgh Math. Soc. 10 (1953), 37–44.
Jacob H. G., Another proof of the rational decomposition theorem, Amer. Math. Monthly 80 (1973), 1131–1134.
Kahane J., Grassmann algebras for proving a theorem on Pfaffians, Linear Algebra and Appl. 4 (1971), 129–139.
Kleinecke D. C., On operator commutators, Proc. Amer. Math. Soc. 8 (1957), 535–536.
Lanczos C., Linear systems in self-adjoint form, Amer. Math. Monthly 65 (1958), 665–679.
Majindar K. N., On simultaneous Hermitian congruence transformations of matrices, Amer. Math. Monthly 70 (1963), 842–844.
Manakov S. V., A remark on integration of the Euler equation for an N-dimensional solid body, Funkts. Analiz i ego prilozh. 10 n. 4 (1976), 93–94. (Russian)
Marcus M., Minc H., On two theorems of Frobenius, Pac. J. Math. 60 (1975), 149–151.
[a] Marcus M., Moyls B. N., Linear transformations on algebras of matrices, Can. J. Math. 11 (1959), 61–66.
[b] Marcus M., Moyls B. N., Transformations on tensor product spaces, Pac. J. Math. 9 (1959), 1215–1222.
Marcus M., Purves R., Linear transformations on algebras of matrices: the invariance of the elementary symmetric functions, Can. J. Math. 11 (1959), 383–396.
Massey W. S., Cross products of vectors in higher dimensional Euclidean spaces, Amer. Math. Monthly 90 (1983), 697–701.
Merris R., Equality of decomposable symmetrized tensors, Can. J. Math. 27 (1975), 1022–1024.
Mirsky L., An inequality for positive definite matrices, Amer. Math. Monthly 62 (1955), 428–430.
Mirsky L., On a generalization of Hadamard's determinantal inequality due to Szász, Arch. Math. 8 (1957), 274–275.
Mirsky L., A trace inequality of John von Neumann, Monatshefte für Math. 79 (1975), 303–306.
Mohr E., Einfacher Beweis des verallgemeinerten Determinantensatzes von Sylvester nebst einer Verschärfung, Math. Nachrichten 10 (1953), 257–260.
Moore E. H., General Analysis Part I, Mem. Amer. Phil. Soc. 1 (1935), 197.
Newcomb R. W., On the simultaneous diagonalization of two semi-definite matrices, Quart. Appl. Math. 19 (1961), 144–146.
Nisnevich L. B., Bryzgalov V. I., On a problem of n-dimensional geometry, Uspekhi Mat. Nauk 8 n. 4 (1953), 169–172. (Russian)
Ostrowski A. M., On Schur's Complement, J. Comb. Theory (A) 14 (1973), 319–323.
Penrose R. A., A generalized inverse for matrices, Proc. Cambridge Phil. Soc. 51 (1955), 406–413.
Rado R., Note on generalized inverses of matrices, Proc. Cambridge Phil. Soc. 52 (1956), 600–601.
Ramakrishnan A., A matrix decomposition theorem, J. Math. Anal. and Appl. 40 (1972), 36–38.
Reid M., Undergraduate algebraic geometry, Cambridge Univ. Press, Cambridge, 1988.
Reshetnyak Yu. B., A new proof of a theorem of Chebotarev, Uspekhi Mat. Nauk 10 n. 3 (1955), 155–157. (Russian)
Roth W. E., The equations AX − YB = C and AX − XB = C in matrices, Proc. Amer. Math. Soc. 3 (1952), 392–396.
Schwerdtfeger H., Direct proof of Lanczos' decomposition theorem, Amer. Math. Monthly 67 (1960), 855–860.
Sedláček I., O incidenčních maticích orientovaných grafů, Časop. pěst. mat. 84 (1959), 303–316.
Šidák Z., O počtu kladných prvků v mocninách nezáporné matice, Časop. pěst. mat. 89 (1964), 28–30.
Smiley M. F., Matrix commutators, Can. J. Math. 13 (1961), 353–355.
Strassen V., Gaussian elimination is not optimal, Numerische Math. 13 (1969), 354–356.
Väliaho H., An elementary approach to the Jordan form of a matrix, Amer. Math. Monthly 93 (1986), 711–714.
Zassenhaus H., A remark on a paper of O. Taussky, J. Math. and Mech. 10 (1961), 179–180.
Index
b, 175
Barnett's matrix, 193
basis, orthogonal, 60
basis, orthonormal, 60
Bernoulli numbers, 34
Bezout matrix, 193
Bezoutian, 193
Binet-Cauchy's formula, 21
C, 188
canonical form, cyclic, 83
canonical form, Frobenius, 83
canonical projection, 54
Cauchy, 13
Cauchy determinant, 15
Cayley algebra, 183
Cayley transformation, 107
Cayley-Hamilton's theorem, 81
Chebotarev's theorem, 26
cofactor of a minor, 22
cofactor of an element, 22
commutator, 175
complex structure, 67
complexification of a linear space, 64
complexification of an operator, 65
conjugation, 180
content of a polynomial, 218
convex linear combination, 57
Courant-Fischer's theorem, 100
Cramer's rule, 14
cyclic block, 83
decomposition, Lanczos, 89
decomposition, Schur, 88
definite, nonnegative, 101
derivation, 176
determinant, 13
determinant, Cauchy, 15
diagonalization, simultaneous, 102
double, 180
eigenvalue, 55, 71
eigenvector, 71
Eisenstein's criterion, 219
elementary divisors, 92
equation, Euler, 204
equation, Lax, 203
equation, Volterra, 205
ergodic theorem, 115
Euclid's algorithm, 218
Euler equation, 204
exponent of a matrix, 201
factorisation, Gauss, 90
factorisation, Gram, 90
first integral, 203
form, bilinear, 98
form, Hermitian, 98
form, positive definite, 98
form, quadratic, 98
form, quadratic positive definite, 98
form, sesquilinear, 98
Fredholm alternative, 53
Frobenius block, 83
Frobenius inequality, 58
Frobenius matrix, 15
Frobenius-König's theorem, 164
idempotent, 111
image, 52
inequality, Hadamard, 148
inequality, Oppenheim, 158
inequality, Schur, 151
inequality, Szász, 148
inequality, Weyl, 152, 166
inertia, law of, Sylvester's, 99
inner product, 60
invariant factors, 91
involution, 115
Jacobi, 13
Jacobi identity, 175
Jacobi's theorem, 24
Jordan basis, 77
Jordan block, 76
Jordan decomposition, additive, 79
Jordan decomposition, multiplicative, 79
Jordan matrix, 77
Jordan's theorem, 77
kernel, 52
Kronecker product, 124
Kronecker-Capelli's theorem, 53
L, 175
l'Hospital, 13
Lagrange's interpolation polynomial, 219
Lagrange's theorem, 99
Lanczos's decomposition, 89
Laplace's theorem, 22
Lax differential equation, 203
Lax pair, 203
Leibniz, 13
lemma, Gauss, 218
Lieb's theorem, 133
minor, basic, 20
minor, principal, 20
order, lexicographic, 129
scalar matrix, 11
Schur complement, 28
Schur's inequality, 151
Schur's theorem, 89, 158
Seki Kova, 13
singular values, 153
skew-symmetrization, 126
Smith normal form, 91
snake in a matrix, 164
space, dual, 48
space, Hermitian, 65
space, unitary, 65
spectral radius, 154
Strassen's algorithm, 138
Sylvester's criterion, 99
Sylvester's identity, 25, 130
Sylvester's inequality, 58
Sylvester's law of inertia, 99
Sylvester's matrix, 191
symmetric functions, 30
symmetrization, 126
Szász's inequality, 148
Takakazu, 13
tensor, convolution of, 123
tensor, decomposable, 134
tensor product of operators, 124
tensor product of vector spaces, 122
tensor rank, 137
tensor, simple, 134
tensor, skew-symmetric, 126
tensor, split, 134
tensor, symmetric, 126