Algebras and Representations
MATH 3193
2016
An algebra is a set A which at the same time has the structure of a ring and a vector space in
a compatible way. Thus you can add and multiply elements of A, as well as multiply by a scalar.
One example is the set of matrices of size n over a field.
A representation is an action of an algebra on a vector space, similar to how matrices of size
n act on an n-dimensional vector space. It is often the case that information about the algebra
can be deduced from knowing enough about its representations. An analogy might be that one
can begin to understand a complicated function by computing its derivatives, or more generally a
surface by computing its tangent planes.
There are lots of interesting examples of algebras, with applications to mathematics and
physics. In this course we will introduce some of these algebras, as well as some of the general
theory of algebras and their representations.
Contents
Chapter 1. Quaternions
1.1. Complex Numbers
1.2. Quaternions
1.3. Some Remarks (non-examinable)
Chapter 2. Algebras
2.1. Basic Definition
2.2. Division algebras
2.3. Characteristic of a Field
2.4. Algebras given by a basis and structure coefficients
2.5. Polynomials
2.6. Group algebras
2.7. Matrix algebras
2.8. Endomorphism algebras
2.9. Temperley-Lieb algebras
2.10. Direct Product of Algebras
Chapter 4. Ideals
4.1. Ideals — Definition and Example
4.2. Sums and intersections of ideals
4.3. Products of ideals
Chapter 7. Representations
7.1. Basic Definition
7.2. Examples
7.3. Representations of quotient algebras
7.4. Representations of group algebras
7.5. Equivalence
7.6. Direct product of representations
Chapter 8. Modules
8.1. Basic Definition
8.2. Direct product of modules
Appendix A. Rotations
A.1. Orthogonal Matrices
A.2. Rotations in 2-Space
A.3. Rotations in 3-space
A.4. Rotations in n-space
CHAPTER 1
Quaternions
Since the work of Wessel (1799) and Argand (1806) we think of complex numbers as formal
expressions
z = x + yi with x, y ∈ R,
which we can add and multiply by expanding out, substituting i² = −1, and collecting terms. In
other words, we have a two-dimensional real vector space C with basis {1, i}, on which we have
defined a multiplication
C×C→C
satisfying the following properties for all a, b, c ∈ C and λ ∈ R:
Associative a(bc) = (ab)c.
Unital there exists 1 ∈ C with 1a = a = a1.
Bilinear a(b + λc) = ab + λac and (a + λb)c = ac + λbc.
Commutative ab = ba.
Complex numbers have wonderful properties, for example:
• The conjugate of z = x + yi is z̄ = x − yi, and its absolute value is |z| = √(x² + y²). Thus |z̄| = |z| and zz̄ = |z|². Also, |zw| = |z||w| for all complex numbers z, w.
• Every non-zero complex number z = x + yi has an inverse
z⁻¹ = (x − yi)/(x² + y²) = z̄/|z|².
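These identities are easy to check numerically; the following small Python sketch (an illustration, not part of the notes) verifies the multiplicativity of the absolute value and the formula for the inverse on sample values.

```python
# Sketch: check |zw| = |z||w| and z^{-1} = conj(z)/|z|^2 with Python's complex type.
import cmath, math

z, w = 3 + 4j, 1 - 2j
assert math.isclose(abs(z * w), abs(z) * abs(w))           # |zw| = |z||w|
assert cmath.isclose(1 / z, z.conjugate() / abs(z) ** 2)   # z^{-1} = conj(z) / |z|^2
```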
1.2. Quaternions
i² = −1    ij = k     ik = −j
ji = −k    j² = −1    jk = i
ki = j     kj = −i    k² = −1
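These nine products, together with bilinearity, completely determine the multiplication. As a quick sanity check, here is a small Python sketch (an illustration, not part of the notes) that implements the product directly from the table and verifies, for example, that ij = k, ji = −k, and that the multiplication is associative on the basis (and hence everywhere, by bilinearity).

```python
# Sketch: quaternion arithmetic from the multiplication table above.
# A quaternion a + bi + cj + dk is stored as the coefficient tuple (a, b, c, d).

def qmul(p, q):
    """Multiply two quaternions given as coefficient tuples (a, b, c, d)."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (
        a1*a2 - b1*b2 - c1*c2 - d1*d2,   # real part: i^2 = j^2 = k^2 = -1
        a1*b2 + b1*a2 + c1*d2 - d1*c2,   # i part:    jk = i,  kj = -i
        a1*c2 - b1*d2 + c1*a2 + d1*b2,   # j part:    ki = j,  ik = -j
        a1*d2 + b1*c2 - c1*b2 + d1*a2,   # k part:    ij = k,  ji = -k
    )

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert qmul(i, j) == k and qmul(j, i) == (0, 0, 0, -1)   # ij = k, ji = -k
# Associativity on the basis (and hence, by bilinearity, everywhere):
basis = [(1, 0, 0, 0), i, j, k]
assert all(qmul(qmul(p, q), r) == qmul(p, qmul(q, r))
           for p in basis for q in basis for r in basis)
```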
Remark 1.2.1. This looks a bit like the multiplication rule for cross product except i × i = 0
and not −1. So we can’t use the determinant trick to work out the product.
In other words we have a four-dimensional real vector space H with basis {1, i, j, k}, on which
we have defined a multiplication
H×H→H
Some basic definitions: these are all analogous to the definitions for the complex numbers.
The first property is clear, so we just need to prove the second. Since multiplication is
bilinear and conjugation is linear, we can reduce to the case when p, q ∈ {1, i, j, k}. If
p = 1 or q = 1, then the result is clear, so we may assume that p, q ∈ {i, j, k}.
If p = q, then p² = −1 = p̄². Otherwise p and q are distinct elements of {i, j, k}. Let r be the third element. Then p̄ = −p, and similarly for q and r. Using the multiplication rules we see that pq = ±r and qp = ∓r, so
q̄ p̄ = (−q)(−p) = qp = ∓r = ±r̄,
and since pq = ±r this is exactly the conjugate of pq, as required.
(4) We have
qq̄ = q̄q = |q|².
For, write q = a + p with a ∈ R and p a pure quaternion, so that q̄ = a − p. Then ap = pa (as a is real), so
qq̄ = (a + p)(a − p) = a² + pa − ap − p² = a² − p² = (a − p)(a + p) = q̄q,
and
qq̄ = a² − p² = a² + |p|² = |q|².
(5) We have
|pq| = |p||q|.
For
|pq|² = (pq)·\overline{pq} = p q q̄ p̄ = p |q|² p̄ = |q|² p p̄ = |q|² |p|².
The first equality follows from property 4, the second from property 3, the third from property 4, the fourth as |q|² is real, and the fifth by property 4 again. Finally the answer follows by taking square roots.
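As a concrete illustration (an example not in the original notes), take p = 1 + i and q = j + k. Then
pq = j + k + ij + ik = j + k + k − j = 2k,
so |pq| = 2 = √2 · √2 = |p||q|, as the formula predicts.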
Lemma 1.2.4. Any non-zero quaternion has a multiplicative inverse; that is, H is a division
algebra.
where r, θ ∈ R with r = |q| ≥ 0 and θ ∈ [0, 2π], and n is a pure quaternion of absolute value 1.
1 = |q|² = a² + |p|²,
so we can write a = cos(θ/2) for some unique θ ∈ [0, 2π]. (Note that as a² ≤ 1 we have −1 ≤ a ≤ 1.) Hence |p|² = 1 − a² = sin²(θ/2), so |p| = sin(θ/2). (Note as θ ∈ [0, 2π] that this right-hand side is indeed non-negative.) Finally, if θ = 0, 2π, then p = 0, so we can take n to be arbitrary; otherwise n = p/|p| is a pure quaternion of absolute value 1. We then have
q = a + p = cos(θ/2) + sin(θ/2) p/|p| = cos(θ/2) + n sin(θ/2).
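For example (an illustration not in the notes), the unit quaternion q = (1 + i)/√2 has a = 1/√2 = cos(π/4) and p = i/√2, so θ = π/2, |p| = sin(π/4) and n = i; that is, q = cos(π/4) + i sin(π/4). By the theorem below, the corresponding map R_q is then the rotation about the i-axis through the angle π/2.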
pq = −p · q + p × q ∈ H for all p, q ∈ P.
Note that the dot product of two elements of P is in R, and the cross product is in P , so the
sum makes sense in H.
Proof. Each operation is bilinear, so it suffices to check this for p, q ∈ {i, j, k}. This gives 9 possible cases, and symmetry means we only need to check 3, namely i², ij, and ji:
−i · i + i × i = −1 + 0 = i²,   −i · j + i × j = 0 + k = ij,   −j · i + j × i = 0 − k = ji.
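As a further check (an example not in the notes), take p = i + j and q = j. Multiplying directly, pq = ij + j² = k − 1, while −p · q + p × q = −1 + (i + j) × j = −1 + i × j = −1 + k, in agreement with the formula above.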
The following theorem explains the reason for the θ/2 in Lemma 1.2.5.
R_q : P → P, R_q(p) := qpq⁻¹
is a rotation. Explicitly, if q = cos(θ/2) + sin(θ/2)n, then R_q = R_{n,θ} is the rotation about axis n through angle θ.
Two quaternions q, q′ of absolute value 1 give the same rotation if and only if q′ = ±q.
where (n, u, v) is any right-handed orthonormal basis. With respect to this basis the matrix of the rotation is:
( 1      0        0    )
( 0    cos θ    sin θ  )
( 0   −sin θ    cos θ  )
Now let q be a quaternion of absolute value 1. By Lemma 1.2.5 we can write q = cos(θ/2) + sin(θ/2)n with θ ∈ [0, 2π] and n a pure quaternion of absolute value 1. Let (n, u, v) be a right-handed orthonormal basis for P. The previous lemma tells us that
nu = −n · u + n × u = v and un = −u · n + u × n = −n × u = −v.
Similarly
uv = n = −vu and vn = u = −nv.
For simplicity set c := cos(θ/2) and s := sin(θ/2). Then q = c + sn and q⁻¹ = c − sn. Now, since n² = −|n|² = −1 we have
q n q⁻¹ = (c + sn)n(c − sn) = n(c + sn)(c − sn) = (c² + s²)n = n.
Similarly, using the products computed above,
q u q⁻¹ = (c² − s²)u + 2cs v = cos θ · u + sin θ · v and q v q⁻¹ = −sin θ · u + cos θ · v,
which is exactly the rotation R_{n,θ} described above.
The set of quaternions of absolute value 1 forms a group under multiplication, denoted Sp(1),
and it is not hard to see that the map q 7→ Rq from the previous theorem defines a surjective group
homomorphism R : Sp(1) → SO(3, R) to the group of rotations of R3 . This group homomorphism
is a double cover, meaning that there are precisely two elements of Sp(1) mapping to each rotation
in SO(3, R).
In fact, we can say more. A quaternion q = a + bi + cj + dk has absolute value 1 precisely
when a2 + b2 + c2 + d2 = 1, so Sp(1) can be thought of as a 3-sphere
S 3 = {(a, b, c, d) ∈ R4 : a2 + b2 + c2 + d2 = 1}.
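To see the double cover concretely, the following Python sketch (an illustration, not part of the notes) computes the 3 × 3 matrix of R_q by conjugating the basis i, j, k by a unit quaternion q, checks that the result is orthogonal with determinant 1, and checks that q and −q give the same rotation.

```python
# Sketch: the map q -> R_q from unit quaternions to rotations,
# computed directly as R_q(p) = q p q^{-1} on the pure quaternions.
import numpy as np

def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

def rotation_matrix(q):
    """3x3 matrix of R_q on P = span{i, j, k}, for a unit quaternion q."""
    q_inv = np.array([q[0], -q[1], -q[2], -q[3]])        # for |q| = 1, q^{-1} is the conjugate
    cols = []
    for e in np.eye(3):                                   # images of i, j, k
        p = np.concatenate(([0.0], e))
        cols.append(qmul(qmul(q, p), q_inv)[1:])
    return np.column_stack(cols)

theta, n = 0.7, np.array([0.0, 0.0, 1.0])                  # axis k, angle 0.7
q = np.concatenate(([np.cos(theta / 2)], np.sin(theta / 2) * n))
R = rotation_matrix(q)
assert np.allclose(R @ R.T, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)
assert np.allclose(R, rotation_matrix(-q))                 # -q gives the same rotation
```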
CHAPTER 2
Algebras
A × A → A, (a, b) ↦ ab,
Remark 2.1.3. (1) In the literature, the algebras we consider might be called unital,
associative algebras. There are other types: Banach algebras are usually non-unital; Lie
algebras and Jordan algebras are non-associative.
(2) Recall that a vector space V is finite dimensional if it has a finite basis. Not all of our
algebras will be finite dimensional.
(3) There is a very rich theory of commutative algebras, where one assumes that the mul-
tiplication is commutative, so ab = ba for all a, b ∈ A. This is related to, amongst
other things, algebraic geometry and algebraic number theory. In this course we will be
concerned with general, non-commutative, algebras.
Definition 2.2.1. A division algebra is a non-zero algebra A in which every non-zero element has a multiplicative inverse; that is, for all a ≠ 0 there exists a⁻¹ such that aa⁻¹ = 1 = a⁻¹a.
Example 2.2.3. A field is the same as a commutative division algebra. The quaternions form
a division algebra, but are not commutative so do not form a field.
Definition 2.3.1. The characteristic of a field K is the smallest positive integer n such that
n · 1 = 0 ∈ K. If no such n exists, we define the characteristic to be zero.
Example 2.3.2. (1) The fields Q, R and C all have characteristic zero.
(2) If p ∈ Z is a prime, then Fp = Zp (the field with p elements) is a field of characteristic p.
(3) The characteristic of a field is either zero or a prime number. For, if char(K) = n and
n = ab with 1 < a, b < n, then ab = n = 0 ∈ K, whence either a = 0 ∈ K or b = 0 ∈ K,
contradicting the minimality of n.
One immediate question which arises is how to describe an algebra. We shall give two answers
to this question, both having their advantages and disadvantages.
The first answer starts from the fact that A is a vector space, so has a basis {ei | i ∈ I}, for
some indexing set I and we need to define a bilinear multiplication on A.
Recall that to define a linear map f, we just need to specify the images of a basis, since then
f(Σ_i λ_i e_i) = Σ_i λ_i f(e_i).
Similarly, to define a bilinear map A × A → A, we just need to specify the products e_i e_j for all i, j. We can then multiply arbitrary elements of A by expanding out:
(Σ_i λ_i e_i)(Σ_j µ_j e_j) = Σ_{i,j} λ_i µ_j (e_i e_j).
We may display this information in the multiplication table, having rows and columns indexed by
the basis elements, writing the product ei ej in the i-th row and j-th column. This is essentially
what we did when describing the quaternions.
More precisely, each product ei ej again lies in A, so can be expressed uniquely as a linear
combination of the basis elements. Thus one only needs to define the scalars c^k_ij ∈ K, called the structure coefficients, such that
e_i e_j = Σ_k c^k_ij e_k.
We next need to ensure that the multiplication is associative and has a unit.
Using that the multiplication is bilinear, it is enough to check associativity for all triples of
basis elements. So, our multiplication is associative if and only if (ei ej )ek = ei (ej ek ) for all i, j, k.
Note that we can express this entirely in terms of the structure constants as
Σ_p c^p_ij c^l_pk = Σ_p c^l_ip c^p_jk for all i, j, k, l.
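This condition is easy to check mechanically once the structure constants are stored as an array. The following Python sketch (not part of the notes) does exactly this, with c[i][j][k] playing the role of c^k_ij, and uses C viewed as a two-dimensional R-algebra as an example.

```python
# Sketch: checking associativity from structure constants.
# c[i][j][k] is the coefficient of e_k in the product e_i e_j.
import numpy as np

def is_associative(c):
    """Check sum_p c[i,j,p] c[p,k,l] == sum_p c[i,p,l] c[j,k,p] for all i, j, k, l."""
    left = np.einsum('ijp,pkl->ijkl', c, c)    # coefficients of (e_i e_j) e_k
    right = np.einsum('ipl,jkp->ijkl', c, c)   # coefficients of e_i (e_j e_k)
    return np.allclose(left, right)

# Example: C as a 2-dimensional R-algebra with basis e_0 = 1, e_1 = i.
c = np.zeros((2, 2, 2))
c[0, 0, 0] = 1                  # 1 * 1 = 1
c[0, 1, 1] = c[1, 0, 1] = 1     # 1 * i = i * 1 = i
c[1, 1, 0] = -1                 # i * i = -1
print(is_associative(c))        # True
```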
Similarly, to check that an element x = Σ_i λ_i e_i is a unit, we just need to show that xe_j = e_j = e_j x for all j. In practice, however, it is common to specify in advance that the unit is one of the basis elements, as we did when describing the complex numbers and quaternions.
This method of describing an algebra is analogous to the method of describing a group by
giving the set of elements and the multiplication table, and for large-dimensional algebras it is just
as unwieldy as it is for large groups.
Example 2.4.1. (1) The vector space with basis {e, f } has a bilinear multiplication given
by
ee = e, ef = 0, f e = 0, f f = f.
a² = a, b² = b, ab = 0, ba = b + 1
2.5. Polynomials
Observe that if A has basis e_i and structure coefficients c^k_ij, then A[X] has basis e_i X^m and multiplication
e_i X^m · e_j X^n = Σ_k c^k_ij e_k X^{m+n}.
In particular, we can inductively define the polynomial algebra K[X_1, . . . , X_r] to be K[X_1, . . . , X_{r−1}][X_r]. This has basis the set of monomials X_1^{a_1} · · · X_r^{a_r} with a_1, . . . , a_r ≥ 0, and multiplication given by adding exponents:
(X_1^{a_1} · · · X_r^{a_r}) · (X_1^{b_1} · · · X_r^{b_r}) = X_1^{a_1+b_1} · · · X_r^{a_r+b_r}.
One interesting way of defining an algebra is to start with a group G and take KG to be the
vector space with basis indexed by the elements of G, so the set {eg | g ∈ G}. We now use the
multiplication on G to define a bilinear multiplication on KG:
e_g · e_h := e_{gh}.
Since the multiplication for G is associative, it is easy to see that the multiplication for KG is also
associative. Moreover, the unit for KG is the basis element indexed by the identity element of G,
so 1 = e1 .
This is easiest to see in an example.
Example 2.6.1. Let G be the cyclic group of order 3 with generator g, so that G = {1, g, g²} with g³ = 1. Then the elements of KG are linear combinations λ + µe_g + νe_{g²} with λ, µ, ν ∈ K, and as an example of the multiplication we have
(1 + 2e_g + 5e_{g²})(2 + e_{g²}) = 2 + 4e_g + 10e_{g²} + e_{g²} + 2e_g e_{g²} + 5e_{g²} e_{g²}
= 2 + 4e_g + 10e_{g²} + e_{g²} + 2 + 5e_g
= 4 + 9e_g + 11e_{g²}.
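Because the product only uses the group law on the indices, multiplication in KG for a cyclic group is just cyclic convolution of coefficient vectors. Here is a small Python sketch (not part of the notes) that reproduces the computation above.

```python
# Sketch: the group algebra K C_n of a cyclic group of order n, with an
# element written as the list of coefficients of e_1, e_g, ..., e_{g^{n-1}}.

def kg_mul(x, y):
    n = len(x)
    z = [0] * n
    for i in range(n):
        for j in range(n):
            z[(i + j) % n] += x[i] * y[j]    # e_{g^i} e_{g^j} = e_{g^{i+j mod n}}
    return z

# The computation from Example 2.6.1, in K C_3:
x = [1, 2, 5]    # 1 + 2 e_g + 5 e_{g^2}
y = [2, 0, 1]    # 2 + e_{g^2}
print(kg_mul(x, y))   # [4, 9, 11], i.e. 4 + 9 e_g + 11 e_{g^2}
```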
The set of all n × n matrices with entries in K forms an algebra Mn (K). For a ∈ Mn (K)
we write a = (apq ), where apq ∈ K and 1 ≤ p, q ≤ n. Addition, scalar multiplication and matrix
multiplication are as usual:
(a + λb)_pq = a_pq + λb_pq,    (ab)_pq = Σ_r a_pr b_rq.
The unit is the identity matrix, denoted 1 or I, or 1n or In if we want to emphasise the size of the
matrices.
The elementary matrices E_pq have a 1 in the (p, q)-th place and 0s elsewhere. They form a basis for M_n(K), so this algebra has dimension n². The multiplication is then given by E_pq E_rs = δ_qr E_ps, where δ_qr is 1 if q = r and 0 otherwise.
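The following short Python sketch (an illustration, not part of the notes) verifies this multiplication rule numerically for n = 3, with the rows and columns indexed from 0 rather than 1.

```python
# Sketch: verify E_pq E_rs = delta_{qr} E_ps in M_n(K) for n = 3.
import numpy as np

n = 3
def E(p, q):
    m = np.zeros((n, n))
    m[p, q] = 1.0
    return m

for p in range(n):
    for q in range(n):
        for r in range(n):
            for s in range(n):
                expected = E(p, s) if q == r else np.zeros((n, n))
                assert np.array_equal(E(p, q) @ E(r, s), expected)
```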
More abstractly let V be a vector space. Recall that, after fixing a basis for V , we can represent
every linear map f : V → V as a matrix. It is sometimes convenient not to have to choose a basis, in
which case we consider the set of all such linear maps, called endomorphisms and denoted End(V ).
This is again a vector space, with addition and scalar multiplication given by (f + λg)(v) := f(v) + λg(v), and with multiplication given by composition of maps, (fg)(v) := f(g(v)). This is an algebra with unit the identity map id_V(v) = v. (It is associative since composition of functions is always associative.)
Another interesting way of defining algebras is to index the basis elements by certain types of
diagrams and then to describe the multiplication in terms of these diagrams. Such algebras are
referred to as diagram algebras (although this is not a well defined term).
The Temperley-Lieb algebra TL_n(δ) for n ≥ 1 and δ ∈ K has basis indexed by planar diagrams having two rows of n dots, one above the other, connected by n non-intersecting curves. Two such diagrams are considered equal if the same vertices are connected.
To define the multiplication ab of two basis elements, we first stack the diagram for a on top of
that for b, and then concatenate the curves. The resulting diagram may contain some loops which
we must remove, and we multiply by the scalar δ for each such loop that we remove.
This is again easiest to understand once we have seen an explicit example. For the algebra TL_3(δ) we have the following diagrams:
[diagrams of the five basis elements, labelled 1, u_1, u_2, p, q]
We have written the corresponding basis elements under the diagrams, so TL_3(δ) has basis {1, u_1, u_2, p, q}.
To compute the product u_1² we take two copies of the appropriate diagram, stacked one above the other, then join curves and remove loops; one closed loop appears, so u_1² = δu_1.
Similarly u_1 u_2 = p, since stacking the diagram for u_1 on top of that for u_2 produces the diagram p with no loops.
It is hopefully clear that the basis element 1 is indeed a unit for the algebra. In general the
unit is the basis element indexed by the diagram given by joining the dots vertically, so the i-th
dot on the top row is joined to the i-th dot on the bottom row.
We can also use the diagrams to see that the multiplication is associative; for, if we take three
diagrams stacked one on top of the other, then joining the curves of the top two diagrams, and then
joining them with the third is the same as joining the bottom two together, and then joining with
the top diagram. In fact, both operations agree with simply joining all three diagrams together in
one go. This proves that the multiplication is associative when restricted to three basis elements,
and hence is associative.
In general we define u_i ∈ TL_n(δ) for 1 ≤ i < n to be the diagram in which we have joined together the i-th and (i + 1)-st dots on the top row, and the same on the bottom row; all other dots on the top row are joined to their counterparts on the bottom row.
[diagram of u_i, with dots labelled 1, . . . , i − 1, i, i + 1, i + 2, . . . , n on each row]
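For reference, the standard relations satisfied by these elements (which can be checked directly from the diagram calculus) are
u_i² = δu_i,    u_i u_{i±1} u_i = u_i,    u_i u_j = u_j u_i whenever |i − j| ≥ 2;
these are presumably the relations referred to in the next paragraph.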
In fact, in a certain sense (which we will make precise later), these relations are sufficient to
completely determine the Temperley-Lieb algebra. (See Exercise Sheet 1, Tutorial Problem 4.)
Remark 2.9.1. The Temperley-Lieb algebra was invented to study Statistical Mechanics (see
for example Paul Martin’s homepage). It is now also important in Knot Theory, and Vaughan
Jones won a Fields Medal in 1990 for his work in this area.
Let A and B be algebras. Since they are a priori vector spaces, we may form their Cartesian
product, or direct product, A × B and this is again a vector space, with addition and scalar
multiplication given by
µ(a, b) + λ(a′, b′) = (µa + λa′, µb + λb′).
We give A × B the structure of an algebra via the following multiplication:
(a, b)(a′, b′) := (aa′, bb′).
It is easy to check that this is associative and bilinear. Moreover the multiplication is unital, with unit 1 = (1_A, 1_B).
CHAPTER 3
3.1. Homomorphisms
Example 3.1.2. Recall that we can view C as an algebra over Q. Given any element z ∈ C there is a Q-algebra homomorphism Q[X] → C sending X^n ↦ z^n. This is called the evaluation map and is denoted ev_z.
Example 3.1.3. Recall that both C and H are algebras over R. Then the R-linear map C → H sending x + yi ∈ C to x + yi ∈ H is an R-algebra homomorphism. In fact, if p ∈ H is any pure quaternion of absolute value 1, then there is an algebra homomorphism C → H such that i ↦ p.
Example 3.1.4. Let A and B be algebras, and consider their direct product A × B. The projection map
π_A : A × B → A, (a, b) ↦ a
is then a surjective algebra homomorphism, and similarly for the projection map π_B : A × B → B.
Note, however, that the inclusion map
ι_A : A → A × B, a ↦ (a, 0)
is not an algebra homomorphism (provided B is non-zero), since it sends 1_A to (1_A, 0) rather than to the unit (1_A, 1_B).
3.2. Subalgebras
It follows that, using the induced multiplication, B is an algebra in its own right.
Example 3.2.3. If K is a field and A a K-algebra, then there is a unique algebra homomor-
phism K → A, sometimes called the structure map. This sends the scalar λ ∈ K to the element
λ1A ∈ A.
For example, if A = Mn (K), then the structure map sends λ to the diagonal matrix λIn .
Example 3.2.5. Let A ≤ Mm (K) and B ≤ Mn (K) be subalgebras. Then the direct product
A × B is isomorphic to the subalgebra of Mm+n (K) consisting of block-diagonal matrices of the
form
( a 0 )
( 0 b ) ∈ M_{m+n}(K),   a ∈ M_m(K) and b ∈ M_n(K).
Lemma 3.2.6. Let A be an algebra with basis {e_i | i ∈ I} and structure coefficients
e_i e_j = Σ_k c^k_ij e_k.
Then an algebra B is isomorphic to A if and only if B has a basis {f_i | i ∈ I} with the same structure coefficients:
f_i f_j = Σ_k c^k_ij f_k.
So in particular, if θ is an isomorphism, then there is a basis for B with the same structure coefficients as A.
Conversely, given basis elements f_i ∈ B satisfying f_i f_j = Σ_k c^k_ij f_k, let θ : A → B be the linear map sending e_i to f_i. Then
θ(e_i)θ(e_j) = f_i f_j = Σ_k c^k_ij f_k = Σ_k c^k_ij θ(e_k) = θ(Σ_k c^k_ij e_k) = θ(e_i e_j),
so by bilinearity θ(a)θ(a′) = θ(aa′) for all a, a′ ∈ A.
Now θ satisfies (a) and (b) and is bijective, so also satisfies (c’). Hence it is a bijective algebra
homomorphism, so an isomorphism.
e² = e, f² = f, ef = fe = 0
E² = u² = u = E,   EF = u(1 − u) = u − u² = 0,
F² = (1 − u)² = 1 − 2u + u² = 1 − u = F,   FE = (1 − u)u = 0.
Lemma 3.2.8. The vector space isomorphism Θ : End_K(K^n) → M_n(K), sending a linear map to its matrix, is an algebra isomorphism.
Moreover
Θ(f)Θ(g) = (θfθ⁻¹)(θgθ⁻¹) = θfgθ⁻¹ = Θ(fg)
and
Θ(id_V) = θ id_V θ⁻¹ = θθ⁻¹ = id_W,
so that Θ respects the multiplication and preserves the unit. Hence Θ is an algebra homomorphism, and it is an isomorphism since it clearly has inverse g ↦ θ⁻¹gθ.
As a special case, let V be a finite-dimensional vector space. Choosing a basis {e_1, . . . , e_n} for V is equivalent to choosing a vector space isomorphism V → K^n, which then induces an algebra isomorphism End_K(V) → End_K(K^n) ≅ M_n(K). This is just the map sending a linear map to its matrix with respect to the basis {e_1, . . . , e_n}.
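The following Python sketch (not part of the notes) illustrates this isomorphism for V = K^n with the standard basis: the matrix of a composite of linear maps is the product of their matrices, which is exactly the statement that the map is an algebra homomorphism.

```python
# Sketch: the algebra isomorphism End_K(K^n) -> M_n(K); the matrix of a
# composite map is the product of the matrices.
import numpy as np

def matrix_of(f, n):
    """Matrix of a linear map f : K^n -> K^n with respect to the standard basis."""
    return np.column_stack([f(e) for e in np.eye(n)])

n = 3
A = np.array([[1., 2., 0.], [0., 1., 0.], [4., 0., 1.]])
B = np.array([[0., 1., 0.], [1., 0., 2.], [0., 0., 3.]])
f = lambda v: A @ v
g = lambda v: B @ v
composite = lambda v: f(g(v))
assert np.allclose(matrix_of(composite, n), matrix_of(f, n) @ matrix_of(g, n))
```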
Lemma 3.3.1. Let S and T be subalgebras of an algebra A. Then the vector space intersection
S ∩ T is again a subalgebra of A.
In particular, we say that X is a generating set for the algebra A provided that the only subalgebra
containing X is A itself.
CHAPTER 4
Ideals
Let θ : A → B be a homomorphism of algebras. We have just seen that the image Im(θ) is a subalgebra of B, but what about its kernel? Provided that B is not the zero algebra we have θ(1_A) = 1_B ≠ 0. Hence 1_A ∉ Ker(θ) and so Ker(θ) cannot be a subalgebra of A.
On the other hand, Ker(θ) is closed under multiplication by any element of A. For, if x ∈ Ker(θ) and a ∈ A, then
θ(ax) = θ(a)θ(x) = 0 and θ(xa) = θ(x)θ(a) = 0,
so ax, xa ∈ Ker(θ).
Remark 4.1.2. (1) I = {0} (the zero ideal) and I = A (the unit ideal) are always ideals
of A.
(2) There is also the notion of a left ideal, for which one only demands (a) and (b), and a
right ideal, satisfying only (a) and (c). If A is commutative, then these are all the same.
(3) If θ : A → B is an algebra homomorphism, then Ker(θ) is an ideal in A.
Example 4.1.3. (1) The set of polynomials having zero constant term forms an ideal of
K[X].
(2) Let A and B be algebras, and consider their direct product A × B. We saw earlier that
the projection map π_A : A × B → A, (a, b) ↦ a, is a surjective algebra homomorphism.
Its kernel is {(0, b) | b ∈ B}, which is an ideal of A × B.
(3) More generally, let I ◁ A and J ◁ B be ideals. Then the set
I × J = {(x, y) | x ∈ I, y ∈ J}
is an ideal of A × B.
Lemma 4.1.5. If an ideal contains 1, or any invertible element, then it is the unit ideal. In
particular, the only ideals in a division algebra are the zero ideal and the unit ideal.
Proof. If I contains an invertible element b, then it also contains b⁻¹b = 1, and hence also a·1 = a for any a ∈ A.
Lemma 4.2.1. Let I and J be ideals of an algebra A. Then the vector space constructions
I + J and I ∩ J
are again ideals of A.
Hence I ∩ J is an ideal.
More generally, if I_j is a collection of ideals, then Σ_j I_j and ∩_j I_j are again ideals. In particular,
it is now possible to define the smallest ideal containing any subset X ⊂ A — we just consider the
intersection of all such ideals containing X. We call this the ideal generated by X, and denote it
by (X). The elements of (X) are all possible finite linear combinations of the form
a_1 x_1 b_1 + · · · + a_n x_n b_n with a_i, b_i ∈ A and x_i ∈ X.
As a special case we have the principal ideal (x), an ideal generated by one element.
NB: please don’t get the two notations confused. We use angle brackets hXi for the subalgebra
generated from the set X and round brackets for the ideal generated by X. In general these two
sets are very different.
We remark that these constructions also work for left (and right) ideals.
Definition 4.3.1. Let I and J be ideals of A. We define the ideal IJ to be the smallest ideal
containing the set {xy | x ∈ I, y ∈ J}. Since ax ∈ I for all a ∈ A and x ∈ I, and similarly yb ∈ J
for all b ∈ A and y ∈ J, we see that every element of IJ can be written as a finite sum of products
xy for x ∈ I and y ∈ J.
Lemma 4.3.2. Let I, J and L be ideals. Then I(J + L) = IJ + IL and (I + J)L = IL + JL.
Proof. Since J, L ⊆ J + L we must have IJ, IL ⊆ I(J + L), and hence IJ + IL ⊆ I(J + L).
Conversely, I(J + L) is the smallest ideal containing all elements of the form x(y + z) for x ∈ I,
y ∈ J and z ∈ L. Since x(y + z) = xy + xz ∈ IJ + IL we have I(J + L) ⊆ IJ + IL.
The proof for (I + J)L = IL + JL is analogous.
CHAPTER 5
Quotient Algebras
Recall that if V is a vector space and U ≤ V a subspace, then we can form the quotient vector space V/U. This has elements the cosets v + U := {v + u | u ∈ U} for v ∈ V, so that v + U = v′ + U if and only if v − v′ ∈ U. The addition and scalar multiplication are given by
(v + U) + (v′ + U) := (v + v′) + U and λ(v + U) := (λv) + U.
Lemma 5.1.1. If I is an ideal in an algebra A, then the vector space quotient A/I becomes an algebra via the multiplication
(a + I)(b + I) := ab + I.
Clearly this has unit 1 + I. Moreover the natural map π : A → A/I is a surjective algebra homomorphism with kernel I.
In other words, ideals are the same as kernels of homomorphisms.
Proof. We have that the addition and scalar multiplication are well-defined as this is the same as the quotient vector space construction, but as revision of this construction we will verify the addition and scalar multiplication.
Suppose a + I = a′ + I and b + I = b′ + I. By definition, this implies that a − a′ and b − b′ ∈ I.
We check the addition is well defined. Now
(a + I) + (b + I) := (a + b) + I and (a′ + I) + (b′ + I) := (a′ + b′) + I.
Also
(a + b) − (a′ + b′) = (a − a′) + (b − b′) ∈ I,
so (a + b) + I = (a′ + b′) + I.
Also
λa − λa′ = λ(a − a′) ∈ I
as a − a′ ∈ I and I is an ideal. Thus (λa) + I = (λa′) + I if a + I = a′ + I.
We now check that the multiplication is well-defined. Now (a + I)(b + I) := ab + I and (a′ + I)(b′ + I) := a′b′ + I. Also
ab − a′b′ = (a − a′)b + a′(b − b′) ∈ I
as (a − a′)b ∈ I and a′(b − b′) ∈ I, since I is an ideal. Thus (ab) + I = (a′b′) + I if a + I = a′ + I and b + I = b′ + I.
The product is clearly associative and bilinear (since it is induced from the product in A), and
1 + I is a unit. Thus A/I is an algebra.
The natural map is easily seen to be a surjective algebra homomorphism: π(ab) = ab + I = (a + I)(b + I) = π(a)π(b), and π(1) = 1 + I is the identity element of A/I.
Proof. Using the Factor Lemma for vector spaces B.8.5, we know that there is a unique linear map f̄ : A/I → B such that f = f̄π. Using this we have f̄(a + I) = f̄π(a) = f(a). We therefore only need to check that f̄ is an algebra homomorphism.
Let a, a′ ∈ A with a + I = a′ + I. Then
f(a) − f(a′) = f(a − a′) = 0,
where the first equality follows as f is linear and the second follows as a − a′ ∈ I and I ⊆ Ker(f). Thus it is unambiguous to define f̄(a + I) = f(a).
x³ + 3x² + x − 3 + I = (x + 3)(x² + 1) − 6 + I = −6 + I
p(x) = q(x)(x² + 1) + ax + b
p(x) + I = ax + b + I,
θ = ev_i : R[x] → C, x ↦ i.
Now U has basis E_ij for 1 ≤ i ≤ j ≤ 3 and I is spanned by E_13. Thus U/I has basis given by the cosets E_ij + I with 1 ≤ i ≤ j ≤ 3 and (i, j) ≠ (1, 3).
Example 5.2.4. The augmentation ideal of the group algebra KG is the kernel I of the homomorphism
KG → K, e_g ↦ 1 for all g ∈ G.
The ideal I consists of all elements Σ_g λ_g e_g such that Σ_g λ_g = 0, so has basis {e_g − 1 | g ∈ G \ {1}}. Observe that KG/I ≅ K.
θ̄ : A/ Ker(θ) → Im(θ)
is an isomorphism.
Proof. The map θ̄ exists by the Factor Lemma for algebras, 5.2.1. On the other hand, we
know it is an isomorphism of vector spaces, so it is an algebra isomorphism.
To prove it’s an isomorphism of vector spaces (without using theorem B.8.6) observe that θ̄ is
onto. Also observe that it is injective as
θ̄(a+Ker θ) = θ̄(a0 +Ker θ) ⇔ θ(a) = θ(a0 ) ⇔ θ(a−a0 ) = 0 ⇔ a−a0 ∈ Ker θ ⇔ a+Ker θ = a0 +Ker θ.
CHAPTER 6
Presentations of Algebras
So far we have defined algebras in terms of a basis and structure coefficients. Similarly, to
define an algebra homomorphism, we need to give the images of the basis elements (so give a linear
map) and then check that this preserves the unit and respects the multiplication. This is fine for
small examples, but becomes very inefficient for larger algebras.
Compare with the situation for finite groups. You have probably only seen groups as subgroups
of symmetric groups, or else via their multiplication tables. It is then quite laborious to check
whether a given map defines a group homomorphism. Now imagine that you want to work with
the Monster Group, having roughly 8 × 10⁵³ elements.
The purpose of this section is to introduce a more compact way of exhibiting an algebra A.
The basic idea is to write A as a quotient algebra B/I, where B is a larger algebra, but easier to understand, and I ◁ B is some ideal.
Consider for example the quotient algebra given in Example 5.2.3. This algebra is quite
awkward to work with, but if we regard it as the quotient U/I, then it becomes much easier to
think about.
The type of algebra B one chooses may depend on the particular situation, but one choice
which always works is when we take B to be a free algebra. In this case the description of A as a
quotient of a free algebra is called a presentation by generators and relations. Such a description
is particularly useful when constructing homomorphisms between algebras, since we only need to
give the images of the generators and then check that the relations still hold. This is the content
of Proposition 6.2.5.
A word of warning: it is not always easy to transfer between these two descriptions, so it is
often best to know both a basis for the algebra as well as a presentation in terms of generators and
relations. Also, as for bases, generators are not unique.
Example 6.1.1. There are two ways to think about the group algebra KG where G is the cyclic group of order 4. If we set G = {1, g, g², g³} where g⁴ = 1, then KG has basis {1, e_g, e_{g²}, e_{g³}}.
An algebra that is isomorphic to KG is the following algebra:
K⟨X⟩/(X⁴ − 1).
Here K⟨X⟩ is the free algebra on a single letter X. It consists of all linear combinations of the powers of X, i.e.
K⟨X⟩ = Span{1, X, X², X³, . . .}.
In this case, K⟨X⟩ = K[X], the polynomial algebra in a single variable X. The word free is meant to indicate that there are no relations on the elements.
As we have seen, quotienting out by the ideal (X⁴ − 1) effectively makes X⁴ = 1 in the quotient, the same relation that appears in the group algebra KG. We call K⟨X⟩/(X⁴ − 1) a presentation of KG.
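Concretely (a short illustration, not spelled out at this point in the notes), the linear map KG → K⟨X⟩/(X⁴ − 1) sending e_{g^m} ↦ X^m + (X⁴ − 1) for m = 0, 1, 2, 3 sends a basis to a basis, and it respects the multiplication because e_{g^m} e_{g^r} = e_{g^{m+r}} while X^m X^r = X^{m+r}, with the exponent reduced modulo 4 in both cases; so by Lemma 3.2.6 it is an algebra isomorphism.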
X_1 X_2 · · · X_r · Y_1 Y_2 · · · Y_s = X_1 X_2 · · · X_r Y_1 Y_2 · · · Y_s.
This multiplication is clearly associative with unit the empty word 1. As usual, we often abbreviate XX by X², and XXX by X³, and so on.
If S = {X_i | i ∈ I}, then we also write K⟨X_i | i ∈ I⟩ for K⟨S⟩.
Example 6.1.3. (1) If S = ∅, then there is only one possible word, so K⟨∅⟩ = K.
(2) If S = {X}, then K⟨X⟩ has basis 1, X, X², X³, . . . and multiplication X^m · X^n = X^{m+n}. Thus K⟨S⟩ = K[X], the polynomial algebra.
(3) If S = {X, Y}, then K⟨X, Y⟩ has basis (in order of increasing length)
1
X, Y
X², XY, YX, Y²
X³, X²Y, XYX, YX², XY², YXY, Y²X, Y³
. . .
(2X + XY + 3Y²X)(4X − YX)
= 2X(4X − YX) + XY(4X − YX) + 3Y²X(4X − YX)
= 8X² − 2XYX + 4XYX − XY²X + 12Y²X² − 3Y²XYX
= 8X² + 2XYX − XY²X + 12Y²X² − 3Y²XYX.
Proposition 6.1.4 (The Universal Property). Let S be a set and A an algebra. Given a map f : S → A there exists a unique algebra homomorphism θ : K⟨S⟩ → A extending f, so θ(X) = f(X) for all X ∈ S.
In other words there is a bijection between algebra homomorphisms K⟨S⟩ → A and maps S → A.
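As a small illustration (not part of the notes), the universal property says that a homomorphism out of K⟨X, Y⟩ is determined by where X and Y go. In the Python sketch below the chosen images fX, fY in M_2(R) are arbitrary, and the value of θ on any linear combination of words is computed purely from them.

```python
# Sketch: the universal property of K<X, Y> in action.  A choice of images
# f(X), f(Y) in A = M_2(R) determines theta on every word by multiplying the
# images together, and on a general element by linearity.
import numpy as np

fX = np.array([[0., 1.], [1., 0.]])
fY = np.array([[1., 1.], [0., 1.]])
images = {'X': fX, 'Y': fY}

def theta(element):
    """Apply theta to a linear combination of words, given as
    {word (a string in the letters X, Y): coefficient}."""
    total = np.zeros((2, 2))
    for word, coeff in element.items():
        value = np.eye(2)                 # the empty word maps to the unit
        for letter in word:
            value = value @ images[letter]
        total += coeff * value
    return total

# theta(2X + XY + 3Y^2 X), computed from the images of the generators alone:
print(theta({'X': 2, 'XY': 1, 'YYX': 3}))
```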